Sample records for system failure analysis

  1. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
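The single-failure-point requirement quoted in this regulation can be illustrated with a toy fault tree: enumerate the cut sets and flag any of size one. The gate structure and component names below are hypothetical, invented for illustration; they are not taken from the regulation.

```python
# Sketch: finding single failure points of a command control system
# from a small fault tree. ("OR", children) fails if any child fails;
# ("AND", children) fails only if all children fail. Leaves are basic
# events (strings).

def cut_sets(node):
    """Return the list of cut sets (sets of basic events) for a node."""
    if isinstance(node, str):
        return [{node}]
    kind, children = node
    if kind == "OR":
        # Any child's cut set fails the gate.
        result = []
        for child in children:
            result.extend(cut_sets(child))
        return result
    # AND: need one cut set from every child simultaneously.
    result = [set()]
    for child in children:
        result = [acc | cs for acc in result for cs in cut_sets(child)]
    return result

def single_failure_points(tree):
    return sorted({next(iter(cs)) for cs in cut_sets(tree) if len(cs) == 1})

# Toy flight-termination command system: the receiver is a single
# failure point; the two redundant batteries are not.
tree = ("OR", ["receiver", ("AND", ["battery_A", "battery_B"])])
print(single_failure_points(tree))  # ['receiver']
```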

  2. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  3. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  4. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  5. 14 CFR 417.309 - Flight safety system analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...

  6. Risk Based Reliability Centered Maintenance of DOD Fire Protection Systems

    DTIC Science & Technology

    1999-01-01

    2.2.3 Failure Mode and Effect Analysis (FMEA)............................ 2.2.4 Failure Mode Risk Characterization...Step 2 - System functions and functional failures definition Step 3 - Failure mode and effect analysis (FMEA) Step 4 - Failure mode risk...system). The Interface Location column identifies the location where the FMEA of the fire protection system began or stopped. For example, for the fire

  7. Failure Mode, Effects, and Criticality Analysis (FMECA)

    DTIC Science & Technology

    1993-04-01

    Preliminary Failure Modes, Effects and Criticality Analysis (FMECA) of the Brayton Isotope Power System Ground Demonstration System, Report No. TID 27301...No. TID/SNA - 3015, Aerojet Nuclear Systems Co., Sacramento, California: 1970. 95. Taylor, J.R. A Formalization of Failure Mode Analysis of Control...Roskilde, Denmark: 1973. 96. Taylor, J.R. A Semi-Automatic Method for Qualitative Failure Mode Analysis. Report No. RISO-M-1707. Available from a

  8. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  9. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  10. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
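The fault-tree arithmetic behind this kind of evaluation — estimated per-step failure rates rolled up through a simplified fault tree — can be sketched as follows. The step names and probabilities are invented for illustration, and independence of the basic events is assumed.

```python
def gate_probability(node, p):
    """Probability of failure at a node of a simplified fault tree.
    Leaves are step names with estimated failure probabilities in `p`;
    gates are ("AND"|"OR", children). Events are assumed independent."""
    if isinstance(node, str):
        return p[node]
    kind, children = node
    probs = [gate_probability(c, p) for c in children]
    if kind == "AND":
        out = 1.0
        for q in probs:
            out *= q
        return out
    out = 1.0  # OR gate: fails unless every child survives
    for q in probs:
        out *= 1.0 - q
    return 1.0 - out

# Illustrative numbers only: a scan step fails often, but overall
# failure also requires the workaround to fail.
p = {"scan_order": 0.10, "workaround": 0.30, "dispense": 0.02}
tree = ("OR", [("AND", ["scan_order", "workaround"]), "dispense"])
print(round(gate_probability(tree, p), 4))  # 0.0494
```

The AND gate mirrors the study's observation that frequent step failures rarely become system failures: a step failure must coincide with a failed workaround to propagate.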

  11. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
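The detectability and isolation reports described above reduce to set operations over a test-to-failure-mode dependency matrix. The tests and failure modes below are hypothetical, and this is only a sketch of the bookkeeping, not the TEAMS Designer or ETA Tool output format.

```python
# Hypothetical dependency data: which failure modes each test detects.
detects = {
    "t1": {"fm_pump", "fm_valve"},
    "t2": {"fm_valve"},
    "t3": {"fm_sensor", "fm_leak"},
}
modes = {"fm_pump", "fm_valve", "fm_sensor", "fm_leak", "fm_seal"}

# Detectability report: modes seen by at least one test.
detected = set().union(*detects.values())
undetected = sorted(modes - detected)

# Isolation report: two modes are ambiguous if every test sees both
# or neither, i.e. their test signatures are identical.
signature = {m: frozenset(t for t, fms in detects.items() if m in fms)
             for m in modes}
ambiguous = sorted(tuple(sorted((a, b)))
                   for a in modes for b in modes
                   if a < b and signature[a] == signature[b])

print(undetected)  # ['fm_seal']
print(ambiguous)   # [('fm_leak', 'fm_sensor')]
```

Dropping a test from `detects` and recomputing is the essence of the sensor-sensitivity variation study: it shows which modes become undetectable or ambiguous when that sensor's information is lost.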

  12. Graphical Displays Assist In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Pack, Ginger; Wadsworth, David; Razavipour, Reza

    1995-01-01

    Failure Environment Analysis Tool (FEAT) computer program enables people to see and better understand effects of failures in system. Uses digraph models to determine what will happen to system if set of failure events occurs and to identify possible causes of selected set of failures. Digraphs or engineering schematics used. Also used in operations to help identify causes of failures after they occur. Written in C language.

  13. Reliability Analysis of Systems Subject to First-Passage Failure

    NASA Technical Reports Server (NTRS)

    Lutes, Loren D.; Sarkani, Shahram

    2009-01-01

    An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
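The first-passage notion — failure occurs the first time a demand process crosses a capacity threshold — can be estimated by simulation. The Gaussian random walk below is a generic stand-in for an uncertain demand process, not the model analyzed in the report.

```python
import random

def first_passage_prob(threshold, steps, trials, seed=1):
    """Monte Carlo estimate of the probability that a Gaussian random
    walk (a stand-in for an uncertain demand process) crosses a fixed
    capacity threshold within `steps` steps -- first-passage failure."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, 1.0)
            if x >= threshold:
                failures += 1
                break   # first passage: stop this trial at failure
    return failures / trials

print(first_passage_prob(threshold=5.0, steps=50, trials=2000))
```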

  14. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, RY(t), and the mean time to system failure, MTTF, are derived. Sensitivity analysis and relative sensitivity analysis of the system reliability and the mean time to failure with respect to system parameters are also investigated.
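A minimal Monte Carlo sketch of this model class — one primary unit, one warm standby, and a chance that the switchover fails — under the same exponential-lifetime assumption. The paper derives closed-form expressions for the general M/W/R case; this sketch does not reproduce those formulas.

```python
import random

def simulate_mttf(lam, lam_warm, p_switch_fail, trials=20000, seed=7):
    """Monte Carlo sketch of mean time to system failure for one primary
    unit backed by one warm standby, where the standby may fail while
    warm (rate lam_warm) or fail to switch in (prob. p_switch_fail).
    All lifetimes are exponential; `lam` is the active failure rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t_primary = rng.expovariate(lam)
        t_warm = rng.expovariate(lam_warm)   # standby aging while warm
        if t_warm < t_primary or rng.random() < p_switch_fail:
            total += t_primary               # no usable standby
        else:
            # Standby takes over; memoryless, so a fresh active lifetime.
            total += t_primary + rng.expovariate(lam)
    return total / trials

print(simulate_mttf(lam=1.0, lam_warm=0.2, p_switch_fail=0.1))
```

With these rates the exact MTTF is 1 + (1/1.2)(0.9) = 1.75, which the simulation should approach.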

  15. Independent Orbiter Assessment (IOA): Analysis of the purge, vent and drain subsystem

    NASA Technical Reports Server (NTRS)

    Bynum, M. C., III

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter PV and D (Purge, Vent and Drain) Subsystem hardware. The PV and D Subsystem controls the environment of unpressurized compartments and window cavities, senses hazardous gases, and purges the Orbiter/ET disconnect. The subsystem is divided into six systems: the Purge System (controls the environment of unpressurized structural compartments); the Vent System (controls the pressure of unpressurized compartments); the Drain System (removes water from unpressurized compartments); the Hazardous Gas Detection System (HGDS) (monitors hazardous gas concentrations); the Window Cavity Conditioning System (WCCS) (maintains clear windows and provides pressure control of the window cavities); and the External Tank/Orbiter Disconnect Purge System (prevents cryo-pumping/icing of disconnect hardware). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Four of the sixty-two failure modes analyzed were determined to be single failures which could result in the loss of crew or vehicle. A possible loss of mission could result if any of twelve single failures occurred. Two of the criticality 1/1 failures are in the Window Cavity Conditioning System (WCCS) outer window cavity, where leakage and/or restricted flow will cause failure to depressurize/repressurize the window cavity.
Two criticality 1/1 failures represent leakage and/or restricted flow in the Orbiter/ET disconnect purge network, which prevents cryopumping/icing of disconnect hardware.

  16. Program Helps In Analysis Of Failures

    NASA Technical Reports Server (NTRS)

    Stevenson, R. W.; Austin, M. E.; Miller, J. G.

    1993-01-01

    Failure Environment Analysis Tool (FEAT) computer program developed to enable people to see and better understand effects of failures in system. User selects failures from either engineering schematic diagrams or digraph-model graphics, and effects or potential causes of failures highlighted in color on same schematic-diagram or digraph representation. Uses digraph models to answer two questions: What will happen to system if set of failure events occurs? and What are possible causes of set of selected failures? Helps design reviewers understand exactly what redundancies built into system and where there is need to protect weak parts of system or remove them by redesign. Program also useful in operations, where it helps identify causes of failure after they occur. FEAT reduces costs of evaluation of designs, training, and learning how failures propagate through system. Written using Macintosh Programmers Workshop C v3.1. Can be linked with CLIPS 5.0 (MSC-21927, available from COSMIC).
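FEAT's two digraph questions — what happens if a set of failure events occurs, and what could have caused a set of observed failures — amount to forward and backward reachability over the failure digraph. The events below are invented, and this is plain Python rather than the original MPW C implementation.

```python
# Edge u -> v means "failure of u can propagate to v". Hypothetical graph.
edges = {
    "pump_fail": ["loss_of_coolant"],
    "valve_stuck": ["loss_of_coolant"],
    "loss_of_coolant": ["overheat"],
    "overheat": ["engine_shutdown"],
}

def reachable(graph, start):
    """All nodes reachable from the `start` events (depth-first)."""
    seen, stack = set(), list(start)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def effects(failures):
    """What will happen to the system if these failure events occur?"""
    return reachable(edges, failures) - set(failures)

def causes(failures):
    """What are possible causes of these selected failures?"""
    reverse = {}
    for u, vs in edges.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    return reachable(reverse, failures) - set(failures)

print(sorted(effects(["pump_fail"])))  # ['engine_shutdown', 'loss_of_coolant', 'overheat']
print(sorted(causes(["overheat"])))    # ['loss_of_coolant', 'pump_fail', 'valve_stuck']
```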

  17. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation, and that the evaluation result is low because failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, with the following results: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
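The failure-influenced-degree idea — PageRank run over the adjacency structure of a cascading-failure digraph — can be sketched with a plain power iteration. The graph and component names are hypothetical, and the paper's exact weighting may differ from this textbook formulation.

```python
# Edge u -> v means "failure of u tends to induce failure of v".
# Components that many cascades flow into score higher.

def pagerank(links, d=0.85, iters=100):
    """Textbook PageRank by power iteration over an adjacency dict."""
    nodes = sorted(set(links) | {v for vs in links.values() for v in vs})
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            share = rank[u] / len(outs) if outs else rank[u] / n
            for v in (outs or nodes):   # dangling nodes spread evenly
                new[v] += d * share
        rank = new
    return rank

links = {"spindle": ["tool_holder"], "coolant": ["spindle", "tool_holder"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # tool_holder: most failure-influenced
```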

  18. Failure environment analysis tool applications

    NASA Astrophysics Data System (ADS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-02-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  19. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1993-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  20. Failure environment analysis tool applications

    NASA Technical Reports Server (NTRS)

    Pack, Ginger L.; Wadsworth, David B.

    1994-01-01

    Understanding risks and avoiding failure are daily concerns for the women and men of NASA. Although NASA's mission propels us to push the limits of technology, and though the risks are considerable, the NASA community has instilled within it the determination to preserve the integrity of the systems upon which our mission and our employees' lives and well-being depend. One of the ways this is being done is by expanding and improving the tools used to perform risk assessment. The Failure Environment Analysis Tool (FEAT) was developed to help engineers and analysts more thoroughly and reliably conduct risk assessment and failure analysis. FEAT accomplishes this by providing answers to questions regarding what might have caused a particular failure, or, conversely, what effect the occurrence of a failure might have on an entire system. Additionally, FEAT can determine what common causes could have resulted in other combinations of failures. FEAT will even help determine the vulnerability of a system to failures, in light of reduced capability. FEAT is also useful in training personnel who must develop an understanding of particular systems. FEAT facilitates training on system behavior by providing an automated environment in which to conduct 'what-if' evaluation. These types of analyses make FEAT a valuable tool for engineers and operations personnel in the design, analysis, and operation of NASA space systems.

  21. Stingray Failure Mode, Effects and Criticality Analysis: WEC Risk Registers

    DOE Data Explorer

    Ken Rhinefrank

    2016-07-25

    An analysis method to systematically identify all potential failure modes and their effects on the Stingray WEC system. This analysis is incorporated early in the development cycle so that mitigation of the identified failure modes can be achieved cost-effectively and efficiently. The FMECA can begin once there is enough detail to define the functions and failure modes of a given system and its interfaces with other systems. The FMECA occurs coincidently with the design process and is an iterative process that allows design changes to overcome deficiencies identified in the analysis. Risk registers for major subsystems were completed according to the methodology described in the "Failure Mode Effects and Criticality Analysis Risk Reduction Program Plan.pdf" document below, in compliance with the DOE Risk Management Framework developed by NREL.
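A risk-register row is commonly boiled down to a risk priority number, RPN = severity × occurrence × detection. The failure modes and 1–10 ratings below are invented for illustration and are not taken from the Stingray registers, whose scales may differ.

```python
# Illustrative FMECA-style criticality ranking via RPN.
failure_modes = [
    # (failure mode, severity 1-10, occurrence 1-10, detection 1-10)
    ("mooring line fatigue",     8, 4, 6),
    ("PTO seal leak",            5, 6, 3),
    ("controller watchdog trip", 3, 5, 2),
]

# Rank by RPN, highest risk first.
register = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda row: row[1],
    reverse=True,
)
for name, rpn in register:
    print(f"{rpn:4d}  {name}")
```

Iterating this ranking as the design evolves — re-scoring after each mitigation — mirrors the record's point that the FMECA runs coincidently with the design process.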

  22. Global resilience analysis of water distribution systems.

    PubMed

    Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David

    2016-12-01

    Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
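The GRA recipe — sweep a failure mode over its full range and record the system response, regardless of the causal threat — can be sketched with a toy supply model. Real GRA studies rerun a hydraulic solver at each point; the capacity list below merely stands in for one.

```python
import random

def demand_met(capacities, failed_fraction, demand, rng):
    """Fraction of demand met when a random `failed_fraction` of pipes
    fail. Toy model: supply is the sum of surviving pipe capacities."""
    n_fail = round(len(capacities) * failed_fraction)
    survivors = rng.sample(range(len(capacities)), len(capacities) - n_fail)
    supply = sum(capacities[i] for i in survivors)
    return min(supply, demand) / demand

rng = random.Random(0)
capacities = [3.0, 2.0, 2.0, 1.0, 1.0, 1.0]   # hypothetical pipe capacities
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Average over random choices of which pipes fail (the "global" sweep).
    impact = sum(demand_met(capacities, f, demand=8.0, rng=rng)
                 for _ in range(200)) / 200
    print(f"{f:.2f} of pipes failed -> {impact:.2f} of demand met")
```

The spread across random choices at each `f`, not just the mean, is what exposes the extreme scenarios the paper highlights (e.g. total loss at small failure fractions when the wrong pipes fail).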

  23. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model-based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost-effective.

  24. Real-time automated failure analysis for on-orbit operations

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.

  25. Reliability analysis of airship remote sensing system

    NASA Astrophysics Data System (ADS)

    Qin, Jun

    1998-08-01

    The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a mixed complex system. Its sensor platform is a remote-controlled airship, and the achievement of a remote sensing mission depends on a series of factors. For this reason, it is very important to analyze the reliability of the ARSS. First, the system model was simplified from a multi-stage system to a two-state system on the basis of the results of the failure mode and effect analysis and the failure mode, effect and criticality analysis. A fault tree was then created after analyzing all factors and their interrelations. This fault tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flying meteorology and climate subsystem. By way of fault tree analysis and classification of basic events, the weak links were discovered. Test runs showed no difference in comparison with the theoretical analysis. In accordance with the above conclusions, plans for reliability growth and reliability maintenance were proposed. System reliability was raised from 89 percent to 92 percent with the reformation of the man-machine interactive interface, the augmentation of the secondary better-groupie and the secondary remote control equipment.

  26. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    NASA Technical Reports Server (NTRS)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
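The GLR detection step can be illustrated for the simplest hard-over signature: a sustained jump in the mean of white Gaussian residuals, maximized over the unknown onset time. This is the textbook form of the test with made-up residuals, not the thesis's engine-specific detectors.

```python
def glr_mean_jump(residuals, sigma=1.0):
    """GLR statistic for a sustained mean jump in white Gaussian
    residuals (the classic hard-over failure signature). Maximizes
    over the unknown onset time k; returns (statistic, onset)."""
    n = len(residuals)
    best, best_k = 0.0, None
    for k in range(n):
        tail = residuals[k:]
        s = sum(tail)
        # Log-likelihood ratio with the jump magnitude maximized out.
        stat = s * s / (2.0 * sigma**2 * len(tail))
        if stat > best:
            best, best_k = stat, k
    return best, best_k

nominal = [0.1, -0.2, 0.05, 0.1, -0.1]
failed = nominal + [1.9, 2.1, 2.0, 1.8]   # hard-over bias from sample 5 on
stat, k = glr_mean_jump(failed)
print(stat > 5.0, k)  # True 5
```

Comparing such statistics across the sensor and actuator detectors, and taking the maximum, is how the record describes failure-type determination.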

  27. A Comprehensive Reliability Methodology for Assessing Risk of Reusing Failed Hardware Without Corrective Actions with and Without Redundancy

    NASA Technical Reports Server (NTRS)

    Putcha, Chandra S.; Mikula, D. F. Kip; Dueease, Robert A.; Dang, Lan; Peercy, Robert L.

    1997-01-01

    This paper deals with the development of a reliability methodology to assess the consequences of using hardware, without failure analysis or corrective action, that has previously demonstrated that it did not perform per specification. The subject of this paper arose from the need to provide a detailed probabilistic analysis to calculate the change in probability of failures with respect to the base or non-failed hardware. The methodology used for the analysis is primarily based on principles of Monte Carlo simulation. The random variables in the analysis are: Maximum Time of Operation (MTO) and operation Time of each Unit (OTU) The failure of a unit is considered to happen if (OTU) is less than MTO for the Normal Operational Period (NOP) in which this unit is used. NOP as a whole uses a total of 4 units. Two cases are considered. in the first specialized scenario, the failure of any operation or system failure is considered to happen if any of the units used during the NOP fail. in the second specialized scenario, the failure of any operation or system failure is considered to happen only if any two of the units used during the MOP fail together. The probability of failure of the units and the system as a whole is determined for 3 kinds of systems - Perfect System, Imperfect System 1 and Imperfect System 2. in a Perfect System, the operation time of the failed unit is the same as that of the MTO. In an Imperfect System 1, the operation time of the failed unit is assumed as 1 percent of the MTO. In an Imperfect System 2, the operation time of the failed unit is assumed as zero. in addition, simulated operation time of failed units is assumed as 10 percent of the corresponding units before zero value. Monte Carlo simulation analysis is used for this study. Necessary software has been developed as part of this study to perform the reliability calculations. 
The results of the analysis showed that the predicted change in failure probability (P(sub F)) for the previously failed units is as high as 49 percent above the baseline (perfect system) for the worst case. The predicted change in system P(sub F) for the previously failed units is as high as 36 percent for single-unit failure without any redundancy. For redundant systems with dual-unit failure, the predicted change in P(sub F) for the previously failed units is as high as 16 percent. These results will help management make decisions regarding the consequences of using previously failed units without adequate failure analysis or corrective action.
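The two failure scenarios described above can be sketched with a small Monte Carlo simulation. The lifetime distribution and every numeric parameter below are illustrative assumptions, not values from the paper:

```python
import random

def simulate(n_trials=100_000, n_units=4, seed=42):
    """Monte Carlo sketch: a unit fails when its operating time (OTU)
    falls short of the required maximum time of operation (MTO)."""
    rng = random.Random(seed)
    any_fail = 0   # scenario 1: system fails if any unit fails
    two_fail = 0   # scenario 2: system fails only if >= 2 units fail
    for _ in range(n_trials):
        mto = rng.uniform(90.0, 110.0)            # required time (assumed)
        failures = sum(
            rng.expovariate(1 / 400.0) < mto      # OTU draw (assumed lifetime)
            for _ in range(n_units)
        )
        any_fail += failures >= 1
        two_fail += failures >= 2
    return any_fail / n_trials, two_fail / n_trials

p1, p2 = simulate()
```

By construction the single-failure criterion (scenario 1) can never yield a lower system failure probability than the two-failure criterion (scenario 2), which mirrors the redundancy effect reported in the results.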

  8. Reliability analysis of the F-8 digital fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goodman, H. A.

    1981-01-01

    The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.

  9. To the systematization of failure analysis for perturbed systems (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haller, U.

    1974-01-01

    The paper investigates the reliable functioning of complex technical systems. Of main importance is the question of how the functioning of technical systems which may fail or whose design still has some faults can be determined in the very earliest planning stages. The present paper is to develop a functioning schedule and to look for possible methods of systematic failure analysis of systems with stochastic failures. (RW/AK)

  10. Independent Orbiter Assessment (IOA): Analysis of the auxiliary power unit

    NASA Technical Reports Server (NTRS)

    Barnes, J. E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Auxiliary Power Unit (APU). The APUs are required to provide power to the Orbiter hydraulics systems during ascent and entry flight phases for aerosurface actuation, main engine gimballing, landing gear extension, and other vital functions. For analysis purposes, the APU system was broken down into ten functional subsystems. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. A preponderance of 1/1 criticality items were related to failures that allowed the hydrazine fuel to escape into the Orbiter aft compartment, creating a severe fire hazard, and failures that caused loss of the gas generator injector cooling system.

  11. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran☆

    PubMed Central

    Nouri.Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-01-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure. PMID:26779433

  12. Fault tree analysis of failure cause of crushing plant and mixing bed hall at Khoy cement factory in Iran.

    PubMed

    Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad

    2014-04-01

    Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failure.
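The fault-tree roll-up in this record can be sketched as an OR gate over independent basic events with constant (exponential) failure rates. The rates below are hypothetical, chosen only so the per-subsystem probabilities land near the reported 73% and 64% figures; the study's 95% department-level value comes from a fuller tree, so this two-event gate is not expected to reproduce it:

```python
import math

def p_fail(rate_per_hour, hours):
    """P(failure by time t) for an exponentially distributed lifetime."""
    return 1.0 - math.exp(-rate_per_hour * hours)

def or_gate(probs):
    """Top-event probability for an OR gate over independent basic events."""
    p_ok = 1.0
    for p in probs:
        p_ok *= 1.0 - p
    return 1.0 - p_ok

t = 200.0  # operating interval from the study, in hours
# Hypothetical constant failure rates per subsystem (per hour).
rates = {"crushing": 6.5e-3, "conveyor": 5.1e-3}
top = or_gate(p_fail(r, t) for r in rates.values())
```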

  13. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T2 statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
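A minimal sketch of this style of PCA monitoring, on synthetic data standing in for the FASSIP sensor channels. The channel count, number of retained components, empirical control limit, and injected fault are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "normal operation" data: 8 sensor channels driven by
# 3 latent process variables plus measurement noise.
latent = rng.normal(size=(500, 3))
loadings = rng.normal(size=(3, 8))
train = latent @ loadings + 0.05 * rng.normal(size=(500, 8))

mean, std = train.mean(axis=0), train.std(axis=0)
Xs = (train - mean) / std
# Principal components from the SVD of the scaled training data.
_, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 3
P = Vt[:k].T                         # retained loadings (8 x 3)
lam = (s[:k] ** 2) / (len(Xs) - 1)   # retained component variances

def spe(x):
    """Squared prediction error: residual after PCA reconstruction."""
    xs = (x - mean) / std
    resid = xs - (xs @ P) @ P.T
    return float(resid @ resid)

def t2(x):
    """Hotelling's T^2 statistic in the retained subspace."""
    scores = ((x - mean) / std) @ P
    return float((scores ** 2 / lam).sum())

# Empirical 99% control limit from normal data (an assumption; analytic
# SPE limits are also common in the literature).
spe_limit = float(np.quantile([spe(x) for x in train], 0.99))

faulty = train[0].copy()
faulty[4] += 10 * std[4]             # inject a biased-sensor fault
```

A biased sensor breaks the learned cross-channel correlation, so its sample lands far outside the retained subspace and the SPE exceeds the control limit.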

  14. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although the single failure mode issue can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method.
The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural system under failure correlations.
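As one concrete piece of the pipeline above, a fuzzy weighted geometric mean RPN for a single failure mode might look like the following sketch; the triangular ratings, the weights, and the centroid defuzzification are invented for illustration, not taken from the paper:

```python
# Triangular fuzzy ratings (a, m, b) for severity, occurrence, detection;
# the ratings and weights below are invented for illustration.
severity, occurrence, detection = (6, 7, 8), (3, 4, 5), (5, 6, 7)
weights = [0.4, 0.3, 0.3]

def fuzzy_wgm(factors, weights):
    """Weighted geometric mean applied endpoint-wise to (a, m, b) triples."""
    total = sum(weights)
    out = [1.0, 1.0, 1.0]
    for triple, w in zip(factors, weights):
        for i, v in enumerate(triple):
            out[i] *= v ** (w / total)
    return tuple(out)

def defuzzify(tri):
    """Centroid of a triangular fuzzy number."""
    return sum(tri) / 3.0

frpn = fuzzy_wgm([severity, occurrence, detection], weights)
crisp = defuzzify(frpn)
```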

  15. How to apply clinical cases and medical literature in the framework of a modified "failure mode and effects analysis" as a clinical reasoning tool--an illustration using the human biliary system.

    PubMed

    Wong, Kam Cheong

    2016-04-06

    Clinicians use various clinical reasoning tools, such as the Ishikawa diagram, to enhance their clinical experience and reasoning skills. Failure mode and effects analysis, which is an engineering methodology in origin, can be modified and applied to provide inputs into an Ishikawa diagram. The human biliary system is used to illustrate a modified failure mode and effects analysis. The anatomical and physiological processes of the biliary system are reviewed. Failure is defined as an abnormality caused by infective, inflammatory, obstructive, malignant, autoimmune and other pathological processes. The potential failures, their effect(s), main clinical features, and the investigations that can help a clinician diagnose at each anatomical part and physiological process are reviewed and documented in a modified failure mode and effects analysis table. Relevant medical and surgical cases are retrieved from the medical literature and woven into the table. A total of 80 clinical cases relevant to the modified failure mode and effects analysis for the human biliary system have been reviewed and woven into a designated table. The table is the backbone and framework for further expansion. Reviewing and updating the table is an iterative and continual process. The relevant clinical features in the modified failure mode and effects analysis are then extracted and included in the relevant Ishikawa diagram. This article illustrates an application of engineering methodology in medicine, and it sows the seeds of potential cross-pollination between engineering and medicine. Establishing a modified failure mode and effects analysis can be a teamwork project, a self-directed learning process, or a mix of both. A modified failure mode and effects analysis can be deployed to obtain inputs for an Ishikawa diagram, which in turn can be used to enhance clinical experience and clinical reasoning skills for clinicians, medical educators, and students.

  16. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised with large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.

  17. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
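The 7-out-of-7 and 3-out-of-7 models mentioned above are instances of the standard k-out-of-n reliability calculation, sketched below; the constant per-machine failure rate is an assumption, not the measured VAXcluster rate:

```python
from math import comb, exp

def k_out_of_n(n, k, p):
    """Probability that at least k of n independent machines are up."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative constant per-machine failure rate (failures per hour);
# this is an assumption, not a value from the study.
RATE = 0.04

def expected_reward(t_hours, k):
    """Reliability of a k-out-of-7 cluster at time t, ignoring repair."""
    p_up = exp(-RATE * t_hours)   # single-machine survival probability
    return k_out_of_n(7, k, p_up)
```

Requiring all seven machines (k = 7) makes reliability the product of the individual survival probabilities, which is why the 7-out-of-7 model degrades so much faster than the 3-out-of-7 model.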

  18. Orbiter subsystem hardware/software interaction analysis. Volume 8: Forward reaction control system

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The results of the orbiter hardware/software interaction analysis for the aft reaction control system are presented. The interaction between hardware failure modes and software is examined in order to identify associated issues and risks. All orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are discussed.

  19. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems and studying the underlying network model, the interactions and relationships between the networks, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the effect of cascading failures and give a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in the interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
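A toy version of the cascading-failure process on two one-to-one coupled random networks can be sketched as follows. The network size, average degree, attack fractions, and the mutual-giant-component stopping rule are illustrative choices, not the paper's exact model:

```python
import random

def giant_component(nodes, edges):
    """Largest connected component of `nodes` under `edges` (union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:
            parent[find(u)] = find(v)
    comps = {}
    for v in nodes:
        comps.setdefault(find(v), set()).add(v)
    return max(comps.values(), key=len) if comps else set()

def cascade(n, edges_a, edges_b, attack_fraction, seed=1):
    """One cascading-failure realization on two one-to-one coupled networks:
    a node stays functional only while it is in the giant component of both."""
    rng = random.Random(seed)
    alive = set(range(n)) - set(rng.sample(range(n), int(attack_fraction * n)))
    while True:
        in_a = giant_component(alive, edges_a)   # functional in network A
        in_b = giant_component(in_a, edges_b)    # ... and in network B
        if in_b == alive:                        # no further node losses
            return len(alive) / n
        alive = in_b

def er_edges(n, avg_degree, rng):
    """Erdos-Renyi edge list with the given expected average degree."""
    p = avg_degree / (n - 1)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

n = 300
g = random.Random(0)
frac_mild = cascade(n, er_edges(n, 4, g), er_edges(n, 4, g), 0.2)
frac_severe = cascade(n, er_edges(n, 4, g), er_edges(n, 4, g), 0.95)
```

Sweeping `attack_fraction` and averaging over realizations traces out the threshold behavior the record describes: below a critical surviving fraction, the mutual giant component collapses to nearly nothing.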

  20. [Examination of safety improvement by failure record analysis that uses reliability engineering].

    PubMed

    Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo

    2010-08-20

    Whether maintenance checks of medical systems, including the start-of-work check and the end-of-work check, are effective for preventive maintenance and safety improvement was verified. In this research, data on device failures in multiple facilities were collected, and the trouble-repair records were analyzed with reliability engineering techniques. Data on the systems used in eight hospitals (8 general systems, 6 angiography systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems) were analyzed. The data collection period was the nine months from April to December 2008. The items analyzed included: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) the number of failures found by the morning check; (5) failure occurrence time by modality. The classification of breakdowns per device, their incidence, and their tendencies could be understood by introducing reliability engineering. Analysis, evaluation, and feedback on the failure generation history are useful to keep downtime to a minimum and to ensure safety.
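The MTBF, MTTR, and availability quantities in items (1)-(3) follow directly from a trouble-repair log; the log entries below are hypothetical, constructed only to match the study's April-December 2008 window:

```python
from datetime import datetime, timedelta

# Hypothetical trouble-repair log for one modality:
# (failure detected, repair completed).
repairs = [
    (datetime(2008, 4, 3, 9, 0),   datetime(2008, 4, 3, 11, 30)),
    (datetime(2008, 6, 18, 14, 0), datetime(2008, 6, 19, 8, 0)),
    (datetime(2008, 11, 2, 7, 45), datetime(2008, 11, 2, 9, 15)),
]
observation = datetime(2008, 12, 31) - datetime(2008, 4, 1)

downtime = sum((up - down for down, up in repairs), timedelta())
mttr = downtime / len(repairs)                    # mean time to repair
mtbf = (observation - downtime) / len(repairs)    # mean up-time per failure
availability = 1 - downtime / observation
```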

  1. Failure modes and effects analysis automation

    NASA Technical Reports Server (NTRS)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

    A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge-based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single-user workstations and, although not designed to replace conventional FMEA, is expected to decrease the time required to perform the analysis by many man-years.

  2. Mod-OA wind turbine generator - Failure modes and effects analysis

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lali, Vincent R.

    1990-01-01

    The results of a failure modes and effects analysis (FMEA) conducted for the Mod-OA wind turbine generator are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D (remote). Many changes were made to the hardware as a result of this analysis, the most significant being the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.

  3. A global analysis approach for investigating structural resilience in urban drainage systems.

    PubMed

    Mugume, Seith N; Gomez, Diego E; Fu, Guangtao; Farmani, Raziyeh; Butler, David

    2015-09-15

    Building resilience in urban drainage systems requires consideration of a wide range of threats that contribute to urban flooding. Existing hydraulic reliability based approaches have focused on quantifying functional failure caused by extreme rainfall or increases in dry weather flows that lead to hydraulic overloading of the system. Such approaches, however, do not explore the full system failure scenario space, because they exclude crucial threats such as equipment malfunction, pipe collapse and blockage that can also lead to urban flooding. In this research, a new analytical approach based on global resilience analysis is investigated and applied to systematically evaluate the performance of an urban drainage system when subjected to a wide range of structural failure scenarios resulting from random cumulative link failure. Link failure envelopes, which represent the resulting loss of system functionality (impacts), are determined by computing the upper and lower limits of the simulation results for total flood volume (failure magnitude) and average flood duration (failure duration) at each link failure level. A new resilience index that combines the failure magnitude and duration into a single metric is applied to quantify system residual functionality at each considered link failure level. With this approach, resilience has been tested and characterised for an existing urban drainage system in Kampala city, Uganda. In addition, the effectiveness of potential adaptation strategies in enhancing its resilience to cumulative link failure has been tested. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
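One plausible form of such a resilience index combines a failure-magnitude ratio and a failure-duration ratio into a single residual-functionality score. The combination rule and all numbers below are assumptions for illustration, not necessarily the paper's exact metric:

```python
def resilience_index(flood_volume, total_inflow, flood_duration, sim_duration):
    """Residual functionality in [0, 1]: 1 minus a severity score that
    multiplies the failure-magnitude and failure-duration ratios."""
    magnitude = flood_volume / total_inflow    # fraction of inflow flooded
    duration = flood_duration / sim_duration   # fraction of time in flood
    return 1.0 - magnitude * duration

# Illustrative numbers for one link-failure level (m^3 and hours).
res = resilience_index(flood_volume=1200.0, total_inflow=10000.0,
                       flood_duration=2.5, sim_duration=12.0)
```

Evaluating this index at each link-failure level, over the upper and lower simulation envelopes, yields the residual-functionality curves the record describes.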

  4. X-framework: Space system failure analysis framework

    NASA Astrophysics Data System (ADS)

    Newman, John Steven

    Space program and space system failures result in financial losses in the multi-hundred-million-dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity; the loss of critical scientific, commercial and/or national defense capabilities; and the loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what but also the why. In addition, there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures, that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event, beginning with the proximate cause, extending to the directly related work or operational processes, and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management.
The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, direct and indirect causation of failures, and generating better and more consistent reports. Through this approach failures can be more fully understood, existing programs can be evaluated and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned, and contribute to knowledge management and organizational learning.

  5. Determination of UAV pre-flight Checklist for flight test purpose using qualitative failure analysis

    NASA Astrophysics Data System (ADS)

    Hendarko; Indriyanto, T.; Syardianto; Maulana, F. A.

    2018-05-01

    Safety aspects are of paramount importance in flight, especially in the flight test phase. Before performing any flight test of either a manned or unmanned aircraft, one should include pre-flight checklists as a required safety document in the flight test plan. This paper reports on the development of a new approach for determining pre-flight checklists for UAV flight tests based on failure analysis of the aircraft. Lapan's LSA (Light Surveillance Aircraft) is used as a case study, assuming this aircraft has been transformed into an unmanned version. Failure analysis is performed on the LSA using the fault tree analysis (FTA) method. The analysis is focused on the propulsion system and the flight control system, since failure of these systems would lead to catastrophic events. The pre-flight checklist of the UAV is then constructed from the basic causes obtained from the failure analysis.

  6. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  7. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  8. Failure Mode and Effects Analysis (FMEA) Introductory Overview

    DTIC Science & Technology

    2012-06-14

    This DTIC record consists of briefing charts titled "Failure Mode and Effects Analysis (FMEA) Introductory Overview," prepared by the TARDEC Systems Engineering Risk Management Team (POC: Kadry Rizk or Gregor Ratajczak), covering the period 01-05-2012 to 23-05-2012. The briefing is an introductory overview concerning the use and benefits of FMEA.

  9. Space tug propulsion system failure mode, effects and criticality analysis

    NASA Technical Reports Server (NTRS)

    Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.

    1972-01-01

    For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
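The criticality number defined above (expected mission failures per one million missions) is a straightforward conversion from failure probability accumulated over the three mission phases; the per-phase probabilities below are hypothetical, chosen only to fall inside the reported 1-2990 range:

```python
# Hypothetical per-phase failure probabilities for one critical component,
# one entry per mission phase defined in the study.
phase_p = {
    "launch_to_separation": 4.0e-4,
    "separation_to_redocking": 1.1e-3,
    "redocking_to_landing": 2.2e-4,
}

survive = 1.0
for p in phase_p.values():
    survive *= 1.0 - p
criticality = round((1.0 - survive) * 1_000_000)  # failures per 10^6 missions
```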

  10. Operations analysis (study 2.1). Contingency analysis. [of failure modes anticipated during space shuttle upper stage planning

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Future operational concepts for the space transportation system were studied in terms of space shuttle upper stage failure contingencies possible during deployment, retrieval, or space servicing of automated satellite programs. Problems anticipated during mission planning were isolated using a modified 'fault tree' technique normally used in safety analyses. A comprehensive space servicing hazard analysis is presented which classifies possible failure modes under the categories of catastrophic collision, failure to rendezvous and dock, servicing failure, and failure to undock. The failure contingencies defined are to be taken into account during design of the upper stage.

  11. Risk management of key issues of FPSO

    NASA Astrophysics Data System (ADS)

    Sun, Liping; Sun, Hai

    2012-12-01

    Risk analysis of key systems has become a growing topic of late because of the development of offshore structures. Equipment failures of the offloading system and fire accidents were analyzed based on the features of the floating production, storage and offloading (FPSO) unit. Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) methods were examined based on information already researched in modules of Relex Reliability Studio (RRS). Given the shortage of failure cases and statistical data, equipment failures were also analyzed qualitatively by establishing a fault tree and its Boolean structure function, and risk control measures were examined. Failure modes of fire accidents were classified according to the different areas of fire occurrence during the FMEA process, using risk priority number (RPN) methods to evaluate their severity rank. The qualitative FTA gave basic insight into how the failure modes of FPSO offloading form, and the fire FMEA gave the priorities and suggested processes. The research has practical importance for the security analysis problems of FPSO.
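    A fault-tree Boolean structure function of the kind mentioned above composes AND and OR gates over basic events. The top event and gate layout below are hypothetical, chosen only to illustrate the composition; the probability formulas assume independent basic events.

```python
def and_gate(*probs):
    """Probability that all independent input events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: offloading fails if the transfer hose ruptures,
# or if both the pump and its backup fail together.
p_top = or_gate(0.001, and_gate(0.01, 0.02))
```

    The same gate structure, with probabilities replaced by booleans, gives the qualitative structure function used when statistical data are scarce.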

  12. Materials testing of the IUS techroll seal material

    NASA Technical Reports Server (NTRS)

    Nichols, R. L.; Hall, W. B.

    1984-01-01

    As a part of the investigation of the control system failure Inertial Upper Stage on IUS-1 flight to position a Tracking and Data Relay Satellite (TDRS) in geosynchronous orbit, the materials utilized in the techroll seal are evaluated for possible failure models. Studies undertaken included effect of temperature on the strength of the system, effect of fatigue on the strength of the system, thermogravimetric analysis, thermomechanical analysis, differential scanning calorimeter analysis, dynamic mechanical analysis, and peel test. The most likely failure mode is excessive temperature in the seal. In addition, the seal material is susceptible to fatigue damage which could be a contributing factor.

  13. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of the virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failures and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining both software rejuvenations at VM and VMM in a wise manner. PMID:25165732
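    The steady-state availability and downtime-per-year figures the paper reports are related by elementary formulas; the MTTF/MTTR numbers below are illustrative, not outputs of the SRN model.

```python
def steady_state_availability(mttf_hours, mttr_hours):
    """Long-run fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(availability):
    """Expected downtime per year (8760 hours in a year)."""
    return (1.0 - availability) * 8760.0

a = steady_state_availability(999.0, 1.0)  # illustrative values
d = downtime_hours_per_year(a)             # "three nines" => ~8.8 h/year
```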

  14. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes a SLFs-induced vulnerability analysis method from three aspects: identification of critical locations, comparisons of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can be also easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
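    Of the three SLF models, the circle-shaped variant is the simplest to sketch: every component within a given radius of an impact point fails, while components outside it do not directly fail. The component names and coordinates below are hypothetical.

```python
import math

def circle_shaped_failure(components, center, radius):
    """components: {name: (x, y)}. Return the set of components that
    fail because they lie within `radius` of `center`."""
    cx, cy = center
    return {name for name, (x, y) in components.items()
            if math.hypot(x - cx, y - cy) <= radius}

stations = {"A": (0.0, 0.0), "B": (3.0, 4.0), "C": (10.0, 10.0)}
failed = circle_shaped_failure(stations, center=(0.0, 0.0), radius=5.0)
```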

  15. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
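    The paper derives explicit expressions, but the finite-time failure probability it defines can also be estimated by simulation, which makes the definition concrete: the probability that a risk process reaches a critical level within a finite horizon. The drift, volatility, and critical level below are arbitrary illustrative choices, not the paper's models.

```python
import random

def finite_time_failure_prob(n_paths=20000, horizon=50,
                             level=10.0, drift=0.2, vol=1.0, seed=1):
    """Monte Carlo estimate of the probability that a Gaussian
    random-walk risk process reaches `level` within `horizon` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(horizon):
            x += rng.gauss(drift, vol)
            if x >= level:     # failure: critical risk level reached
                hits += 1
                break
    return hits / n_paths
```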

  16. System safety in Stirling engine development

    NASA Technical Reports Server (NTRS)

    Bankaitis, H.

    1981-01-01

    The DOE/NASA Stirling Engine Project Office has required that contractors make safety considerations an integral part of all phases of the Stirling engine development program. As an integral part of each engine design subtask, analyses are evolved to determine possible modes of failure. The accepted system safety analysis techniques (Fault Tree, FMEA, Hazards Analysis, etc.) are applied in various degrees of extent at the system, subsystem and component levels. The primary objectives are to identify critical failure areas, to enable removal of susceptibility to such failures or their effects from the system and to minimize risk.

  17. Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications

    DTIC Science & Technology

    1992-09-01

    STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD-91. FMEA utilizes inductive logic in a...contrasts with a Fault Tree Analysis (FTA) which utilizes deductive logic in a "top down" approach. In FTA, a system failure is assumed and traced down...Analysis (FTA) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach

  18. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  19. Fuzzy-based failure mode and effect analysis (FMEA) of a hybrid molten carbonate fuel cell (MCFC) and gas turbine system for marine propulsion

    NASA Astrophysics Data System (ADS)

    Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun

    2017-10-01

    This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants of the FMEA rely on their subjective and qualitative experiences, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system has 35 components and 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machine. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
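    The crisp risk priority number that the fuzzy RPN is validated against is the standard FMEA product of three ratings, each usually on a 1-10 scale. The ratings below are hypothetical, not values from the study.

```python
def rpn(severity, occurrence, detection):
    """Crisp FMEA risk priority number (each factor rated 1-10)."""
    return severity * occurrence * detection

# Hypothetical ratings for two failure modes of the hybrid system.
stack_rpn = rpn(severity=8, occurrence=4, detection=5)    # 160
turbine_rpn = rpn(severity=6, occurrence=3, detection=4)  # 72
```

    Modes are then ranked by RPN; a fuzzy RPN replaces the crisp ratings with fuzzy sets to express the participants' uncertainty.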

  20. 3-Dimensional Root Cause Diagnosis via Co-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ziming; Lan, Zhiling; Yu, Li

    2012-01-01

    With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log only contains limited diagnosis information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.

  1. Independent Orbiter Assessment (IOA): Analysis of the active thermal control subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, S. K.; Parkman, W. E.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Active Thermal Control Subsystem (ATCS) are documented. The major purpose of the ATCS is to remove the heat generated during normal Shuttle operations from the Orbiter systems and subsystems. The four major components of the ATCS contributing to the heat removal are: Freon Coolant Loops; Radiator and Flow Control Assembly; Flash Evaporator System; and Ammonia Boiler System. In order to perform the analysis, the IOA process utilized available ATCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 310 failure modes analyzed, 101 were determined to be PCIs.

  2. Use of failure mode and effects analysis for proactive identification of communication and handoff failures from organ procurement to transplantation.

    PubMed

    Steinberger, Dina M; Douglas, Stephen V; Kirschbaum, Mark S

    2009-09-01

    A multidisciplinary team from the University of Wisconsin Hospital and Clinics transplant program used failure mode and effects analysis to proactively examine opportunities for communication and handoff failures across the continuum of care from organ procurement to transplantation. The team performed a modified failure mode and effects analysis that isolated the multiple linked, serial, and complex information exchanges occurring during the transplantation of one solid organ. Failure mode and effects analysis proved effective for engaging a diverse group of invested stakeholders in analysis and discussion of opportunities to improve the system's resilience for avoiding errors during a time-pressured and complex process.

  3. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  4. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
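    The linear-combination result stated in the abstract has a direct sketch: weight the expected loss of each mutually exclusive failure mode by the conditional probability that it initiates failure. The mode probabilities and losses below are hypothetical.

```python
def expected_loss_given_failure(modes):
    """modes: iterable of (conditional_probability, expected_loss) pairs
    for mutually exclusive failure modes; probabilities should sum to 1."""
    return sum(p * loss for p, loss in modes)

# Hypothetical modes: (P(mode initiates failure), expected loss of mode).
modes = [(0.6, 10_000.0), (0.3, 50_000.0), (0.1, 200_000.0)]
loss = expected_loss_given_failure(modes)  # 41000.0
```

    Note how the rare, expensive mode dominates: a 10% mode contributes almost half the expected loss, which is the variability the classical risk equation hides.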

  5. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.

  6. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
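    The "simple and disconnected" exponential failure model the report cautions about takes one line to sample, which is why it is so commonly assumed; a simulation framework like the one described replaces it with richer, dependent failure patterns. The rate and sample count below are arbitrary.

```python
import random

def sample_failure_times(rate_per_hour, n, seed=0):
    """Draw n i.i.d. exponential time-to-failure samples: the simple
    closed-form model the report argues is insufficient end to end."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_per_hour) for _ in range(n)]

# Illustrative: components with an MTTF of one year (8760 hours).
samples = sample_failure_times(rate_per_hour=1.0 / 8760.0, n=1000)
```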

  7. A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.

    ERIC Educational Resources Information Center

    Stephens, Kent G.

    Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…

  8. Sensitivity analysis by approximation formulas - Illustrative examples. [reliability analysis of six-component architectures

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.

  9. Product Support Manager Guidebook

    DTIC Science & Technology

    2011-04-01

    package is being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA)...Analysis (LORA) Condition Based Maintenance + (CBM+) Fault Tree Analysis (FTA) Failure Mode, Effects, and Criticality Analysis (FMECA) Maintenance Task...Reporting and Corrective Action System (FRACAS), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Maintenance Task Analysis (MTA

  10. Independent Orbiter Assessment (IOA): Analysis of the Orbiter Experiment (OEX) subsystem

    NASA Technical Reports Server (NTRS)

    Compton, J. M.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Experiments hardware. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The Orbiter Experiments (OEX) Program consists of a multiple set of experiments for the purpose of gathering environmental and aerodynamic data to develop more accurate ground models for Shuttle performance and to facilitate the design of future spacecraft. This assessment only addresses currently manifested experiments and their support systems. Specifically this list consists of: Shuttle Entry Air Data System (SEADS); Shuttle Upper Atmosphere Mass Spectrometer (SUMS); Forward Fuselage Support System for OEX (FFSSO); Shuttle Infrared Leeside Temperature Sensing (SILTS); Aerodynamic Coefficient Identification Package (ACIP); and Support System for OEX (SSO). There are only two potential critical items for the OEX, since the experiments only gather data for analysis post-mission and are totally independent systems except for power. Failure of any experiment component usually only causes a loss of experiment data and in no way jeopardizes the crew or mission.

  11. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
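    A minimal sketch of the chi-squared residual test underlying this failure-detection concept: sum the squared, noise-normalized differences between measured port pressures and the aerodynamic model's predictions, and flag a failure when the sum exceeds a threshold. The noise level and threshold below are illustrative, not the HI-FADS values.

```python
def chi_squared_statistic(measured, predicted, sigma):
    """Sum of squared, noise-normalized residuals between measured
    port pressures and the model's predicted pressures."""
    return sum(((m - p) / sigma) ** 2 for m, p in zip(measured, predicted))

def failure_detected(measured, predicted, sigma, threshold):
    """Flag a system or port failure when residuals exceed the threshold."""
    return chi_squared_statistic(measured, predicted, sigma) > threshold
```

    Dropping one port at a time and re-testing localizes an individual-port failure, which is what makes the redundant orifice matrix useful.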

  12. Mod 1 wind turbine generator failure modes and effects analysis

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A failure modes and effects analysis (FMEA) was directed primarily at identifying those critical failure modes that would be hazardous to life or would result in major damage to the system. Each subsystem was approached from the top down, and broken down to successive lower levels where it appeared that the criticality of the failure mode warranted more detail analysis. The results were reviewed by specialists from outside the Mod 1 program, and corrective action taken wherever recommended.

  13. Operational modes, health, and status monitoring

    NASA Astrophysics Data System (ADS)

    Taljaard, Corrie

    2016-08-01

    System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.

  14. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    NASA Technical Reports Server (NTRS)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does capture all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability analysis metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. Then, utilizing a heuristic developed for review of the PRACA data, we determine which reports identify a credible failure. These data are then used to determine inter-arrival times to estimate a metric for repairable component or LRU reliability. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for performing repairable system analysis. The result is an effective and concise estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
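    The inter-arrival-time step described above reduces to differencing the timestamps of credible failures and averaging the gaps; the timestamps below are hypothetical, not PRACA data.

```python
def mean_time_between_failures(failure_times):
    """Estimate MTBF from a sorted list of credible-failure timestamps
    (e.g., hours since the start of observation)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical credible-failure timestamps mined from problem reports.
mtbf = mean_time_between_failures([0.0, 120.0, 300.0, 540.0])  # 180.0
```

    The resulting gaps feed whichever repairable-system model is chosen; the averaging here is the simplest such metric.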

  15. User-Defined Material Model for Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

    2006-01-01

    An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis, including failure initiation and material degradation, are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.

  16. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI scanner capable of traveling between, and into, HDR brachytherapy and external beam radiation therapy vaults. The system will provide online MR images immediately prior to radiation therapy; the MR images will be registered to a planning image and used for image guidance. To address system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each of these failure modes. Each possible failure was assigned severity, detectability, and occurrence scores. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs; each main input consists of 5-10 sub-inputs, and tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.

  17. Weighing of risk factors for penetrating keratoplasty graft failure: application of Risk Score System.

    PubMed

    Tourkmani, Abdo Karim; Sánchez-Huerta, Valeria; De Wit, Guillermo; Martínez, Jaime D; Mingo, David; Mahillo-Fernández, Ignacio; Jiménez-Alfaro, Ignacio

    2017-01-01

    To analyze the relationship between the score obtained in the Risk Score System (RSS) proposed by Hicks et al with penetrating keratoplasty (PKP) graft failure at 1y postoperatively and among each factor in the RSS with the risk of PKP graft failure using univariate and multivariate analysis. The retrospective cohort study had 152 PKPs from 152 patients. Eighteen cases were excluded from our study due to primary failure (10 cases), incomplete medical notes (5 cases) and follow-up less than 1y (3 cases). We included 134 PKPs from 134 patients stratified by preoperative risk score. Spearman coefficient was calculated for the relationship between the score obtained and risk of failure at 1y. Univariate and multivariate analyses were calculated for the impact of each risk factor included in the RSS on graft failure at 1y. Spearman coefficient showed statistically significant correlation between the score in the RSS and graft failure (P<0.05). Multivariate logistic regression analysis showed no statistically significant relationship (P>0.05) between diagnosis and lens status with graft failure. The relationship between the other risk factors studied and graft failure was significant (P<0.05), although the results for previous grafts and graft failure were unreliable. None of our patients had previous blood transfusion, thus, it had no impact. After the application of multivariate analysis techniques, some risk factors do not show the expected impact on graft failure at 1y.
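
    The correlation step in this study can be sketched as follows: a Spearman rank correlation between a preoperative risk score and a binary graft-failure outcome. The data, and the pure-Python rank computation, are illustrative only; they are not the study's data or software.

```python
def rankdata(xs):
    # Assign average ranks (1-based), handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

scores   = [1, 2, 2, 3, 4, 5, 6, 7]   # hypothetical RSS scores
failures = [0, 0, 0, 0, 1, 0, 1, 1]   # 1 = graft failed at 1 year
rho = spearman(scores, failures)       # positive: higher score, more failures
```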

  18. Weighing of risk factors for penetrating keratoplasty graft failure: application of Risk Score System

    PubMed Central

    Tourkmani, Abdo Karim; Sánchez-Huerta, Valeria; De Wit, Guillermo; Martínez, Jaime D.; Mingo, David; Mahillo-Fernández, Ignacio; Jiménez-Alfaro, Ignacio

    2017-01-01

    AIM To analyze the relationship between the score obtained in the Risk Score System (RSS) proposed by Hicks et al with penetrating keratoplasty (PKP) graft failure at 1y postoperatively and among each factor in the RSS with the risk of PKP graft failure using univariate and multivariate analysis. METHODS The retrospective cohort study had 152 PKPs from 152 patients. Eighteen cases were excluded from our study due to primary failure (10 cases), incomplete medical notes (5 cases) and follow-up less than 1y (3 cases). We included 134 PKPs from 134 patients stratified by preoperative risk score. Spearman coefficient was calculated for the relationship between the score obtained and risk of failure at 1y. Univariate and multivariate analysis were calculated for the impact of every single risk factor included in the RSS over graft failure at 1y. RESULTS Spearman coefficient showed statistically significant correlation between the score in the RSS and graft failure (P<0.05). Multivariate logistic regression analysis showed no statistically significant relationship (P>0.05) between diagnosis and lens status with graft failure. The relationship between the other risk factors studied and graft failure was significant (P<0.05), although the results for previous grafts and graft failure was unreliable. None of our patients had previous blood transfusion, thus, it had no impact. CONCLUSION After the application of multivariate analysis techniques, some risk factors do not show the expected impact over graft failure at 1y. PMID:28393027

  19. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    PubMed Central

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
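
    The RPN-based scheduling rule in this abstract is directly computable: RPN is the product of occurrence, severity, and detectability scores, and modes at or above 125 go to monthly testing. The example failure modes and scores below are hypothetical, not the paper's actual ratings.

```python
failure_modes = {
    # name: (occurrence, severity, detectability), each scored 1-10
    "coordinate transform error": (3, 9, 6),
    "excess system latency":      (2, 7, 4),
    "leaf position drift":        (5, 8, 5),
}

def rpn(occurrence, severity, detectability):
    # Risk probability number: product of the three FMEA scores.
    return occurrence * severity * detectability

# Assign a QA frequency per the RPN >= 125 threshold from the paper.
schedule = {
    name: ("monthly" if rpn(*scores) >= 125 else "comprehensive")
    for name, scores in failure_modes.items()
}
```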

  20. A systems engineering approach to automated failure cause diagnosis in space power systems

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Faymon, Karl A.

    1987-01-01

    Automatic failure-cause diagnosis is a key element in the autonomous operation of space power systems such as the Space Station's. A rule-based diagnostic system has been developed for determining the cause of degraded performance. The knowledge required for such diagnosis is elicited from the system engineering process by using traditional failure analysis techniques. Symptoms, failures, causes, and detector information are represented with structured data; and diagnostic procedural knowledge is represented with rules. Detected symptoms instantiate failure modes and possible causes consistent with currently held beliefs about the likelihood of the cause. A diagnosis concludes with an explanation of the observed symptoms in terms of a chain of possible causes and subcauses.

  1. Availability Estimate of a Conceptual ESM System.

    DTIC Science & Technology

    1979-06-01

    affect mission operation. A functional block level failure modes and effects analysis (FMEA) performed on the filter resulted in an assessed failure rate...is based on an FMEA of failures that disable the function (see Appendix A). A further examination of the filter piece-parts reveals that the driver...Digital-to-analog converter DC Direct current DF Direction finding ESM Electronic Support Measures FMEA Failure modes and effects analysis FMPO

  2. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on the operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology to combine various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of faults in some components being followed by failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components and between failures and adverse environmental conditions.

  3. Independent Orbiter Assessment (IOA): Analysis of the life support and airlock support subsystems

    NASA Technical Reports Server (NTRS)

    Arbet, Jim; Duffy, R.; Barickman, K.; Saiidi, Mo J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Life Support System (LSS) and Airlock Support System (ALSS). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The LSS provides for the management of the supply water, collection of metabolic waste, management of waste water, smoke detection, and fire suppression. The ALSS provides water, oxygen, and electricity to support an extravehicular activity in the airlock.

  4. Effect of system workload on operating system reliability - A study on IBM 3081

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Rossetti, D. J.

    1985-01-01

    This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.

  5. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
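
    The decomposition step described here can be sketched with a singular value decomposition of a centered component-by-failure-mode count matrix; rows that are similar in the original matrix remain close in the transformed (principal component) coordinates. The matrix, component names, and counts below are invented for illustration, not the paper's accident data.

```python
import numpy as np

# Rows: components; columns: failure-mode occurrence counts from reports
# (hypothetical values).
X = np.array([
    [4.0, 0.0, 1.0],   # gearbox
    [5.0, 1.0, 0.0],   # transmission
    [0.0, 6.0, 2.0],   # rotor blade
])

Xc = X - X.mean(axis=0)           # center each failure-mode column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                # components in the PC coordinate system
explained = s**2 / np.sum(s**2)   # variance fraction per principal axis
```

Because the transform is orthogonal, the first two rows (gearbox, transmission), which share a failure-mode profile, stay closer to each other in PC space than either is to the rotor blade.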

  6. A structured analysis of in vitro failure loads and failure modes of fiber, metal, and ceramic post-and-core systems.

    PubMed

    Fokkinga, Wietske A; Kreulen, Cees M; Vallittu, Pekka K; Creugers, Nico H J

    2004-01-01

    This study sought to aggregate literature data on in vitro failure loads and failure modes of prefabricated fiber-reinforced composite (FRC) post systems and to compare them to those of prefabricated metal, custom-cast, and ceramic post systems. The literature was searched using MEDLINE from 1984 to 2003 for dental articles in English. Keywords used were (post or core or buildup or dowel) and (teeth or tooth). Additional inclusion/exclusion steps were conducted, each step by two independent readers: (1) Abstracts describing post-and-core techniques to reconstruct endodontically treated teeth and their mechanical and physical characteristics were included (descriptive studies or reviews were excluded); (2) articles that included FRC post systems were selected; (3) in vitro studies, single-rooted human teeth, prefabricated FRC posts, and composite as the core material were the selection criteria; and (4) failure loads and modes were extracted from the selected papers, and failure modes were dichotomized (distinction was made between "favorable failures," defined as reparable failures, and "unfavorable failures," defined as irreparable [root] fractures). The literature search revealed 1,984 abstracts. Included were 244, 42, and 12 articles in the first, second, and third selection steps, respectively. Custom-cast post systems showed higher failure loads than prefabricated FRC post systems, whereas ceramic showed lower failure loads. Significantly more favorable failures occurred with prefabricated FRC post systems than with prefabricated and custom-cast metal post systems. The variable "post system" had a significant effect on mean failure loads. FRC post systems more frequently showed favorable failure modes than did metal post systems.

  7. Parametric Testing of Launch Vehicle FDDR Models

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
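
    The parameter-generation idea described in this abstract — Monte Carlo sampling combined with n-factor combinatorial exploration — can be sketched for the 2-factor (pairwise) case: every pairwise setting of the discrete fault parameters appears in some run, with the remaining parameters filled in at random. The parameter names and values below are hypothetical, not from the ERIS model.

```python
import itertools
import random

random.seed(0)  # reproducible fill-in choices

params = {
    "engine_fault": ["none", "stuck_valve", "sensor_bias"],
    "phase": ["liftoff", "max_q", "staging"],
    "thrust_level": ["nominal", "degraded"],
}

names = list(params)
runs = []
# For each pair of parameters, enumerate every value combination
# (exhaustive 2-factor coverage); other parameters are sampled randomly.
for a, b in itertools.combinations(names, 2):
    for va, vb in itertools.product(params[a], params[b]):
        run = {n: random.choice(params[n]) for n in names}
        run[a], run[b] = va, vb
        runs.append(run)
```

This over-generates compared to a true covering array, but it guarantees that every pairwise combination occurs in at least one test run, which is the property n-factor exploration trades against full factorial cost.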

  8. Meteorological Satellites (METSAT) and Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    This Failure Modes and Effects Analysis (FMEA) is for the Advanced Microwave Sounding Unit-A (AMSU-A) instruments that are being designed and manufactured for the Meteorological Satellites Project (METSAT) and the Earth Observing System (EOS) integrated programs. The FMEA analyzes the design of the METSAT and EOS instruments as they currently exist. This FMEA is intended to identify METSAT and EOS failure modes and their effect on spacecraft-instrument and instrument-component interfaces. The prime objective of this FMEA is to identify potential catastrophic and critical failures so that susceptibility to the failures and their effects can be eliminated from the METSAT/EOS instruments.

  9. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  10. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
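
    The k-out-of-n comparison in this abstract can be illustrated with a simple binomial availability model: the probability that at least k of n independent machines are up, for some per-machine availability p. This is a simplification for illustration; the thesis derives its reward rates from measured error data, and the value of p below is hypothetical.

```python
from math import comb

def k_out_of_n(k, n, p):
    # P(at least k of n machines operational), machines independent,
    # each up with probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95  # assumed per-machine availability
rewards = {k: k_out_of_n(k, 7, p) for k in range(1, 8)}

# Relaxing the requirement from 7-of-7 to 3-of-7 working machines
# raises system availability substantially:
gain = rewards[3] - rewards[7]
```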

  11. Procedure for Failure Mode, Effects, and Criticality Analysis (FMECA)

    NASA Technical Reports Server (NTRS)

    1966-01-01

    This document provides guidelines for the accomplishment of Failure Mode, Effects, and Criticality Analysis (FMECA) on the Apollo program. It is a procedure for analysis of hardware items to determine those items contributing most to system unreliability and crew safety problems.

  12. Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    2011-01-01

    Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. 
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. 
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
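
    The fault-tree arithmetic this record describes — independent basic events propagated through AND-gates (product of probabilities) and OR-gates (complement of the product of complements), plus a risk-reduction importance computed by setting one event to perfect performance — can be sketched as follows. The toy tree structure and probability values are illustrative, not MSET's actual model.

```python
def and_gate(ps):
    # AND-gate: all inputs must fail; probabilities multiply.
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    # OR-gate: any input failing suffices; 1 - product of complements.
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def top_event(basic):
    # Toy structure: TOP = (A AND B) OR C.
    return or_gate([and_gate([basic["A"], basic["B"]]), basic["C"]])

basic = {"A": 0.2, "B": 0.3, "C": 0.05}   # hypothetical failure probabilities
baseline = top_event(basic)

def risk_reduction(name):
    # Risk reduction from improving one basic event to perfect
    # performance (probability of failure set to zero).
    perfect = dict(basic, **{name: 0.0})
    return baseline - top_event(perfect)

rrr = {name: risk_reduction(name) for name in basic}
```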

  13. Fault tree analysis for integrated and probabilistic risk analysis of drinking water systems.

    PubMed

    Lindhe, Andreas; Rosén, Lars; Norberg, Tommy; Bergstedt, Olof

    2009-04-01

    Drinking water systems are vulnerable and subject to a wide range of risks. To avoid sub-optimisation of risk-reduction options, risk analyses need to include the entire drinking water system, from source to tap. Such an integrated approach demands tools that are able to model interactions between different events. Fault tree analysis is a risk estimation tool with the ability to model interactions between events. Using fault tree analysis on an integrated level, a probabilistic risk analysis of a large drinking water system in Sweden was carried out. The primary aims of the study were: (1) to develop a method for integrated and probabilistic risk analysis of entire drinking water systems; and (2) to evaluate the applicability of Customer Minutes Lost (CML) as a measure of risk. The analysis included situations where no water is delivered to the consumer (quantity failure) and situations where water is delivered but does not comply with water quality standards (quality failure). Hard data as well as expert judgements were used to estimate probabilities of events and uncertainties in the estimates. The calculations were performed using Monte Carlo simulations. CML is shown to be a useful measure of risks associated with drinking water systems. The method presented provides information on risk levels, probabilities of failure, failure rates and downtimes of the system. This information is available for the entire system as well as its different sub-systems. Furthermore, the method enables comparison of the results with performance targets and acceptable levels of risk. The method thus facilitates integrated risk analysis and consequently helps decision-makers to minimise sub-optimisation of risk-reduction options.
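
    The Monte Carlo step in this study can be sketched as follows: uncertain event probabilities are sampled, propagated through a two-branch fault tree (quantity failure OR quality failure), and expressed as Customer Minutes Lost. All numbers and the uniform uncertainty bounds are invented stand-ins; the actual study used hard data and expert-judgement distributions for a real Swedish system.

```python
import random

random.seed(1)  # reproducible sampling

MINUTES_PER_YEAR = 525_600
CUSTOMERS = 100_000  # hypothetical number of customers served

def simulate(n=20_000):
    cml_samples = []
    for _ in range(n):
        # Sample uncertain annual unavailability of each failure branch.
        p_quantity = random.uniform(1e-4, 5e-4)  # no water delivered
        p_quality = random.uniform(5e-5, 2e-4)   # water out of spec
        # OR-gate: system fails if either branch fails.
        p_fail = 1 - (1 - p_quantity) * (1 - p_quality)
        cml_samples.append(p_fail * MINUTES_PER_YEAR * CUSTOMERS)
    return cml_samples

samples = simulate()
mean_cml = sum(samples) / len(samples)   # expected Customer Minutes Lost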

  14. Failure Modes and Effects Analysis (FMEA): A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Failure modes and effects analysis (FMEA) is a bottom-up analytical process that identifies process hazards, which helps managers understand vulnerabilities of systems, as well as assess and mitigate risk. It is one of several engineering tools and techniques available to program and project managers aimed at increasing the likelihood of safe and successful NASA programs and missions. This bibliography references 465 documents in the NASA STI Database that contain the major concepts, failure modes or failure analysis, in either the basic index of the major subject terms.

  15. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
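
    The Bayesian updating that DATMAN automates can be illustrated with the standard conjugate gamma prior for a Poisson failure rate: the posterior shape and rate parameters are the prior's plus the observed failure count and operating time. The prior and the observed data below are hypothetical, not DATMAN defaults.

```python
# Gamma(alpha, beta) prior on the component failure rate (failures/hour).
alpha_prior, beta_prior = 2.0, 1000.0   # prior mean = 2/1000 = 0.002 per hour

# New evidence: failures observed over accumulated operating hours.
failures, hours = 3, 4000.0

# Conjugate update for Poisson-distributed failure counts.
alpha_post = alpha_prior + failures
beta_post = beta_prior + hours
posterior_mean = alpha_post / beta_post   # updated failure-rate estimate
```

With these numbers the data pull the rate estimate down from 0.002/h toward the observed 3/4000 = 0.00075/h, landing at the precision-weighted compromise 5/5000 = 0.001/h.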

  16. Independent Orbiter Assessment (IOA): Assessment of the EPD and C/remote manipulator system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA analysis of the EPD and C/RMS hardware initially generated 345 failure mode worksheets and identified 117 Potential Critical Items (PCIs) before starting the assessment process. These analysis results were compared to the proposed NASA Post 51-L baseline of 132 FMEAs and 66 CIL items.

  17. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system of beam transport and core for an accelerator-driven system (ADS) is developed for analyses of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermal-hydraulics, and cladding failure analyses are performed using the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for cladding failure analysis by thermoelastic deformation and creep. Using the code system, beam transient analyses are performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds. In addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.

  18. Interrelation of structure and operational states in cascading failure of overloading lines in power grids

    NASA Astrophysics Data System (ADS)

    Xue, Fei; Bompard, Ettore; Huang, Tao; Jiang, Lin; Lu, Shaofeng; Zhu, Huaiying

    2017-09-01

    As the modern power system is expected to develop into a more intelligent and efficient version, i.e. the smart grid, or to become the central backbone of the energy internet for free energy interactions, security concerns related to cascading failures have been raised in view of their potentially catastrophic results. Research on topological analysis based on complex networks has made great contributions to revealing structural vulnerabilities of power grids, including cascading failure analysis. However, the existing literature, relying on inappropriate modeling assumptions, still cannot distinguish the effects of structure from those of operational state, and so cannot give meaningful guidance for system operation. This paper reveals the interrelation between network structure and operational states in cascading failure and gives a quantitative evaluation that integrates both perspectives. For structure analysis, cascading paths are identified by extended betweenness and quantitatively described by cascading drop and cascading gradient. The operational state along cascading paths is then described by loading level. With these two factors, the risk of cascading failure along a specific cascading path can be quantitatively evaluated, and the maximum cascading gradient over all possible cascading paths can serve as an overall metric of the entire power grid's susceptibility to cascading failure. The proposed method is tested and verified on the IEEE 30-bus and IEEE 118-bus systems; the simulation evidence presented in this paper suggests that the proposed model can identify the structural causes of cascading failure and is promising for guiding the protection of system operation in the future.

  19. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

    During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and the risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the failure mode occurrence to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).

  20. Matrix Failure Modes and Effects Analysis as a Knowledge Base for a Real Time Automated Diagnosis Expert System

    NASA Technical Reports Server (NTRS)

    Herrin, Stephanie; Iverson, David; Spukovska, Lilly; Souza, Kenneth A. (Technical Monitor)

    1994-01-01

    Failure Modes and Effects Analyses contain a wealth of information that can be used to create the knowledge base required for building automated diagnostic expert systems. A real-time monitoring and diagnosis expert system based on an actual NASA project's matrix Failure Modes and Effects Analysis was developed at NASA Ames Research Center. This system was first used as a case study to monitor the Research Animal Holding Facility (RAHF), a Space Shuttle payload that is used to house and monitor animals in orbit so that the effects of space flight and microgravity can be studied. The techniques developed for the RAHF monitoring and diagnosis expert system are general enough to be used for monitoring and diagnosis of a variety of other systems that undergo a matrix FMEA. This automated diagnosis system was successfully used on-line and validated on Space Shuttle flight STS-58, mission SLS-2, in October 1993.

  1. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
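    The abstract does not name its "mathematical reliability growth model"; a common choice for this kind of analysis is the Crow-AMSAA (power-law) model, sketched here purely as an assumed stand-in. Expected cumulative failures follow N(t) = lam * t**beta, and a fitted shape beta < 1 indicates reliability growth (decreasing failure intensity):

    ```python
    import math

    # Hedged sketch: Crow-AMSAA (power-law) reliability growth model, an
    # assumed stand-in for the unnamed model in the abstract. Expected
    # cumulative failures N(t) = lam * t**beta; beta < 1 means the failure
    # intensity decreases over time, i.e. reliability growth.

    def crow_amsaa_mle(failure_times, total_time):
        """MLE of (lam, beta) from failure times observed on (0, total_time]."""
        n = len(failure_times)
        beta = n / sum(math.log(total_time / t) for t in failure_times)
        lam = n / total_time ** beta
        return lam, beta

    # Failures that thin out over time yield beta below 1 (growth).
    times = [10, 40, 120, 300, 700]
    lam, beta = crow_amsaa_mle(times, 1000.0)
    print(beta < 1.0)  # True: decreasing failure intensity
    ```

    The same fit applied to ISS-style data with evenly spaced failures would return beta near 1, matching the abstract's observation of a constant failure rate.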

  2. Independent Orbiter Assessment (IOA): Analysis of the hydraulics/water spray boiler subsystem

    NASA Technical Reports Server (NTRS)

    Duval, J. D.; Davidson, W. R.; Parkman, William E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Hydraulics/Water Spray Boiler Subsystem. The hydraulic system provides hydraulic power to gimbal the main engines, actuate the main engine propellant control valves, move the aerodynamic flight control surfaces, lower the landing gear, apply wheel brakes, steer the nosewheel, and dampen the external tank (ET) separation. Each hydraulic system has an associated water spray boiler which is used to cool the hydraulic fluid and APU lubricating oil. The IOA analysis process utilized available HYD/WSB hardware drawings, schematics and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 430 failure modes analyzed, 166 were determined to be PCIs.

  3. Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, capturing and retrieving free-flying payloads, and performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm-based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics and documents for defining hardware assemblies, components and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.

  4. Demonstration Advanced Avionics System (DAAS), Phase 1

    NASA Technical Reports Server (NTRS)

    Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.

    1981-01-01

    The demonstration advanced avionics system (DAAS) functional description, hardware description, operational evaluation, and failure mode and effects analysis (FMEA) are provided. The projected advanced avionics system (PAAS) description, reliability analysis, cost analysis, maintainability analysis, and modularity analysis are discussed.

  5. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.

  6. Analysis of Failures of High Speed Shaft Bearing System in a Wind Turbine

    NASA Astrophysics Data System (ADS)

    Wasilczuk, Michał; Gawarkiewicz, Rafał; Bastian, Bartosz

    2018-01-01

    During the operation of wind turbines with a gearbox of traditional configuration, consisting of one planetary stage and two helical stages, a high failure rate of high-speed shaft bearings is observed. Such a high failure frequency is not reflected in the results of standard calculations of bearing durability; most probably it can be attributed to an atypical failure mechanism. The authors studied problems in 1.5 MW wind turbines at one of Poland's wind farms. The analysis showed that such high failure rates are commonly met all over the world and that the statistics for the analysed turbines were very similar. After a study of the potential failure mechanism and its possible causes, a modification of the existing bearing system was proposed. Various options, with different bearing types, were investigated. The different versions were examined for expected durability increase, the extent of necessary gearbox modifications, and the potential to solve existing problems in operation.

  7. A mobile system for the improvement of heart failure management: Evaluation of a prototype.

    PubMed

    Haynes, Sarah C; Kim, Katherine K

    2017-01-01

    Management of heart failure is complex, often involving interaction with multiple providers, monitoring of symptoms, and numerous medications. Employing principles of user-centered design, we developed a high-fidelity prototype of a mobile system for heart failure self-management and care coordination. Participants, including both heart failure patients and health care providers, tested the mobile system during a one-hour, one-on-one session with a facilitator. The facilitator interviewed participants about the strengths and weaknesses of the prototype, necessary features, and willingness to use the technology. We performed a qualitative content analysis using the transcripts of these interviews. Fourteen distinct themes were identified in the analysis. Of these themes, integration, technology literacy, memory, and organization were the most common; privacy was the least common. Our study suggests that integration is essential for adoption of a mobile system for chronic disease management and care coordination.

  8. Response analysis of curved bridge with unseating failure control system under near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Zuo, Ye; Sun, Guangjun; Li, Hongjing

    2018-01-01

    Under near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components, and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established, considering the elastic-plastic behavior of the piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The results indicate that under near-fault ground motion the seismic response of the curved bridge is strong. The unseating failure control system performs effectively, reducing both the pounding force between adjacent girders and the probability of deck unseating.

  9. Reliability analysis of forty-five strain-gage systems mounted on the first fan stage of a YF-100 engine

    NASA Technical Reports Server (NTRS)

    Holanda, R.; Frause, L. M.

    1977-01-01

    The reliability of 45 state-of-the-art strain gage systems under full scale engine testing was investigated. The flame spray process was used to install 23 systems on the first fan rotor of a YF-100 engine; the others were epoxy cemented. A total of 56 percent of the systems failed in 11 hours of engine operation. Flame spray system failures were primarily due to high gage resistance, probably caused by high stress levels. Epoxy system failures were principally erosion failures, but only on the concave side of the blade. Lead-wire failures between the blade-to-disk jump and the control room could not be analyzed.

  10. Numerical Analysis of Solids at Failure

    DTIC Science & Technology

    2011-08-20

    failure analyses include the formulation of invariant finite elements for thin Kirchhoff rods, and preliminary initial studies of growth in...analysis of the failure of other structural/mechanical systems, including the finite element modeling of thin Kirchhoff rods and the constitutive...algorithm based on the connectivity graph of the underlying finite element mesh. In this setting, the discontinuities are defined by fronts propagating

  11. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.

  12. Measurement and Analysis of Failures in Computer Systems

    NASA Technical Reports Server (NTRS)

    Thakur, Anshuman

    1997-01-01

    This thesis presents a study of software failures spanning several different releases of Tandem's NonStop-UX operating system running on Tandem Integrity S2(TMR) systems. NonStop-UX is based on UNIX System V and is fully compliant with industry standards, such as the X/Open Portability Guide, the IEEE POSIX standards, and the System V Interface Definition (SVID) extensions. In addition to providing a general UNIX interface to the hardware, the operating system has built-in recovery mechanisms and audit routines that check the consistency of the kernel data structures. The analysis is based on data on software failures and repairs collected from Tandem's product report (TPR) logs for a period exceeding three years. A TPR log is created when a customer or an internal developer observes a failure in a Tandem Integrity system. This study concentrates primarily on those TPRs that report a UNIX panic that subsequently crashes the system. Approximately 200 of the TPRs fall into this category. Approximately 50% of the failures reported are from field systems, and the rest are from the testing and development sites. It has been observed by Tandem developers that fewer cases are encountered from the field than from the test centers. Thus, the data selection mechanism has introduced a slight skew.

  13. Orbiter subsystem hardware/software interaction analysis. Volume 8: AFT reaction control system, part 2

    NASA Technical Reports Server (NTRS)

    Becker, D. D.

    1980-01-01

    The orbiter subsystems and interfacing program elements which interact with the orbiter computer flight software are analyzed. The failure modes identified in the subsystem/element failure mode and effects analysis are examined. Potential interaction with the software is examined through an evaluation of the software requirements. The analysis is restricted to flight software requirements and excludes utility/checkout software. The results of the hardware/software interaction analysis for the forward reaction control system are presented.

  14. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
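    GRASP itself is not reproduced here; as a minimal illustration of the simulation idea it describes (sampling both failure and repair events from the components' probability laws, then checking a chosen definition of system failure), the sketch below assumes two redundant components with exponential lifetimes and repair times, and defines system failure as both being down at once. All parameters are illustrative:

    ```python
    import random

    def down_intervals(T, mttf, mttr, rng):
        """Alternating exponential up/down cycles for one component on [0, T]."""
        t, intervals = 0.0, []
        while t < T:
            t += rng.expovariate(1.0 / mttf)       # time to next failure
            if t >= T:
                break
            start = t
            t += rng.expovariate(1.0 / mttr)       # repair duration
            intervals.append((start, min(t, T)))
        return intervals

    def system_fails(T, mttf, mttr, rng):
        """System failure = both redundant components down simultaneously."""
        a = down_intervals(T, mttf, mttr, rng)
        b = down_intervals(T, mttf, mttr, rng)
        # Two intervals (s1, e1) and (s2, e2) overlap iff s1 < e2 and s2 < e1.
        return any(s1 < e2 and s2 < e1 for s1, e1 in a for s2, e2 in b)

    rng = random.Random(42)
    trials = 2000
    p = sum(system_fails(1000.0, 100.0, 5.0, rng) for _ in range(trials)) / trials
    print(round(p, 3))  # estimated probability of a simultaneous outage
    ```

    Changing `system_fails` is how alternate definitions of system failure, the paper's motivating point, are accommodated without touching the event-sampling machinery.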

  15. Remote maintenance monitoring system

    NASA Technical Reports Server (NTRS)

    Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)

    1992-01-01

    A remote maintenance monitoring system retrofits a given hardware device with a sensor implant which gathers and captures failure data from the hardware device without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device and is analyzed with a diagnostic expert system, which isolates the failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring of power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. The retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real-time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.

  16. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  17. Combining System Safety and Reliability to Ensure NASA CoNNeCT's Success

    NASA Technical Reports Server (NTRS)

    Havenhill, Maria; Fernandez, Rene; Zampino, Edward

    2012-01-01

    Hazard Analysis, Failure Modes and Effects Analysis (FMEA), the Limited-Life Items List (LLIL), and the Single Point Failure (SPF) List were applied by System Safety and Reliability engineers on NASA's Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT) Project. The integrated approach, involving cross reviews of these reports by System Safety, Reliability, and Design engineers, resulted in the mitigation of all identified hazards. The outcome was that the system met all applicable safety requirements.

  18. Independent Orbiter Assessment (IOA): Analysis of the extravehicular mobility unit

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Extravehicular Mobility Unit (EMU) hardware. The EMU is an independent anthropomorphic system that provides environmental protection, mobility, life support, and communications for the Shuttle crewmember to perform Extravehicular Activity (EVA) in Earth orbit. Two EMUs are included on each baseline Orbiter mission, and consumables are provided for three two-man EVAs. The EMU consists of the Life Support System (LSS), Caution and Warning System (CWS), and the Space Suit Assembly (SSA). Each level of hardware was evaluated and analyzed for possible failure modes and effects. The majority of the resulting PCIs stem from failures which cause loss of one or more primary functions: pressurization, oxygen delivery, environmental maintenance, and thermal maintenance. It should also be noted that the quantity of PCIs would significantly increase if the SOP were treated as an emergency system rather than as an unlike redundant element.

  19. Performance and Reliability Analysis of Water Distribution Systems under Cascading Failures and the Identification of Crucial Pipes

    PubMed Central

    Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo

    2014-01-01

    As a means of supplying water, the water distribution system (WDS) is one of the most important complex infrastructures, and its stability and reliability are critical for urban activities. WDSs can be characterized as networks of multiple nodes (e.g. reservoirs and junctions) interconnected by physical links (e.g. pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of a WDS is evaluated here by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory, and the crucial pipes are identified. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and on how long cascading failures persist as they propagate through the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than the threshold, the time period with the highest system reliability does not coincide with the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated using both the properties of the WDS and the characteristics of cascading failures, so as to improve its ability to resist disasters. PMID:24551102

  20. Beta-Blockers (Carvedilol) in Children with Systemic Ventricle Systolic Dysfunction - Systematic Review and Meta-Analysis.

    PubMed

    Prijic, Sergej; Buchhorn, Reiner; Kosutic, Jovan; Vukomanovic, Vladislav; Prijic, Andreja; Bjelakovic, Bojko; Zdravkovic, Marija

    2014-01-01

    Numerous prospective randomized clinical trials have demonstrated a favorable effect of beta-blockers in adults with chronic heart failure. However, the effectiveness of beta-blockers in pediatric patients with systemic ventricle systolic dysfunction has not been sufficiently recognized. The limited number of pediatric patients might be the cause of the unrecognized benefit of carvedilol treatment. Currently, no meta-analysis has examined the impact of carvedilol and conventional therapy on the clinical outcome in children with chronic heart failure due to impaired systemic ventricle systolic function. We systematically searched Medline/PubMed and the Cochrane Library for controlled clinical trials examining the efficacy of carvedilol and standard treatment in pediatric patients with systemic ventricle systolic dysfunction. Mean differences for continuous variables, odds ratios for dichotomous outcomes, heterogeneity between studies, and publication bias were calculated using Cochrane Review Manager (RevMan 5.2). A total of 8 prospective/observational studies met the established criteria. The odds ratio for chronic heart failure-related mortality/heart transplantation secondary to carvedilol was 0.52 (95% CI: 0.28-0.97, I(2) = 0%). Our analysis showed that carvedilol could prevent 1 death/heart transplantation for every 14 pediatric patients with impaired systemic ventricle systolic function treated. The meta-analysis demonstrated a clinical outcome benefit of carvedilol in children with chronic heart failure.

  1. A simplified fragility analysis of fan type cable stayed bridges

    NASA Astrophysics Data System (ADS)

    Khan, R. A.; Datta, T. K.; Ahmad, S.

    2005-06-01

    A simplified fragility analysis of fan-type cable-stayed bridges using a Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. The seismic input to the bridge supports is taken as a risk-consistent response spectrum obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system, which provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising from variation in ground motion, material properties, modeling, method of analysis, ductility factor, and damage concentration effect. The probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three-span, double-plane, symmetrical fan-type cable-stayed bridge of total span 689 m is used as an illustrative example. Fragility curves for bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has a considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations yields a smaller probability of failure than ground motion with a very large time lag between support excitations; and (iii) the probability of failure may considerably increase under soft soil conditions.
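    The paper's FOSM formulation for the bridge deck is not given in the abstract; as a generic FOSM sketch (not the paper's exact limit state), consider g = R - S with capacity R and demand S treated as independent normal variables described only by their means and standard deviations:

    ```python
    import math

    # Generic first-order second-moment (FOSM) sketch, not the paper's exact
    # formulation: limit state g = R - S (capacity minus demand), with R and S
    # independent normal variables characterized by mean and std only.

    def fosm_failure_probability(mu_r, sig_r, mu_s, sig_s):
        """Return the reliability index beta and failure probability Phi(-beta)."""
        beta = (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2)
        # Standard normal CDF via the complementary error function.
        p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))
        return beta, p_f

    beta, p_f = fosm_failure_probability(mu_r=3.0, sig_r=0.4, mu_s=2.0, sig_s=0.3)
    print(round(beta, 2))  # reliability index: 2.0
    ```

    In the paper's setting, the mean and variance of the demand would come from the response spectrum analysis, with the listed uncertainty sources folded into sig_s.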

  2. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
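
    A minimal sketch of the numerical scheme described above, reduced to a single repairable unit rather than the paper's multi-subsystem model: the Chapman-Kolmogorov equation for exponential failure and repair rates is integrated with the fourth-order Runge-Kutta method, and the long-run value can be checked against the closed-form availability mu/(lam + mu).

```python
def rk4_availability(lam, mu, t_end, dt=0.01):
    """Integrate dp/dt = -lam*p + mu*(1 - p) with 4th-order Runge-Kutta.

    p is the probability the unit is up; p(0) = 1 (unit starts new).
    Returns the point availability at t_end.
    """
    def f(p):
        return -lam * p + mu * (1.0 - p)

    p, t = 1.0, 0.0
    while t < t_end:
        k1 = f(p)
        k2 = f(p + 0.5 * dt * k1)
        k3 = f(p + 0.5 * dt * k2)
        k4 = f(p + dt * k3)
        p += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return p

# Long-run availability should approach mu / (lam + mu) = 0.9
a = rk4_availability(lam=0.1, mu=0.9, t_end=50.0)
```
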

  3. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
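
    The Monte Carlo step of the PDA approach can be sketched with a hypothetical stress-versus-strength "physics" model; the distributions, parameters, and failure criterion below are assumptions for illustration, not Ares I data.

```python
import random

def monte_carlo_failure_probability(n_samples=100_000, seed=42):
    """Estimate failure probability by sampling the driving parameters.

    Each driving parameter is a random variable with a distribution; a
    sample counts as a failure when the load reaches the strength.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    failures = 0
    for _ in range(n_samples):
        load = rng.gauss(100.0, 15.0)      # applied load, illustrative units
        strength = rng.gauss(150.0, 20.0)  # component strength
        if load >= strength:
            failures += 1
    return failures / n_samples

pf = monte_carlo_failure_probability()
```

    Sensitivity analysis then amounts to re-running the estimate with each input distribution perturbed in turn and comparing the resulting failure probabilities.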

  4. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of Earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As Earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate for the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure for the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred to a retaining wall during an earthquake to provide an improved estimate for its probability of failure during future seismic events.
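
    The combination of distinct failure modes into a single system estimate can be sketched under the simplifying assumption that the four modes are statistically independent; the mode probabilities below are illustrative, not values from the study.

```python
def combined_failure_probability(mode_probs):
    """Series-system combination: failure in any one mode fails the wall.

    mode_probs: per-mode failure probabilities, assumed independent
    (e.g. overturning, base sliding, bearing capacity, overall sliding).
    """
    survival = 1.0
    for p in mode_probs:
        survival *= (1.0 - p)  # the wall survives only if every mode is avoided
    return 1.0 - survival

# Illustrative mode probabilities for the four modes named above
p_sys = combined_failure_probability([0.01, 0.02, 0.005, 0.015])
```

    Note the system probability exceeds any single mode's but stays below the sum of the four, the usual bounds for a union of events.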

  5. Independent Orbiter Assessment (IOA): Analysis of the orbiter main propulsion system

    NASA Technical Reports Server (NTRS)

    Mcnicoll, W. J.; Mcneely, M.; Holden, K. A.; Emmons, T. E.; Lowery, H. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Main Propulsion System (MPS) hardware are documented. The Orbiter MPS consists of two subsystems: the Propellant Management Subsystem (PMS) and the Helium Subsystem. The PMS is a system of manifolds, distribution lines and valves by which the liquid propellants pass from the External Tank (ET) to the Space Shuttle Main Engines (SSMEs) and gaseous propellants pass from the SSMEs to the ET. The Helium Subsystem consists of a series of helium supply tanks and their associated regulators, check valves, distribution lines, and control valves. The Helium Subsystem supplies helium that is used within the SSMEs for inflight purges and provides pressure for actuation of SSME valves during emergency pneumatic shutdowns. The balance of the helium is used to provide pressure to operate the pneumatically actuated valves within the PMS. Each component was evaluated and analyzed for possible failure modes and effects. Criticalities were assigned based on the worst possible effect of each failure mode. Of the 690 failure modes analyzed, 349 were determined to be PCIs.

  6. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  7. Twenty Years of Endodontic Success and Failure at West Virginia University.

    DTIC Science & Technology

    1982-01-01

    Analysis of Success and Failure - Apical Termination of Filling Material; Analysis of Success and Failure - Posttreatment Restoration ... system with healthy periapical and periodontal tissues. The dentist must reduce or eliminate toxic or irritating substances from within the root canals to ... adequate for evaluating endodontic success and that success should be based on the radiographic presence of a periodontal membrane space of approximately

  8. Independent Orbiter Assessment (IOA): Analysis of the nose wheel steering subsystem

    NASA Technical Reports Server (NTRS)

    Mediavilla, Anthony Scott

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Nose Wheel Steering (NWS) hardware are documented. The NWS hardware provides primary directional control for the Orbiter vehicle during landing rollout. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The original NWS design was envisioned as a backup system to differential braking for directional control of the Orbiter during landing rollout. No real effort was made to design the NWS system as fail operational. The brakes have much redundancy built into their design but the poor brake/tire performance has forced the NSTS to upgrade NWS to the primary mode of directional control during rollout. As a result, a large percentage of the NWS system components have become Potential Critical Items (PCI).

  9. Failure Analysis of Network Based Accessible Pedestrian Signals in Closed-Loop Operation

    DOT National Transportation Integrated Search

    2011-03-01

    The potential failure modes of a network based accessible pedestrian system were analyzed to determine the limitations and benefits of closed-loop operation. The vulnerabilities of the system are accessed using the industry standard process known as ...

  10. Failure Analysis in Platelet Molded Composite Systems

    NASA Astrophysics Data System (ADS)

    Kravchenko, Sergii G.

    Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural grade molding compound allowing for fabrication of complex three-dimensional components. Understanding of the process-structure-property relationship is essential for application of prepreg platelet molded components, especially because of their possibly irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damage and deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided understanding of how the composite structure details, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system: its stiffness, strength and variability in properties.

  11. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
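
    The roll-up of individual failure modes into a loss-of-mission (LOM) estimate can be sketched as follows, treating each mode as independent and applying a mitigation factor that may stop the failure from propagating to LOM; all numbers are hypothetical, not QRAS outputs.

```python
def lom_probability(failure_modes):
    """Combine per-mode probabilities into a loss-of-mission estimate.

    failure_modes: list of (p_fail, p_unmitigated) pairs, where p_fail is the
    probability the mode occurs and p_unmitigated is the probability the
    mitigation fails to contain it. Modes are assumed independent.
    """
    p_no_lom = 1.0
    for p_fail, p_unmitigated in failure_modes:
        # A mode causes LOM only if it occurs AND mitigation fails
        p_no_lom *= (1.0 - p_fail * p_unmitigated)
    return 1.0 - p_no_lom

# Hypothetical modes: (occurrence probability, probability mitigation fails)
modes = [(0.001, 0.5), (0.0005, 1.0), (0.002, 0.1)]
p_lom = lom_probability(modes)
```

    Ranking the per-mode contributions p_fail * p_unmitigated reproduces the mode ranking the paper describes for Mission Success.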

  12. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method employing a knowledge base of human expertise comprising a reliability model analysis implemented for diagnostic routines is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting the system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and which provides a knowledge base that can be used to solve other artificial intelligence problems.

  13. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angle of attack, angle of sideslip, dynamic pressure, and static pressure as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. How system and individual port failures may be detected using chi-square analysis is shown. Once identified, the effects of failures are eliminated using weighted least squares.
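
    The chi-square detection idea can be sketched as a sum of squared, noise-normalized residuals between measured and model-predicted port pressures, flagged against a threshold; the port values, noise level, and threshold below are assumptions for illustration, not the flight system's.

```python
def chi_square_statistic(measured, predicted, sigma):
    """Sum of squared normalized residuals over all pressure ports."""
    return sum(((m - p) / sigma) ** 2 for m, p in zip(measured, predicted))

def detect_port_failure(measured, predicted, sigma=1.0, threshold=20.0):
    """Flag a failure when the chi-square statistic exceeds the threshold."""
    return chi_square_statistic(measured, predicted, sigma) > threshold

nominal = [100.0] * 11                              # model-predicted pressures, 11 ports
healthy = [100.0 + 0.3 * (-1) ** i for i in range(11)]  # small measurement noise
failed = list(healthy)
failed[5] = 80.0                                    # one port stuck low (e.g. plugged orifice)

ok = detect_port_failure(healthy, nominal)   # False: statistic ~1, below threshold
bad = detect_port_failure(failed, nominal)   # True: one port contributes ~400
```

    Once a port is flagged, dropping it (or downweighting it in the weighted least-squares fit) removes its influence on the airdata estimate, as the abstract describes.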

  14. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft-error vulnerability of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
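
    For constant (exponential) failure and repair rates, the three figures of merit named above follow from simple closed forms; the rates below are illustrative, not measured Zynq-7010 values.

```python
def series_system_metrics(failure_rates, repair_rate):
    """Failure rate, MTTF, and steady-state unavailability of a series system.

    failure_rates: constant per-component failure rates (failures/hour);
    in a series system the rates add. repair_rate: constant repair rate.
    """
    lam = sum(failure_rates)                   # series system failure rate
    mttf = 1.0 / lam                           # mean time to failure
    unavailability = lam / (lam + repair_rate) # steady-state fraction of downtime
    return lam, mttf, unavailability

# Illustrative component rates (per hour) and a 10-hour mean repair time
lam, mttf, q = series_system_metrics([1e-6, 2e-6, 5e-7], repair_rate=0.1)
```
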

  15. Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system

    NASA Technical Reports Server (NTRS)

    Prust, C. D.; Paul, D. J.; Burkemper, V. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods located one on each side of the tail and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  16. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
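
    The importance-ranking idea can be sketched with minimal cut sets under the rare-event approximation, using Birnbaum importance as one common sensitivity measure; the basic events, probabilities, and cut sets below are hypothetical, not taken from the IMPORTANCE code.

```python
def cut_set_prob(cut_set, p):
    """Probability that every basic event in one minimal cut set occurs."""
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

def top_event_prob(cut_sets, p):
    """Rare-event approximation: sum of minimal cut set probabilities."""
    return sum(cut_set_prob(cs, p) for cs in cut_sets)

def birnbaum_importance(event, cut_sets, p):
    """Sensitivity of the top event to one basic event:
    top-event probability with the event certain minus with it impossible."""
    hi = dict(p, **{event: 1.0})
    lo = dict(p, **{event: 0.0})
    return top_event_prob(cut_sets, hi) - top_event_prob(cut_sets, lo)

p = {"A": 0.01, "B": 0.02, "C": 0.001}
cut_sets = [{"A", "B"}, {"C"}]  # {C} is a single failure point
ranking = sorted(p, key=lambda e: birnbaum_importance(e, cut_sets, p), reverse=True)
```

    Even though C has the smallest probability, it ranks first because it is a single failure point; this is the kind of design weakness the abstract says the ranking exposes.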

  17. Explosive Event in MON-3 Oxidizer System Resulting from Pressure Transducer Failure

    NASA Technical Reports Server (NTRS)

    Baker, David L.; Reynolds, Michael; Anderson, John

    2006-01-01

    In 2003, a Druck® pressure transducer failed catastrophically in a test system circulating nitrogen tetroxide at the NASA Johnson Space Center White Sands Test Facility. The cause of the explosion was not immediately obvious since the wetted areas of the pressure transducer were constructed of materials compatible with nitrogen tetroxide. Chemical analysis of the resulting residue and a materials analysis of the diaphragm and its weld zones were used to determine the chain of events that led to the catastrophic failure. Due to excessive dynamic pressure loading in the test system, the diaphragm in the pressure transducer suffered cyclic failure and allowed the silicone oil located behind the isolation diaphragm to mix with the nitrogen tetroxide. The reaction between these two chemicals formed a combination of 2,4-di- and 2,4,6-trinitrophenol, which are shock-sensitive explosives that caused the failure of the pressure transducer. Further research indicated numerous manufacturers offer similar pressure transducers with silicone oil separated from the test fluid by a thin stainless steel isolation diaphragm. Caution must be exercised when purchasing a pressure transducer for a particular system to avoid costly failures and test system contamination.

  18. Direct modeling parameter signature analysis and failure mode prediction of physical systems using hybrid computer optimization

    NASA Technical Reports Server (NTRS)

    Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.

    1971-01-01

    High speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques which are applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed, successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go/no-go testing of operational systems were demonstrated. The feasibility of using these ideas for failure mode prediction in operating systems was also investigated. An improved digitally controlled nonlinear function generator was developed, debugged, and completely documented.

  19. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
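
    The example configuration can be sketched directly: five sets in series, each of five parallel stacks, with per-stack reliability following a Weibull law R(t) = exp(-(t/eta)^beta). The eta and beta values below are assumed for illustration, not the paper's data.

```python
import math

def weibull_reliability(t, eta, beta):
    """Weibull survival function R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def parallel(reliabilities):
    """A parallel set fails only if every stack in it fails."""
    q = 1.0
    for r in reliabilities:
        q *= (1.0 - r)
    return 1.0 - q

def series(reliabilities):
    """A series chain survives only if every set survives."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def system_reliability(t, eta=40000.0, beta=1.5, n_series=5, n_parallel=5):
    """Reliability of the 5 x 5 configuration: series of parallel sets."""
    r_stack = weibull_reliability(t, eta, beta)
    r_set = parallel([r_stack] * n_parallel)
    return series([r_set] * n_series)

r = system_reliability(t=20000.0)
```

    The parallel redundancy within each set lifts the system reliability well above that of a single stack at the same time point, which is the effect of operating strategy and redundancy the abstract analyzes.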

  20. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1982-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor components that either directly or indirectly ignited the titanium. Several engine oil samples (taken before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations ( 2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.

  1. Direct Adaptive Control of Systems with Actuator Failures: State of the Art and Continuing Challenges

    NASA Technical Reports Server (NTRS)

    Tao, Gang; Joshi, Suresh M.

    2008-01-01

    In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.

  2. Reducing unscheduled plant maintenance delays -- Field test of a new method to predict electric motor failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homce, G.T.; Thalimer, J.R.

    1996-05-01

    Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated with data from conventional motor-maintenance methods and in-service failures to determine if the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research, and describes the monitoring system design and implementation. At this writing data analysis is underway, but conclusive results are not yet available.

  3. System and method for floating-substrate passive voltage contrast

    DOEpatents

    Jenkins, Mark W [Albuquerque, NM; Cole, Jr., Edward I.; Tangyunyong, Paiboon [Albuquerque, NM; Soden, Jerry M [Placitas, NM; Walraven, Jeremy A [Albuquerque, NM; Pimentel, Alejandro A [Albuquerque, NM

    2009-04-28

    A passive voltage contrast (PVC) system and method are disclosed for analyzing ICs to locate defects and failure mechanisms. During analysis a device side of a semiconductor die containing the IC is maintained in an electrically-floating condition without any ground electrical connection while a charged particle beam is scanned over the device side. Secondary particle emission from the device side of the IC is detected to form an image of device features, including electrical vias connected to transistor gates or to other structures in the IC. A difference in image contrast allows the defects or failure mechanisms to be pinpointed. Varying the scan rate can, in some instances, produce an image reversal to facilitate precisely locating the defects or failure mechanisms in the IC. The system and method are useful for failure analysis of ICs formed on substrates (e.g. bulk semiconductor substrates and SOI substrates) and other types of structures.

  4. CRYOGENIC UPPER STAGE SYSTEM SAFETY

    NASA Technical Reports Server (NTRS)

    Smith, R. Kenneth; French, James V.; LaRue, Peter F.; Taylor, James L.; Pollard, Kathy (Technical Monitor)

    2005-01-01

    NASA's Exploration Initiative will require development of many new systems or systems of systems. One specific example is that safe, affordable, and reliable upper stage systems to place cargo and crew in stable low Earth orbit are urgently required. In this paper, we examine the failure history of previous upper stages with liquid oxygen (LOX)/liquid hydrogen (LH2) propulsion systems. Launch data from 1964 until midyear 2005 are analyzed and presented. This data analysis covers upper stage systems from the Ariane, Centaur, H-IIA, Saturn, and Atlas, in addition to other vehicles. Upper stage propulsion system elements have the highest impact on reliability. This paper discusses failure occurrence in all aspects of the operational phases (i.e., initial burn, coast, and restarts) and trends in failure rates over time. In an effort to understand the likelihood of future failures in flight, we present timelines of engine system failures relevant to initial flight histories. Some evidence suggests that propulsion system failures as a result of design problems occur shortly after initial development of the propulsion system, whereas failures because of manufacturing or assembly processing errors may occur during any phase of the system build process. This paper also explores the detectability of historical failures. Observations from this review are used to ascertain the potential for increased upper stage reliability given investments in integrated system health management. Based on a clear understanding of the failure and success history of previous efforts by multiple space hardware development groups, the paper will investigate potential improvements that can be realized through application of system safety principles.

  5. Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    2005-01-01

    The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.

  6. Intelligent on-line fault tolerant control for unanticipated catastrophic failures.

    PubMed

    Yen, Gary G; Ho, Liang-Wei

    2004-10-01

    As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectories tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.
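    The residual-based fault detection idea described above can be sketched for a scalar linear plant; the model, gains, and threshold below are illustrative assumptions, and the paper's neural-network failure estimator and sliding-mode control law are omitted:

```python
# Much-simplified residual-based fault detection for a scalar first-order
# system x[k+1] = a*x[k] + b*u[k]. All numeric values are illustrative,
# not taken from the paper.

def detect_fault(measurements, u, a=0.9, b=0.5, threshold=0.3):
    """Return the first step at which |measured - nominal prediction|
    exceeds the threshold, or None if no fault is flagged."""
    x_pred = measurements[0]
    for k in range(1, len(measurements)):
        x_pred = a * x_pred + b * u[k - 1]      # nominal one-step prediction
        residual = abs(measurements[k] - x_pred)
        if residual > threshold:
            return k
        x_pred = measurements[k]                # re-anchor on the measurement
    return None

# Simulate the nominal plant, then inject an actuator fault at step 10
# (the control input stops reaching the plant).
u = [1.0] * 20
x, xs = 0.0, [0.0]
for k in range(20):
    eff = u[k] if k < 10 else 0.0               # fault: input lost from step 10
    x = 0.9 * x + 0.5 * eff
    xs.append(x)

print(detect_fault(xs, u))
```

    The detector flags the fault one step after injection, when the measured state first deviates from the nominal model's prediction.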

  7. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measurement based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Issues that are often vaguely understood during manufacturing system reliability analysis, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy, are elaborated. This framework can support reliability optimisation and rational allocation of maintenance resources for job shop manufacturing systems.

  8. Use of Failure Mode and Effects Analysis to Improve Emergency Department Handoff Processes.

    PubMed

    Sorrentino, Patricia

    2016-01-01

    The purpose of this article is to describe a quality improvement process using failure mode and effects analysis (FMEA) to evaluate systems handoff communication processes, improve emergency department (ED) throughput and reduce crowding through development of a standardized handoff, and, ultimately, improve patient safety. Risk of patient harm through ineffective communication during handoff transitions is a major reason for breakdown of systems. Complexities of ED processes put patient safety at risk. An increased incidence of submitted patient safety event reports for handoff communication failures between the ED and inpatient units solidified a decision to implement the use of FMEA to identify handoff failures and mitigate patient harm through redesign. The clinical nurse specialist implemented an FMEA. Handoff failure themes were created from deidentified retrospective reviews. Weekly meetings were held over a 3-month period to identify failure modes and determine cause and effect on the process. A functional block diagram process map tool was used to illustrate handoff processes. An FMEA grid was used to list failure modes and assign a risk priority number to quantify results. Multiple areas with actionable failures were identified. A majority of causes for high-priority failure modes were specific to communications. Findings demonstrate the complexity of transition and handoff processes. The FMEA served to identify and evaluate risk of handoff failures and provide a framework for process improvement. A focus on mentoring nurses in quality handoff processes so that they become habitual practice is crucial to safe patient transitions. Standardizing content and hardwiring it within the system are best practices. The clinical nurse specialist is prepared to provide strong leadership to drive and implement system-wide quality projects.
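    An FMEA grid of this kind conventionally quantifies each failure mode with a risk priority number, the product of severity, occurrence, and detection rankings; a minimal sketch with hypothetical handoff failure modes (not those from the study):

```python
# Minimal FMEA risk-prioritization sketch: RPN = severity x occurrence x detection,
# each ranked 1-10. The failure modes and rankings below are hypothetical examples.

def rpn(severity, occurrence, detection):
    """Risk priority number for one failure mode (each ranking 1-10)."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("rankings must be between 1 and 10")
    return severity * occurrence * detection

failure_modes = [
    # (description, severity, occurrence, detection)
    ("incomplete medication list at handoff", 8, 6, 5),
    ("receiving unit not notified of transfer", 7, 4, 3),
    ("pending lab results not communicated", 9, 5, 7),
]

# Rank failure modes from highest to lowest RPN to target improvement effort.
ranked = sorted(failure_modes, key=lambda m: rpn(m[1], m[2], m[3]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

    Sorting by RPN is what lets a team direct redesign effort at the highest-risk failure modes first.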

  9. A Proposal of Operational Risk Management Method Using FMEA for Drug Manufacturing Computerized System

    NASA Astrophysics Data System (ADS)

    Takahashi, Masakazu; Nanba, Reiji; Fukue, Yoshinori

    This paper proposes an operational Risk Management (RM) method using Failure Mode and Effects Analysis (FMEA) for drug manufacturing computerized systems (DMCS). The quality of a drug must not be influenced by failures or operational mistakes of the DMCS. To avoid such situations, sufficient risk assessment must be conducted on the DMCS and precautions taken. We propose an operational RM method using FMEA for the DMCS. To develop the method, we gathered and compared FMEA results for DMCS and built a list of failure modes, failures, and countermeasures. By applying this list, we can conduct RM in the design phase, find failures, and implement countermeasures efficiently. Additionally, we can find some failures that had not previously been identified.

  10. Tribology symposium -- 1994. PD-Volume 61

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    This year marks the first Tribology Symposium within the Energy-Sources Technology Conference, sponsored by the ASME Petroleum Division. The program was divided into five sessions: Tribology in High Technology, a historical discussion of some watershed events in tribology; Research/Development, design, research and development on modern manufacturing; Tribology in Manufacturing, the impact of tribology on modern manufacturing; Design/Design Representation, aspects of design related to tribological systems; and Failure Analysis, an analysis of failure, failure detection, and failure monitoring as relating to manufacturing processes. Eleven papers have been processed separately for inclusion on the data base.

  11. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
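    The paper's 27-state Markov model is far larger, but the mechanics of a Markov reliability evaluation can be sketched with a toy three-state chain; the transition probabilities below are illustrative assumptions, not values from the study:

```python
# Toy discrete-time Markov reliability model: states are
# 0 = fully operational, 1 = one sensor failed (fail-operational), 2 = system failed.
# Per-step transition probabilities are illustrative only.

P = [
    [0.995, 0.004, 0.001],  # from fully operational
    [0.0,   0.990, 0.010],  # from fail-operational (no self-repair here)
    [0.0,   0.0,   1.000],  # failed is absorbing
]

def step(dist, P):
    """One transition of the state probability distribution under matrix P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def reliability(P, steps):
    """Probability the system is NOT in the absorbing failed state after `steps`."""
    dist = [1.0, 0.0, 0.0]  # start fully operational
    for _ in range(steps):
        dist = step(dist, P)
    return 1.0 - dist[2]

print(round(reliability(P, 100), 4))
```

    Parametric studies like those in the abstract amount to sweeping entries of P (e.g., the gyro failure rate or the probability of failure isolation) and recomputing the reliability curve.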

  12. Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke

    NASA Technical Reports Server (NTRS)

    Yen, C. L.; Smith, D. B.

    1973-01-01

    A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.

  13. Ayame/PAM-D apogee kick motor nozzle failure analysis

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The failures of two communication satellites during their firing sequences were examined. The correlation/comparison of the circumstances of the Ayame incidents and the failure of the STAR 48 (DM-2) motor is reviewed. The massive nozzle failure of the AKM is examined to determine its impact on spacecraft performance. It is recommended that a closer watch be kept on systems techniques.

  14. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  15. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  16. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  17. Identification of Bearing Failure Using Signal Vibrations

    NASA Astrophysics Data System (ADS)

    Yani, Irsyadi; Resti, Yulia; Burlian, Firmansyah

    2018-04-01

    Vibration analysis can be used to identify damage to mechanical systems such as journal bearings. Failure can be identified by observing the vibration spectrum obtained by measuring the vibration signal occurring in a mechanical system. Bearings are among the machine elements most commonly used in mechanical systems. The main purpose of this research is to monitor the bearing condition and to identify bearing failure in a mechanical system by observing the resulting vibration. Data collection in this study was based on recordings of the sound caused by vibration of the mechanical system; a database of bearing failures was then created from these vibration-signal sound recordings. The next step was to group the bearing damage by type based on the database obtained. The results show that the success rate in identifying bearing damage is 98%.
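    One classic time-domain indicator for this kind of bearing-fault identification is the kurtosis of the vibration signal, since impacts from a damaged race make the waveform impulsive; the sketch below uses synthetic signals rather than the authors' sound recordings, and all signal parameters are illustrative:

```python
import math
import random

def kurtosis(x):
    """Sample kurtosis: ~1.5 for a sinusoid, ~3 for Gaussian noise,
    much higher for impulsive (bearing-fault-like) signals."""
    n = len(x)
    mean = sum(v for v in x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / var ** 2

random.seed(0)
fs = 1000  # sampling rate in Hz (illustrative)
t = [i / fs for i in range(2000)]

# Healthy bearing: smooth sinusoidal vibration plus mild noise.
healthy = [math.sin(2 * math.pi * 50 * ti) + random.gauss(0, 0.1) for ti in t]

# Faulty bearing: same signal plus periodic impacts from a damaged race.
faulty = [v + (4.0 if i % 97 == 0 else 0.0) for i, v in enumerate(healthy)]

print(kurtosis(healthy) < 3.5 < kurtosis(faulty))
```

    A database-driven classifier like the one described in the abstract would use such features (kurtosis, spectral peaks, RMS) to group recorded failures by damage type.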

  18. Independent Orbiter Assessment (IOA): Analysis of the elevon subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Elevon system hardware. The elevon actuators are located at the trailing edge of the wing surface. The proper function of the elevons is essential during the dynamic flight phases of ascent and entry. In the ascent phase of flight, the elevons are used for relieving high wing loads. For entry, the elevons are used to pitch and roll the vehicle. Specifically, the elevon system hardware comprises the following components: flow cutoff valve; switching valve; electro-hydraulic (EH) servoactuator; secondary delta pressure transducer; bypass valve; power valve; power valve check valve; primary actuator; primary delta pressure transducer; and primary actuator position transducer. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 25 failure modes analyzed, 18 were determined to be PCIs.

  19. Independent Orbiter Assessment (IOA): Assessment of the life support and airlock support systems, volume 1

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Duffy, R. E.; Barickman, K.; Saiidi, M. J.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Life Support and Airlock Support Systems (LSS and ALSS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. The discrepancies were flagged for potential future resolution. This report documents the results of that comparison for the Orbiter LSS and ALSS hardware. The IOA product for the LSS and ALSS analysis consisted of 511 failure mode worksheets that resulted in 140 potential critical items. Comparison was made to the NASA baseline which consisted of 456 FMEAs and 101 CIL items. The IOA analysis identified 39 failure modes, 6 of which were classified as CIL items, for components not covered by the NASA FMEAs. It was recommended that these failure modes be added to the NASA FMEA baseline. The overall assessment produced agreement on all but 301 FMEAs which caused differences in 111 CIL items.

  20. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest that these results are practical for power systems, both for maintenance models and for preventive-maintenance models.
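    The reliability function implied by a three-parameter Dagum fit can be evaluated directly from its CDF; the parameter values below are illustrative assumptions, since the abstract does not report the fitted estimates:

```python
import math  # only needed if extending to log-likelihood fitting

def dagum_cdf(t, a, b, p):
    """Dagum CDF F(t) = (1 + (t/b)^-a)^-p for t > 0 (a, p shape; b scale)."""
    if t <= 0:
        return 0.0
    return (1.0 + (t / b) ** (-a)) ** (-p)

def reliability(t, a, b, p):
    """Reliability (survival) function R(t) = 1 - F(t) for time between failures."""
    return 1.0 - dagum_cdf(t, a, b, p)

# Illustrative parameters only -- the abstract does not give the fitted values.
a, b, p = 2.5, 30.0, 0.8
for days in (10, 30, 60):
    print(days, round(reliability(days, a, b, p), 3))
```

    At t equal to the scale parameter b, the CDF reduces to 2^(-p), which gives a quick sanity check on any implementation.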

  1. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest that these results are practical for power systems, both for maintenance models and for preventive-maintenance models. PMID:23936346

  2. Functional Fault Model Development Process to Support Design Analysis and Operational Assessment

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.

    2016-01-01

    A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.

  3. CPLOAS_2 User Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sallaberry, Cedric Jean-Marie; Helton, Jon C.

    2015-05-01

    Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as the probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2, which implements the following representations of PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2. Keywords: Aleatory uncertainty, CPLOAS_2, Epistemic uncertainty, Probability of loss of assured safety, Strong link, Uncertainty analysis, Weak link
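    For the simplest case of one WL and one SL with independent exponential failure times, a PLOAS-style quantity has a closed form against which a Monte Carlo sketch can be checked; the failure rates below are illustrative, and this is not the CPLOAS_2 algorithm, which handles time-dependent link properties and multiple links:

```python
import random

def ploas_mc(lam_wl, lam_sl, trials=200_000, seed=1):
    """Monte Carlo estimate of P(strong link fails before weak link) for
    independent exponential failure times -- a deliberately simplified
    stand-in for the time-dependent cases treated by CPLOAS_2."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        t_wl = rng.expovariate(lam_wl)  # time at which the WL deactivates the system
        t_sl = rng.expovariate(lam_sl)  # time at which the SL degrades
        if t_sl < t_wl:
            fails += 1
    return fails / trials

lam_wl, lam_sl = 1.0, 0.05   # WL designed to fail much sooner (illustrative rates)
est = ploas_mc(lam_wl, lam_sl)
exact = lam_sl / (lam_sl + lam_wl)   # closed form for the exponential case
print(round(est, 4), round(exact, 4))
```

    Sampling the failure times from distributions rather than fixed rates is how aleatory and epistemic uncertainty would enter a fuller calculation.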

  4. Analysis of Jordan’s Proposed Emergency Communication Interoperability Plan (JECIP) for Disaster Response

    DTIC Science & Technology

    2008-12-01

    Transmission quality measurements start once the call is established, which includes low voice volume, level of noise, echo, crosstalk, and garbling...to failure, and finally, there is restorability, which is a measure of how easily the system is restored upon failure. To reduce frequency of failure...Silicon and Germanium. These systems are friendly to the environment, have low noise, have no fuel consumption, are maintenance-free, and have no

  5. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1983-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433

  6. Putting Integrated Systems Health Management Capabilities to Work: Development of an Advanced Caution and Warning System for Next-Generation Crewed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Spirkovska, Lilly; Smith, Irene

    2013-01-01

    Integrated System Health Management (ISHM) technologies have advanced to the point where they can provide significant automated assistance with real-time fault detection, diagnosis, guided troubleshooting, and failure consequence assessment. To exploit these capabilities in actual operational environments, however, ISHM information must be integrated into operational concepts and associated information displays in ways that enable human operators to process and understand the ISHM system information rapidly and effectively. In this paper, we explore these design issues in the context of an advanced caution and warning system (ACAWS) for next-generation crewed spacecraft missions. User interface concepts for depicting failure diagnoses, failure effects, redundancy loss, "what-if" failure analysis scenarios, and resolution of ambiguity groups are discussed and illustrated.

  7. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  8. Independent Orbiter Assessment (IOA): Assessment of the reaction control system, volume 5

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Hartman, Dan W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the aft and forward Reaction Control System (RCS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter RCS hardware and EPD and C systems. Volume 5 contains detailed analysis and superseded analysis worksheets and the NASA FMEA to IOA worksheet cross reference and recommendations.

  9. Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Redd, L.

    1985-01-01

    Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to the main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions/overhaul, failures per mission, and EVA and IVA cost. In conclusion, two or three engines are recommended in view of their highest reliability, minimum life-cycle cost, and fail operational/fail safe capability.
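    The reliability gain from two or three engines can be illustrated with a k-out-of-n model degraded by a common-cause (nonindependent) failure term of the kind the abstract highlights; all probabilities below are hypothetical, not values from the study:

```python
from math import comb

def k_of_n_reliability(n, k, r_engine, p_common=0.0):
    """Probability that at least k of n independent engines work, degraded
    by a common-cause (nonindependent) failure that disables the whole
    main propulsion system. All numbers used here are illustrative."""
    independent = sum(
        comb(n, j) * r_engine**j * (1 - r_engine) ** (n - j)
        for j in range(k, n + 1)
    )
    return (1.0 - p_common) * independent

# A single-engine system vs. a 3-engine system that tolerates one engine
# failure (fail operational), with a small common-cause failure probability.
print(round(k_of_n_reliability(1, 1, 0.98, 0.005), 4))
print(round(k_of_n_reliability(3, 2, 0.98, 0.005), 4))
```

    The comparison shows why the common-cause term matters: redundancy improves the independent factor, but the nonindependent failure probability caps the achievable system reliability regardless of engine count.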

  10. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.

  11. Model authoring system for fail safe analysis

    NASA Technical Reports Server (NTRS)

    Sikora, Scott E.

    1990-01-01

    The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Using established artificial intelligence and expert system techniques, circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or a failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.

  12. Fractography can be used to analyze failure modes in polytetrafluoroethylene

    NASA Technical Reports Server (NTRS)

    Nerren, B. H.

    1969-01-01

    Fractographic principles used for analyzing failure in metals are applied to the analysis of the microstructure and fracture of polytetrafluoroethylene. This material is used as seals in cryogenic systems.

  13. Stress Transfer and Structural Failure of Bilayered Material Systems

    NASA Astrophysics Data System (ADS)

    Prieto-Munoz, Pablo Arthur

    Bilayered material systems are common in naturally formed or artificially engineered structures. Understanding how loads transfer within these structural systems is necessary to predict failure and develop effective designs. Existing methods for evaluating the stress transfer in bilayered materials are limited to overly simplified models or require experimental calibration. As a result, these methods have failed to accurately account for such structural failures as the creep-induced roofing panel collapse of Boston's I-90 connector tunnel, which was supported by adhesive anchors. The one-dimensional stress analyses currently used for adhesive anchor design cannot account for viscoelastic creep failure, and consequently result in dangerously under-designed structural systems. In this dissertation, a method for determining the two-dimensional stress and displacement fields for a generalized bilayered material system is developed, and a closed-form analytical solution is proposed. A general linear-elastic solution is first proposed by decoupling the elastic governing equations from one another through the so-called plane assumption. Based on this general solution, an axisymmetric problem and a plane strain problem are formulated. These are applied to common bilayered material systems such as: (1) concrete adhesive anchors, (2) material coatings, (3) asphalt pavements, and (4) layered sedimentary rocks. The stress and displacement fields determined by this analytical analysis are validated through the use of finite element models. Through the correspondence principle, the linear-elastic solution is extended to consider time-dependent viscoelastic material properties, thus facilitating the analysis of adhesive anchors and asphalt pavements while incorporating their viscoelastic material behavior. 
Furthermore, the elastic stress analysis can explain the fracturing phenomenon of material coatings, pavements, and layered rocks, successfully predicting their fracture saturation ratio, that is, the ratio of fracture spacing to the thickness of the weak layer beyond which an increase in load will not cause any new fractures to form. Moreover, these specific material systems are examined in the context of existing and novel experimental results, further demonstrating the advantage of the proposed stress transfer analysis. This research provides a closed-form stress solution for various structural systems that is applied to different failure analyses. The versatility of this method lies in the flexibility and ease with which the stress and displacement field results can be applied to existing stress- or displacement-based structural failure criteria. As presented, this analysis can be directly used to: (1) design adhesive anchoring systems for long-term creep loading, (2) evaluate the fracture mechanics behind bilayered material coatings and pavement overlay systems, and (3) determine the fracture spacing to layer thickness ratio of layered sedimentary rocks. As is shown in the four material systems presented, this general solution has far-reaching applications in facilitating the design and analysis of typical bilayered structural systems.

  14. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-associated preventive maintenance. The usual Weibull distribution is, however, not capable of modeling the complete lifecycle of a system with a bathtub-shaped failure rate function. In this paper, a failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
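    The abstract's central point, that a two-parameter Weibull hazard is monotone and therefore cannot by itself reproduce a full bathtub curve, can be illustrated with a short sketch (parameter values are arbitrary):

```python
def weibull_hazard(t, beta, eta):
    """Hazard (instantaneous failure rate) of a two-parameter Weibull
    distribution: h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# beta < 1: decreasing hazard (early "infant mortality" phase)
# beta = 1: constant hazard (useful life, exponential case)
# beta > 1: increasing hazard (wear-out phase)
# No single (beta, eta) pair yields all three phases, hence the need
# for bathtub-shaped extensions over the full equipment lifecycle.
early = [weibull_hazard(t, 0.5, 100.0) for t in (10, 20, 40)]
wear  = [weibull_hazard(t, 3.0, 100.0) for t in (10, 20, 40)]
assert early[0] > early[1] > early[2]   # monotone decreasing
assert wear[0] < wear[1] < wear[2]      # monotone increasing
```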

  15. Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1980-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism, nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.

  16. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in the resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.
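    As a point of comparison for predictor-driven checkpointing, the regular-interval baseline the abstract mentions is commonly set with Young's classical first-order approximation; the sketch below uses hypothetical checkpoint costs and MTTF values:

```python
import math

def young_interval(checkpoint_cost_s, mttf_s):
    """Young's first-order approximation of the optimal interval
    between periodic checkpoints: sqrt(2 * C * MTTF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mttf_s)

# Hypothetical numbers: 10-minute checkpoint cost, 24-hour MTTF.
tau = young_interval(600.0, 86_400.0)        # ~10,000 s between checkpoints

# A predictor that shortens the effective MTTF estimate ahead of an
# anticipated failure would shrink the interval accordingly.
tau_alert = young_interval(600.0, 3_600.0)   # much shorter under an alert
assert tau_alert < tau
```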

  17. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  18. Automated Mixed Traffic Vehicle (AMTV) technology and safety study

    NASA Technical Reports Server (NTRS)

    Johnston, A. R.; Peng, T. K. C.; Vivian, H. C.; Wang, P. K.

    1978-01-01

    Technology and safety related to the implementation of an Automated Mixed Traffic Vehicle (AMTV) system are discussed. System concepts and technology status were reviewed, and areas where further development is needed are identified. Failure and hazard modes were also analyzed and methods for prevention were suggested. The results presented are intended as a guide for further efforts in AMTV system design and technology development for both near-term and long-term applications. The AMTV systems discussed include a low speed system, and a hybrid system consisting of low speed sections and high speed sections operating in a semi-guideway. The safety analysis identified hazards that may arise in a properly functioning AMTV system, as well as hardware failure modes. Safety-related failure modes were emphasized. A risk assessment was performed in order to create a priority order, and significant hazards and failure modes were summarized. Corrective measures were proposed for each hazard.

  19. PV System Component Fault and Failure Compilation and Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  20. Integrating FMEA in a Model-Driven Methodology

    NASA Astrophysics Data System (ADS)

    Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno

    2016-08-01

    Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system, by integrating it into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation and effects from SysML design models, by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.

  1. Evaluation of high temperature structural adhesives for extended service. [supersonic cruise aircraft research

    NASA Technical Reports Server (NTRS)

    Hill, S. G.

    1981-01-01

    Eight different Ti-6Al-4V surface treatments were investigated for each of 10 candidate resins. Primers (two for each resin) were studied for appropriate cure and thickness, and initial evaluation of bond joints began using various combinations of the adhesive resins and surface treatments. Surface failure areas of bonded titanium coupons were analyzed by electron microscopy and surface chemical analysis techniques. Results of surface characterization and failure analysis are described for lap shear bond joints occurring with adhesive systems consisting of: (1) LARC-13 adhesive, Pasa-Jell surface treatment; (2) LARC-13 adhesive, 10-volt CAA treatment; (3) PPQ adhesive, 10-volt CAA treatment; and (4) PPQ adhesive, 5-volt CAA treatment. The failure analysis concentrated on the 10,000-hr 505 K (450 F) exposed specimens, which exhibited adhesive failure. Environmental exposure data being generated on the PPQ 10-volt CAA and the LARC-TPI 10-volt CAA adhesive systems are included.

  2. Model 0A wind turbine generator FMEA

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lalli, Vincent R.

    1989-01-01

    The results of Failure Modes and Effects Analysis (FMEA) conducted for the Wind Turbine Generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. The single-point failures were eliminated for most of the systems. The blade system was the only exception. The qualitative probability of a blade separating was estimated at level D-remote. Many changes were made to the hardware as a result of this analysis. The most significant change was the addition of the safety system. Operational experience and need to improve machine availability have resulted in subsequent changes to the various systems which are also reflected in this FMEA.

  3. Analysis of rural intersection accidents caused by stop sign violation and failure to yield the right-of-way

    DOT National Transportation Integrated Search

    2000-09-01

    The objectives of this study were to (1) identify the factors that contribute to accidents caused by failure to stop and failure to yield the right-of-way at rural two-way stop-controlled intersections on the state highway system, and (2) determine w...

  4. Risk Analysis Methods for Deepwater Port Oil Transfer Systems

    DOT National Transportation Integrated Search

    1976-06-01

    This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessmen...

  5. Remote monitoring of LED lighting system performance

    NASA Astrophysics Data System (ADS)

    Thotagamuwa, Dinusha R.; Perera, Indika U.; Narendran, Nadarajah

    2016-09-01

    The concept of connected lighting systems using LED lighting for the creation of intelligent buildings is becoming attractive to building owners and managers. In this application, the two most important parameters include power demand and the remaining useful life of the LED fixtures. The first enables energy-efficient buildings and the second helps building managers schedule maintenance services. The failure of an LED lighting system can be parametric (such as lumen depreciation) or catastrophic (such as complete cessation of light). Catastrophic failures in LED lighting systems can create serious consequences in safety critical and emergency applications. Therefore, both failure mechanisms must be considered and the shorter of the two must be used as the failure time. Furthermore, because of significant variation between the useful lives of similar products, it is difficult to accurately predict the life of LED systems. Real-time data gathering and analysis of key operating parameters of LED systems can enable the accurate estimation of the useful life of a lighting system. This paper demonstrates the use of a data-driven method (Euclidean distance) to monitor the performance of an LED lighting system and predict its time to failure.
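    A minimal sketch of the data-driven Euclidean-distance idea, with hypothetical normalized features and an assumed degradation threshold (not the paper's actual parameters):

```python
import math

def euclidean_distance(current, baseline):
    """Distance between a current feature vector and the healthy
    baseline; growth over time indicates parametric degradation."""
    return math.sqrt(sum((c - b) ** 2 for c, b in zip(current, baseline)))

# Hypothetical normalized features: [relative lumen output, forward
# voltage, heat-sink temperature]; baseline taken at installation.
baseline = [1.00, 1.00, 1.00]
readings = {0:      [1.00, 1.00, 1.01],
            5_000:  [0.96, 1.02, 1.05],
            10_000: [0.88, 1.06, 1.12]}   # operating hours -> features

distances = {h: euclidean_distance(v, baseline) for h, v in readings.items()}

# Extrapolating the distance trend toward a failure threshold yields a
# time-to-failure estimate; here a simple threshold check flags concern.
THRESHOLD = 0.10
degraded = [h for h, d in sorted(distances.items()) if d > THRESHOLD]
```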

  6. TU-FG-201-01: 18-Month Clinical Experience of a Linac Daily Quality Assurance (QA) Solution Using Only EPID and OBI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, B; Sun, B; Yaddanapudi, S

    Purpose: To describe the clinical use of a Linear Accelerator (Linac) DailyQA system with only EPID and OBI, to assess its reliability over an 18-month period, and to improve the robustness of this system based on QA failure analysis. Methods: A DailyQA solution utilizing an in-house designed phantom, combined EPID and OBI image acquisitions, and a web-based data analysis and reporting system was commissioned and used in our clinic to measure geometric, dosimetric and imaging components of a Varian Truebeam Linac. During an 18-month period (335 working days), the Daily QA results, including the output constancy, beam flatness and symmetry, uniformity, TPR20/10, and MV and kV imaging quality, were collected and analyzed. For the output constancy measurement, an independent monthly QA system with an ionization chamber (IC) and annual/incidental TG-51 measurements with an ADCL IC were performed and cross-compared to the Daily QA system. Thorough analyses were performed on the recorded QA failures to evaluate machine performance, optimize the data analysis algorithm, adjust the tolerance settings and improve the training procedure to prevent future failures. Results: A clinical workflow including beam delivery, data analysis, QA report generation and physics approval was established and optimized to suit daily clinical operation. The output tests over the 335-working-day period cross-correlated with the monthly QA system within 1.3% and with TG-51 results within 1%. QA passed on the first attempt on 236 of 335 days. Based on the QA failure analysis, the Gamma criterion was revised from (1%, 1 mm) to (2%, 1 mm), considering both QA accuracy and efficiency. The data analysis algorithm was improved to handle multiple entries for a repeated test. Conclusion: We described our 18-month clinical experience with a novel DailyQA system using only EPID and OBI. The long-term data presented demonstrate that the system is suitable and reliable for Linac daily QA.

  7. Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Holanda, R.

    1983-01-01

    A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65-hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post-test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.

  8. Failure mode and effects analysis of witnessing protocols for ensuring traceability during IVF.

    PubMed

    Rienzi, Laura; Bariani, Fiorenza; Dalla Zorza, Michela; Romano, Stefania; Scarica, Catello; Maggiulli, Roberta; Nanni Costa, Alessandro; Ubaldi, Filippo Maria

    2015-10-01

    Traceability of cells during IVF is a fundamental aspect of treatment, and involves witnessing protocols. Failure mode and effects analysis (FMEA) is a method of identifying real or potential breakdowns in processes, and allows strategies to mitigate risks to be developed. To examine the risks associated with witnessing protocols, an FMEA was carried out in a busy IVF centre, before and after implementation of an electronic witnessing system (EWS). A multidisciplinary team was formed and moderated by human factors specialists. Possible causes of failures, and their potential effects, were identified, and a risk priority number (RPN) was calculated for each failure. A second FMEA analysis was carried out after implementation of an EWS. The IVF team identified seven main process phases, 19 associated process steps and 32 possible failure modes. The highest RPN was 30, confirming the relatively low risk that mismatches may occur in IVF when a manual witnessing system is used. The introduction of the EWS allowed a reduction in the moderate-risk failure modes by two-thirds (highest RPN = 10). In our experience, FMEA is effective in supporting multidisciplinary IVF groups to understand the witnessing process, identify critical steps and plan changes in practice to enable safety to be enhanced.
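    The RPN computation underlying an FMEA of this kind is simply the product of the three scores; the failure modes and ratings below are hypothetical illustrations, not the study's data:

```python
def rpn(occurrence, severity, detectability):
    """Risk priority number: the product of occurrence, severity and
    lack-of-detectability scores (each commonly rated 1-10)."""
    return occurrence * severity * detectability

# Hypothetical witnessing failure modes (not the study's actual data):
# name -> (occurrence, severity, lack of detectability)
failure_modes = {
    "mislabeled dish at fertilization": (2, 10, 3),
    "unwitnessed transfer step":        (3, 8, 2),
    "illegible handwritten label":      (5, 6, 1),
}

# Rank failure modes by RPN to prioritize mitigation effort.
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
# A mitigation such as an electronic witnessing system lowers O or D,
# so recomputing RPN afterwards quantifies the risk reduction.
```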

  9. Evaluation of a fault tolerant system for an integrated avionics sensor configuration with TSRV flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1985-01-01

    The performance analysis results of a fault-inferring nonlinear detection system (FINDS), using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment, are presented. First, a statistical analysis of the flight-recorded sensor data was made in order to determine the characteristics of sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under the real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five-minute flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.

  10. Independent Orbiter Assessment (IOA): Assessment of the active thermal control system

    NASA Technical Reports Server (NTRS)

    Sinclair, S. K.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Active Thermal Control System (ATCS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the available NASA FMEA/CIL data. Discrepancies from the comparison were documented, and where enough information was available, recommendations for resolution of the discrepancies were made. This report documents the results of that comparison for the Orbiter ATCS hardware. The IOA product for the ATCS independent analysis consisted of 310 failure mode worksheets that resulted in 101 potential critical items (PCI) being identified. A comparison was made to the available NASA data which consisted of 252 FMEAs and 109 CIL items.

  11. Medication management strategies used by older adults with heart failure: A systems-based analysis.

    PubMed

    Mickelson, Robin S; Holden, Richard J

    2017-09-01

    Older adults with heart failure use strategies to cope with the constraining barriers impeding medication management. Strategies are behavioral adaptations that allow goal achievement despite these constraining conditions. When strategies do not exist, or are ineffective or maladaptive, medication performance and health outcomes are at risk. While constraints to medication adherence are described in the literature, the strategies used by patients to manage medications are less well described or understood. Guided by cognitive engineering concepts, the aim of this study was to describe and analyze the strategies used by older adults with heart failure to achieve their medication management goals. This mixed-methods study employed an empirical strategies analysis method to elicit the medication management strategies used by older adults with heart failure. Observation and interview data collected from 61 older adults with heart failure and 31 caregivers were analyzed using qualitative content analysis to derive categories, patterns and themes within and across cases. Thematic sub-categories derived from the data described planned and ad hoc methods of strategic adaptation. Stable strategies proactively adjusted the medication management process, the environment, or the patients themselves. Patients applied situational strategies (planned or ad hoc) to irregular or unexpected situations. Medication non-adherence was a strategy employed when life goals conflicted with medication adherence. The health system was a source of constraints without providing commensurate strategies. Patients strove to control their medication system and achieve goals using adaptive strategies. Future patient self-management research can benefit from methods and theories used to study professional work, such as strategies analysis.

  12. Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.

    ERIC Educational Resources Information Center

    Spitzer, Dean

    1980-01-01

    Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
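    Under the usual independence assumption, the logic gates described above combine basic-event probabilities as follows (the example tree and probabilities are hypothetical):

```python
from functools import reduce

def and_gate(probs):
    """AND gate: the output event occurs only if all inputs fail
    (independent basic events assumed)."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """OR gate: the output event occurs if any input fails
    (independent basic events assumed)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical tree: top = OR(pump failure, AND(valve A, valve B)).
p_pump, p_valve_a, p_valve_b = 0.01, 0.05, 0.05
p_top = or_gate([p_pump, and_gate([p_valve_a, p_valve_b])])
# The redundant valve pair contributes only 0.0025, so the single
# pump dominates the top-event probability.
```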

  13. Logic analysis of complex systems by characterizing failure phenomena to achieve diagnosis and fault-isolation

    NASA Technical Reports Server (NTRS)

    Wong, J. T.; Andre, W. L.

    1981-01-01

    A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure: a partially ordered set. This is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.

  14. SU-E-T-421: Failure Mode and Effects Analysis (FMEA) of Xoft Electronic Brachytherapy for the Treatment of Superficial Skin Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoisak, J; Manger, R; Dragojevic, I

    Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode's probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high-risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.

  15. AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment

    DTIC Science & Technology

    2014-10-01

    Report topics include an FTA analysis generator, mapping to the OpenFTA format file, mapping to a generic XML format, and AADL and FTA mapping rules. Safety assessment activities covered include Preliminary System Safety Assessment (PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs).

  16. Control system failure monitoring using generalized parity relations. M.S. Thesis Interim Technical Report

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan Mauritz

    1991-01-01

    Many applications require that a control system be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single- and double-sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
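    The simplest direct-redundancy form of a parity relation, two sensors that should agree, with a residual and threshold test, can be sketched as follows; the generalized relations in the paper are identified from input-output models, and the signal values and injected bias below are hypothetical:

```python
def parity_residual(y1, y2):
    """Simplest parity relation: two sensors measuring the same
    quantity should agree, so their difference is a residual that
    stays near zero under no-failure conditions."""
    return y1 - y2

def detect(residuals, threshold):
    """Flag a failure when the residual magnitude exceeds a
    threshold chosen from the sensors' noise level."""
    return [abs(r) > threshold for r in residuals]

# Hypothetical duplicated sensor pair: a bias failure is injected
# into sensor 2 halfway through the record, as in the simulated
# Mini-Mast failure experiments.
y1 = [1.00, 1.02, 0.98, 1.01, 1.00, 0.99]
y2 = [1.01, 1.00, 0.99, 1.51, 1.52, 1.48]   # +0.5 bias from sample 3

flags = detect([parity_residual(a, b) for a, b in zip(y1, y2)],
               threshold=0.2)
```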

  17. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems become widely applied to data processing, the application failure probability has become an important indicator of application quality and an important consideration for operators. This paper presents a task-based method for analyzing the application failure probability in optical grid. The failure probability of an entire application can then be quantified, and the reduction in application failure probability achieved by different backup strategies can be compared, so that the differing requirements of different clients can be satisfied. In optical grid, when an application modeled as a DAG (directed acyclic graph) is executed under different backup strategies, both the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in optical grid.
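
In the simplest independent-failure case, the task-based accounting described above reduces to a product over tasks. A minimal sketch, assuming independent task failures and a backup strategy that duplicates a task on a second resource; the probabilities and the three-task application are invented for illustration, and the paper's MDSA scheduling algorithm itself is not reproduced here.

```python
def task_failure(p_primary, p_backup=None):
    """Failure probability of one task; with a backup resource the task
    fails only if primary and backup both fail (independence assumed)."""
    return p_primary * p_backup if p_backup is not None else p_primary

def application_failure(task_probs):
    """The application fails unless every task succeeds: 1 - prod(1 - p_i)."""
    ok = 1.0
    for p in task_probs:
        ok *= 1.0 - p
    return 1.0 - ok

# Three tasks; the second gets a backup under a hypothetical strategy
no_backup = [0.01, 0.05, 0.02]
with_backup = [0.01, task_failure(0.05, 0.05), 0.02]

print(application_failure(no_backup))    # ~0.078
print(application_failure(with_backup))  # smaller, at the cost of a spare resource
```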

  18. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) is presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.

  19. The Shuttle processing contractors (SPC) reliability program at the Kennedy Space Center - The real world

    NASA Astrophysics Data System (ADS)

    McCrea, Terry

    The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.

  20. Load fatigue performance of four implant-abutment interface designs: effect of torque level and implant system.

    PubMed

    Quek, H C; Tan, Keson B; Nicholls, Jack I

    2008-01-01

    Biomechanical load-fatigue performance data on single-tooth implant systems with different implant-abutment interface designs are lacking in the literature. This study evaluated the load fatigue performance of 4 implant-abutment interface designs (Brånemark-CeraOne; 3i Osseotite-STA abutment; Replace Select-Easy abutment; and Lifecore Stage-1-COC abutment system). The number of load cycles to fatigue failure of the 4 implant-abutment designs was tested with a custom rotational load fatigue machine. The effect of increasing and decreasing the tightening torque by 20% on the load fatigue performance was also investigated. Three different tightening torque levels (recommended torque, -20% recommended torque, +20% recommended torque) were applied to the 4 implant systems. There were 12 test groups with 5 samples in each group. The rotational load fatigue machine subjected specimens to a sinusoidally applied 35 Ncm bending moment at a test frequency of 14 Hz. The number of cycles to failure was recorded. A cutoff of 5 × 10^6 cycles was applied as an upper limit. There were 2 implant failures and 1 abutment screw failure in the Brånemark group. Five abutment screw failures and 4 implant failures were recorded for the 3i system. The Replace Select system had 1 implant failure. Five cone screw failures were noted for the Lifecore system. Analysis of variance revealed no statistically significant difference in load cycles to failure for the 4 different implant-abutment systems torqued at the recommended torque level. A statistically significant difference was found between the -20% torque group and the +20% torque group (P < .05) for the 3i system. Load fatigue performance and failure location are system specific and related to the design characteristics of the implant-abutment combination. It appeared that if the implant-abutment interface was maintained, load fatigue failure would occur at the weakest point of the implant. It is important to use the torque level recommended by the manufacturer.

  1. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
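
The estimation machinery the abstract describes can be sketched as follows. This is a minimal illustration assuming time-terminated testing with a constant failure rate, using a normal approximation to the Poisson count for the confidence interval (the report's exact procedures, e.g. chi-square-based intervals, may differ), and the failure counts are invented.

```python
import math

def failure_rate_ci(k, T, z=1.645):
    """Point estimate and approximate two-sided 90% confidence interval
    for a constant failure rate, given k failures in T component-hours.
    Uses a normal approximation to the Poisson count; exact intervals
    would use chi-square percentiles instead."""
    lam = k / T
    half = z * math.sqrt(k) / T
    return lam, max(lam - half, 0.0), lam + half

def rate_ratio(k1, T1, k2, T2):
    """Simple ratio of two groups' estimated failure rates, for judging
    whether the groups perform similarly or differently."""
    return (k1 / T1) / (k2 / T2)

# Illustrative counts: group A has 6 failures in 1200 h, group B has 2 in 1500 h
lam, lo, hi = failure_rate_ci(6, 1200.0)
print(f"group A: {lam:.5f}/h, 90% CI ({lo:.5f}, {hi:.5f})")
print(f"rate ratio A/B: {rate_ratio(6, 1200.0, 2, 1500.0):.2f}")
```

The same estimators apply unchanged to system-level testing or on-orbit monitoring, with T reinterpreted as accumulated operating time.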

  2. Integrated failure detection and management for the Space Station Freedom external active thermal control system

    NASA Technical Reports Server (NTRS)

    Mesloh, Nick; Hill, Tim; Kosyk, Kathy

    1993-01-01

    This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.

  3. a New Method for Fmeca Based on Fuzzy Theory and Expert System

    NASA Astrophysics Data System (ADS)

    Byeon, Yoong-Tae; Kim, Dong-Jin; Kim, Jin-O.

    2008-10-01

    Failure Mode Effects and Criticality Analysis (FMECA) is one of the most widely used methods in modern engineering systems to investigate potential failure modes and their severity upon the system. FMECA evaluates the criticality and severity of each failure mode and visualizes the risk-level matrix by putting those indices on the column and row axes, respectively. Generally, those indices are determined subjectively by experts and operators. However, this process inevitably involves uncertainty. In this paper, a method for eliciting expert opinions that accounts for their uncertainty is proposed to evaluate criticality and severity. In addition, a fuzzy expert system is constructed in order to determine the crisp value of the risk level for each failure mode. Finally, an illustrative example system is analyzed in a case study. The results are worth considering when deciding the proper policies for each component of the system.
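
The fuzzy machinery such an expert system rests on, triangular membership functions, min-style rule firing, and centroid defuzzification into a crisp risk level, can be sketched as follows. The fuzzy sets, the 0-10 scales, and the single rule are illustrative assumptions, not those of the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def crisp_risk(severity, criticality):
    # One Mamdani-style rule: risk is "high" to the degree that both
    # severity and criticality are "high" (min as the AND operator).
    fire = min(tri(severity, 5, 8, 10), tri(criticality, 5, 8, 10))
    # Centroid of the clipped "high risk" output set on a 0-10 scale,
    # approximated on a coarse grid.
    xs = [i / 10 for i in range(0, 101)]
    ws = [min(fire, tri(x, 5, 8, 10)) for x in xs]
    total = sum(ws)
    return sum(x * w for x, w in zip(xs, ws)) / total if total else 0.0

print(crisp_risk(7.0, 9.0))  # a crisp risk level on the 0-10 scale
```

A full FMECA expert system would carry a rule base covering every severity/criticality combination; the defuzzification step is the same.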

  4. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have wide prediction ranges. The decision-maker therefore cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. In this technique, fuzzy set theory is used to quantify uncertainties, a fault tree is used for system modeling, the lambda-tau method is used to formulate mathematical expressions for the failure/repair rates of the system, and a genetic algorithm is used to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed, and the results are compared with the existing technique. The components of the robotic system follow an exponential failure distribution, i.e., constant failure rates. Sensitivity analysis is also performed, and the impact on system mean time between failures (MTBF) is addressed by varying the other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
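
The data-fuzzification step the hybridized technique starts from can be illustrated with triangular fuzzy numbers and alpha-cut interval arithmetic; the failure-rate value and spread below are invented, and the fault-tree, lambda-tau, and genetic-algorithm stages of the technique are not reproduced here.

```python
def tfn(m, spread):
    """Triangular fuzzy number (a, m, c) from a point value and +/- spread."""
    return (m - spread, m, m + spread)

def alpha_cut(t, alpha):
    """Interval [lo, hi] of a triangular fuzzy number at membership alpha."""
    a, m, c = t
    return (a + alpha * (m - a), c - alpha * (c - m))

lam = tfn(0.002, 0.0005)   # failures per hour, expert-suggested 25% spread

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(lam, alpha)
    # MTBF = 1/lambda; the reciprocal reverses the interval endpoints
    print(f"alpha={alpha:.1f}  MTBF in [{1 / hi:.0f}, {1 / lo:.0f}] hours")
```

At alpha = 1 the interval collapses to the crisp estimate; widening spreads (the wide prediction ranges the abstract criticizes) show up directly as wider MTBF intervals at low alpha.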

  5. Systems Theoretic Process Analysis Applied to an Offshore Supply Vessel Dynamic Positioning System

    DTIC Science & Technology

    2016-06-01

    additional safety issues that were either not identified or inadequately mitigated through the use of Fault Tree Analysis and Failure Modes and...Techniques; 1.3.1. Fault Tree Analysis; 3.2. Fault Tree Analysis Comparison

  6. Conversion of Questionnaire Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Danny H; Elwood Jr, Robert H

    During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness, so the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near-zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log-normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic-event risk-of-failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data are absent.
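
The adjectival-to-numeric conversion described above can be sketched as a log-scale lookup table. The specific ratings and probabilities here are illustrative placeholders in the spirit of NUREG/CR-1278-style human-error scales, not the survey's actual conversion values.

```python
import math

# Hypothetical conversion table: each step down the performance scale
# raises the basic-event failure probability by one order of magnitude.
RATING_TO_PFAIL = {
    "well":              1e-4,  # near-"perfect" performance, near-zero risk
    "adequate":          1e-3,
    "needs improvement": 1e-2,
    "not performed":     1.0,   # task in a state of failure
}

def basic_event_probability(rating):
    """Crisp failure probability for a fault-tree basic event."""
    return RATING_TO_PFAIL[rating.lower()]

# The graded ratings sit at equal steps on a log10 scale
steps = [math.log10(RATING_TO_PFAIL[r])
         for r in ("well", "adequate", "needs improvement")]
print([round(s) for s in steps])  # [-4, -3, -2]
```

These basic-event values would then feed the fault-tree gates of the PRA to produce a total system risk figure.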

  7. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD) by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlation between failure mechanisms, soft failures, and hard failures. Considering the correlation of multiple failure modes, the reliability loss of one failure mode to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of reliability index values, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.

  8. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool, as the electrical and the mechanical systems are closely related. Electrical problems, like phase unbalance and stator winding insulation failures, can at times lead to vibration problems, and at the same time mechanical failures, like bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor, it is shown that a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.

  9. Factors predicting mortality in severe acute pancreatitis.

    PubMed

    Compañy, L; Sáez, J; Martínez, J; Aparicio, J R; Laveda, R; Griñó, P; Pérez-Mateo, M

    2003-01-01

    Acute pancreatitis (AP) is a common disorder in which ensuing serious complications may lead to a fatal outcome in patients. To describe a large series of patients with severe AP (SAP) who were admitted to our hospital and to identify factors predicting mortality. In a retrospective study, all patients with SAP diagnosed between February 1996 and October 2000 according to the Atlanta criteria were studied. Out of a total of 363 AP patients, 67 developed SAP. The mean age of the patients was 69; the commonest etiology was biliary; 55.2% developed necrosis; the commonest systemic complication was respiratory failure (44.7%), followed by acute renal failure (35.8%) and shock (20.9%). A total of 31.3% of the patients died. By univariate analysis, factors significantly related to mortality were age, upper digestive tract bleeding, acute renal failure, respiratory failure and shock; pseudocysts, however, seemed to have a protective effect. By multivariate analysis, independent prognostic factors were age, acute renal failure and respiratory failure. Patients with SAP mainly died due to systemic complications, especially acute renal failure and respiratory failure. Necrosis (in the absence or presence of infection) was not correlated with increased mortality. A pseudocyst was found to be a protective factor, probably because the definition itself led to the selection of patients who had survived multiorgan failure. Copyright 2003 S. Karger AG, Basel and IAP

  10. Metabolomic Analysis in Heart Failure.

    PubMed

    Ikegami, Ryutaro; Shimizu, Ippei; Yoshida, Yohko; Minamino, Tohru

    2017-12-25

    It is thought that at least 6,500 low-molecular-weight metabolites exist in humans, and these metabolites have various important roles in biological systems in addition to proteins and genes. Comprehensive assessment of endogenous metabolites is called metabolomics, and recent advances in this field have enabled us to understand the critical role of previously unknown metabolites or metabolic pathways in the cardiovascular system. In this review, we will focus on heart failure and how metabolomic analysis has contributed to improving our understanding of the pathogenesis of this critical condition.

  11. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in correctly identifying the mode of a failure may arise. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
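
The core of a GLR detector for a bias-type failure can be sketched as follows: under no failure the (Kalman-filter) innovations are zero-mean white noise, and the GLR statistic maximizes a normalized squared sum of innovations over candidate failure times. The noise level and injected bias below are illustrative, not from the thesis.

```python
import random

random.seed(1)

# Simulate scalar innovations: white noise, then a bias failure at t = 120
sigma = 1.0
r = [random.gauss(0.0, sigma) for _ in range(200)]
for t in range(120, 200):
    r[t] += 1.5

def glr(residuals, sigma):
    """Return (max GLR statistic, most likely failure time) for a
    constant-bias failure hypothesis on scalar white innovations."""
    best, best_theta = 0.0, None
    n = len(residuals)
    for theta in range(n):
        s = sum(residuals[theta:])
        stat = s * s / (sigma ** 2 * (n - theta))
        if stat > best:
            best, best_theta = stat, theta
    return best, best_theta

stat, theta = glr(r, sigma)
print(stat, theta)  # large statistic, estimated onset near t = 120
```

Comparing `stat` against a threshold trades false alarms against missed detections; distinguishing *which* failure mode occurred requires a bank of such hypotheses, which is where the identification problems the abstract mentions arise.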

  12. Risk Assessment Planning for Airborne Systems: An Information Assurance Failure Mode, Effects and Criticality Analysis Methodology

    DTIC Science & Technology

    2012-06-01

    Visa and MasterCard Investigate Data Breach (March 30, 2012): Visa and MasterCard are investigating whether a data security breach at one of the main companies that... (2012, March 30). MasterCard and Visa Investigate Data Breach. New York Times. Stamatis, D. (2003). Failure Mode Effect Analysis: FMEA from Theory to Execution

  13. The Parable of the Boiled Safety Professional

    NASA Technical Reports Server (NTRS)

    Shivers, Charles H.

    2011-01-01

    Common and unique issues contribute to system failures. This paper touches on the concept of drift to failure as a cautionary message. Managers and leaders, design team members, fabricators and assemblers, analysis and assurance personnel, and others associated with operating and maintaining systems need to pay attention to the individual and collective behaviors that might indicate slips in rigor or focus, or decisions that might eat away at safety margins as a system drifts toward failure. Corrections to drift made during the design and development phases may efficiently prevent or mitigate drift problems in the operational phase.

  14. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Technical Reports Server (NTRS)

    Flores, Melissa; Malin, Jane T.

    2013-01-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component s functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  15. Tribology symposium 1995. PD-Volume 72

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    After the keynote presentation by Professor Aaron Cohen of Texas A and M University, entitled Processes Used in Design, the program is divided into five major sessions: Research and Development -- Recent research and development of tribological components; Tribology in Manufacturing -- The impact of tribology on modern manufacturing; Design/Design Representation -- Aspects of design related to tribological systems; Tribo-Chemistry/Tribo-Physics -- Discussion of chemical and physical behavior of substances as related to tribology; and Failure Analysis -- An analysis of failure, failure detection, and failure monitoring as related to manufacturing processes. Papers have been processed separately for inclusion in the database.

  16. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Astrophysics Data System (ADS)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  17. Accuracy of a Rationally Derived Method for Identifying Treatment Failure in Children and Adolescents

    ERIC Educational Resources Information Center

    Bishop, Matthew J.; Bybee, Taige S.; Lambert, Michael J.; Burlingame, Gary M.; Wells, M. Gawain; Poppleton, Landon E.

    2005-01-01

    Psychotherapy outcome can be enhanced by early identification of potential treatment failures before they leave treatment. In adults, compelling data are emerging that provide evidence that an early warning system that identifies potential treatment failures can be developed and applied to enhance outcome. The present study reports an analysis of…

  18. Application of Quality Management Tools for Evaluating the Failure Frequency of Cutter-Loader and Plough Mining Systems

    NASA Astrophysics Data System (ADS)

    Biały, Witold

    2017-06-01

    Failure frequency in the mining process, with a focus on the mining machine, has been presented and illustrated by the example of two coal mines. Two mining systems have been subjected to analysis: a cutter-loader and a plough system. In order to reduce the costs generated by failures, maintenance teams should regularly make sure that the machines are used and operated in a rational and effective way. Such activities will allow downtimes to be reduced and, in consequence, will increase the effectiveness of a mining plant. The evaluation of mining machines' failure frequency contained in this study has been based on one of the traditional quality management tools, the Pareto chart.
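
The Pareto-chart arithmetic behind such an evaluation is simple to sketch: sort failure causes by frequency, accumulate percentages, and flag the "vital few" causes covering roughly 80% of failures. The cause names and counts below are invented, not the coal-mine data from the study.

```python
# Hypothetical failure counts per cause for one mining system
failures = {
    "haulage chain": 42,
    "cutting drum": 25,
    "hydraulics": 13,
    "electrics": 8,
    "other": 4,
}

total = sum(failures.values())
cum = 0
vital_few = []
for cause, count in sorted(failures.items(), key=lambda kv: -kv[1]):
    cum += count
    share = 100.0 * cum / total
    print(f"{cause:15s} {count:3d}  cumulative {share:5.1f}%")
    # "Vital few": the leading causes up to the ~80% cumulative mark
    if share <= 80.0 or not vital_few:
        vital_few.append(cause)

print("vital few:", vital_few)
```

Maintenance effort is then concentrated on the vital few, which is exactly the downtime-reduction argument the abstract makes.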

  19. Independent Orbiter Assessment (IOA): Assessment of the backup flight system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Prust, E. E.; Ewell, J. J., Jr.; Hinsdale, L. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Backup Flight System (BFS) hardware, generating draft failure modes and Potential Critical Items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed NASA Post 51-L FMEA/CIL baseline. A resolution of each discrepancy from the comparison is provided through additional analysis as required. This report documents the results of that comparison for the Orbiter BFS hardware. The IOA product for the BFS analysis consisted of 29 failure mode worksheets that resulted in 21 Potential Critical Items (PCI) being identified. This product was originally compared with the proposed NASA BFS baseline and subsequently compared with the applicable Data Processing System (DPS), Electrical Power Distribution and Control (EPD and C), and Displays and Controls NASA CIL items. The comparisons determined whether there were any results found by the IOA that were not in the NASA baseline. The original assessment determined there were numerous failure modes and potential critical items in the IOA analysis that were not contained in the NASA BFS baseline. Conversely, the NASA baseline contained three FMEAs (IMU, ADTA, and Air Data Probe) for CIL items that were not identified in the IOA product.

  20. Independent Orbiter Assessment (IOA): Assessment of the remote manipulator system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Remote Manipulator System (RMS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison for the Orbiter RMS hardware are documented. The IOA product for the RMS analysis consisted of 604 failure mode worksheets that resulted in 458 potential critical items being identified. Comparison was made to the NASA baseline, which consisted of 45 FMEAs and 321 CIL items. This comparison produced agreement on all but 154 FMEAs, which caused differences in 137 CIL items.

  1. Application of Failure Mode and Effect Analysis (FMEA), cause and effect analysis, and Pareto diagram in conjunction with HACCP to a corn curl manufacturing plant.

    PubMed

    Varzakas, Theodoros H; Arvanitoyannis, Ioannis S

    2007-01-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied for the risk assessment of corn curl manufacturing. A tentative approach of FMEA application to the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance both from an ethical and a legislative (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) point of view. The Preliminary Hazard Analysis and the Fault Tree Analysis were used to analyze and predict the failure modes occurring in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed towards optimizing the GMO detection potential of FMEA.

  2. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/remote manipulator system subsystem

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained in the NASA FMEA/CIL documentation. This report documents the results of the independent analysis of the EPD and C/RMS (both port and starboard) hardware. The EPD and C/RMS subsystem hardware provides the electrical power and power control circuitry required to safely deploy, operate, control, and stow or guillotine and jettison two (one port and one starboard) RMSs. The EPD and C/RMS subsystem is subdivided into the following functional divisions: Remote Manipulator Arm; Manipulator Deploy Control; Manipulator Latch Control; Manipulator Arm Shoulder Jettison; and Retention Arm Jettison. The IOA analysis process utilized available EPD and C/RMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based on the severity of the effect for each failure mode.

  3. Tree failures and accidents in recreation areas: a guide to data management for hazard control

    Treesearch

    Lee A. Paine; James W. Clarke

    1978-01-01

    A data management system has been developed for storage and retrieval of tree failure and hazard data, with provision for computer analyses and presentation of results in useful tables. This system emphasizes important relationships between tree characteristics, environmental factors, and the resulting hazard. The analysis programs permit easy selection of subsets of...

  4. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    ERIC Educational Resources Information Center

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  5. Independent Orbiter Assessment (IOA): Assessment of the reaction control system, volume 3

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Hartman, Dan W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the aft and forward Reaction Control System (RCS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter RCS hardware and EPD and C systems. Volume 3 continues the presentation of IOA worksheets.

  6. Independent Orbiter Assessment (IOA): Assessment of the reaction control system, volume 2

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Hartman, Dan W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the aft and forward Reaction Control System (RCS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter RCS hardware and EPD and C systems. Volume 2 continues the presentation of IOA worksheets.

  7. Analysis of failure and maintenance experiences of motor operated valves in a Finnish nuclear power plant

    NASA Astrophysics Data System (ADS)

    Simola, Kaisa; Laakso, Kari

    1992-01-01

    Eight years of operating experiences of 104 motor operated closing valves in different safety systems in nuclear power units were analyzed in a systematic way. The qualitative methods used were Failure Mode and Effect Analysis (FMEA) and Maintenance Effects and Criticality Analysis (MECA). These reliability engineering methods are commonly used in the design stage of equipment. The successful application of these methods for analysis and utilization of operating experiences was demonstrated.

  8. FMEA and RAM Analysis for the Multi Canister Overpack (MCO) Handling Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SWENSON, C.E.

    2000-06-01

    The Failure Modes and Effects Analysis and the Reliability, Availability, and Maintainability Analysis performed for the Multi-Canister Overpack Handling Machine (MHM) have shown that the current design provides a safe system, but the reliability of the system (primarily due to the complexity of the interlocks and permissive controls) is relatively low. No specific failure modes were identified with significant consequences to the public or significant expected impact to nearby workers. The overall reliability calculation for the MHM shows a 98.1 percent probability of operating for eight hours without failure, and an availability of the MHM of 90 percent. The majority of the reliability issues are found in the interlocks and controls. The availability of appropriate spare parts and maintenance personnel, coupled with well-written operating procedures, will play a more important role in successful mission completion for the MHM than in other, less complicated systems.
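    The reported figures can be tied together with a simple exponential-reliability sketch. The calculation below is illustrative only: it backs out the failure rate implied by the 98.1 percent eight-hour reliability and the repair time implied by 90 percent availability, assuming exponentially distributed failure and repair times (an assumption not stated in the report).

```python
import math

# Back out the failure rate implied by the reported 98.1 percent probability
# of an eight-hour failure-free run, assuming exponential time to failure.
def implied_failure_rate(reliability: float, hours: float) -> float:
    """Return lambda such that exp(-lambda * hours) == reliability."""
    return -math.log(reliability) / hours

lam = implied_failure_rate(0.981, 8.0)   # failures per hour
mtbf = 1.0 / lam                         # mean time between failures, hours

# Steady-state availability A = MTBF / (MTBF + MTTR) then gives the
# repair time implied by the reported 90 percent availability.
availability = 0.90
mttr = mtbf * (1.0 - availability) / availability
```

Under these assumptions the implied MTBF is on the order of a few hundred hours, consistent with the report's observation that reliability, rather than safety, is the design concern.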

  9. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

    This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis of the holdup tank system. ISFA is a simulation-based, qualitative, integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters, and simulate possible failure-propagation paths based on ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study indicates that it can be used early to study design aspects related to robustness and fault tolerance.

  10. Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung

    2017-04-01

    The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework that enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping alone. We installed the dike monitoring system on the Chiu-She Dike, located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework that accounts for the uncertainties in return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for failure modes such as overtopping, surface erosion, mass failure, toe sliding, and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.
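    A minimal sketch of how per-mode probabilities can be integrated into an overall failure probability, treating the modes as independent components of a series system (the dike fails if any mode occurs). The per-mode values below are hypothetical, not the study's results.

```python
def overall_failure_probability(mode_probs):
    """Series-system combination of independent failure-mode probabilities:
    the dike survives only if every mode fails to occur."""
    survival = 1.0
    for p in mode_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Hypothetical per-mode probabilities for one return-period flood
modes = {
    "overtopping": 0.02,
    "surface_erosion": 0.10,
    "mass_failure": 0.08,
    "toe_sliding": 0.30,
    "overturning": 0.25,
}
p_overall = overall_failure_probability(modes.values())
```

The combined probability always exceeds the largest single-mode probability, which is why an integrated figure near 50% can emerge even when no individual mode reaches that level.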

  11. User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems

    NASA Astrophysics Data System (ADS)

    Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue

    In this paper we investigate the reliability of general shared protection systems, i.e., M-for-N (M:N), which can typically be applied to various telecommunication network devices. We focus on the reliability perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution of the availability, and recursive computing algorithms for the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under certain conditions, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design not only of telecommunication network devices but also of other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
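    The availability of such a system can be sketched with a birth-death Markov chain. This is a simplified stand-in for the paper's closed-form analysis, assuming a single repair facility and instant replacement from the spare pool; all rates and the 4-unit/1-spare sizing are hypothetical.

```python
def shared_protection_unavailability(n, m, lam, mu):
    """Probability that more than m of the n + m units are failed, from the
    stationary distribution of a birth-death chain: failures arrive at rate
    active_units * lam, a single repair facility works at rate mu. While
    k <= m units are failed, all n service units stay active (a spare takes
    over instantly); beyond that, only n + m - k units remain active."""
    pis = [1.0]                       # unnormalized stationary probabilities
    for k in range(n + m):
        active = n if k <= m else n + m - k
        pis.append(pis[-1] * active * lam / mu)
    total = sum(pis)
    return sum(p for k, p in enumerate(pis) if k > m) / total

# Hypothetical rates: 4 service units, 1 spare, lam = 0.001/h, mu = 0.1/h
u = shared_protection_unavailability(4, 1, 0.001, 0.1)
```

With m = 0 and n = 1 the chain collapses to the classic single-unit unavailability λ/(λ+μ), a useful sanity check on the construction.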

  12. Risk factors for technical failure of endoscopic double self-expandable metallic stent placement by partial stent-in-stent method.

    PubMed

    Kawakubo, Kazumichi; Kawakami, Hiroshi; Toyokawa, Yoshihide; Otani, Koichi; Kuwatani, Masaki; Abe, Yoko; Kawahata, Shuhei; Kubo, Kimitoshi; Kubota, Yoshimasa; Sakamoto, Naoya

    2015-01-01

    Endoscopic double self-expandable metallic stent (SEMS) placement by the partial stent-in-stent (PSIS) method has been reported to be useful for the management of unresectable hilar malignant biliary obstruction. However, it is technically challenging, and the optimal SEMS for the procedure remains unknown. The aim of this study was to identify the risk factors for technical failure of endoscopic double SEMS placement for unresectable malignant hilar biliary obstruction (MHBO). Between December 2009 and May 2013, 50 consecutive patients with MHBO underwent endoscopic double SEMS placement by the PSIS method. We retrospectively evaluated the rate of successful double SEMS placement and identified the risk factors for technical failure. The technical success rate for double SEMS placement was 82.0% (95% confidence interval [CI]: 69.2-90.2). On univariate analysis, the rate of technical failure was high in patients with metastatic disease and unilateral placement. Multivariate analysis revealed that metastatic disease was a significant risk factor for technical failure (odds ratio: 9.63, 95% CI: 1.11-105.5). The subgroup analysis after double guidewire insertion showed that the rate of technical success was higher in the laser-cut type SEMS with a large mesh and thin delivery system than in the braided type SEMS with a small mesh and thick delivery system. Metastatic disease was a significant risk factor for technical failure of double SEMS placement for unresectable MHBO. The laser-cut type SEMS with a large mesh and thin delivery system might be preferable for the PSIS procedure. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  13. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, owing to the transfer of corrosive fluid or gas and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and are able to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering the prevailing uncertainties, where three failure modes, namely small leak, large leak, and rupture, are considered. 
Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is considered a series system of the detected and newly generated defects within that sub-system. A sensitivity analysis is also performed to determine the parameter(s) in the growth models to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of the statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when the failure probability for any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of the repair threshold values and the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared with the inspection and failure costs.
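    A minimal sketch of the ingredients named above: power-law depth growth, a small-leak criterion, and Poisson defect generation. The parameter values and the 80%-of-wall-thickness leak criterion are illustrative assumptions, not the study's fitted values.

```python
def defect_depth(t, a, b, t0=0.0):
    """Power-law growth of maximum defect depth: d(t) = a * (t - t0)**b."""
    return a * (t - t0) ** b

def small_leak(depth, wall_thickness, threshold=0.8):
    """A common 'small leak' criterion: depth exceeds a fraction of the wall."""
    return depth > threshold * wall_thickness

def expected_new_defects(rate_per_year, years):
    """Homogeneous Poisson process: expected number of newly generated defects."""
    return rate_per_year * years

# Hypothetical values: 15 years of growth on a 9.5 mm wall
d = defect_depth(t=15.0, a=0.35, b=0.9)
leak = small_leak(d, wall_thickness=9.5)
```

In the study these deterministic pieces sit inside a probabilistic wrapper: a, b, and the generation rate are treated as uncertain and updated from ILI data via MCMC, and failure probabilities come from propagating those distributions rather than from single point values as here.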

  14. Simulation Assisted Risk Assessment: Blast Overpressure Modeling

    NASA Technical Reports Server (NTRS)

    Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael

    2006-01-01

    A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwind effects.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early-stage breast cancer treatment, has recently been installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; and 3) scoring the potential failure modes using the risk priority number (RPN) system, based on the product of severity, frequency of occurrence, and detectability (each ranging from 1 to 10). An RPN of higher than 150 was set as the threshold for minimal concern of risk. For these high-risk failure modes, potential quality assurance procedures and risk control techniques were proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the simulation, treatment planning, and delivery stages, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. Thirty-four potential failure modes were analyzed in treatment planning, with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found, with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, based on the recommended actions, were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. 
Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
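    The RPN scoring used in the study can be sketched directly. The severity/occurrence/detectability values below are hypothetical, chosen only to illustrate a score at the 392 upper bound reported for the worst failure modes and a mitigated score below the 150 threshold.

```python
def rpn(severity, occurrence, detectability):
    """Risk priority number: the product of three 1-10 rankings."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA rankings must lie between 1 and 10")
    return severity * occurrence * detectability

THRESHOLD = 150  # failure modes scoring above this warrant mitigation

# Hypothetical scoring of a critical failure mode (e.g. breast-cup
# pressure loss), before and after a mitigation strategy is added
before = rpn(severity=8, occurrence=7, detectability=7)
after = rpn(severity=8, occurrence=2, detectability=3)
needs_action = before > THRESHOLD
```

Note that mitigation typically lowers occurrence and detectability rankings but not severity: the consequence of the failure is unchanged, it is just made rarer and easier to catch.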

  16. Heart failure services in the United Kingdom: rethinking the machine bureaucracy.

    PubMed

    Hawkins, Nathaniel M; Wright, David J; Capewell, Simon

    2013-01-20

    Poor outcomes and poor uptake of evidence-based therapies persist for patients with heart failure in the United Kingdom. We offer a strategic analysis of services, defining the context, organization, and objectives of the service before focusing on implementation and performance. Critical flaws in past service development and performance are apparent, a consequence of failed performance management, policy, and political initiative. The barriers to change and potential solutions are common to many health care systems. Integration, information, financing, incentives, innovation, and values: all must be challenged and improved if heart failure services are to succeed. Modern healthcare requires open adaptive systems, continually learning and improving. The system also needs controls. Performance indicators should be simple, clinically relevant, and outcome focused. Heart failure presents one of the greatest opportunities to improve symptoms and survival with existing technology. To do so, heart failure services require radical reorganization. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem

    NASA Technical Reports Server (NTRS)

    Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  18. Aging assessment of large electric motors in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villaran, M.; Subudhi, M.

    1996-03-01

    Large electric motors serve as the prime movers that drive high-capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied through review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance-of-plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel-cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they operate, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.

  19. Fault Tree Analysis: An Operations Research Tool for Identifying and Reducing Undesired Events in Training.

    ERIC Educational Resources Information Center

    Barker, Bruce O.; Petersen, Paul D.

    This paper explores the fault-tree analysis approach to isolating failure modes within a system. Fault tree analysis investigates potentially undesirable events and then looks for failures in sequence that would lead to their occurring. Relationships among these events are symbolized by AND or OR logic gates, with AND used when single events must coexist to…

  20. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. A comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability, and cost-benefit were derived for the comparison. A ranking method was used to determine the optimal configuration of the systems. Analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
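    For purely parallel configurations of identical units with exponential failures and no repair, the mean time to system failure has a simple closed form. This sketch is a textbook special case, not the paper's full Chapman-Kolmogorov derivation, and the failure rate is hypothetical.

```python
def mttf_parallel(n, lam):
    """Mean time to system failure for n identical units in parallel, each
    failing at exponential rate lam with no repair: MTTF = H_n / lam,
    where H_n is the n-th harmonic number."""
    return sum(1.0 / k for k in range(1, n + 1)) / lam

lam = 0.01                          # hypothetical failures per hour
mttf_two = mttf_parallel(2, lam)    # two units in parallel (cf. configuration I)
mttf_four = mttf_parallel(4, lam)   # four units in parallel (cf. configuration II)
```

On MTSF alone, more redundancy always helps; the paper's ranking of configuration I as optimal rests on steady-state availability and cost-benefit as well, which this special case does not capture.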

  1. Fault tree analysis for system modeling in case of intentional EMI

    NASA Astrophysics Data System (ADS)

    Genender, E.; Mleczko, M.; Döring, O.; Garbe, H.; Potthast, S.

    2011-08-01

    The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other increase the necessity for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA). The FTA is used to determine the system failure probability and also the main contributors to its failure. In this paper the fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
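    The basic gate arithmetic behind quantifying such a fault tree can be sketched as follows, assuming independent basic events; the probabilities and the router/server topology are hypothetical, not taken from the paper.

```python
def p_and(probs):
    """AND gate: every independent basic event must occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(probs):
    """OR gate: at least one independent basic event occurs."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Hypothetical top event for a small network: the system fails if the
# router fails OR both redundant servers fail.
p_system = p_or([0.01, p_and([0.05, 0.05])])
```

The nesting mirrors the tree itself: each gate's output probability feeds the gate above it until the top event is reached, and the dominant input to the top gate identifies the main contributor to failure.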

  2. Performance evaluation of the croissant production line with reparable machines

    NASA Astrophysics Data System (ADS)

    Tsarouhas, Panagiotis H.

    2015-03-01

    In this study, analytical probability models were developed for a bufferless automated serial production system that consists of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying those models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of repairable machines in series. Mathematical models of the croissant production line have been developed using a Markov process. The strength of this study lies in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment have been used to estimate reliability and maintainability for each machine, each workstation, and the entire line, based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults, and suggests optimal modifications to upgrade the system and improve its performance.
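    For a bufferless serial line with exponential failure and repair times, the steady-state line availability is simply the product of workstation availabilities, since any workstation being down stops the whole line. A sketch with hypothetical MTTF/MTTR values rather than the study's field data:

```python
def machine_availability(mttf, mttr):
    """Steady-state availability under exponential failure and repair."""
    return mttf / (mttf + mttr)

def line_availability(stations):
    """Bufferless serial line: the line runs only while every workstation
    is up, so (independent) availabilities multiply."""
    a = 1.0
    for mttf, mttr in stations:
        a *= machine_availability(mttf, mttr)
    return a

# Hypothetical (MTTF, MTTR) pairs in hours for six workstations
stations = [(120, 2), (200, 4), (90, 1), (150, 3), (300, 5), (80, 2)]
a_line = line_availability(stations)
```

Because the product is always below the weakest station's own availability, this simple model already shows why adding stations to a bufferless line erodes throughput, motivating the state-by-state Markov treatment used in the study.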

  3. Preliminary design-lift/cruise fan research and technology airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.; Lewis, G. E.; Little, L. J.

    1976-01-01

    This report presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling qualities levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft interconnecting the three variable pitch fans and three power plants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.

  4. The preliminary design of a lift-cruise fan airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.

    1977-01-01

    This paper presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling-quality levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a modified T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft that interconnects three variable-pitch fans and three powerplants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.

  5. Expert systems for automated maintenance of a Mars oxygen production system

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang

    1989-01-01

    A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operating conditions and failure modes according to certain desired criteria were tested and identified. Several schemes for failure detection and isolation, using forward chaining, backward chaining, and knowledge-based and rule-based reasoning, were devised to perform several housekeeping functions. These functions include self-health checkout, an emergency shutdown program, fault detection, and conventional control activities. An effort was made to derive the dynamic model of the system using the bond-graph technique in order to develop a model-based failure detection and isolation scheme by estimation methods. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.

  6. Dynamic Modeling of ALS Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.

  7. Evaluation of Acoustic Emission NDE of Composite Crew Module Service Module/Alternate Launch Abort System (CCM SM/ALAS) Test Article Failure Tests

    NASA Technical Reports Server (NTRS)

    Horne, Michael R.; Madaras, Eric I.

    2010-01-01

    Failure tests of CCM SM/ALAS (Composite Crew Module Service Module / Alternate Launch Abort System) composite panels were conducted during July 10, 2008 and July 24, 2008 at Langley Research Center. This is a report of the analysis of the Acoustic Emission (AE) data collected during those tests.

  8. Environmental control system transducer development study

    NASA Technical Reports Server (NTRS)

    Brudnicki, M. J.

    1973-01-01

    A failure evaluation of the transducers used in the environmental control systems of the Apollo command service module, lunar module, and portable life support system is presented in matrix form for several generic categories of transducers to enable identification of chronic failure modes. Transducer vendors were contacted and asked to supply detailed information. The evaluation data generated for each category of transducer were compiled and published in failure design evaluation reports. The evaluation reports also present a review of the failure and design data for the transducers and suggest both design criteria to improve reliability of the transducers and, where necessary, design concepts for required redesign of the transducers. Remedial designs were implemented on a family of pressure transducers and on the oxygen flow transducer. The design concepts were subjected to analysis, breadboard fabrication, and verification testing.

  9. Independent Orbiter Assessment (IOA): Assessment of the data processing system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Data Processing System (DPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison are documented for the Orbiter DPS hardware.

  10. Performance analysis of a fault inferring nonlinear detection system algorithm with integrated avionics flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1985-01-01

    This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.

  11. Intelligent systems for strategic power infrastructure defense

    NASA Astrophysics Data System (ADS)

    Jung, Ju-Hwan

    A fault or disturbance in a power system can be severe due to sources of vulnerability such as human errors, protection and control system failures, failure of communication networks to deliver critical control signals, and market and load uncertainties. Although power systems are designed to withstand disturbances or faults, several catastrophic failures have resulted from disturbances involving these sources of vulnerability. To avoid catastrophic failures, or to minimize the impact of a disturbance, the state of the power system has to be analyzed correctly and preventive or corrective self-healing control actions have to be deployed. This dissertation addresses two aspects of power systems, a defense system and diagnosis, both concerned with power system analysis and operation during events involving faults or disturbances. This study is intended to develop a defense system that is able to assess power system vulnerability and to perform self-healing control actions based on system-wide analysis. In order to meet the requirements of system-wide analysis, the defense system is designed with multi-agent system technologies. Since power systems are dynamic and uncertain, the self-healing control actions need to be adaptive. This study applies the reinforcement learning technique to provide a theoretical basis for adaptation. One of the important issues in adaptation is the convergence of the learning algorithm. An appropriate convergence criterion is derived and an application with a load-shedding scheme is demonstrated in this study. This dissertation also demonstrates the feasibility of the defense system and self-healing control actions through multi-agent system technologies. The other subject of this research is to investigate a methodology for on-line fault diagnosis using the information from Sequence-of-Events Recorders (SER). 
The proposed multiple-hypothesis analysis generates one or more hypothetical fault scenarios to interpret the SER information. In order to avoid ambiguity among the hypotheses, this study proposes a new method to determine the credibility of each hypothesis. Even when there is not enough SER information, the proposed method is able to perform an accurate fault and malfunction analysis. To avoid exhaustive testing, a minimal set of test scenarios is derived that can handle missing SER information. During extreme contingencies or cascading events, fault diagnosis is the first step in the operation of the power system. On-line fault diagnosis provides necessary and correct information for the defense system to make correct and efficient decisions on self-healing control actions. It has been shown in previous studies that incorrect fault diagnosis can lead to catastrophic failures in power systems. Fault diagnosis is therefore an important issue for strategic power infrastructure defense.

  12. Fault Tree Analysis: An Emerging Methodology for Instructional Science.

    ERIC Educational Resources Information Center

    Wood, R. Kent; And Others

    1979-01-01

    Describes Fault Tree Analysis (FTA), a systems analysis tool that attempts to identify possible modes of failure in systems in order to increase the probability of success. The article defines the technique and presents the steps of FTA construction, focusing on its application to education. (RAO)
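
The core FTA computation, propagating basic-event probabilities up through AND/OR gates to the top event, can be sketched as follows, assuming independent basic events; the event names and probabilities are invented for illustration.

```python
# Minimal fault-tree evaluation sketch (assumes independent basic events;
# the tree, names, and probabilities below are hypothetical).

def evaluate(node, probs):
    """Compute top-event probability for a nested AND/OR fault tree."""
    if isinstance(node, str):                 # basic event leaf
        return probs[node]
    gate, children = node
    p = [evaluate(c, probs) for c in children]
    if gate == "AND":                         # all children must fail
        out = 1.0
        for x in p:
            out *= x
        return out
    if gate == "OR":                          # any one child failing suffices
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out
    raise ValueError(gate)

# Hypothetical top event: failure = (no study AND no review) OR absence.
tree = ("OR", [("AND", ["no_study", "no_review"]), "absent"])
p_top = evaluate(tree, {"no_study": 0.2, "no_review": 0.5, "absent": 0.01})
```

The same traversal, run over minimal cut sets instead of probabilities, is what identifies the single-failure points such an analysis is after.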

  13. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  14. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost estimation analysis for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians, and one doctor, and the second (based on an actually operating clinic) with two units, three technicians, and one doctor. Cost data and model parameters were obtained by direct measurement, literature review, and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for identification of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
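
The unit-cost definition in the abstract (total costs divided by examinations in a period) can be sketched with a simple stochastic simulation; the costs, volumes, and repeat-on-failure policy below are assumptions for illustration, not the clinic's measured parameters.

```python
# Stochastic sketch of unit-cost estimation (all costs, volumes, and the
# repeat-on-failure rule are hypothetical placeholders, not the study's data).
import random

def simulate_unit_cost(n_patients, fixed_cost, cost_per_exam, p_fail, seed=1):
    """Unit cost = total cost / completed exams; a failed image acquisition
    means the exam is repeated once, adding cost but no extra completion."""
    rng = random.Random(seed)
    completed, total = 0, fixed_cost
    for _ in range(n_patients):
        total += cost_per_exam
        if rng.random() < p_fail:      # image rejected: repeat the exam
            total += cost_per_exam
        completed += 1
    return total / completed

cost_no_fail = simulate_unit_cost(1000, 40000.0, 10.0, 0.0)
cost_fail = simulate_unit_cost(1000, 40000.0, 10.0, 0.10)
```

Even this toy version reproduces the qualitative finding: examination failures raise the unit cost because they consume resources without adding completed examinations.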

  15. Prediction of line failure fault based on weighted fuzzy dynamic clustering and improved relational analysis

    NASA Astrophysics Data System (ADS)

    Meng, Xiaocheng; Che, Renfei; Gao, Shi; He, Juntao

    2018-04-01

    With the advent of the big data age, power system research has entered a new stage. At present, the main application of big data in power systems is early-warning analysis of power equipment: by collecting relevant historical fault data, system security is improved by predicting the warning level and failure rate of different kinds of equipment under certain relational factors. In this paper, a method of line failure rate warning is proposed. First, fuzzy dynamic clustering is carried out on the collected historical information. To account for the imbalance among attributes, weights derived from each attribute's coefficient of variation are applied, and the weighted fuzzy clustering then handles the data more effectively. Next, by analyzing the basic idea and properties of relational analysis model theory, the gray relational model is improved by combining the slope-based and Deng models, and the increments of the two sequences are also incorporated into the gray relational model to obtain the gray relational degree between samples. The failure rate is then predicted according to a weighting principle. Finally, the procedure is illustrated with an example, and the validity and superiority of the proposed method are verified.
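
The gray relational degree the paper builds on is, in its classic Deng form, straightforward to compute; the sketch below uses that standard formula with made-up series values, not the paper's improved slope-combined model or its data.

```python
# Deng's gray relational degree (the classic form the paper extends);
# the series values here are invented illustrations, not the paper's data.

def gray_relational_degree(reference, comparison, rho=0.5):
    """Mean relational coefficient between a reference and a comparison series."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:
        return 1.0                     # identical series: perfect relation
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

ref = [0.9, 0.8, 1.0, 0.7]             # hypothetical normalized reference
g_self = gray_relational_degree(ref, ref)
g_other = gray_relational_degree(ref, [0.5, 0.9, 0.6, 0.2])
```

The paper's improvement combines this absolute-difference view with slope (increment) information between the sequences; the sketch shows only the baseline degree.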

  16. Tensile failure criteria for fiber composite materials

    NASA Technical Reports Server (NTRS)

    Rosen, B. W.; Zweben, C. H.

    1972-01-01

    The analysis provides insight into the failure mechanics of these materials and defines criteria which serve as tools for preliminary design material selection and for material reliability assessment. The model incorporates both dispersed and propagation type failures and includes the influence of material heterogeneity. The important effects of localized matrix damage and post-failure matrix shear stress transfer are included in the treatment. The model is used to evaluate the influence of key parameters on the failure of several commonly used fiber-matrix systems. Analyses of three possible failure modes were developed. These modes are the fiber break propagation mode, the cumulative group fracture mode, and the weakest link mode. Application of the new model to composite material systems has indicated several results which require attention in the development of reliable structural composites. Prominent among these are the size effect and the influence of fiber strength variability.

  17. Spatial correlation analysis of cascading failures: Congestions and Blackouts

    PubMed Central

    Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo

    2014-01-01

    Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need for developing protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum as the morning or evening rush hour approaches. Our study can impact all efforts towards actively improving system resilience, ranging from evaluation of design schemes and development of protection strategies to implementation of mitigation programs. PMID:24946927

  18. Independent Orbiter Assessment (IOA): Assessment of the reaction control system, volume 4

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Hartman, Dan W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the aft and forward Reaction Control System (RCS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter RCS hardware and EPD and C systems. Volume 4 continues the presentation of IOA worksheets and contains the potential critical items list.

  19. Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report

    DOT National Transportation Integrated Search

    2017-09-22

    The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...

  20. Composite Structural Analysis of Flat-Back Shaped Blade for Multi-MW Class Wind Turbine

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Bang, Hyung-Joon; Shin, Hyung-Ki; Jang, Moon-Seok

    2014-06-01

    This paper provides an overview of failure mode estimation based on 3D structural finite element (FE) analysis of a flat-back shaped wind turbine blade. Buckling stability, fiber failure (FF), and inter-fiber failure (IFF) analyses were performed to account for delamination or matrix failure of composite materials and to predict the realistic behavior of the entire blade region. Puck's fracture criteria were used for IFF evaluation. Blade design loads applicable to multi-megawatt (MW) wind turbine systems were calculated according to the Germanischer Lloyd (GL) guideline and the International Electrotechnical Commission (IEC) 61400-1 standard, under Class IIA wind conditions. After post-processing of the final load results, a number of principal load cases were selected and converted into forces applied at each section along the blade radius of the FE model. Nonlinear static analyses were performed for laminate failure, FF, and IFF checks. For buckling stability, linear eigenvalue analysis was performed. As a result, we were able to estimate the failure modes and locate the major weak points.

  1. Launch Vehicle Failure Dynamics and Abort Triggering Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Hill, Ashely D.; Beard, Bernard B.

    2011-01-01

    Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on board when the failure becomes catastrophic. For some failure scenarios there is plenty of time for the crew to be warned and to depart, whereas in others there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if detection and action happen quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude-rate abort triggers that ensure abort occurs as quickly as possible when needed, while avoiding false positives, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with the analysis used to determine the success rate of getting the crew off prior to vehicle demise.

  2. Probabilistic structural analysis methods for space transportation propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.

    1991-01-01

    Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for second-stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.

  3. Modelling of Safety Instrumented Systems by using Bernoulli trials: towards the notion of odds on for SIS failures analysis

    NASA Astrophysics Data System (ADS)

    Cauffriez, Laurent

    2017-01-01

    This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Indeed, the fact that a SIS is tested periodically suggests applying Bernoulli trials to characterize its random failure process, and thus verifying whether the PFD (Probability of Failing Dangerously) obtained experimentally agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between "outcomes with success: failure of SIS" and "outcomes without success: no failure of SIS", and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared to Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by completing the notion of PFD.
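
The Bernoulli-trial view of periodic proof testing can be sketched as follows; the PFD value and number of tests are hypothetical, chosen only to illustrate the expected-failures and "odds on" calculations, and do not reproduce the paper's Petri-net model.

```python
# Bernoulli-trial sketch of the SIS failure process (PFD and test count
# are hypothetical illustrations, not the paper's values).
import random

def simulate_sis(pfd, n_tests, seed=7):
    """Each periodic proof test is a Bernoulli trial that finds the SIS
    in a dangerous failed state with probability `pfd`."""
    rng = random.Random(seed)
    return sum(rng.random() < pfd for _ in range(n_tests))

pfd = 0.005                  # assumed average probability of dangerous failure
n_tests = 400                # assumed proof tests over the lifecycle
expected = n_tests * pfd     # Bernoulli expectation: n * p
odds_on = pfd / (1 - pfd)    # "failure : no failure" odds per test
observed = simulate_sis(pfd, n_tests)
```

With these assumed numbers, only about two dangerous states are expected over the whole lifecycle, matching the paper's observation that SIS failures occur sporadically.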

  4. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA of the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA-anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA, while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
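
The RPN scoring used in the study follows the standard FMEA recipe, severity times occurrence times detectability on 1-10 scales; the failure-mode names and ratings below are hypothetical illustrations, not the study's scored modes.

```python
# Standard FMEA risk priority number bookkeeping (mode names and 1-10
# ratings below are hypothetical, not the study's data).

def rpn(severity, occurrence, detectability):
    """Risk Priority Number = S x O x D, each rated on a 1-10 scale."""
    return severity * occurrence * detectability

modes = {
    "wrong_isocenter":        rpn(9, 2, 4),
    "stale_contour_imported": rpn(7, 3, 5),
    "dose_grid_too_coarse":   rpn(4, 4, 3),
}
ranked = sorted(modes.items(), key=lambda kv: kv[1], reverse=True)
```

Ranking by RPN is what produces the "top 25%" list the study validated against the incident learning data.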

  5. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  6. Independent Orbiter Assessment (IOA): Analysis of the reaction control system, volume 1

    NASA Technical Reports Server (NTRS)

    Burkemper, V. J.; Haufler, W. A.; Odonnell, R. A.; Paul, D. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Reaction Control System (RCS). The purpose of the RCS is to provide thrust in and about the X, Y, Z axes for External Tank (ET) separation; orbit insertion maneuvers; orbit translation maneuvers; on-orbit attitude control; rendezvous; proximity operations (payload deploy and capture); deorbit maneuvers; and abort attitude control. The RCS is situated in three independent modules, one forward in the orbiter nose and one in each OMS/RCS pod. Each RCS module consists of the following subsystems: Helium Pressurization Subsystem; Propellant Storage and Distribution Subsystem; Thruster Subsystem; and Electrical Power Distribution and Control Subsystem. Of the failure modes analyzed, 307 could potentially result in a loss of life and/or loss of vehicle.

  7. Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Site (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  8. Accelerated Aging System for Prognostics of Power Semiconductor Devices

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Vashchenko, Vladislav; Wysocki, Philip; Saha, Sankalita

    2010-01-01

    Prognostics is an engineering discipline that focuses on estimation of the health state of a component and the prediction of its remaining useful life (RUL) before failure. Health state estimation is based on actual conditions and it is fundamental for the prediction of RUL under anticipated future usage. Failure of electronic devices is of great concern as future aircraft will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. Therefore, development of prognostics solutions for electronics is of key importance. This paper presents an accelerated aging system for gate-controlled power transistors. This system allows for the understanding of the effects of failure mechanisms, and the identification of leading indicators of failure which are essential in the development of physics-based degradation models and RUL prediction. In particular, this system isolates electrical overstress from thermal overstress. Also, this system allows for a precise control of internal temperatures, enabling the exploration of intrinsic failure mechanisms not related to the device packaging. By controlling the temperature within safe operation levels of the device, accelerated aging is induced by electrical overstress only, avoiding the generation of thermal cycles. The temperature is controlled by active thermal-electric units. Several electrical and thermal signals are measured in-situ and recorded for further analysis in the identification of leading indicators of failures. This system, therefore, provides a unique capability in the exploration of different failure mechanisms and the identification of precursors of failure that can be used to provide a health management solution for electronic devices.

  9. Improving the Estimates of International Space Station (ISS) Induced K-Factor Failure Rates for On-Orbit Replacement Unit (ORU) Supportability Analyses

    NASA Technical Reports Server (NTRS)

    Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.

    2009-01-01

    This is a case study on revised estimates of induced failure rates for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic that leverages operational experience data by aggregating ORU, associated-function (vehicle subsystem), and vehicle 'effective' k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements of the heuristic methods and potential benefits to ISS supportability engineering analysis.
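
One way such a k-factor heuristic can work is sketched below; the aggregation rule, the fallback threshold, and all numbers are assumptions made for illustration, not the study's actual method or data.

```python
# Hedged sketch of an 'effective' k-factor heuristic (the fallback rule,
# threshold, and numbers are assumptions, not the study's method or data).

def effective_k(observed_failures, predicted_failures):
    """Ratio of actual to predicted failures over the same exposure period."""
    return observed_failures / predicted_failures

def choose_k(oru_obs, oru_pred, subsystem_obs, subsystem_pred, threshold=5):
    """Use the ORU-level k-factor only when it rests on enough actual
    failures; otherwise fall back to the aggregated subsystem level."""
    if oru_obs >= threshold:
        return effective_k(oru_obs, oru_pred)
    return effective_k(subsystem_obs, subsystem_pred)

k_rich = choose_k(oru_obs=8, oru_pred=4.0, subsystem_obs=30, subsystem_pred=25.0)
k_sparse = choose_k(oru_obs=1, oru_pred=0.5, subsystem_obs=30, subsystem_pred=25.0)
```

The point of the fallback is that a single observed failure on a rarely failing ORU is too noisy to set its k-factor directly, so the broader aggregate is borrowed instead.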

  10. Cascading failure in scale-free networks with tunable clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Jun; Gu, Bo; Guan, Xiang-Min; Zhu, Yan-Bo; Lv, Ren-Li

    2016-02-01

    Cascading failure is ubiquitous in many networked infrastructure systems, such as power grids, the Internet, and air transportation systems. In this paper, we extend the cascading failure model to a scale-free network with tunable clustering and focus on the effect of the clustering coefficient on system robustness. It is found that network robustness undergoes a nonmonotonic transition as the clustering coefficient increases: both highly and lowly clustered networks are fragile under intentional attack, and a network with a moderate clustering coefficient can better resist the spread of cascading failures. We then provide an extensive explanation of this constructive phenomenon from a microscopic point of view and through quantitative analysis. Our work can be useful for the design and optimization of infrastructure systems.
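
A bare-bones load-capacity cascade in the spirit of the model family this paper extends can be sketched as follows; the tiny hub-and-spoke graph and the load rule (load proportional to degree) are simplifications for illustration, not the paper's scale-free construction.

```python
# Toy load-capacity cascade sketch (graph, load rule, and margins are
# illustrative simplifications, not the paper's model).

def cascade(adj, capacity_margin, attacked):
    """Remove `attacked`, redistribute each failed node's load equally to
    its live neighbours, and fail any node whose load exceeds capacity."""
    load = {n: float(len(adj[n])) for n in adj}          # initial load ~ degree
    cap = {n: (1.0 + capacity_margin) * load[n] for n in adj}
    failed, queue = set(), [attacked]
    while queue:
        n = queue.pop()
        if n in failed:
            continue
        failed.add(n)
        live = [m for m in adj[n] if m not in failed]
        for m in live:
            load[m] += load[n] / len(live)
            if load[m] > cap[m]:
                queue.append(m)
    return len(adj) - len(failed)                        # surviving nodes

# Hypothetical 5-node hub-and-spoke graph: node 0 is the hub.
ADJ = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
survivors_robust = cascade(ADJ, capacity_margin=5.0, attacked=0)
survivors_fragile = cascade(ADJ, capacity_margin=0.1, attacked=0)
```

Robustness in such models is usually reported as the surviving fraction after an intentional attack on the highest-degree node, as a function of the capacity margin.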

  11. Sensitivity Analysis of Digital I&C Modules in Protection and Safety Systems

    NASA Astrophysics Data System (ADS)

    Khalil Ur, Rahman; Zubair, M.; Heo, G.

    2013-12-01

    This research examines the sensitivity of digital Instrumentation and Control (I&C) components and modules used in the regulating and protection system architectures of the nuclear industry. Fault Tree Analysis (FTA) was performed for four configurations of the RPS channel architecture. The channel unavailability was calculated using AIMS-PSA, yielding 4.517E-03, 2.551E-03, 2.246E-03, and 2.7613E-04 for architecture configurations I, II, III, and IV, respectively. It is observed that unavailability decreases by 43.5% and 50.4% when partial redundancy is inserted, whereas a maximum reduction in unavailability of 93.9% occurs when double redundancy is inserted in the architecture. Coincidence module output failure and bistable output failure are identified as sensitive failures by Risk Reduction Worth (RRW) and Fussell-Vesely (FV) importance measures. RRW highlights that the risk from coincidence processor output failure can be reduced by a factor of 48.83, and FV indicates that the bistable processor (BP) output is sensitive at 0.9796 (on a scale of 1).
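
The two importance measures cited, RRW and FV, have simple definitions that can be sketched on a toy system; the two-event OR structure and the probabilities below are hypothetical stand-ins, not the channel fault tree or values from this study.

```python
# Importance-measure sketch (the two-event OR system and probabilities
# are hypothetical, not the study's channel fault tree).

def or_unavailability(p_events):
    """Unavailability of a system that fails if any independent event occurs."""
    q = 1.0
    for p in p_events:
        q *= (1.0 - p)
    return 1.0 - q

probs = {"coincidence_out": 2.0e-3, "bistable_out": 5.0e-4}
q_base = or_unavailability(probs.values())

# Risk Reduction Worth: factor by which risk drops if the event never fails.
rrw = {e: q_base / or_unavailability(p for f, p in probs.items() if f != e)
       for e in probs}
# Fussell-Vesely: fraction of baseline risk the event contributes.
fv = {e: (q_base - or_unavailability(p for f, p in probs.items() if f != e)) / q_base
      for e in probs}
```

On this toy system the coincidence-output event dominates both measures, mirroring the kind of ranking the study reports for its channel architecture.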

  12. Loss of control air at Browns Ferry Unit One: accident sequence analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, R.M.; Hodge, S.A.

    1986-04-01

    This study describes the predicted response of the Browns Ferry Nuclear Plant to a postulated complete failure of plant control air. The failure of plant control air cascades to include the loss of drywell control air at Units 1 and 2. Nevertheless, this is a benign accident unless compounded by simultaneous failures in the turbine-driven high pressure injection systems. Accident sequence calculations are presented for Loss of Control Air sequences with assumed failure upon demand of the Reactor Core Isolation Cooling (RCIC) and High Pressure Coolant Injection (HPCI) systems at Unit 1. Sequences with and without operator action are considered. Results show that the operators can prevent core uncovery if they take action to utilize the Control Rod Drive Hydraulic System as a backup high pressure injection system.

  13. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure.

    PubMed

    Mordecai, Yaniv; Dori, Dov

    2017-07-17

The cyber-physical gap (CPG) is the difference between the 'real' state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potentially detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to modeling and analysis of the 1979 Three Mile Island Unit 2 nuclear accident, and show how its meltdown could have been mitigated. We use ISO 19450:2015 Object-Process Methodology as our conceptual modeling framework.

  14. Autonomous diagnostics and prognostics of signal and data distribution systems

    NASA Astrophysics Data System (ADS)

    Blemel, Kenneth G.

    2001-07-01

Wiring is the nervous system of any complex system and is attached to, or services, nearly every subsystem. Damage to optical wiring systems can cause serious interruptions in communication, command, and control systems. Electrical wiring faults and failures due to opens, shorts, and arcing can result in adverse effects on the systems serviced by the wiring. Abnormalities in a system can usually be detected by monitoring some wiring parameter such as vibration, data activity, or power consumption. This paper introduces the mapping of wiring to critical functions during system engineering to automatically define the Failure Modes, Effects, and Criticality Analysis. This mapping can be used to define the sensory processes needed to perform diagnostics during system engineering. The paper also explains the use of Operational Modes and Criticality Effects Analysis in the development of Sentient Wiring Systems as a means for diagnostics, prognostics, and health management of wiring in aerospace and transportation systems.

  15. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    PubMed Central

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement by many organizations. For prioritizing failures, the “risk priority number (RPN)” index is used, largely for its ease of use; it is based on subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussion was used to collect data. A quantitative approach was used to calculate the RPN score. Results: We have studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined by (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers); then (2) risk priority criteria were quantified using the RPN index (361 for the highest-rated failure). Reassessment of the improved RPN scores by root cause analysis showed some variation. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and the specific scientific field. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184
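The classic RPN index discussed above is simply the product of severity, occurrence, and detectability ratings, each on a 1-10 scale. A minimal sketch, with hypothetical surgical-ward failure modes and ratings (not data from the study):

```python
# Minimal sketch of classic RPN scoring (severity x occurrence x detection),
# each rated on a 1-10 scale. The failure modes and ratings below are
# hypothetical examples, not data from the study.

def rpn(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

failure_modes = [
    ("wrong-site marking missed", 9, 4, 5),
    ("consent form incomplete",   6, 3, 2),
    ("delayed transfer to OR",    4, 7, 3),
]

# Prioritize the highest-risk failures first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN={rpn(s, o, d)}")
```

The study's critique applies directly to this scheme: three subjective 1-10 ratings multiplied together can mask clinically important failures, which motivates redefining the index.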

  16. Development of a GIS-based failure investigation system for highway soil slopes

    NASA Astrophysics Data System (ADS)

    Ramanathan, Raghav; Aydilek, Ahmet H.; Tanyu, Burak F.

    2015-06-01

A framework for an early warning system was developed for Maryland, using a GIS database and a collective overlay of maps that highlight highway slopes susceptible to soil slides or slope failures, identified in advance through spatial and statistical analysis. Data on existing soil slope failures were collected from geotechnical reports and field visits. A total of 48 slope failures were recorded and analyzed. Six factors, including event precipitation, geological formation, land cover, slope history, slope angle, and elevation, were considered to affect highway soil slope stability. The observed trends indicate that precipitation and poor surface or subsurface drainage conditions are the principal factors causing slope failures. 96% of the failed slopes have an open drainage section. A majority of the failed slopes lie in regions with relatively high event precipitation (P > 200 mm). 90% of the existing failures are surficial erosion-type failures, and only 1 of the 42 slope failures is a deep rotational-type failure. More than half of the analyzed slope failures occurred in regions with low-density land cover. 46% of failures are on slopes with slope angles between 20° and 30°. An influx of more data on failed slopes should reveal further trends, and the developed slope management system will thus aid state highway engineers in prudent budget allocation and in prioritizing remediation projects, drawing on the literature reviewed on the principles, concepts, techniques, and methodology for slope instability evaluation (Leshchinsky et al., 2015).

  17. Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations

    NASA Technical Reports Server (NTRS)

    Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor

    2014-01-01

One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements that were launched later in the assembly sequence had not yet been built when the first elements were placed in orbit. Hence, some of the early modules launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions for a specified period of time. As we are interested in how reliable a system will be in the future, reliability expressed as a function of time provides valuable insight.
In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases at the end, when the design service life ends and the wear-out phase begins. The component failure rates do not, however, remain constant over the entire life cycle. The failure rate depends on factors such as design complexity, the current age of the component, operating conditions, and the severity of environmental stress factors. Development, qualification, and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant-mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining-life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, supplementary life testing may be performed to demonstrate the capability to safely extend the service life of the system.
An innovative approach is required to evaluate the entire system without going through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. The ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of them. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, cherry-picking candidate components based on failure mode effects analysis, system-level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, its failure modes and failure mechanisms, the life cycle margin provided by the original certification, the operational and environmental stresses encountered, etc. When the specific failure mechanism being considered, and the underlying relationship of that mechanism to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between the time to failure at a high test temperature and the temperatures expected in actual use.
As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
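The Arrhenius acceleration described above can be sketched directly: the acceleration factor between a use temperature and a hotter test temperature is AF = exp[(Ea/k)(1/T_use − 1/T_test)], where Ea is the activation energy of the chemical failure mechanism and k is Boltzmann's constant. The activation energy and temperatures below are assumed for illustration, not values from the ISS program.

```python
# Arrhenius acceleration factor for temperature-accelerated life testing.
# The activation energy and temperatures are assumed for illustration only.

import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_k, t_test_k):
    """AF = exp[(Ea/k) * (1/T_use - 1/T_test)]; AF > 1 when T_test > T_use."""
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# Assumed: Ea = 0.7 eV, use at 25 C (298.15 K), test at 75 C (348.15 K).
af = acceleration_factor(ea_ev=0.7, t_use_k=298.15, t_test_k=348.15)
# One year of testing at the elevated temperature then corresponds to
# roughly `af` years of service at the use temperature.
print(round(af, 1))
```

As the abstract cautions, the correlation only holds if the elevated temperature does not introduce new failure mechanisms.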

  18. A Lipskian analysis of child protection failures from Victoria Climbié to "Baby P": a street-level re-evaluation of joined-up governance.

    PubMed

    Marinetto, Michael

    2011-01-01

    This paper explores the issue of joined-up governance by considering child protection failures, firstly, the case of Victoria Climbié who was killed by her guardians despite being known as an at risk child by various public agencies. The seeming inability of the child protection system to prevent Victoria Climbié's death resulted in a public inquiry under the chairmanship of Lord Laming. The Laming report of 2003 looked, in part, to the lack of joined-up working between agencies to explain this failure to intervene and made a number of recommendations to improve joined-up governance. Using evidence from detailed testimonies given by key personnel during the Laming Inquiry, the argument of this paper is that we cannot focus exclusively on formal structures or decision-making processes but must also consider the normal, daily and informal routines of professional workers. These very same routines may inadvertently culminate in the sort of systemic failures that lead to child protection tragedies. Analysis of the micro-world inhabited by professional workers would benefit most, it is argued here, from the policy-based concept of street-level bureaucracy developed by Michael Lipsky some 30 years ago. The latter half of the paper considers child protection failures that emerged after the Laming-inspired reforms. In particular, the case of ‘Baby P’ highlights, once again, how the working practices of street-level professionals, rather than a lack of joined-up systems, may possibly complement an analysis of, and help us to explain, failures in the child protection system. A Lipskian analysis generally offers, although there are some caveats, only pessimistic conclusions about the prospects of governing authorities being able to avoid future child protection disasters. These conclusions are not wholeheartedly accepted. 
There exists a glimmer of optimism because street-level bureaucrats still remain accountable, but not necessarily in terms of top-down relations of authority rather, in terms of interpersonal forms of accountability – accountability to professionals and citizen consumers of services.

  19. MO-G-BRE-09: Validating FMEA Against Incident Learning Data: A Study in Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Cao, N; Young, L

    2014-06-15

Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA analysis of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10 and RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.
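The comparison performed in this study, checking prospectively identified FMEA failure modes against retrospectively reported incidents, amounts to simple set arithmetic. A sketch with placeholder mode labels whose counts follow the abstract (63 FMEA modes, 33 incidents, 13 unanticipated):

```python
# Cross-check of prospective FMEA failure modes against retrospective
# incident reports. Mode labels are placeholders; only the counts
# (63 anticipated modes, 33 incidents, 13 unanticipated) follow the abstract.

fmea_modes = {f"mode_{i}" for i in range(63)}        # 63 anticipated modes
# 33 near-miss events: 20 map onto anticipated modes, 13 were unanticipated
incident_modes = ({f"mode_{i}" for i in range(30, 50)}
                  | {f"incident_{i}" for i in range(13)})

missed = incident_modes - fmea_modes     # events FMEA failed to anticipate
combined = fmea_modes | incident_modes   # union of both methods
print(len(missed), len(combined), round(len(missed) / len(combined), 2))
```

The ratio reproduces the abstract's figure: 13 of 76 combined failure modes (17%) were missed by the prospective analysis.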

  20. Random safety auditing, root cause analysis, failure mode and effects analysis.

    PubMed

    Ursprung, Robert; Gray, James

    2010-03-01

    Improving quality and safety in health care is a major concern for health care providers, the general public, and policy makers. Errors and quality issues are leading causes of morbidity and mortality across the health care industry. There is evidence that patients in the neonatal intensive care unit (NICU) are at high risk for serious medical errors. To facilitate compliance with safe practices, many institutions have established quality-assurance monitoring procedures. Three techniques that have been found useful in the health care setting are failure mode and effects analysis, root cause analysis, and random safety auditing. When used together, these techniques are effective tools for system analysis and redesign focused on providing safe delivery of care in the complex NICU system. Copyright 2010 Elsevier Inc. All rights reserved.

  1. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

under the heading of linear models or linear statistical models [3,4]. We have not used this material in this report. Assuming catastrophic failure when... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... a table of unit indices and failure times (1, T1; 2, T2; ...; n, Tn), which are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  2. A comparative critical study between FMEA and FTA risk analysis methods

    NASA Astrophysics Data System (ADS)

    Cristea, G.; Constantinescu, DM

    2017-10-01

An overwhelming number of different risk analysis techniques is in use today, with acronyms such as: FMEA (Failure Modes and Effects Analysis) and its extension FMECA (Failure Mode, Effects, and Criticality Analysis), DRBFM (Design Review by Failure Mode), FTA (Fault Tree Analysis) and its extension ETA (Event Tree Analysis), HAZOP (Hazard & Operability Studies), HACCP (Hazard Analysis and Critical Control Points), and What-if/Checklist. However, the most used analysis techniques in the mechanical and electrical industries are FMEA and FTA. In FMEA, which is an inductive method, information about the consequences and effects of failures is usually collected through interviews with experienced people with different knowledge, i.e., cross-functional groups. FMEA is used to capture potential failures/risks and their impacts and to prioritize them on a numeric scale called the Risk Priority Number (RPN), which ranges from 1 to 1000. FTA is a deductive method, i.e., a general system state is decomposed into chains of more basic events of components. The logical interrelationship of how such basic events depend on and affect each other is often described analytically in a reliability structure which can be visualized as a tree. Both methods are very time-consuming to apply thoroughly, which is why this is often not done; as a consequence, possible failure modes may not be identified. To address these shortcomings, it is proposed to use a combination of FTA and FMEA.
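The deductive gate structure of FTA described above can be sketched in a few lines: basic-event probabilities propagate up through AND/OR gates to the top event, assuming independent events. The gate layout and probabilities here are illustrative.

```python
# Tiny fault-tree evaluation sketch: the top event is decomposed into
# AND/OR gates over basic events, assuming independence. The gate
# structure and probabilities are illustrative only.

def or_gate(*probs):
    """P(at least one input event occurs), assuming independence."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(*probs):
    """P(all input events occur), assuming independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Top event: pump fails, OR both redundant sensors fail.
p_pump = 1e-3
p_sensor = 1e-2
p_top = or_gate(p_pump, and_gate(p_sensor, p_sensor))
print(p_top)
```

The AND gate shows why redundancy helps: two 1e-2 sensors contribute only 1e-4 to the top event, so the single pump dominates.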

  3. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  4. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  5. Defining Human Failure Events for Petroleum Risk Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Knut Øien

    2014-06-01

    In this paper, an identification and description of barriers and human failure events (HFEs) for human reliability analysis (HRA) is performed. The barriers, called target systems, are identified from risk significant accident scenarios represented as defined situations of hazard and accident (DSHAs). This report serves as the foundation for further work to develop petroleum HFEs compatible with the SPAR-H method and intended for reuse in future HRAs.

  6. Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System

    NASA Technical Reports Server (NTRS)

    Dunbar, Tyler

    2016-01-01

I have performed a reliability & maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has gone through 2,400 hours of International Space Station on-orbit use between 2013 and 2016. While the Amine Swingbed is currently an experimental payload system, it may be converted to system hardware. If the Amine Swingbed becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I have investigated how individual components in the Amine Swingbed may fail, and what the worst-case scenario is should a failure occur. The significant failure effects are the loss of the ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere, because the Amine Swingbed uses the vacuum of space for regeneration. In the quantitative component of the reliability and maintainability analysis, I have assumed a constant failure rate for both electronic and nonelectronic parts. Using these data, I have created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole. I have determined a mean time to failure for the Amine Swingbed to be approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours. This range includes initial testing of the Amine Swingbed, as well as software faults that are understood to be non-critical.
If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure for the Amine Swingbed use conservative failure rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project, and for NASA in general, has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have a lot of knowledge about the work being performed at NASA. As a chemical engineer, I had not really considered working for NASA as a career path. By engaging in interactions with civil servants, contractors, and other interns, I have learned a great deal about modern challenges that NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering. Now, I have gained a skillset in performing reliability analyses and in understanding the inner workings of a large mechanical system. I have also gained experience in understanding how electrical systems work while analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA, both within NASA and in the Houston area. NASA employs individuals with a broad range of backgrounds. It has been great to learn from individuals who have highly diverse experiences and outlooks on the world. In the Houston area, I have come across individuals from different parts of the world. Interacting with such a high number of individuals with significantly different backgrounds has helped me grow as a person in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics.
After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program. I intend to apply for the graduate Pathways program as positions are opened up. I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
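The constant-failure-rate model used in the quantitative analysis above can be sketched as follows: part failure rates sum to a series-system rate, the mean time to failure is its reciprocal, and the number of failures over an interval follows a Poisson distribution. The part failure rates below are hypothetical, not the payload's actual rates.

```python
# Constant-failure-rate reliability sketch: part rates sum to a
# series-system rate, MTTF = 1/lambda, and the failure count over an
# interval is Poisson. All part rates are hypothetical.

import math

part_rates_per_hour = [2e-4, 1.5e-4, 3e-4, 5e-5]   # assumed part failure rates
lam = sum(part_rates_per_hour)                      # series-system failure rate
mttf_hours = 1.0 / lam

def p_failures(k, hours):
    """Poisson probability of exactly k failures in `hours` of operation."""
    mu = lam * hours
    return math.exp(-mu) * mu**k / math.factorial(k)

print(round(mttf_hours), round(p_failures(0, 1000), 3))
```

The zero-failure term, exp(-lambda * t), is the system reliability over the interval, which is why a lower summed rate (e.g. from military-grade parts) directly stretches the MTTF.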

  7. The Range Safety Debris Catalog Analysis in Preparation for the Pad Abort One Flight Test

    NASA Technical Reports Server (NTRS)

    Kutty, Prasad M.; Pratt, William D.

    2010-01-01

    The Pad Abort One flight test of the Orion Abort Flight Test Program is currently under development with the goal of demonstrating the capability of the Launch Abort System. In the event of a launch failure, this system will propel the Crew Exploration Vehicle to safety. An essential component of this flight test is range safety, which ensures the security of range assets and personnel. A debris catalog analysis was done as part of a range safety data package delivered to the White Sands Missile Range in New Mexico where the test will be conducted. The analysis discusses the consequences of an overpressurization of the Abort Motor. The resulting structural failure was assumed to create a debris field of vehicle fragments that could potentially pose a hazard to the range. A statistical model was used to assemble the debris catalog of potential propellant fragments. Then, a thermodynamic, energy balance model was applied to the system in order to determine the imparted velocity to these propellant fragments. This analysis was conducted at four points along the flight trajectory to better understand the failure consequences over the entire flight. The methods used to perform this analysis are outlined in detail and the corresponding results are presented and discussed.
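The energy-balance step described above, converting a share of the motor's stored energy into fragment kinetic energy and hence an imparted velocity via E = ½mv², can be sketched generically. All quantities below (stored energy, kinetic fraction, fragment count and mass) are invented for illustration; they are not values from the analysis.

```python
# Generic energy-balance sketch for fragment velocity: a share of the
# stored energy becomes fragment kinetic energy, so v = sqrt(2*E/m).
# Every numeric value here is invented for illustration.

import math

def imparted_velocity(kinetic_energy_j, fragment_mass_kg):
    """From E = (1/2) m v^2, the velocity is v = sqrt(2E/m)."""
    return math.sqrt(2.0 * kinetic_energy_j / fragment_mass_kg)

stored_energy_j = 5.0e6        # hypothetical energy released at burst
kinetic_fraction = 0.2         # assumed share going into fragment motion
n_fragments = 40               # assumed fragment count from the debris catalog
e_per_fragment = stored_energy_j * kinetic_fraction / n_fragments
v = imparted_velocity(e_per_fragment, fragment_mass_kg=2.0)
print(round(v, 1))
```

Repeating such a calculation at several trajectory points, as the analysis does, shows how the hazard footprint changes over the flight.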

  8. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    PubMed

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted failure mode and effects analysis, an engineering analytic technique used to assess the reliability of technological systems, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing into a view of likely future response performance.
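A Monte Carlo sketch in the spirit of the simple simulation example mentioned above: each response-system component is assigned an estimated failure probability, and a trial succeeds only if every component works. The component names and probabilities are invented, not SNS assessment data.

```python
# Monte Carlo sketch of series-system response reliability: a trial
# succeeds only if every component works. Component names and failure
# probabilities are invented for illustration.

import random

component_failure_probs = {
    "request_and_shipment": 0.02,
    "receiving_and_staging": 0.05,
    "distribution": 0.08,
    "dispensing": 0.10,
}

def simulate(n_trials, rng):
    successes = 0
    for _ in range(n_trials):
        if all(rng.random() >= p for p in component_failure_probs.values()):
            successes += 1
    return successes / n_trials

rng = random.Random(0)                 # fixed seed for reproducibility
estimated = simulate(100_000, rng)

# Analytic check: product of per-component success probabilities.
analytic = 1.0
for p in component_failure_probs.values():
    analytic *= (1.0 - p)
print(round(estimated, 3), round(analytic, 3))
```

The simulation form matters once component failures are correlated or failure-mode effects are partial rather than all-or-nothing, where the simple product no longer applies.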

  9. Probabilistic assessment of dynamic system performance. Part 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belhadj, Mohamed

    1993-01-01

    Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems, through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters, is also important for nuclear technology in order to understand the fault-tolerance as well as the safety margins of the system under consideration and to ensure a safe operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate different methodologies, which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and that can be used for a comprehensive safety and reliability analysis of dynamic systems.

  10. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
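    The spares-sizing logic described above can be illustrated with a standard Poisson model: for a component with a constant failure rate, find the fewest spares that keep the probability of running out below a target. The failure rate, mission duration, and confidence target below are illustrative assumptions, not figures from the paper:

```python
import math

# Sketch of spares sizing under a constant-failure-rate (Poisson)
# model. The MTBF, mission length, and 99.9% target are invented
# for illustration.

def p_enough_spares(failure_rate_per_hour, mission_hours, spares):
    """P(failures <= spares) for a Poisson-distributed failure count."""
    mean = failure_rate_per_hour * mission_hours
    return sum(math.exp(-mean) * mean**i / math.factorial(i)
               for i in range(spares + 1))

def spares_needed(failure_rate_per_hour, mission_hours, target=0.999):
    """Smallest spare count meeting the target probability."""
    spares = 0
    while p_enough_spares(failure_rate_per_hour, mission_hours, spares) < target:
        spares += 1
    return spares

# A roughly 3-year mission (~26,000 hours) with a 100,000-hour MTBF unit:
rate = 1.0 / 100_000
print(spares_needed(rate, 26_000))   # spares for 99.9% confidence
```

    This is the mechanism behind the abstract's observation that spares mass grows with mission duration and with how low the target failure probability must be.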

  11. Independent Orbiter Assessment (IOA): Analysis of the reaction control system, volume 3

    NASA Technical Reports Server (NTRS)

    Burkemper, V. J.; Haufler, W. A.; Odonnell, R. A.; Paul, D. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Reaction Control System (RCS). The RCS is situated in three independent modules, one forward in the orbiter nose and one in each OMS/RCS pod. Each RCS module consists of the following subsystems: Helium Pressurization Subsystem; Propellant Storage and Distribution Subsystem; Thruster Subsystem; and Electrical Power Distribution and Control Subsystem. Volume 3 continues the presentation of IOA analysis worksheets and the potential critical items list.

  12. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses identified by the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to confidentiality, integrity, and availability (CIA).

  13. CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM

    NASA Technical Reports Server (NTRS)

    Mccluney, K.

    1994-01-01

    In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allow the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C-language and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. 
No executable is provided on the distribution medium; however, a sample makefile is included. Sample input files are also included. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. This program was developed in 1992.
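    The kind of Boolean logic tree CRANS evaluates can be sketched as follows; the tree, item names, and AND/OR relationships are an invented example rather than anything from the program's documentation:

```python
# A minimal sketch of CRANS-style logic tree evaluation: leaf items
# are toggled FAILED/UNFAILED, and derived items combine their inputs
# with Boolean operators. The tree below is an invented example.

tree = {
    # derived item: (operator, [input items])
    "bus_power": ("OR",  ["battery", "fuel_cell"]),    # either source suffices
    "computer":  ("AND", ["bus_power", "cpu_board"]),  # needs power and hardware
    "telemetry": ("AND", ["computer", "transmitter"]),
}

def evaluate(tree, failed):
    """Return the ON/OFF state of every item, given a set of failed leaves."""
    state = {}
    def up(item):
        if item in state:
            return state[item]
        if item not in tree:                  # leaf item
            state[item] = item not in failed
            return state[item]
        op, inputs = tree[item]
        vals = [up(i) for i in inputs]
        state[item] = all(vals) if op == "AND" else any(vals)
        return state[item]
    for item in tree:
        up(item)
    return state

# A single battery failure is absorbed by the redundant fuel cell.
print(evaluate(tree, failed={"battery"})["telemetry"])                # True
# Losing both power sources cascades down to telemetry.
print(evaluate(tree, failed={"battery", "fuel_cell"})["telemetry"])   # False
```

    A commonality check of the kind CRANS performs would then search for single leaves whose failure alone turns a chosen item off.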

  14. Knowledge representation and user interface concepts to support mixed-initiative diagnosis

    NASA Technical Reports Server (NTRS)

    Sobelman, Beverly H.; Holtzblatt, Lester J.

    1989-01-01

    The Remote Maintenance Monitoring System (RMMS) provides automated support for the maintenance and repair of ModComp computer systems used in the Launch Processing System (LPS) at Kennedy Space Center. RMMS supports manual and automated diagnosis of intermittent hardware failures, providing an efficient means for accessing and analyzing the data generated by catastrophic failure recovery procedures. This paper describes the design and functionality of the user interface for interactive analysis of memory dump data, relating it to the underlying declarative representation of memory dumps.

  15. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. 
This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
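    The model-based detection idea described above can be reduced to a minimal residual check: compare measurements against a simple model's prediction and flag a fault when the residual persistently exceeds a threshold. The first-order actuator model, threshold, and data below are invented for illustration, not taken from the dissertation's flight tests:

```python
# Minimal sketch of residual-based fault detection. A first-order
# model x[k+1] = a*x[k] + (1-a)*u[k] predicts the actuator response;
# a fault is declared when |measurement - prediction| exceeds the
# threshold for several consecutive samples. All values are invented.

def detect_fault(commands, measurements, a=0.8, threshold=0.2, persist=3):
    """Return the first sample index at which a fault is declared,
    or None if the residual never persistently exceeds the threshold."""
    x = measurements[0]       # initialize the model at the first measurement
    count = 0
    for k, (u, y) in enumerate(zip(commands, measurements)):
        if abs(y - x) > threshold:
            count += 1
            if count >= persist:
                return k
        else:
            count = 0
        x = a * x + (1 - a) * u   # propagate the model prediction
    return None

# Healthy data tracks the model; a stuck surface stops responding.
cmds = [1.0] * 10
healthy = [0.0]
for _ in range(9):
    healthy.append(0.8 * healthy[-1] + 0.2 * 1.0)
stuck = healthy[:4] + [healthy[3]] * 6    # surface freezes at sample 4

print(detect_fault(cmds, healthy))   # None: residuals stay small
print(detect_fault(cmds, stuck))     # index where the fault is declared
```

    The dissertation's robust linear filtering plays the role of the naive model here, rejecting disturbances such as wind so that the residual reflects faults rather than turbulence.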

  16. Analysis of Alerting System Failures in Commercial Aviation Accidents

    NASA Technical Reports Server (NTRS)

    Mumaw, Randall J.

    2017-01-01

    The role of an alerting system is to make the system operator (e.g., pilot) aware of an impending hazard or unsafe state so the hazard can be avoided or managed successfully. A review of 46 commercial aviation accidents (between 1998 and 2014) revealed that, in the vast majority of events, either the hazard was not alerted or relevant hazard alerting occurred but failed to aid the flight crew sufficiently. For this set of events, alerting system failures were placed in one of five phases: Detection, Understanding, Action Selection, Prioritization, and Execution. This study also reviewed the evolution of alerting system schemes in commercial aviation, which revealed naive assumptions about pilot reliability in monitoring flight path parameters; specifically, pilot monitoring was assumed to be more effective than it actually is. Examples are provided of the types of alerting system failures that have occurred, and recommendations are provided for alerting system improvements.

  17. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyze the most probable modes of failure in a system, so that the system can be redesigned or monitored more closely to increase its likelihood of success. (Author)

  18. Advanced Self-Calibrating, Self-Repairing Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)

    2002-01-01

    An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components depends primarily upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.

  19. A systematic risk management approach employed on the CloudSat project

    NASA Technical Reports Server (NTRS)

    Basilio, R. R.; Plourde, K. S.; Lam, T.

    2000-01-01

    The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.

  20. Phased-mission system analysis using Boolean algebraic methods

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.; Trivedi, Kishor S.

    1993-01-01

    Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state space explosion that commonly plagues Markov chain-based analysis. A phase algebra was developed to account for the effects of variable configurations and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. The use of our technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on the system reliability.
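    A brute-force version of phased-mission reliability, with phase-dependent success criteria and permanent component failures, can be written directly. The two-phase mission and failure probabilities below are invented, and the exhaustive enumeration stands in for the paper's far more efficient Boolean phase algebra:

```python
import itertools

# Sketch of exact phased-mission reliability by enumerating, for each
# component, the phase in which it fails for good. The components,
# probabilities, and two-phase mission are invented illustrations.

# P(component fails during phase), per phase
p_fail = {"A": [0.01, 0.03], "B": [0.02, 0.02]}
# success criterion per phase, given the set of working components
criteria = [
    lambda up: "A" in up and "B" in up,   # phase 1: both required
    lambda up: "A" in up or "B" in up,    # phase 2: either suffices
]

def mission_reliability(p_fail, criteria):
    comps = list(p_fail)
    n = len(criteria)
    total = 0.0
    # fail_phase[i] is the phase in which component i fails
    # (a value of n means it survives the whole mission)
    for fail_phase in itertools.product(range(n + 1), repeat=len(comps)):
        prob = 1.0
        for c, fp in zip(comps, fail_phase):
            for ph in range(min(fp, n)):
                prob *= 1.0 - p_fail[c][ph]   # survived earlier phases
            if fp < n:
                prob *= p_fail[c][fp]         # failed during phase fp
        # mission succeeds if every phase's criterion holds; a component
        # failing during a phase counts as down for that phase
        ok = all(crit({c for c, fp in zip(comps, fail_phase) if fp > ph})
                 for ph, crit in enumerate(criteria))
        if ok:
            total += prob
    return total

print(round(mission_reliability(p_fail, criteria), 5))
```

    The enumeration grows exponentially in the number of components, which is exactly the scaling problem the Boolean phase algebra in the paper is designed to avoid.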

  1. Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders

    NASA Technical Reports Server (NTRS)

    Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)

    2002-01-01

    A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.

  2. Analysis of the STS-126 Flow Control Valve Structural-Acoustic Coupling Failure

    NASA Technical Reports Server (NTRS)

    Jones, Trevor M.; Larko, Jeffrey M.; McNelis, Mark E.

    2010-01-01

    During the Space Transportation System mission STS-126, one of the main engine's flow control valves incurred an unexpected failure. A section of the valve broke off during liftoff. It is theorized that an acoustic mode of the flowing fuel coupled with a structural mode of the valve, causing a high cycle fatigue failure. This report documents the analysis efforts conducted in an attempt to verify this theory. Hand calculations, computational fluid dynamics, and finite element methods are all implemented, and analyses are performed using steady-state methods in addition to transient analysis methods. The conclusion of the analyses is that there is a critical acoustic mode that aligns with a structural mode of the valve.

  3. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The RPN of the critical failure modes in sample processing was reduced by 50% by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
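    Classic FMEA ranks failure modes by a risk priority number, the product of severity, occurrence, and detectability scores. The steps, failure modes, and 1-5 scores below are invented illustrations (the study's own scale, with a maximum RPN of 25, evidently used fewer or smaller factors):

```python
# Sketch of RPN-based ranking as used in FMEA. All failure modes and
# scores below are hypothetical examples, not the study's data.

failure_modes = [
    # (step, failure mode, severity, occurrence, detectability)
    ("sample labeling", "patient mismatch",           5, 1, 5),
    ("embryo transfer", "biological sample mismatch", 5, 1, 5),
    ("cryostorage",     "tank temperature excursion", 4, 2, 2),
    ("data entry",      "transcription error",        2, 3, 3),
]

def rpn(severity, occurrence, detectability):
    """Risk priority number: higher means a higher-priority failure mode."""
    return severity * occurrence * detectability

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[2:]), reverse=True)
for step, mode, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {step}: {mode}")
```

    Interventions such as witnessing lower the occurrence or detectability scores, which is how the study's RPN reductions (e.g., 25 to 5) arise.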

  4. Application of failure mode and effect analysis in a radiology department.

    PubMed

    Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B

    2011-01-01

    With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department. RSNA, 2010

  5. Quantitative risk assessment system (QRAS)

    NASA Technical Reports Server (NTRS)

    Tan, Zhibin (Inventor); Mosleh, Ali (Inventor); Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Chang, Yung-Hsien (Inventor); Groen, Francisco J (Inventor); Swaminathan, Sankaran (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.

  6. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  7. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  8. Con Edison power failure of July 13 and 14, 1977. Final staff report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1978-06-01

    On July 13, 1977 the entire electric load of the Con Edison system was lost, plunging New York City and Westchester County into darkness. The collapse resulted from a combination of natural events, equipment malfunctions, questionable system-design features, and operating errors. An attempt is made in this report to answer the following: what were the specific causes of the failure; if equipment malfunctions and operator errors contributed, could they have been prevented; to what extent was Con Edison prepared to handle such an emergency; and did Con Edison plan prudently for reserve generation, for reserve transmission capability, for automatic equipment to protect its system, and for proper operator response to a critical situation. Following the introductory and summary section, additional sections include: the Consolidated Edison system; prevention of bulk power-supply interruptions; the sequence of failure and restoration; analysis of the July 1977 power failure; restoration sequence and equipment damage assessment; and other investigations of the blackout. (MCW)

  9. Flight Test Comparison of Different Adaptive Augmentations for Fault Tolerant Control Laws for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Hanson, Curtis E.; Lee, James A.; Kaneshige, John T.

    2009-01-01

    This report describes the improvements and enhancements to a neural network based approach for directly adapting to aerodynamic changes resulting from damage or failures. This research is a follow-on effort to flight tests performed on the NASA F-15 aircraft as part of the Intelligent Flight Control System research effort. Previous flight test results demonstrated the potential for performance improvement under destabilizing damage conditions. Little or no improvement was provided under simulated control surface failures, however, and the adaptive system was prone to pilot-induced oscillations. An improved controller was designed to reduce the occurrence of pilot-induced oscillations and increase robustness to failures in general. This report presents an analysis of the neural networks used in the previous flight test, the improved adaptive controller, and the baseline case with no adaptation. Flight test results demonstrate significant improvement in performance by using the new adaptive controller compared with the previous adaptive system and the baseline system for control surface failures.

  10. Analysis of risk factors for cluster behavior of dental implant failures.

    PubMed

    Chrcanovic, Bruno Ramos; Kisch, Jenö; Albrektsson, Tomas; Wennerberg, Ann

    2017-08-01

    Some studies have indicated that implant failures are commonly concentrated in a few patients. The aim was to identify and analyze cluster behavior of dental implant failures among subjects of a retrospective study. This retrospective study included only patients receiving at least three implants. Patients presenting at least three implant failures were classified as presenting a cluster behavior. Univariate and multivariate logistic regression models and generalized estimating equations analysis evaluated the effect of explanatory variables on the cluster behavior. There were 1406 patients with three or more implants (8337 implants, 592 failures). Sixty-seven (4.77%) patients presented cluster behavior, accounting for 56.8% of all implant failures. The intake of antidepressants and bruxism were identified as potential negative factors exerting a statistically significant influence on cluster behavior at the patient level. The negative factors at the implant level were turned implants, short implants, poor bone quality, age of the patient, the intake of medicaments to reduce gastric acid production, smoking, and bruxism. A cluster pattern among patients with implant failure is highly probable. A number of systemic and local factors could serve as predictors for implant failures, although a direct causal relationship cannot be ascertained. © 2017 Wiley Periodicals, Inc.

  11. Safety Guided Design of Crew Return Vehicle in Concept Design Phase Using STAMP/STPA

    NASA Astrophysics Data System (ADS)

    Nakao, H.; Katahira, M.; Miyamoto, Y.; Leveson, N.

    2012-01-01

    In the concept development and design phase of a new space system, such as a Crew Vehicle, designers tend to focus on how to implement new technology. Designers also consider the difficulty of using the new technology and trade off several system design candidates, then choose an optimal design from among the candidates. Safety should be a key aspect driving optimal concept design. However, in past concept design activities, safety analysis such as FTA has not been used to drive the design, because such analysis techniques focus on component failure, and component failure cannot be considered in the concept design phase. The solution to these problems is to apply a new hazard analysis technique called STAMP/STPA. STAMP/STPA defines safety as a control problem rather than a failure problem and identifies hazardous scenarios and their causes. Defining control flow is essential in the concept design phase; therefore, STAMP/STPA can be a useful tool to assess the safety of candidate systems and to form part of the rationale for choosing a design as the baseline of the system. In this paper, we describe our case study of safety-guided concept design applying STPA, the new hazard analysis technique, and a model-based specification technique to a Crew Return Vehicle design, and we evaluate the benefits of using STAMP/STPA in the concept development phase.

  12. Modeling and Hazard Analysis Using STPA

    NASA Astrophysics Data System (ADS)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

    A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. This makes the standard hazard analysis techniques of limited usefulness for software-intensive systems, a category that describes most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analyst than traditional fault tree analysis. Functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based systems engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis and following the NASA standards for safety-critical systems, the results of our experimental application of STPA can be compared with these more traditional safety engineering approaches in terms of the problems identified and the resources required to use it.
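
    The enumeration step STPA performs over control actions can be sketched as a cross product of the system's control actions with the four general ways a control action can be unsafe. The control actions below are hypothetical examples, not taken from the HTV analysis.

```python
from itertools import product

# STPA examines each control action against four general ways it can be unsafe.
GUIDE_WORDS = [
    "not provided when required",
    "provided when unsafe",
    "provided too early, too late, or out of sequence",
    "stopped too soon or applied too long",
]

# Hypothetical control actions for a spacecraft controller (illustrative only).
CONTROL_ACTIONS = ["fire descent engine", "deploy parachute", "jettison service module"]

def enumerate_candidate_ucas(actions, guide_words):
    """Cross every control action with every guide word to obtain candidate
    unsafe control actions (UCAs) for the analyst to assess in context."""
    return [f"{a} -- {g}" for a, g in product(actions, guide_words)]

ucas = enumerate_candidate_ucas(CONTROL_ACTIONS, GUIDE_WORDS)
print(len(ucas))  # 3 actions x 4 guide words = 12 candidates
```

Each candidate is then judged against the functional control diagram to decide whether it is actually hazardous in some system state; the mechanical enumeration only ensures no combination is overlooked.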

  13. Failure modes and effects criticality analysis and accelerated life testing of LEDs for medical applications

    NASA Astrophysics Data System (ADS)

    Sawant, M.; Christou, A.

    2012-12-01

    While the use of LEDs in fiber optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of the various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, their severity, and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: the active layer (where electron-hole recombination occurs to emit light), the electrodes (which provide electrical contact to the semiconductor chip), the Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), the plastic encapsulation (protective polymer layer), and the packaging (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), the failure mode ratio (α), the failure rate (λ), and the operating time. Once the critical failure modes were identified, the next steps were generation of prior time-to-failure distributions and comparison with our accelerated life test data. To generate the prior distributions, data and results from previous investigations in which reliability test results of similar LEDs were reported were utilized [5-33]. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value; this is our failure criterion for the medical diagnostic application. Analysis of published data for the different LED materials (AlGaInP, GaN, AlGaAs), semiconductor structures (DH, MQW), and modes of testing (DC, pulsed) was carried out. The data were categorized according to materials system and LED structure, such as AlGaInP-DH-DC, AlGaInP-MQW-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperatures and currents, the reported data were converted to the application conditions of the medical environment. Comparisons between the model data and the accelerated test results carried out in the present work are reported. The use of accelerating-agent modeling and regression analysis was also carried out: we used the inverse power law model with the current density J as the accelerating agent, and the Arrhenius model with temperature as the accelerating agent. Finally, our methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
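
    The criticality calculation described above (failure effect probability β, failure mode ratio α, failure rate λ, operating time) and the two acceleration models can be sketched as follows. The numeric inputs are illustrative, not the paper's measured values.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def mode_criticality(beta, alpha, failure_rate, operating_hours):
    """Failure-mode criticality number C_m = beta * alpha * lambda * t
    (MIL-STD-1629A style): failure effect probability times failure mode
    ratio times part failure rate times operating time."""
    return beta * alpha * failure_rate * operating_hours

def arrhenius_af(t_use_k, t_stress_k, ea_ev):
    """Arrhenius acceleration factor between stress and use temperatures (K)."""
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

def inverse_power_af(j_stress, j_use, n):
    """Inverse power law acceleration factor with current density J."""
    return (j_stress / j_use) ** n

# Hypothetical LED active-layer degradation mode.
cm = mode_criticality(beta=0.8, alpha=0.4, failure_rate=2e-6, operating_hours=10_000)
print(cm)  # ~6.4e-3 expected failures attributable to this mode
```

Summing `mode_criticality` over all modes of an item gives the item criticality used to rank entries in the FMECA table; the acceleration factors translate stress-test lifetimes to use-condition lifetimes.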

  14. Identification of Modeling Approaches To Support Common-Cause Failure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsah, Kofi; Wood, Richard Thomas

    2015-06-01

    Experience with applying current guidance and practices for common-cause failure (CCF) mitigation to digital instrumentation and control (I&C) systems has proven problematic, and the regulatory environment has been unpredictable. The impact of CCF vulnerability is to inhibit I&C modernization and, thereby, challenge the long-term sustainability of existing plants. For new plants and advanced reactor concepts, the issue of CCF vulnerability for highly integrated digital I&C systems imposes a design burden resulting in higher costs and increased complexity. The regulatory uncertainty regarding which mitigation strategies are acceptable (e.g., what diversity is needed and how much is sufficient) drives designers to adopt complicated, costly solutions devised for existing plants. The conditions that constrain the transition to digital I&C technology by the U.S. nuclear industry require crosscutting research to resolve uncertainty, demonstrate necessary characteristics, and establish an objective basis for qualification of digital technology for use in Nuclear Power Plant (NPP) I&C applications. To fulfill this research need, Oak Ridge National Laboratory is conducting an investigation into mitigation of CCF vulnerability for nuclear-qualified applications. The outcome of this research is expected to contribute to a fundamentally sound, comprehensive technical basis for establishing the qualification of digital technology for nuclear power applications. This report documents the investigation of modeling approaches for representing failure of I&C systems. Failure models are used when there is a need to analyze how the probability of success (or failure) of a system depends on the success (or failure) of individual elements. If these failure models are extensible to represent CCF, then they can be employed to support analysis of CCF vulnerabilities and mitigation strategies. Specifically, the research findings documented in this report identify modeling approaches that can be adapted to contribute to the basis for developing systematic methods, quantifiable measures, and objective criteria for evaluating CCF vulnerabilities and mitigation strategies.
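
    A minimal sketch of why CCF dominates redundant digital channels, using the simple beta-factor model. This is one common parametric CCF model chosen here for illustration; the report does not prescribe this specific model or these numbers.

```python
def redundant_pair_failure_prob(p_total, beta):
    """First-order beta-factor model for a 1-out-of-2 redundant channel pair.

    p_total : total failure probability of one channel
    beta    : fraction of that probability shared as common cause

    The independent part must fail both channels jointly; the common-cause
    part defeats both channels at once, so redundancy cannot reduce it.
    """
    p_ind = (1.0 - beta) * p_total
    p_ccf = beta * p_total
    return p_ind ** 2 + p_ccf

# Hypothetical digital I&C channel: p = 1e-3 per demand, beta = 0.05.
p_pair = redundant_pair_failure_prob(1e-3, 0.05)
print(p_pair)
```

Even with a modest beta of 5%, the common-cause term (5e-5) is orders of magnitude larger than the independent double-failure term (~9e-7), which is why CCF, not random failure, drives the qualification burden for redundant digital systems.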

  15. Secure Embedded System Design Methodologies for Military Cryptographic Systems

    DTIC Science & Technology

    2016-03-31

    Fault-Tree Analysis (FTA); Built-In Self-Test (BIST) Introduction Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements, the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis

  16. Underground storage systems for high-pressure air and gases

    NASA Technical Reports Server (NTRS)

    Beam, B. H.; Giovannetti, A.

    1975-01-01

    This paper is a discussion of the safety and cost of underground high-pressure air and gas storage systems based on recent experience with a high-pressure air system installed at Moffett Field, California. The system described used threaded and coupled oil well casings installed vertically to a depth of 1200 ft. Maximum pressure was 3000 psi and capacity was 500,000 lb of air. A failure mode analysis is presented, and it is shown that underground storage offers advantages in avoiding catastrophic consequences from pressure vessel failure. Certain problems such as corrosion, fatigue, and electrolysis are discussed in terms of the economic life of such vessels. A cost analysis shows that where favorable drilling conditions exist, the cost of underground high-pressure storage is approximately one-quarter that of equivalent aboveground storage.

  17. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    NASA Technical Reports Server (NTRS)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system that will detect failures and reconfigure the avionics accordingly. A probability density function to determine false alarm rates was generated using an algorithmic approach. Microcomputer software was developed that prints out tables of values for the cumulative probability of being in the domain of failure, the system reliability, and the false alarm probability given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
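
    The false alarm probability for a symmetric failure-detection threshold on a Gaussian-noise sensor signal can be sketched as below; the threshold and noise level are hypothetical, not values from the AFTI F-16 study.

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def false_alarm_prob(threshold, sigma):
    """Probability that zero-mean Gaussian sensor noise exceeds a symmetric
    failure-detection threshold on a healthy sensor, i.e. a false alarm."""
    return 2.0 * (1.0 - std_normal_cdf(threshold / sigma))

# Hypothetical: declare failure when the residual exceeds 3 sigma of noise.
fa = false_alarm_prob(3.0, 1.0)
print(fa)  # about 0.0027 per sample
```

Raising the threshold lowers the false alarm rate but also lowers the probability of detecting a genuine failure, which is exactly the trade-off the statistical threshold-selection methodology addresses.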

  18. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
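
    The core idea, that the Bernstein coefficients of a polynomial bound its range on [0, 1], can be sketched in a few lines. This is a minimal illustration of the enclosure property only; the paper's method additionally optimizes over the elements of a multi-dimensional p-box.

```python
from math import comb

def bernstein_bounds(power_coeffs):
    """Range enclosure of p(x) = sum a_i x^i on [0, 1] via its Bernstein
    coefficients: the min and max coefficients bound the polynomial."""
    a = power_coeffs
    n = len(a) - 1
    # Convert power-basis coefficients a_i to Bernstein coefficients c_k.
    c = [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
         for k in range(n + 1)]
    return min(c), max(c)

# p(x) = 4x^2 - 4x + 1 = (2x - 1)^2, whose true range on [0, 1] is [0, 1].
lo, hi = bernstein_bounds([1.0, -4.0, 4.0])
print(lo, hi)
```

The enclosure is conservative (here it returns [-1, 1] for a true range of [0, 1]); subdividing the domain or degree-elevating the Bernstein representation tightens the bounds with additional computational effort, mirroring the arbitrarily tight bounding described in the abstract.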

  19. Fatigue damage accumulation in various metal matrix composites

    NASA Technical Reports Server (NTRS)

    Johnson, W. S.

    1987-01-01

    The purpose of this paper is to review some of the latest understanding of the fatigue behavior of continuous fiber reinforced metal matrix composites. The emphasis is on developing an understanding of the different fatigue damage mechanisms and why and how they occur. The fatigue failure modes in continuous fiber reinforced metal matrix composites are controlled by the three constituents of the system: fiber, matrix, and fiber/matrix interface. The relative strains to fatigue failure of the fiber and matrix determine the failure mode. Several examples of fatigue damage dominated by matrix damage, fiber damage, and self-similar damage growth are given for several metal matrix composite systems. Composite analysis, failure modes, and damage modeling are discussed for boron/aluminum, silicon-carbide/aluminum, FP/aluminum, and borsic/titanium metal matrix composites.

  20. SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis Smith; James Knudsen

    As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and of how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
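
    The N-out-of-M gate used for the thruster assemblies corresponds to a k-out-of-n reliability calculation, sketched below with a hypothetical per-assembly reliability (the benchmark's actual success criteria vary by phase and are not reproduced here).

```python
from math import comb

def k_out_of_n_reliability(k, n, p_success):
    """Probability that at least k of n identical, independent thruster
    assemblies operate, each with success probability p_success."""
    return sum(comb(n, i) * p_success**i * (1 - p_success)**(n - i)
               for i in range(k, n + 1))

# Hypothetical phase: 3 of the 5 assemblies needed, each 0.95 reliable.
r = k_out_of_n_reliability(3, 5, 0.95)
print(round(r, 6))
```

A phased mission would evaluate a gate like this per phase with phase-specific k and p, then combine the phases; the independence assumption is exactly what the parametric common-cause treatment in the paper relaxes.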

  1. A guide to onboard checkout. Volume 4: Propulsion

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The propulsion system for a space station is considered with respect to onboard checkout requirements. Failure analysis, reliability, and maintenance features are presented. Computer analysis techniques are also discussed.

  2. Independent Orbiter Assessment (IOA): Assessment of the orbital maneuvering system FMEA/CIL, volume 1

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Haufler, W. A.; Marino, A. J.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Orbital Maneuvering System (OMS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter OMS hardware. The IOA analysis defined the OMS as being comprised of the following subsystems: helium pressurization, propellant storage and distribution, Orbital Maneuvering Engine, and EPD and C. The IOA product for the OMS analysis consisted of 284 hardware and 667 EPD and C failure mode worksheets that resulted in 160 hardware and 216 EPD and C potential critical items (PCIs) being identified. A comparison was made of the IOA product to the NASA FMEA/CIL baseline which consisted of 101 hardware and 142 EPD and C CIL items.

  3. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e., PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate, or completely redefine its role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
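
    A minimal sketch of the self-activation/deactivation idea: each component agent advances its own damage model and removes itself from the analysis when its failure threshold is crossed, while a failed agent raises the stress on another to mimic interacting failure mechanisms. All names, rates, and thresholds are illustrative, not taken from the dissertation.

```python
class ComponentAgent:
    """Component agent that tracks its own physics-of-failure damage state
    and autonomously deactivates its role in the system-level analysis."""

    def __init__(self, name, damage_rate, failure_threshold=1.0):
        self.name = name
        self.damage_rate = damage_rate            # damage accumulated per hour
        self.failure_threshold = failure_threshold
        self.damage = 0.0
        self.active = True                        # participating in the analysis

    def step(self, hours, stress_factor=1.0):
        """Advance the agent's damage model; interaction is modeled by letting
        another agent's failure raise this agent's stress factor."""
        if self.active:
            self.damage += self.damage_rate * stress_factor * hours
            if self.damage >= self.failure_threshold:
                self.active = False               # agent deactivates itself

    @property
    def failed(self):
        return self.damage >= self.failure_threshold

# Two coupled agents: the pump wears three times faster once the cooler fails.
cooler = ComponentAgent("cooler", damage_rate=0.002)
pump = ComponentAgent("pump", damage_rate=0.001)
for _ in range(600):  # simulate 600 one-hour steps
    cooler.step(1.0)
    pump.step(1.0, stress_factor=3.0 if cooler.failed else 1.0)
print(cooler.failed, pump.failed)
```

The system-level model then infers reliability at any time by querying each agent's state rather than evaluating a fixed global failure expression, which is what lets agents redefine their role mid-analysis.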

  4. Health information systems: failure, success and improvisation.

    PubMed

    Heeks, Richard

    2006-02-01

    The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field, who point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). The methods were literature and case analysis plus pilot testing of the developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences existing across HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.
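
    A design-reality gap assessment is often operationalized by rating the gap on each of seven dimensions and summing. The sketch below follows that pattern, using the ITPOSMO-style dimension names commonly associated with this model; the risk-band thresholds are purely illustrative assumptions, not figures from the paper.

```python
# Rate the gap between the HIS design's assumptions and deployment reality
# on each dimension (0 = no gap, 10 = complete mismatch); a larger total
# indicates a higher risk of failure.
DIMENSIONS = [
    "information", "technology", "processes", "objectives_and_values",
    "staffing_and_skills", "management_systems", "other_resources",
]

def gap_score(ratings):
    """Sum the per-dimension gap ratings; all dimensions must be rated."""
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(ratings[d] for d in DIMENSIONS)

def risk_band(score):
    """Illustrative banding of the 0-70 total into risk categories."""
    if score >= 43:
        return "high risk of failure"
    if score >= 15:
        return "moderate risk"
    return "lower risk"

ratings = {d: 5 for d in DIMENSIONS}   # hypothetical mid-range gaps
s = gap_score(ratings)
print(s, risk_band(s))  # 35 moderate risk
```

Used pre hoc, the scoring points mitigation at the largest individual gaps; used post hoc, it helps explain why a deployed HIS succeeded or failed.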

  5. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Pinero, Luis; Schneidegger, Robert; Dunning, John; Birchenough, Art

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies: the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU has been operated for more than 200 hours and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hours of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures as the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  6. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Scheidegger, Robert J.; Pinero, Luis R.; Birchenough, Arthur J.; Dunning, John W.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies: the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU has been operated for more than 200 hr and has experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hr of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures as the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  7. [Application of root cause analysis in healthcare].

    PubMed

    Hsu, Tsung-Fu

    2007-12-01

    The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, underlying concept, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and its advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The underlying concept of RCA offers a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which lead health professionals to commit negligence or make mistakes in healthcare. We must explore the root causes of medical errors to eliminate potential system failure factors, and a systemic approach is needed to resolve medical errors and move beyond a culture centered on assigning fault to individuals. By constructing a genuinely patient-centered, safe healthcare environment, we can encourage clients to accept state-of-the-art healthcare services.

  8. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system-function-based technique for improving a preventive maintenance (PM) program, applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period with those from a PM period, and also with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement over the PM-period failure rates for the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.

  9. Reliability culture at La Silla Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Gonzalez, Sergio

    2010-07-01

    The Maintenance Department at the La Silla-Paranal Observatory has been an important foundation for keeping observatory operations at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep systems and equipment working properly when required. One of the latest improvements has been the introduction of the concept of reliability, which involves much more than simply speaking about reliability concepts: it involves the use of technologies, data collection, data analysis, decision making, committees concentrating on analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has permitted us to optimize our processes, analyze where we can improve, avoid functional failures, and reduce failures in several systems and subsystems; all this has had a positive impact on results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.

  10. Space Shuttle Solid Rocket Booster Decelerator Subsystem Drop Test 3 - Anatomy of a failure

    NASA Technical Reports Server (NTRS)

    Runkle, R. E.; Woodis, W. R.

    1979-01-01

    A test failure dramatically points out a design weakness or the limits of the material in the test article. In a low-budget test program with a very limited number of tests, a test failure sparks supreme efforts to investigate, analyze, and/or explain the anomaly and to improve the design so that the failure will not recur. The third air drop of the Space Shuttle Solid Rocket Booster Recovery System experienced such a dramatic failure: on air drop 3, the 54-ft drogue parachute was totally destroyed 0.7 sec after deployment. The parachute failure investigation, based on analysis of drop test data and supporting ground element test results, is presented. Drogue design modifications are also discussed.

  11. STS-3 main parachute failure

    NASA Technical Reports Server (NTRS)

    Runkle, R.; Henson, K.

    1982-01-01

    A failure analysis of a parachute on the Space Transportation System 3 flight's solid rocket boosters is presented. During the reentry phase of the two Solid Rocket Boosters (SRBs), one 115-ft-diameter main parachute failed on the right-hand SRB (A12). This parachute failure caused the SRB to impact the ocean at 110 ft/sec instead of the expected three-parachute impact velocity of 88 ft/sec. This higher impact velocity translates directly into more SRB aft skirt damage and more motor case damage. The cause of the parachute failure, the potential risks of losing an SRB as a result of this failure, and recommendations to ensure a low probability of similar chute failures in the future are discussed.

  12. Qualification of computerized monitoring systems in a cell therapy facility compliant with the good manufacturing practices.

    PubMed

    Del Mazo-Barbara, Anna; Mirabel, Clémentine; Nieto, Valentín; Reyes, Blanca; García-López, Joan; Oliver-Vila, Irene; Vives, Joaquim

    2016-09-01

    Computerized systems (CS) are essential in the development and manufacture of cell-based medicines and must comply with good manufacturing practice, thus pushing academic developers to implement methods that are typically found within pharmaceutical industry environments. Qualitative and quantitative risk analyses were performed by Ishikawa and Failure Mode and Effects Analysis, respectively. A process for qualification of a CS that keeps track of environmental conditions was designed and executed. The simplicity of the Ishikawa analysis permitted identification of the critical parameters, which were subsequently quantified by Failure Mode and Effects Analysis, resulting in a list of tests included in the qualification protocols. The approach presented here contributes to simplifying and streamlining the qualification of CS in compliance with pharmaceutical quality standards.
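
    Failure Mode and Effects Analysis typically quantifies each failure mode with a Risk Priority Number (severity × occurrence × detection, each rated on a 1-10 ordinal scale). A minimal sketch with hypothetical failure modes for an environmental-monitoring CS follows; the modes and ratings are illustrative, not the ones from the qualification described here.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number used in Failure Mode and Effects Analysis:
    each factor is an expert rating on a 1-10 ordinal scale."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical failure modes for an environmental-monitoring system.
modes = {
    "temperature probe drift": rpn(7, 4, 5),
    "network outage drops alarms": rpn(9, 3, 2),
    "database write failure": rpn(8, 2, 6),
}
# Rank modes so the qualification protocol tests the riskiest ones first.
for name, score in sorted(modes.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d}  {name}")
```

Modes above an agreed RPN cutoff get dedicated tests in the qualification protocol; because the scales are ordinal, the product is a prioritization aid rather than a true probability.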

  13. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long-time-averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  14. Analysis, Design, and Prototyping Of Accounting Software for Navy Signal Intelligence Collection Systems Return On Investment Reporting

    DTIC Science & Technology

    2010-09-01

    The MasterNet project continued to expand in software and hardware complexity until its failure (Szilagyi, n.d.). Despite all of the issues...were used for MasterNet (Szilagyi, n.d.). Although executive management committed significant financial resources to MasterNet, Bank of America...implementation failure as well as project-management failure as a whole (Szilagyi, n.d.). The lesson learned from this vignette is the importance of setting

  15. The effect of renin-angiotensin system inhibitors on mortality and heart failure hospitalization in patients with heart failure and preserved ejection fraction: a systematic review and meta-analysis.

    PubMed

    Shah, Ravi V; Desai, Akshay S; Givertz, Michael M

    2010-03-01

    Although renin-angiotensin system (RAS) inhibitors have little demonstrable effect on mortality in patients with heart failure and preserved ejection fraction (HF-PEF), some trials have suggested a benefit with regard to reduction in HF hospitalization. Here, we systematically review and evaluate prospective clinical studies of RAS inhibitors enrolling patients with HF-PEF, including the 3 major trials of RAS inhibition (Candesartan in Patients with Chronic Heart Failure and Preserved Left Ventricular Ejection Fraction [CHARM-Preserved], Irbesartan in Patients with Heart Failure and Preserved Ejection Fraction [I-PRESERVE], and Perindopril in Elderly People with Chronic Heart Failure [PEP-CHF]). We also conducted a pooled analysis of 8021 patients in the 3 major randomized trials of RAS inhibition in HF-PEF (CHARM-Preserved, I-PRESERVE, and PEP-CHF) in fixed-effect models, finding no clear benefit with regard to all-cause mortality (odds ratio [OR] 1.03, 95% confidence interval [CI], 0.92-1.15; P=.62), or HF hospitalization (OR 0.90, 95% CI 0.80-1.02; P=.09). Although RAS inhibition may be valuable in the management of comorbidities related to HF-PEF, RAS inhibition in HF-PEF is not associated with consistent reduction in HF hospitalization or mortality in this emerging cohort. Copyright (c) 2010 Elsevier Inc. All rights reserved.
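The pooled analysis described above is a fixed-effect combination of trial-level odds ratios. A minimal sketch of inverse-variance pooling on the log-odds scale, with the standard error backed out of each trial's 95% CI; the three example tuples below are hypothetical, not the actual CHARM-Preserved, I-PRESERVE, or PEP-CHF results:

```python
import math

def pooled_or_fixed(studies):
    """Fixed-effect (inverse-variance) pooling on the log odds-ratio scale.
    studies: iterable of (odds_ratio, ci_lower_95, ci_upper_95) tuples."""
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)

# hypothetical trial summaries -- NOT the actual trial numbers
example = [(0.95, 0.80, 1.13), (1.05, 0.90, 1.22), (0.92, 0.70, 1.21)]
pooled = pooled_or_fixed(example)
```

Larger, more precise trials (narrower CIs) dominate the pooled estimate, which is why a fixed-effect pool of three trials can sit close to the null even when the point estimates differ.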

  16. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity has been seen recently in the research and development community, much of it directed toward predicting failure probabilities for single-mode failures. The focus here is on early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss) and structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
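The failure probability the abstract refers to is P(load exceeds resistance) for a given limit state. The simplest estimator, and a common baseline for the advanced methods the paper surveys, is crude Monte Carlo sampling. A sketch with an illustrative normal load/resistance pair (the distributions are our assumption, not from the paper):

```python
import random

def failure_probability(n=100_000, seed=1):
    """Crude Monte Carlo estimate of P(load > resistance) for an
    illustrative limit state with normally distributed variables."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        resistance = rng.gauss(mu=50.0, sigma=5.0)  # capacity
        load = rng.gauss(mu=30.0, sigma=8.0)        # demand
        if load > resistance:
            failures += 1
    return failures / n

p_f = failure_probability()
```

For this case the exact answer is available in closed form (resistance minus load is normal with mean 20 and standard deviation about 9.43, giving P ≈ 0.017), which is how the estimate below can be checked; for real multi-mode structural systems no such closed form exists, motivating the specialized reliability methods the paper describes.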

  17. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity has been seen recently in the research and development community, much of it directed toward predicting failure probabilities for single-mode failures. The focus here is on early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss) and structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  18. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure

    PubMed Central

    2017-01-01

    The cyber-physical gap (CPG) is the difference between the ‘real’ state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to modeling and analysis of the 1979 Three Mile Island 2 nuclear accident, and show how its meltdown could have been mitigated. We use ISO 19450:2015 (Object-Process Methodology) as our conceptual modeling framework. PMID:28714910

  19. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not account for system integration risks such as those attributable to manufacturing and assembly, and these sources often dominate component-level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to inform the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood in the absence of system-level test data or operational data. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.

  20. Fault Tree Analysis: Its Implications for Use in Education.

    ERIC Educational Resources Information Center

    Barker, Bruce O.

    This study introduces the concept of Fault Tree Analysis as a systems tool and examines the implications of Fault Tree Analysis (FTA) as a technique for isolating failure modes in educational systems. A definition of FTA and a discussion of its history, as it relates to education, are provided. The step-by-step process for implementation and use of…

  1. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
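The standards-based prediction the abstract describes typically reduces, in its simplest "parts count" form, to summing handbook failure rates over all parts and converting the system rate into a reliability via the exponential model. A minimal sketch; the part quantities and rates below are illustrative placeholders, not values from MIL-HDBK-217F:

```python
import math

def system_reliability(parts, hours):
    """Parts-count prediction: sum per-part failure rates (given here in
    failures per 1e6 hours), then apply R(t) = exp(-lambda_sys * t)."""
    lam_per_1e6 = sum(qty * rate for qty, rate in parts)
    lam = lam_per_1e6 / 1e6          # convert to failures per hour
    return math.exp(-lam * hours)

parts = [(120, 0.002),   # (quantity, failures per 1e6 h) -- e.g. resistors
         (40, 0.010),    #                                  capacitors
         (6, 0.500)]     #                                  microcircuits
r = system_reliability(parts, hours=10_000)
```

The exponential model assumes a constant failure rate (the flat part of the bathtub curve), which is exactly the regime handbook rates are meant to describe; early-life and wear-out behavior need different treatment.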

  2. SU-E-T-495: Neutron Induced Electronics Failure Rate Analysis for a Single Room Proton Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knutson, N; DeWees, T; Klein, E

    2014-06-01

    Purpose: To determine the failure rate, as a function of neutron dose, of the range modulator's servo motor controller system (SMCS), both shielded with borated polyethylene (BPE) and unshielded, in a single-room proton accelerator. Methods: Two experimental setups were constructed using two servo motor controllers and two motors. Each SMCS was placed 30 cm from the end of the plugged proton accelerator applicator. The motor was then turned on and observed from outside the vault while being irradiated to known neutron doses determined from bubble detector measurements. Any time the motor deviated from the programmed motion, a failure was recorded along with the delivered dose. The experiment was repeated using 9 cm of BPE shielding surrounding the SMCS. Results: Ten SMCS failures were recorded in each experiment. The dose per monitor unit was 0.0211 mSv/MU for the unshielded SMCS and 0.0144 mSv/MU for the shielded SMCS. The mean dose to produce a failure was 63.5 ± 58.3 mSv for the unshielded SMCS versus 17.0 ± 12.2 mSv for the shielded. The mean number of MUs between failures was 2297 ± 1891 MU for the unshielded SMCS and 2122 ± 1523 MU for the shielded. A Wilcoxon signed-rank test showed that the doses between failures were significantly different (P value = 0.044) while the numbers of MUs between failures were not (P value = 1.000). Statistical analysis determined that an SMCS neutron dose of 5.3 mSv produces a 5% chance of failure. Depending on the workload and location of the SMCS, this failure rate could impede clinical workflow. Conclusion: BPE shielding was shown not to reduce the average failure rate of the SMCS, and relocation of the system outside the accelerator vault was required to lower the failure rate enough to avoid impeding clinical workflow.
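One way to turn a mean dose-to-failure into a "dose for a 5% failure chance" is to assume an exponentially distributed dose-to-failure. This is our assumption for illustration only; the abstract does not state the statistical model behind its quoted 5.3 mSv figure, and an exponential fit to the unshielded mean gives a different (smaller) number:

```python
import math

def dose_for_failure_prob(mean_dose_to_failure, p):
    """Under an exponential dose-to-failure model (an assumed model,
    not necessarily the paper's fit), the dose at which the failure
    probability reaches p is -mean * ln(1 - p)."""
    return -mean_dose_to_failure * math.log(1.0 - p)

# unshielded mean dose-to-failure from the record: 63.5 mSv
d5 = dose_for_failure_prob(63.5, 0.05)
```

The exponential assumption yields roughly 3.3 mSv for a 5% failure chance; the gap between that and the paper's 5.3 mSv is a reminder that the result is model-dependent.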

  3. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/electrical power generation subsystem

    NASA Technical Reports Server (NTRS)

    Patton, Jeff A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  4. Andreas Acrivos Dissertation Award: Onset of Dynamic Wetting Failure - The Mechanics of High-Speed Fluid Displacement

    NASA Astrophysics Data System (ADS)

    Vandre, Eric

    2014-11-01

    Dynamic wetting is crucial to processes where a liquid displaces another fluid along a solid surface, such as the deposition of a coating liquid onto a moving substrate. Dynamic wetting fails when process speed exceeds some critical value, leading to incomplete fluid displacement and transient phenomena that impact a variety of applications, such as microfluidic devices, oil-recovery systems, and splashing droplets. Liquid coating processes are particularly sensitive to wetting failure, which can induce air entrainment and other catastrophic coating defects. Despite the industrial incentives for careful control of wetting behavior, the hydrodynamic factors that influence the transition to wetting failure remain poorly understood from empirical and theoretical perspectives. This work investigates the fundamentals of wetting failure in a variety of systems that are relevant to industrial coating flows. A hydrodynamic model is developed where an advancing fluid displaces a receding fluid along a smooth, moving substrate. Numerical solutions predict the onset of wetting failure at a critical substrate speed, which coincides with a turning point in the steady-state solution path for a given set of system parameters. Flow-field analysis reveals a physical mechanism where wetting failure results when capillary forces can no longer support the pressure gradients necessary to steadily displace the receding fluid. Novel experimental systems are used to measure the substrate speeds and meniscus shapes associated with the onset of air entrainment during wetting failure. Using high-speed visualization techniques, air entrainment is identified by the elongation of triangular air films with system-dependent size. Air films become unstable to thickness perturbations and ultimately rupture, leading to the entrainment of air bubbles. 
Meniscus confinement in a narrow gap between the substrate and a stationary plate is shown to delay air entrainment to higher speeds for a variety of water/glycerol solutions. In addition, liquid pressurization (relative to ambient air) further postpones air entrainment when the meniscus is located near a sharp corner along the plate. Recorded critical speeds compare well to predictions from the model, supporting the hydrodynamic mechanism for the onset of wetting failure. Lastly, the industrial practice of curtain coating is investigated using the hydrodynamic model. Due to the complexity of this system, a new computational approach is developed combining a finite element method and lubrication theory in order to improve the efficiency of the numerical analysis. Results show that the onset of wetting failure varies strongly with the operating conditions of this system. In addition, stresses from the air flow dramatically affect the steady wetting behavior of curtain coating. Ultimately, these findings emphasize the important role of two-fluid displacement mechanics in high-speed wetting systems.

  5. Systems Design Factors: The Essential Ingredients of System Design, Version 0.4

    DTIC Science & Technology

    1994-03-18

    Reliability Function). 4. Barry W. Johnson, Design and Analysis of Fault Tolerant Digital Systems, p. 4, Addison-Wesley Publishing Company, 1985. METRICS...the system was performing correctly at time t. The unreliability is often referred to as the probability of failure. SOURCE: 1. Barry W. Johnson...Systems Engineering. 3. Barry W. Johnson, Design and Analysis of Fault Tolerant Digital Systems, Addison-Wesley Publishing Company, 1985, p. 5

  6. Stiffness and strength of fiber reinforced polymer composite bridge deck systems

    NASA Astrophysics Data System (ADS)

    Zhou, Aixi

    This research investigates two principal characteristics that are of primary importance in Fiber Reinforced Polymer (FRP) bridge deck applications: STIFFNESS and STRENGTH. The research was undertaken by investigating the stiffness and strength characteristics of multi-cellular FRP bridge deck systems consisting of pultruded FRP shapes. A systematic analysis procedure was developed for the stiffness analysis of multi-cellular FRP deck systems. This procedure uses the Method of Elastic Equivalence to model the cellular deck as an equivalent orthotropic plate, and provides a practical way to predict the equivalent orthotropic plate properties of cellular FRP decks. Analytical solutions for the bending analysis of single-span decks were developed using classical laminated plate theory. The analysis procedure can be extended to analyze continuous FRP decks, and can be further developed using higher-order plate theories. Several failure modes of the cellular FRP deck systems were recorded and analyzed through laboratory and field tests and Finite Element Analysis (FEA). Two loading patches were used in the laboratory tests: a steel patch made according to AASHTO's bridge testing specifications, and a tire patch made from a real truck tire reinforced with silicone rubber. The tire patch was specially designed to simulate service loading conditions by modifying the real contact loading from a tire. Our research shows that the effects of the stiffness and contact conditions of loading patches are significant in the stiffness and strength testing of FRP decks. Due to the localization of load, the simulated tire patch yields larger deflection than the steel patch at the same loading level. The tire patch also produces a significantly different failure mode compared to the steel patch: a local bending mode with less damage for the tire patch, and a local punching-shear mode for the steel patch.
A deck failure function method is proposed for predicting the failure of FRP decks. Using developed laminated composite theories and FEA techniques, a strength analysis procedure containing ply-level information was proposed and detailed for FRP deck systems. The behavior of the deck's unsupported (free) edges was also investigated using ply-level FEA.

  7. Supporting Space Systems Design via Systems Dependency Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Guariniello, Cesare

    The increasing size and complexity of space systems and space missions pose severe challenges to space systems engineers. When complex systems and Systems-of-Systems are involved, the behavior of the whole entity is due not only to that of the individual systems involved but also to the interactions and dependencies between the systems. Dependencies can be varied and complex, and designers usually do not analyze the impact of dependencies at the level of complex systems; when they do, the analysis involves excessive computational cost or occurs at a later stage of the design process, after designers have already set detailed requirements, following a bottom-up approach. While classical systems engineering attempts to integrate the perspectives involved across the variety of engineering disciplines and the objectives of multiple stakeholders, there is still a need for more effective tools and methods capable of identifying, analyzing and quantifying properties of the complex system as a whole and of explicitly modeling the effect of some of the features that characterize complex systems. This research describes the development and usage of Systems Operational Dependency Analysis and Systems Developmental Dependency Analysis, two methods based on parametric models of the behavior of complex systems, one in the operational domain and one in the developmental domain. The parameters of the developed models have intuitive meaning, are usable with subjective and quantitative data alike, and give direct insight into the causes of observed, and possibly emergent, behavior. The approach proposed in this dissertation combines models of one-to-one dependencies among systems and between systems and capabilities to analyze and evaluate the impact of failures or delays on the outcome of the whole complex system. The analysis accounts for cascading effects, partial operational failures, multiple failures or delays, and partial developmental dependencies.
The user of these methods can assess the behavior of each system based on its internal status and on the topology of its dependencies on systems connected to it. Designers and decision makers can therefore quickly analyze and explore the behavior of complex systems and evaluate different architectures under various working conditions. The methods support educated decision making both in the design and in the update process of systems architecture, reducing the need to execute extensive simulations. In particular, in the phase of concept generation and selection, the information given by the methods can be used to identify promising architectures to be further tested and improved, while discarding architectures that do not show the required level of global features. The methods, when used in conjunction with appropriate metrics, also allow for improved reliability and risk analysis, as well as for automatic scheduling and re-scheduling based on the features of the dependencies and on the accepted level of risk. This dissertation illustrates the use of the two methods in sample aerospace applications, both in the operational and in the developmental domain. The applications show how to use the developed methodology to evaluate the impact of failures, assess the criticality of systems, quantify metrics of interest, quantify the impact of delays, support informed decision making when scheduling the development of systems and evaluate the achievement of partial capabilities. A larger, well-framed case study illustrates how the Systems Operational Dependency Analysis method and the Systems Developmental Dependency Analysis method can support analysis and decision making, at the mid and high level, in the design process of architectures for the exploration of Mars. The case study also shows how the methods do not replace the classical systems engineering methodologies, but support and improve them.
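The cascading-effect analysis described above can be caricatured as propagation over a dependency graph: each system's effective operability depends on its own internal status and on the systems it depends on. The sketch below is a deliberate toy simplification (internal status times the weakest upstream dependency), not Guariniello's actual parametric SODA model:

```python
def propagate(dependencies, operability):
    """Toy operational-dependency propagation: a system's effective
    operability is its internal status scaled by the weakest of its
    upstream dependencies' effective operabilities."""
    def effective(sys, seen=()):
        if sys in seen:                      # guard against dependency cycles
            return operability[sys]
        ups = dependencies.get(sys, [])
        if not ups:
            return operability[sys]
        return operability[sys] * min(effective(u, seen + (sys,)) for u in ups)
    return {s: effective(s) for s in operability}

# hypothetical three-system architecture: a degraded power system
# cascades through a relay to the science payload
status = propagate(
    {"relay": ["power"], "science": ["relay", "power"]},
    {"power": 0.5, "relay": 1.0, "science": 1.0},
)
```

Even this toy version shows the key property the methods exploit: a partial failure in one system degrades every system downstream of it, which is what makes criticality visible before detailed simulation.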

  8. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer (IC) network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.

  9. Psychiatric DRGs: more risk for hospitals?

    PubMed

    Ehrman, C M; Funk, G; Cavanaugh, J

    1989-09-01

    The diagnosis related group (DRG) system, which replaced the cost-plus system of reimbursement, was implemented in 1983 by Medicare to cover medical expenses on a prospective basis. To date, the DRG system has not been applied to psychiatric illness. The authors compare the likelihood of cost overruns in psychiatric illness with that of cost overruns in medical illness. The data analysis demonstrates that a prospective payment system would have a high likelihood of failure in psychiatric illness. Possible reasons for failure include wide variations in treatments, diagnostics, and other related costs. Also, the number of DRG classifications for psychiatric illness is inadequate.

  10. Infrared thermography based diagnosis of inter-turn fault and cooling system failure in three phase induction motor

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Naikan, V. N. A.

    2017-12-01

    Thermography has been widely used as a technique for anomaly detection in induction motors. The International Electrical Testing Association (NETA) has proposed guidelines for thermographic inspection of electrical systems and rotating equipment. These guidelines help in detecting an anomaly and estimating its severity; however, they focus only on the location of the hotspot rather than on diagnosing the fault. This paper addresses two such faults, the inter-turn fault and failure of the cooling system, both of which result in an increase of stator temperature. The paper proposes two thermal profile indicators derived from thermal analysis of IRT images. These indicators are in compliance with the NETA standard and help in correctly diagnosing inter-turn faults and cooling system failure. The work has been experimentally validated on healthy induction motors and on scenarios with seeded faults.

  11. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated, choosing those failure modes that scored ≥ 8; failure modes with a Severity Index = 4 were selected independently of their Hazard Score. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis of the chosen critical points was conducted, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure, and the strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress note or treatment order (paper); prescription to the incorrect patient; transcription error by nursing staff and pharmacist; and error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and their causes. It has allowed us to analyse the effects on the safety of the process and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
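The selection rule described in this record (keep modes with Hazard Score ≥ 8, plus any mode with maximum severity regardless of score) can be sketched directly. In healthcare FMEA the Hazard Score is commonly the product of severity and probability ratings; the record does not spell out its scales, so the 1-4 ratings and example modes below are illustrative:

```python
def select_failure_modes(modes, hs_threshold=8, severity_override=4):
    """Selection rule from the record: keep modes whose Hazard Score
    (assumed here to be severity * probability) is >= 8, plus any mode
    with the maximum Severity Index (4) regardless of its score."""
    selected = []
    for name, severity, probability in modes:
        hazard_score = severity * probability
        if hazard_score >= hs_threshold or severity == severity_override:
            selected.append(name)
    return selected

modes = [  # (mode, severity 1-4, probability 1-4) -- illustrative ratings
    ("prescription to incorrect patient", 4, 1),
    ("transcription error by nurse",      3, 3),
    ("trolley preparation error",         2, 2),
]
selected = select_failure_modes(modes)
```

The severity override matters: a rare but maximally severe mode (wrong patient) is kept even though its score alone would not clear the threshold.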

  12. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  13. How to Predict Oral Rehydration Failure in Children With Gastroenteritis.

    PubMed

    Geurts, Dorien; Steyerberg, Ewout W; Moll, Henriëtte; Oostenbrink, Rianne

    2017-11-01

    Oral rehydration is the standard in most current guidelines for young children with acute gastroenteritis (AGE). Failure of oral rehydration can complicate the disease course, leading to morbidity from severe dehydration. We aimed to identify prognostic factors for oral rehydration failure in children with AGE. A prospective, observational study was performed at the Emergency Department, Erasmus Medical Centre, Rotterdam, The Netherlands, 2010-2012, including 802 previously healthy children, ages 1 month to 5 years, with AGE. Failure of oral rehydration was defined as secondary rehydration by nasogastric tube, or hospitalization or a revisit for dehydration within 72 hours after the initial emergency department visit. We observed 167 (21%) failures of oral rehydration in a population of 802 children with AGE (median age 1.03 years, interquartile range 0.4-2.1; 60% boys). In multivariate logistic regression analysis, independent predictors of failure of oral rehydration were a higher Manchester Triage System urgency level, abnormal capillary refill time, and a higher clinical dehydration scale score. Early recognition of young children with AGE at risk of failure of oral rehydration therapy is important, as emphasized by the 21% therapy failure rate in our population.

  14. Analysis of Rail Vehicle Suspension Spring with Special Emphasis on Curving, Tracking and Tractive Efforts

    NASA Astrophysics Data System (ADS)

    Kumbhalkar, M. A.; Bhope, D. V.; Vanalkar, A. V.

    2016-09-01

    The dynamics of a rail vehicle represent a balance between the forces acting between wheel and rail, the inertia forces, and the forces exerted by the suspension and articulation. Axial loading on a helical spring causes vertical deflection on straight track, but the observed failures call for investigating lateral and longitudinal loading at horizontal and vertical curves, respectively. Frequent failures of the middle-axle inner suspension spring of a goods-carrying vehicle prompted this investigation. The springs are analyzed for the effect of stress concentration due to centripetal force and due to tractive and braking effort. This paper also discusses shear failure analysis of the spring at curvature and uphill, at various speeds and for different loading conditions, both analytically and by finite element analysis. A two-mass rail vehicle suspension system is analyzed for vibration response analytically using the mathematical tool Matlab Simulink, and the same is evaluated using an FFT vibration analyzer to find peak resonance in the vertical, lateral, and longitudinal directions. The results show that the suspension carries high repeated loads in the vertical and lateral directions due to tracking and curving, causing maximum stress concentration in the middle-axle suspension spring; because this spring is taller than the end-axle springs in the primary suspension system, it fails through high stress accumulation.
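
For context on the spring stress analysis, the standard helical-spring shear-stress calculation with the Wahl curvature correction (a textbook formula, not the paper's specific model) can be sketched as:

```python
import math

def wahl_factor(C):
    """Wahl correction for curvature and direct shear; C = D/d (spring index)."""
    return (4 * C - 1) / (4 * C - 4) + 0.615 / C

def spring_shear_stress(F, D, d):
    """Maximum shear stress in a helical compression spring (Pa).
    F: axial load (N), D: mean coil diameter (m), d: wire diameter (m)."""
    C = D / d
    return wahl_factor(C) * 8 * F * D / (math.pi * d ** 3)
```

Lateral and longitudinal curve loads enter such an analysis as additional force components on `F`, which is why curving and tracking raise the repeated stress seen by the taller middle-axle spring.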

  15. Cost-utility analysis of the EVOLVO study on remote monitoring for heart failure patients with implantable defibrillators: randomized controlled trial.

    PubMed

    Zanaboni, Paolo; Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-05-30

    Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Two hundred patients implanted with a wireless transmission-enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution. 
Remote management of heart failure patients with implantable defibrillators appears to be cost-effective compared to the conventional method of in-person evaluations. ClinicalTrials.gov NCT00873899; http://clinicaltrials.gov/show/NCT00873899 (Archived by WebCite at http://www.webcitation.org/6H0BOA29f).
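
The cost-utility logic behind the "dominant" conclusion can be sketched directly from the reported figures (QALY gain of 0.065 and cost saving of €888.10 per patient over 16 months); the classifier below is a generic cost-effectiveness-plane helper, not the study's analysis code:

```python
def cost_utility(delta_cost, delta_qaly):
    """Classify an intervention on the cost-effectiveness plane."""
    if delta_qaly > 0 and delta_cost <= 0:
        return "dominant"            # more effective and cheaper
    if delta_qaly <= 0 and delta_cost >= 0:
        return "dominated"           # less effective and costlier
    return ("ICER", delta_cost / delta_qaly)  # currency units per QALY gained

# Figures reported for the EVOLVO remote-monitoring arm over 16 months:
print(cost_utility(-888.10, 0.065))  # -> dominant
```

An intervention that both saves money and adds QALYs needs no incremental cost-effectiveness ratio (ICER) threshold: it dominates the comparator by construction.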

  16. Failure analysis of a tool steel torque shaft

    NASA Technical Reports Server (NTRS)

    Reagan, J. R.

    1981-01-01

    A low-design-load drive shaft used to deliver power from an experimental exhaust heat recovery system to the crankshaft of an experimental diesel truck engine failed during highway testing. An independent testing laboratory analyzed the failure by routine metallography and attributed it to fatigue induced by a banded microstructure. Visual examination by NASA of the failed shaft, together with knowledge of the torsional load it carried, pointed to a 100 percent ductile failure with no evidence of fatigue. Scanning electron microscopy confirmed this. Torsional test specimens were produced from pieces of the failed shaft, and torsional overload testing produced failures identical to the one that had occurred in the truck engine. This pointed to a failure caused by a high overload; although the microstructure was defective, it was not the cause of the failure.

  17. Integrated versus nOn-integrated Peripheral inTravenous catheter. Which Is the most effective systeM for peripheral intravenoUs catheter Management? (The OPTIMUM study): a randomised controlled trial protocol

    PubMed Central

    Castillo, Maria Isabel; Larsen, Emily; Cooke, Marie; Marsh, Nicole M; Wallis, Marianne C; Finucane, Julie; Brown, Peter; Mihala, Gabor; Byrnes, Joshua; Walker, Rachel; Cable, Prudence; Zhang, Li; Sear, Candi; Jackson, Gavin; Rowsome, Anna; Ryan, Alison; Humphries, Julie C; Sivyer, Susan; Flanigan, Kathy; Rickard, Claire M

    2018-01-01

    Introduction Peripheral intravenous catheters (PIVCs) are frequently used in hospitals. However, PIVC complications are common, with failures leading to treatment delays, additional procedures, patient pain and discomfort, increased clinician workload and substantially increased healthcare costs. Recent evidence suggests integrated PIVC systems may be more effective than traditional non-integrated PIVC systems in reducing phlebitis, infiltration and costs and increasing functional dwell time. The study aim is to determine the efficacy, cost–utility and acceptability to patients and professionals of an integrated PIVC system compared with a non-integrated PIVC system. Methods and analysis Two-arm, multicentre, randomised controlled superiority trial of integrated versus non-integrated PIVC systems to compare effectiveness on clinical and economic outcomes. Recruitment of 1560 patients over 2 years, with randomisation by a centralised service ensuring allocation concealment. Primary outcomes: catheter failure (composite endpoint) for reasons of: occlusion, infiltration/extravasation, phlebitis/thrombophlebitis, dislodgement, localised or catheter-associated bloodstream infections. Secondary outcomes: first time insertion success, types of PIVC failure, device colonisation, insertion pain, functional dwell time, adverse events, mortality, cost–utility and consumer acceptability. One PIVC per patient will be included, with intention-to-treat analysis. Baseline group comparisons will be made for potentially clinically important confounders. The proportional hazards assumption will be checked, and Cox regression will test the effect of group, patient, device and clinical variables on failure. An as-treated analysis will assess the effect of protocol violations. Kaplan-Meier survival curves with log-rank tests will compare failure by group over time. Secondary endpoints will be compared between groups using parametric/non-parametric techniques. 
Ethics and dissemination Ethical approval from the Royal Brisbane and Women’s Hospital Human Research Ethics Committee (HREC/16/QRBW/527), Griffith University Human Research Ethics Committee (Ref No. 2017/002) and the South Metropolitan Health Services Human Research Ethics Committee (Ref No. 2016–239). Results will be published in peer-reviewed journals. Trial registration number ACTRN12617000089336. PMID:29764876
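
The Kaplan-Meier comparison described in the analysis plan can be illustrated with a minimal, dependency-free estimator (toy data below, not trial results):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per catheter; events: 1 = failure, 0 = censored.
    Returns (event_times, survival_probabilities)."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    s = 1.0
    out_t, out_s = [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = n = 0
        while i < len(pairs) and pairs[i][0] == t:  # group ties at time t
            d += pairs[i][1]
            n += 1
            i += 1
        if d:                        # survival drops only at event times
            s *= 1 - d / at_risk
            out_t.append(t)
            out_s.append(s)
        at_risk -= n                 # censored subjects leave the risk set
    return out_t, out_s
```

A log-rank test then compares two such curves (integrated vs non-integrated PIVC) for a difference in failure over time; in practice a library such as `lifelines` would be used rather than this sketch.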

  18. Independent Orbiter Assessment (IOA): FMEA/CIL assessment

    NASA Technical Reports Server (NTRS)

    Saiidi, Mo J.; Swain, L. J.; Compton, J. M.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis features a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and prime contractor FMEA/CIL documentation. The assessment process compares the independently derived failure modes and criticality assignments to the proposed NASA Post 51-L FMEA/CIL documentation. When possible, assessment issues are discussed and resolved with the NASA subsystem managers. The assessment results for each subsystem are summarized. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode, having a worst-case effect of loss of crew/vehicle when a microwave landing system is not active.

  19. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design and mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator error, physical damage, and design problems. The analytical methods used are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
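
The minimal-cut-set and Monte Carlo evaluation the abstract describes can be sketched as follows; the cut sets and basic-event probabilities are hypothetical illustrations, not the Tehran plant's data:

```python
import random

# Hypothetical basic events and occurrence probabilities.
BASIC = {"operator_error": 0.05, "aeration_failure": 0.02,
         "design_defect": 0.01, "sensor_fault": 0.03}

# Hypothetical minimal cut sets for the top event "effluent BOD limit violated":
# each cut set is a list of basic events that must all occur together.
CUT_SETS = [["operator_error", "sensor_fault"],
            ["aeration_failure"],
            ["design_defect", "operator_error"]]

def top_event_mc(n=100_000, seed=1):
    """Monte Carlo estimate of the top-event probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        state = {e: rng.random() < p for e, p in BASIC.items()}
        if any(all(state[e] for e in cs) for cs in CUT_SETS):
            hits += 1
    return hits / n
```

The rare-event approximation sums the cut-set probabilities (here 0.05·0.03 + 0.02 + 0.01·0.05 ≈ 0.022); the simulation converges to nearly the same value because overlaps between cut sets are small.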

  20. Fracture and failure: Analyses, mechanisms and applications; Proceedings of the Symposium, Los Angeles, CA, March 17-20, 1980

    NASA Technical Reports Server (NTRS)

    Tung, P. P. (Editor); Agrawal, S. P.; Kumar, A.; Katcher, M.

    1981-01-01

    Papers are presented on the application of fracture mechanics to spacecraft design, fracture control applications on the Space Shuttle reaction control thrusters, and an assessment of fatigue crack growth rate relationships for metallic airframe materials. Also considered are fracture mechanisms and microstructural relationships in Ni-base alloy systems, the use of surface deformation markings to determine crack propagation directions, case histories of metallurgical failures in the electronics industry, and a failure analysis of silica phenolic nozzle liners.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki

    A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through analyses of an experiment on water vapor discharging into liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict tube failure occurrence. The numerical models integrated into the TACT code were verified through related experiments. The RELAP5 code evaluates the thermal-hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for a tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of failure propagation.

  2. Reliability analysis of a phaser measurement unit using a generalized fuzzy lambda-tau(GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Nowadays power consumption is increasing day by day. To fulfill failure-free power requirements, planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems. The reliable performance of a PMU assures failure-free power supply for any power system. So, the purpose of the present study is to analyse the reliability of a PMU used for controllability and observability of power systems, utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In GFLT, system components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut-based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, in this study ranking of critical components of the system using the RAM-Index and sensitivity analysis have also been performed. The developed technique may help improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
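
The alpha-cut arithmetic at the core of a lambda-tau analysis can be illustrated for a triangular fuzzy number; the unavailability expression Q = λτ/(1 + λτ) is the standard lambda-tau form, and the rates below are illustrative, not the PMU data:

```python
def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def fuzzy_unavailability(lam_tri, tau_tri, alpha):
    """Interval of steady-state unavailability Q = lam*tau/(1 + lam*tau).
    Q is increasing in both lam and tau, so interval endpoints map to
    endpoints under interval arithmetic."""
    l_lo, l_hi = alpha_cut(lam_tri, alpha)
    t_lo, t_hi = alpha_cut(tau_tri, alpha)
    q = lambda l, t: l * t / (1 + l * t)
    return (q(l_lo, t_lo), q(l_hi, t_hi))
```

At alpha = 1 the interval collapses to the crisp (most-plausible) value; lowering alpha widens the interval, propagating the data uncertainty into the reliability index.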

  3. Unified continuum damage model for matrix cracking in composite rotor blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for the complete 3-D analysis: VABS for 2-D cross-sectional analysis and GEBT for 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, matrix failure in compression and matrix failure in tension, which are based on the recovered field. A strain variable drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking is modeled in two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness under varying applied cyclic load.

  4. Bad Actors Criticality Assessment for Pipeline system

    NASA Astrophysics Data System (ADS)

    Nasir, Meseret; Chong, Kit wee; Osman, Sabtuni; Siaw Khur, Wee

    2015-04-01

    Failure of a pipeline system can bring huge economic loss. To mitigate such catastrophic loss, it is necessary to evaluate and rank the impact of each bad actor in the pipeline system. In this study, bad actors are the root causes or any potential factors leading to system downtime. Fault tree analysis (FTA) is used to analyze the probability of occurrence for each bad actor. Birnbaum's importance and criticality measure (BICM) is also employed to rank the impact of each bad actor on pipeline system failure. The results demonstrate that internal corrosion, external corrosion and construction damage are critical and contribute heavily to pipeline system failure, with 48.0%, 12.4% and 6.0% respectively. Thus, a minor improvement in internal corrosion, external corrosion and construction damage would bring significant changes in pipeline system performance and reliability. These results could also be useful for developing an efficient maintenance strategy by identifying the critical bad actors.
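
The Birnbaum importance measure used for the ranking can be sketched for a simple OR-gate fault tree; the bad actors and probabilities below are hypothetical, not the study's values:

```python
from itertools import product

# Hypothetical bad actors and occurrence probabilities.
P = {"internal_corrosion": 0.10,
     "external_corrosion": 0.04,
     "construction_damage": 0.02}

def system_fails(state):
    """Structure function: the pipeline fails if any bad actor occurs (OR gate)."""
    return any(state.values())

def birnbaum(actor):
    """I_B(actor) = P(system fails | actor occurs) - P(fails | actor absent),
    computed by enumerating the states of all other basic events."""
    others = [n for n in P if n != actor]
    total = 0.0
    for bits in product([0, 1], repeat=len(others)):
        pr = 1.0
        state = {}
        for name, b in zip(others, bits):
            state[name] = bool(b)
            pr *= P[name] if b else 1 - P[name]
        total += pr * (system_fails({**state, actor: True})
                       - system_fails({**state, actor: False}))
    return total
```

For an OR gate, an actor's Birnbaum importance equals the probability that no other actor occurs, so the actor whose "competitors" are rarest ranks highest.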

  5. Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers

    NASA Astrophysics Data System (ADS)

    Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu

    2018-02-01

    Fault characteristics of power electronic components are of great importance for any power electronic device, and especially for those applied in power systems. The topology structures and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration of the fault types and fault characteristics of IGBT and diode failures is presented. The analysis and simulation of the fault characteristics of the different fault types lead to the D-EPT fault location scheme.

  6. Local-global analysis of crack growth in continuously reinforced ceramic matrix composites

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representations of the matrix, fibers, and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and sequence of the failure processes in the unidirectional composite system.

  7. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct.

    PubMed

    Lee, Howard; Lee, Heechan; Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were of high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores by >3 and >2 points, respectively. FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes.
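
The RPN scoring underlying the method is simply the product of the three scores; a minimal sketch with hypothetical failure modes and the study's >160 action threshold:

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: the product of the three 1-10 scale scores."""
    return severity * occurrence * detection

# Hypothetical clinical-trial failure modes with (S, O, D) scores.
failure_modes = {
    "wrong consent version filed": (7, 5, 5),
    "dose miscalculated at visit": (9, 3, 7),
    "sample mislabelled": (6, 4, 6),
}

# Rank only the modes above the action threshold (RPN > 160), highest first.
high_risk = sorted(((rpn(*sod), name) for name, sod in failure_modes.items()
                    if rpn(*sod) > 160), reverse=True)
```

Remedial actions then target the ranked list; rescoring after the actions lowers O and D (and hence RPN) while S, an intrinsic property of the failure, typically stays fixed.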

  8. Solar power satellite system definition study. Volume 7, phase 1: SPS and rectenna systems analyses

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A systems definition study of the solar power satellite systems is presented. The design and power distribution of the rectenna system is discussed. The communication subsystem and thermal control characteristics are described and a failure analysis performed on the systems is reported.

  9. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP) by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode and the risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, the RPN scores, and the top 10 failure modes with the highest risk are described and cross-compared between the two approaches. RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an RPN_ave of 161 and S_ave of 6.7. The highest-risk process, "Measurement Equipment Selection", resulted in an RPN_max of 640. The FMEA of the ACP resulted in 36 failure modes with an RPN_ave of 73 and S_ave of 6.7. The highest-risk process, "EPID Calibration", resulted in an RPN_max of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.

  10. The early indicators of financial failure: a study of bankrupt and solvent health systems.

    PubMed

    Coyne, Joseph S; Singh, Sher G

    2008-01-01

    This article presents a series of pertinent predictors of financial failure based on analysis of solvent and bankrupt health systems to identify which financial measures show the clearest distinction between success and failure. Early warning signals are evident from the longitudinal analysis as early as five years before bankruptcy. The data source includes seven years of annual statements filed with the Securities and Exchange Commission by 13 health systems before they filed bankruptcy. Comparative data were compiled from five solvent health systems for the same seven-year period. Seven financial solvency ratios are included in this study, including four cash liquidity measures, two leverage measures, and one efficiency measure. The results show distinct financial trends between solvent and bankrupt health systems, in particular for the operating-cash-flow-related measures, namely Ratio 1: Operating Cash Flow Percentage Change, from prior to current period; Ratio 2: Operating Cash Flow to Net Revenues; and Ratio 4: Cash Flow to Total Liabilities, indicating sensitivity in the hospital industry to cash flow management. The high dependence on credit from third-party payers is cited as a reason for this; thus, there is a great need for cash to fund operations. Five managerial policy implications are provided to help health system managers avoid financial solvency problems in the future.
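
The operating-cash-flow measures highlighted in the results can be computed with a small helper; the function name and the figures below are illustrative, not data from the studied health systems:

```python
def solvency_ratios(ocf_now, ocf_prior, net_revenues, total_liabilities):
    """Three of the study's operating-cash-flow measures (names paraphrased)."""
    return {
        "ocf_pct_change": (ocf_now - ocf_prior) / abs(ocf_prior),       # Ratio 1
        "ocf_to_net_revenues": ocf_now / net_revenues,                  # Ratio 2
        "cash_flow_to_total_liabilities": ocf_now / total_liabilities,  # Ratio 4
    }

# Hypothetical year-over-year figures (in $ millions) for one health system.
ratios = solvency_ratios(ocf_now=120.0, ocf_prior=100.0,
                         net_revenues=1000.0, total_liabilities=600.0)
```

Tracked annually, sustained declines in these ratios are the kind of early warning signal the study observed up to five years before bankruptcy filings.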

  11. NAC Off-Vehicle Brake Testing Project

    DTIC Science & Technology

    2007-05-01

    disc pads/rotors and drum shoe assemblies/drums - Must use vehicle "OEM" brake/hub-end hardware, or ESA... brake component comparison analysis (primary)* - brake system design analysis - brake system component failure analysis - (*) limited to disc pads... e.g. disc pads/rotors, drum shoe assemblies/drums. - Not limited to "OEM" brake/hub-end hardware as there is none! - Weight transfer, plumbing,

  12. On possibilities of using global monitoring in effective prevention of tailings storage facilities failures.

    PubMed

    Stefaniak, Katarzyna; Wróżyńska, Magdalena

    2018-02-01

    Protection of common natural goods is one of the greatest challenges man faces every day. Extracting and processing natural resources such as mineral deposits contributes to the transformation of the natural environment. A number of activities designed to maintain this balance are undertaken in accordance with the concept of integrated order. One of them is the use of comprehensive systems of tailings storage facility monitoring. Despite the monitoring, system failures still occur. The quantitative aspect of the failures illustrates both the scale of the problem and the scale of the consequences of tailings storage facility failures. The paper presents the vast possibilities provided by global monitoring for the effective prevention of these failures. Particular attention is drawn to the potential of multidirectional monitoring, including technical and environmental monitoring, using the example of one of the world's biggest hydrotechnical constructions, the Żelazny Most Tailings Storage Facility (TSF), Poland. Analysis of monitoring data allows preventive action to be taken against construction failures of facility dams, which can have devastating effects on human life and the natural environment.

  13. Independent Orbiter Assessment (IOA): Assessment of the main propulsion subsystem FMEA/CIL, volume 4

    NASA Technical Reports Server (NTRS)

    Slaughter, B. C.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Main Propulsion System (MPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to available data from the Rockwell Downey/NASA JSC FMEA/CIL review. Volume 4 contains the IOA analysis worksheets and the NASA FMEA to IOA worksheet cross reference and recommendations.

  14. Improving Common Security Risk Analysis (Amelioration d’un processus commun d’analyse de risques securite)

    DTIC Science & Technology

    2008-09-01

    ...requirements are met at each level within the composed system, a chain of belief is formed which provides assurance that the composed system is in...FAILURE OF AIR-CONDITIONING x 12 – LOSS OF POWER SUPPLY x 13 – FAILURE OF TELECOMMUNICATION EQUIPMENT x 4 – Disturbance Due to

  15. Analysis of field usage failure rate data for plastic encapsulated solid state devices

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Survey and questionnaire techniques were used to gather data from users and manufacturers on the failure rates in the field of plastic encapsulated semiconductors. It was found that such solid state devices are being successfully used by commercial companies which impose certain screening and qualification procedures. The reliability of these semiconductors is now adequate to support their consideration in NASA systems, particularly in low cost systems. The cost of performing necessary screening for NASA applications was assessed.

  16. Failure detection and recovery in the assembly/contingency subsystem

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1993-01-01

    The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system, or in the external devices through which it communicates with ground-based systems, will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements, as outlined in the current ACS software requirements specification document, are reviewed. The activities carried out in this review include: (1) an informal but thorough end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design, and the specifications themselves, in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) initiates recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.
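
The table-driven, criticality-dependent recovery described for the ACFM can be sketched as follows; the table entries and threshold are assumptions for illustration, not the actual ACS specification:

```python
# Hypothetical criticality table: failure mode -> (criticality, recovery action).
# Lower criticality number = more severe, as in a 1/1-style ranking.
CRITICALITY_TABLE = {
    "loss_of_signal": (1, "reconfigure_to_backup_components"),
    "degraded_link": (2, "switch_to_degraded_comm_mode"),
    "transient_dropout": (3, "log_and_monitor"),
}

def recover(failure_mode, threshold=2):
    """Initiate recovery only for modes at or above the criticality threshold;
    editing the table changes behavior without code changes, mirroring the
    flexibility the ACFM design describes."""
    criticality, action = CRITICALITY_TABLE[failure_mode]
    return action if criticality <= threshold else None
```

Keeping criticality in data rather than code is what lets recovery procedures be tuned in flight without a software rebuild.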

  17. Common cause evaluations in applied risk analysis of nuclear power plants. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taniguchi, T.; Ligon, D.; Stamatelatos, M.

    1983-04-01

    Qualitative and quantitative approaches were developed for the evaluation of common cause failures (CCFs) in nuclear power plants and were applied to the analysis of the auxiliary feedwater systems of several pressurized water reactors (PWRs). Key CCF variables were identified through a survey of experts in the field and a review of failure experience in operating PWRs. These variables were classified into categories of high, medium, and low defense against a CCF. Based on the results, a checklist was developed for analyzing CCFs of systems. Several known techniques for quantifying CCFs were also reviewed. The information provided valuable insights into the development of a new model for estimating CCF probabilities, which is an extension of and improvement over the Beta Factor method. As applied to the analysis of the PWR auxiliary feedwater systems, the method yielded much more realistic values than the original Beta Factor method for a one-out-of-three system.
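    The Beta Factor baseline that the report extends can be sketched in a few lines. This is the textbook beta-factor model for a one-out-of-three system, not the improved model developed in the report:

```python
def system_failure_prob_1oo3(q, beta):
    """Unavailability of a one-out-of-three system under the Beta Factor model.

    A fraction beta of each channel's unavailability q is attributed to a
    common cause that fails all three channels at once; the rest fails
    channels independently, so all three must fail for system failure.
    """
    q_ind = (1.0 - beta) * q  # independent part of channel unavailability
    q_ccf = beta * q          # common cause part, shared by all channels
    return q_ind ** 3 + q_ccf
```

    With beta = 0 this reduces to the pure-independence result q**3; even a modest beta dominates the total, which is why independent-failure models are far too optimistic for redundant trains.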

  18. Operating Experience and Reliability Improvements on the 5 kW CW Klystron at Jefferson Lab

    NASA Astrophysics Data System (ADS)

    Nelson, R.; Holben, S.

    1997-05-01

    With substantial operating hours on the RF system, considerable information on reliability of the 5 kW CW klystrons has been obtained. High early failure rates led to examination of the operating conditions and failure modes. Internal ceramic contamination caused premature failure of gun potting material and ultimate tube demise through arcing or ceramic fracture. A planned course of repotting and reconditioning of approximately 300 klystrons, plus careful attention to operating conditions and periodic analysis of operational data, has substantially reduced the failure rate. It is anticipated that implementation of planned supplemental monitoring systems for the klystrons will allow most catastrophic failures to be avoided. By predicting end of life, tubes can be changed out before they fail, thus minimizing unplanned downtime. Initial tests have also been conducted on this same klystron operated at higher voltages with resultant higher output power. The outcome of these tests will provide information to be considered for future upgrades to the accelerator.

  19. Independent Orbiter Assessment (IOA): Assessment of the main propulsion subsystem FMEA/CIL, volume 3

    NASA Technical Reports Server (NTRS)

    Holden, K. A.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Main Propulsion System (MPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to available data from the Rockwell Downey/NASA JSC FMEA/CIL review. Volume 3 continues the presentation of IOA worksheets and includes the potential critical items list.

  20. Independent Orbiter Assessment (IOA): Assessment of the main propulsion subsystem FMEA/CIL, volume 2

    NASA Technical Reports Server (NTRS)

    Holden, K. A.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Main Propulsion System (MPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to available data from the Rockwell Downey/NASA JSC FMEA/CIL review. Volume 2 continues the presentation of IOA worksheets for MPS hardware items.

  1. Structural design and analysis of a Mach zero to five turbo-ramjet system

    NASA Technical Reports Server (NTRS)

    Spoth, Kevin A.; Moses, Paul L.

    1993-01-01

    The paper discusses the structural design and analysis of a Mach zero to five turbo-ramjet propulsion system for a Mach five waverider-derived cruise vehicle. The level of analysis detail necessary for a credible conceptual design is shown. The results of a finite-element failure mode sizing analysis for the engine primary structure is presented. The importance of engine/airframe integration is also discussed.

  2. Fracture and Failure at and Near Interfaces Under Pressure

    DTIC Science & Technology

    1998-06-18

    realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding...tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or

  3. Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508

    NASA Astrophysics Data System (ADS)

    Hayek, A.; Bokhaiti, M. Al; Schwarz, M. H.; Boercsoek, J.

    2012-05-01

    With the publication and enforcement of the standard IEC 61508 for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4-architecture (one out of four) presented in recent work; sophisticated calculations for the required parameters are therefore introduced. The provided 1oo4-architecture represents an advanced safety architecture based on on-chip redundancy, which is 3-failure safe. This means that at least one of the four channels has to work correctly in order to trigger the safety function.
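    A minimal sketch of the PFD arithmetic for a 1oo4 group, assuming a simple beta-factor treatment of common cause failures (the paper's calculations are more sophisticated; the function and parameters here are illustrative):

```python
def pfd_1oo4(pfd_channel, beta=0.0):
    """Average PFD of a 1oo4 architecture.

    The safety function is lost only if all four channels fail on demand:
    the independent parts of the four channel PFDs multiply, and a
    beta-factor term adds the common cause contribution that defeats all
    channels at once. beta=0 gives the pure independent-failure case.
    """
    q_ind = (1.0 - beta) * pfd_channel  # independent part per channel
    return q_ind ** 4 + beta * pfd_channel
```

    The fourth-power term shows why 1oo4 is 3-failure safe: three simultaneous independent channel failures still leave the safety function available, and only the common cause term limits the achievable SIL.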

  4. Analysis of beam loss induced abort kicker instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang W.; Sandberg, J.; Ahrens, L.

    2012-05-20

    Through more than a decade of operation, we have noticed the phenomenon of beam loss induced kicker instability in the RHIC beam abort systems. In this study, we analyze the short term beam loss before abort kicker pre-fire events and the operating conditions before capacitor failures. Beam loss has caused capacitor failures, and an elevated radiation level concentrated at the failed end of the capacitor has been observed. We are interested in beam loss induced radiation and heat dissipation in large oil filled capacitors and in beam triggered thyratron conduction. We hope the analysis results will lead to better protection of the abort systems and improved stability of RHIC operation.

  5. Designing and Implementation of a Heart Failure Telemonitoring System

    PubMed Central

    Safdari, Reza; Jafarpour, Maryam; Mokhtaran, Mehrshad; Naderi, Nasim

    2017-01-01

    Introduction: The aim of this study was to identify patients at risk, enhance self-care management of HF patients at home, and reduce disease exacerbations and readmissions. Method: In this research, based on standard heart failure guidelines and semi-structured interviews with 10 heart failure specialists, a draft heart failure rule set for alerts and patient instructions was developed. Eventually, the clinical champion of the project vetted the rule set. We also designed a transactional system to enhance monitoring and follow-up of CHF patients. With this system, CHF patients are required to measure their physiological parameters (vital signs and body weight) every day and to submit their symptoms using the app. Additionally, based on their data, they receive customized notifications and motivational messages that classify the risk of disease exacerbation. The architecture of the system comprises six major components: 1) a patient data collection suite including a mobile app and website; 2) a data receiver; 3) a database; 4) a specialist expert panel; 5) a rule engine classifier; and 6) a notifier engine. Results: This system has been implemented in Iran for the first time, and we are currently in the testing phase with 10 patients to evaluate its technical performance. The developed expert system generates alerts and instructions based on the patient’s data, and the notifier engine notifies responsible nurses and physicians and, in some cases, patients. Detailed analysis of those results will be reported in a future report. Conclusion: This study describes the design of a telemonitoring system for heart failure self-care that intends to bridge the gap that occurs when patients are discharged from the hospital and to reduce avoidable readmissions. A rule set for classifying patient data and generating automated alerts and patient instructions for heart failure telemonitoring was developed. 
It also facilitates daily communication among patients and heart failure clinicians so any deterioration in health could be identified immediately. PMID:29114106
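    The rule engine classifier component can be illustrated with a toy version; the rules, thresholds, and messages below are invented for illustration and are not the study's clinically vetted rule set:

```python
def classify_risk(weight_gain_kg, dyspnea, systolic_bp):
    """Toy heart-failure telemonitoring rule engine (illustrative only).

    Each rule that fires contributes a (risk level, instruction) pair;
    the highest-ranked triggered rule determines the notification sent
    to the patient and, for high risk, to nurses and physicians.
    """
    alerts = []
    if weight_gain_kg >= 2.0:
        alerts.append(("high", "Rapid weight gain: contact your clinic"))
    if dyspnea:
        alerts.append(("high", "Shortness of breath reported: contact your clinic"))
    if systolic_bp < 90:
        alerts.append(("medium", "Low blood pressure: recheck in one hour"))
    if not alerts:
        return ("low", "No action needed")
    rank = {"high": 0, "medium": 1, "low": 2}
    return min(alerts, key=lambda a: rank[a[0]])
```

    Keeping the rules as data-plus-small-functions mirrors the paper's separation of the rule engine classifier from the notifier engine, which only decides who receives the resulting message.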

  6. Designing and Implementation of a Heart Failure Telemonitoring System.

    PubMed

    Safdari, Reza; Jafarpour, Maryam; Mokhtaran, Mehrshad; Naderi, Nasim

    2017-09-01

    The aim of this study was to identify patients at risk, enhance self-care management of HF patients at home, and reduce disease exacerbations and readmissions. In this research, based on standard heart failure guidelines and semi-structured interviews with 10 heart failure specialists, a draft heart failure rule set for alerts and patient instructions was developed. Eventually, the clinical champion of the project vetted the rule set. We also designed a transactional system to enhance monitoring and follow-up of CHF patients. With this system, CHF patients are required to measure their physiological parameters (vital signs and body weight) every day and to submit their symptoms using the app. Additionally, based on their data, they receive customized notifications and motivational messages that classify the risk of disease exacerbation. The architecture of the system comprises six major components: 1) a patient data collection suite including a mobile app and website; 2) a data receiver; 3) a database; 4) a specialist expert panel; 5) a rule engine classifier; and 6) a notifier engine. This system has been implemented in Iran for the first time, and we are currently in the testing phase with 10 patients to evaluate its technical performance. The developed expert system generates alerts and instructions based on the patient's data, and the notifier engine notifies responsible nurses and physicians and, in some cases, patients. Detailed analysis of those results will be reported in a future report. This study describes the design of a telemonitoring system for heart failure self-care that intends to bridge the gap that occurs when patients are discharged from the hospital and to reduce avoidable readmissions. A rule set for classifying patient data and generating automated alerts and patient instructions for heart failure telemonitoring was developed. 
It also facilitates daily communication among patients and heart failure clinicians so any deterioration in health could be identified immediately.

  7. Managing heart failure in the long-term care setting: nurses' experiences in Ontario, Canada.

    PubMed

    Strachan, Patricia H; Kaasalainen, Sharon; Horton, Amy; Jarman, Hellen; D'Elia, Teresa; Van Der Horst, Mary-Lou; Newhouse, Ian; Kelley, Mary Lou; McAiney, Carrie; McKelvie, Robert; Heckman, George A

    2014-01-01

    Implementation of heart failure guidelines in long-term care (LTC) settings is challenging. Understanding the conditions of nursing practice can improve management, reduce suffering, and prevent hospital admission of LTC residents living with heart failure. The aim of the study was to understand the experiences of LTC nurses managing care for residents with heart failure. This was a descriptive qualitative study nested in Phase 2 of a three-phase mixed methods project designed to investigate barriers and solutions to implementing the Canadian Cardiovascular Society heart failure guidelines into LTC homes. Five focus groups totaling 33 nurses working in LTC settings in Ontario, Canada, were audiorecorded, then transcribed verbatim, and entered into NVivo9. A complex adaptive systems framework informed this analysis. Thematic content analysis was conducted by the research team. Triangulation, rigorous discussion, and a search for negative cases were conducted. Data were collected between May and July 2010. Nurses characterized their experiences managing heart failure in relation to many influences on their capacity for decision-making in LTC settings: (a) a reactive versus proactive approach to chronic illness; (b) ability to interpret heart failure signs, symptoms, and acuity; (c) compromised information flow; (d) access to resources; and (e) moral distress. Heart failure guideline implementation reflects multiple dynamic influences. Leadership that addresses these factors is required to optimize the conditions of heart failure care and related nursing practice.

  8. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. 
That is, the abort triggers must have low false negative rates to be sure that real crew-threatening failures are detected, and also low false positive rates to ensure that the crew does not abort from non-crew-threatening launch vehicle behaviors. The analysis process described in this paper is a compilation of over six years of lessons learned and refinements from experiences developing abort triggers for NASA's Constellation Program (Ares I Project) and the SLS Program, as well as the simultaneous development of SHM/FM theory. The paper will describe the abort analysis concepts and process, developed in conjunction with SLS Safety and Mission Assurance (S&MA) to define a common set of mission phase, failure scenario, and Loss of Mission Environment (LOME) combinations upon which the SLS Loss of Mission (LOM) Probabilistic Risk Assessment (PRA) models are built. This abort analysis also requires strong coordination with the Multi-Purpose Crew Vehicle (MPCV) and SLS Structures and Environments (STE) to formulate a series of abortability tables that encapsulate explosion dynamics over the ascent mission phase. The design and assessment of abort conditions and triggers to estimate their Loss of Crew (LOC) Benefits also requires in-depth integration with other groups, including Avionics, Guidance, Navigation and Control (GN&C), the Crew Office, Mission Operations, and Ground Systems. The outputs of this analysis are a critical input to SLS S&MA's LOC PRA models. The process described here may well be the first full quantitative application of SHM/FM theory to the selection of a sensor suite for any aerospace system.
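The value-versus-risk trade-off for a single abort trigger can be sketched with a simple expected-benefit model. This is an illustrative formulation with invented parameter names, not the SLS program's actual LOC/PRA mathematics:

```python
def trigger_loc_benefit(p_failure, p_detect, p_abort_survival,
                        p_false_positive, p_ascent_survival=1.0):
    """Illustrative expected crew-survival benefit of one abort trigger.

    A true detection converts a catastrophic failure (crew lost) into an
    abort the crew survives with probability p_abort_survival. A false
    positive needlessly trades a survivable ascent for an abort, costing
    the difference between ascent and abort survival probabilities.
    """
    benefit = p_failure * p_detect * p_abort_survival
    cost = ((1.0 - p_failure) * p_false_positive
            * (p_ascent_survival - p_abort_survival))
    return benefit - cost
```

    The model makes the paper's point concrete: a trigger is only worth flying if its detection benefit on real failures outweighs the crew risk its false positives add to otherwise good ascents.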

  9. Preliminary design of a solar central receiver for a site-specific repowering application (Saguaro Power Plant). Volume IV. Appendixes. Final report, October 1982-September 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, E.R.

    1983-09-01

    The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.

  10. Routes to failure: analysis of 41 civil aviation accidents from the Republic of China using the human factors analysis and classification system.

    PubMed

    Li, Wen-Chin; Harris, Don; Yu, Chung-San

    2008-03-01

    The human factors analysis and classification system (HFACS) is based upon Reason's organizational model of human error. HFACS was developed as an analytical framework for the investigation of the role of human error in aviation accidents; however, there is little empirical work formally describing the relationship between the components in the model. This research analyses 41 civil aviation accidents occurring to aircraft registered in the Republic of China (ROC) between 1999 and 2006 using the HFACS framework. The results show statistically significant relationships between errors at the operational level and organizational inadequacies at both the immediately adjacent level (preconditions for unsafe acts) and higher levels in the organization (unsafe supervision and organizational influences). The pattern of the 'routes to failure' observed in the data from this analysis of civil aircraft accidents shows great similarities to that observed in the analysis of military accidents. This research lends further support to Reason's model, which suggests that active failures are promoted by latent conditions in the organization. Fallible decisions at upper management levels were found to directly affect supervisory practices, thereby creating the psychological preconditions for unsafe acts and hence indirectly impairing the performance of pilots, ultimately leading to accidents.
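    Associations between HFACS levels of this kind are commonly tested with contingency-table statistics. A self-contained Pearson chi-square for a 2x2 table might look like the following (the counts in the test are invented for illustration, not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] -- e.g. presence/absence of unsafe supervision
    (rows) against presence/absence of unsafe acts (columns) across a
    set of accidents. Uses the shorthand n*(ad - bc)^2 / (row and
    column marginal products)."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```

    Against one degree of freedom, a statistic above about 3.84 indicates association at the 0.05 level, which is the kind of evidence the paper reports between adjacent HFACS levels.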

  11. Independent Orbiter Assessment (IOA): Assessment of the reaction control system, volume 1

    NASA Technical Reports Server (NTRS)

    Prust, Chet D.; Hartman, Dan W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the aft and forward Reaction Control System (RCS) hardware, and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter RCS hardware and EPD and C systems. The IOA product for the RCS analysis consisted of 208 hardware and 2064 EPD and C failure mode worksheets that resulted in 141 hardware and 449 EPD and C potential critical items (PCIs) being identified. A comparison was made of the IOA product to the NASA FMEA/CIL baseline. After comparison and discussions with the NASA subsystem manager, 96 hardware issues, 83 of which concern CIL items or PCIs, and 280 EPD and C issues, 158 of which concern CIL items or PCIs, remain unresolved. Volume 1 contains the subsystem description, assessment results, and some of the IOA worksheets.

  12. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology, and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation, and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
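    As a minimal illustration of the maximum-likelihood and confidence-interval machinery mentioned above, assuming exponentially distributed failure times (the study's analyses covered richer distribution families and goodness-of-fit testing):

```python
import math

def exponential_mle(failure_times):
    """MLE of the exponential failure rate: lambda_hat = n / sum(times)."""
    return len(failure_times) / sum(failure_times)

def rate_confidence_interval(failure_times, z=1.96):
    """Approximate 95% confidence interval for the failure rate using the
    normal approximation lambda_hat * (1 +/- z / sqrt(n)).

    Illustrative only; for small samples an exact chi-square interval
    would normally be preferred."""
    lam = exponential_mle(failure_times)
    half_width = z / math.sqrt(len(failure_times))
    return (lam * (1.0 - half_width), lam * (1.0 + half_width))
```

    Fitting such a distribution per equipment family is what turns raw maintenance records into the failure-pattern characterizations a warranty analysis can act on.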

  13. Photovoltaic module reliability improvement through application testing and failure analysis

    NASA Technical Reports Server (NTRS)

    Dumas, L. N.; Shumka, A.

    1982-01-01

    During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with the feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The conducted evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address field failure mechanisms observed to date.

  14. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Main Engine (SSME) has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system; (2) develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert a tremendous amount of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow minimal storage requirements, while providing fast signature retrieval, pattern comparison, and identification capabilities; and (3) integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for quick signature comparison. 
This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an ATMS system of nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbo pump families.
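The "image-like pattern" idea can be sketched as a sliding-window spectrum stack, here with a direct stdlib-only DFT; the real ATMS applies far more sophisticated nonlinear, nonstationary analysis and compression, so this is only a structural illustration:

```python
import cmath

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame via a direct DFT (stdlib only),
    keeping the non-redundant first half of the bins."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def topo_map(signal, frame_len=8, hop=4):
    """Stack sliding-window spectra into a 2-D time-frequency array,
    the kind of image-like pattern a topographical mapping system
    could compress, store, and compare as a signature."""
    return [dft_magnitudes(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, hop)]
```

Treating each test firing as one such 2-D pattern is what makes fast signature retrieval and pattern comparison possible across a large archive.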

  15. Develop advanced nonlinear signal analysis topographical mapping system

    NASA Technical Reports Server (NTRS)

    Jong, Jen-Yi

    1993-01-01

    The SSME has been undergoing extensive flight certification and developmental testing, which involves some 250 health monitoring measurements. Under the severe temperature, pressure, and dynamic environments sustained during operation, numerous major component failures have occurred, resulting in extensive engine hardware damage and scheduling losses. To enhance SSME safety and reliability, detailed analysis and evaluation of the measurement signals are mandatory to assess their dynamic characteristics and operational condition. Efficient and reliable signal detection techniques will reduce catastrophic system failure risks and expedite the evaluation of both flight and ground test data, and thereby reduce launch turn-around time. The basic objectives of this contract are threefold: (1) Develop and validate a hierarchy of innovative signal analysis techniques for nonlinear and nonstationary time-frequency analysis. Performance evaluation will be carried out through detailed analysis of extensive SSME static firing and flight data. These techniques will be incorporated into a fully automated system. (2) Develop an advanced nonlinear signal analysis topographical mapping system (ATMS) to generate a Compressed SSME TOPO Data Base (CSTDB). This ATMS system will convert tremendous amounts of complex vibration signals from the entire SSME test history into a bank of succinct image-like patterns while retaining all respective phase information. A high compression ratio can be achieved to allow the minimal storage requirement, while providing fast signature retrieval, pattern comparison, and identification capabilities. (3) Integrate the nonlinear correlation techniques into the CSTDB data base with compatible TOPO input data format. Such an integrated ATMS system will provide the large test archives necessary for a quick signature comparison. 
This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an ATMS system of nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbo pump families.

  16. Independent Orbiter Assessment (IOA): Analysis of the rudder/speed brake subsystem

    NASA Technical Reports Server (NTRS)

    Wilson, R. E.; Riccio, J. R.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Rudder/Speedbrake Actuation Mechanism are documented. The function of the Rudder/Speedbrake (RSB) is to provide directional control and a means of energy control during entry. The system consists of two panels on a vertical hinge mounted on the aft part of the vertical stabilizer. These two panels move together to form a rudder but split apart to act as a speedbrake. The Rudder/Speedbrake Actuation Mechanism consists of the following elements: (1) the Power Drive Unit (PDU), which is composed of a hydraulic valve module and a hydraulic motor-powered gearbox containing differentials and mixer gears to provide PDU torque output; (2) four geared rotary actuators which apply the PDU-generated torque to the rudder/speedbrake panels; and (3) ten torque shafts which join the PDU to the rotary actuators and interconnect the four rotary actuators. Each level of hardware was evaluated and analyzed for possible failures and causes. Criticality was assigned based upon the severity of the effect of each failure mode. Critical RSB failures which result in potential loss of vehicle control were mainly due to loss of hydraulic fluid, fluid contamination, and mechanical failures in gears and shafts.
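    The criticality-assignment step described above can be illustrated with a toy function. The categories are a simplified nod to the FMEA/CIL scheme (1 for loss of life or vehicle, 2 for loss of mission, with an "R" suffix when redundancy remains, 3 for all else); the real scheme has additional categories, screens, and rules:

```python
def assign_criticality(severity, redundancy_available):
    """Simplified FMEA criticality assignment (illustrative only).

    Maps a failure mode's worst-case effect and the presence of a
    redundant path to a criticality category string.
    """
    base_by_severity = {"loss_of_vehicle": "1", "loss_of_mission": "2"}
    base = base_by_severity.get(severity, "3")
    if base in base_by_severity.values() and redundancy_available:
        return base + "R"  # redundant paths still available
    return base
```

    Applying such a function per failure mode, then screening the category-1 results, is essentially how a potential critical items list is distilled from the full worksheet set.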

  17. Tethered Satellite System Contingency Investigation Board

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Tethered Satellite System (TSS-1) was launched aboard the Space Shuttle Atlantis (STS-46) on July 31, 1992. During the attempted on-orbit operations, the Tethered Satellite System failed to deploy successfully beyond 256 meters. The satellite was retrieved successfully and was returned on August 6, 1992. The National Aeronautics and Space Administration (NASA) Associate Administrator for Space Flight formed the Tethered Satellite System (TSS-1) Contingency Investigation Board on August 12, 1992. The TSS-1 Contingency Investigation Board was asked to review the anomalies which occurred, to determine the probable cause, and to recommend corrective measures to prevent recurrence. The board was supported by the TSS Systems Working Group as identified in MSFC-TSS-11-90, 'Tethered Satellite System (TSS) Contingency Plan'. The board identified five anomalies for investigation: initial failure to retract the U2 umbilical; initial failure to flyaway; unplanned tether deployment stop at 179 meters; unplanned tether deployment stop at 256 meters; and failure to move the tether in either direction at 224 meters. Initial observations of the returned flight hardware revealed evidence of mechanical interference by a bolt with the level wind mechanism travel, as well as a helical-shaped wrap of tether indicating that the tether had been unwound from the reel beyond the travel of the level wind mechanism. Examination of the detailed mission events from flight data and mission logs related to the initial failure to flyaway and the failure to move in either direction at 224 meters, together with known preflight concerns regarding slack tether, focused the assessment of these anomalies on the upper tether control mechanism. After the second meeting, the board requested the working group to complete and validate a detailed integrated mission sequence to focus the fault tree analysis on a stuck U2 umbilical, level wind mechanical interference, and slack tether in the upper tether control mechanism, and to prepare a detailed plan for hardware inspection, test, and analysis, including any appropriate hardware disassembly.

  18. Tethered Satellite System Contingency Investigation Board

    NASA Astrophysics Data System (ADS)

    1992-11-01

    The Tethered Satellite System (TSS-1) was launched aboard the Space Shuttle Atlantis (STS-46) on July 31, 1992. During the attempted on-orbit operations, the Tethered Satellite System failed to deploy successfully beyond 256 meters. The satellite was retrieved successfully and was returned on August 6, 1992. The National Aeronautics and Space Administration (NASA) Associate Administrator for Space Flight formed the Tethered Satellite System (TSS-1) Contingency Investigation Board on August 12, 1992. The TSS-1 Contingency Investigation Board was asked to review the anomalies which occurred, to determine the probable cause, and to recommend corrective measures to prevent recurrence. The board was supported by the TSS Systems Working Group as identified in MSFC-TSS-11-90, 'Tethered Satellite System (TSS) Contingency Plan'. The board identified five anomalies for investigation: initial failure to retract the U2 umbilical; initial failure to flyaway; unplanned tether deployment stop at 179 meters; unplanned tether deployment stop at 256 meters; and failure to move the tether in either direction at 224 meters. Initial observations of the returned flight hardware revealed evidence of mechanical interference by a bolt with the level wind mechanism travel, as well as a helical-shaped wrap of tether indicating that the tether had been unwound from the reel beyond the travel of the level wind mechanism. Examination of the detailed mission events from flight data and mission logs related to the initial failure to flyaway and the failure to move in either direction at 224 meters, together with known preflight concerns regarding slack tether, focused the assessment of these anomalies on the upper tether control mechanism. After the second meeting, the board requested the working group to complete and validate a detailed integrated mission sequence to focus the fault tree analysis on a stuck U2 umbilical, level wind mechanical interference, and slack tether in the upper tether control mechanism, and to prepare a detailed plan for hardware inspection, test, and analysis, including any appropriate hardware disassembly.

  19. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life of many kinds of items and has a simple statistical form, characterized by a constant hazard rate; it is the special case of the Weibull distribution with shape parameter equal to one. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in the analysis. The likelihood function is described, followed by the posterior function and the point and interval estimates of the hazard function and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
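    For independent exponential competing risks, the net and crude failure probabilities mentioned in this record have closed forms that follow directly from the constant hazard rates. The sketch below (plain Python, with illustrative hazard rates that are not taken from the paper) computes them; it is a minimal illustration of the probability model, not the paper's Bayesian estimation procedure.

```python
import math

def competing_risk_probs(hazards, t):
    """Failure probabilities by time t for independent exponential competing risks.

    hazards -- list of constant hazard rates, one per cause of failure
    Returns (overall, net, crude):
      overall -- probability of failure from any cause: 1 - exp(-sum(hazards) * t)
      net     -- per-cause probability if only that risk were present
      crude   -- per-cause probability in the presence of the other causes
    """
    total = sum(hazards)
    overall = 1.0 - math.exp(-total * t)
    net = [1.0 - math.exp(-h * t) for h in hazards]
    crude = [(h / total) * overall for h in hazards]
    return overall, net, crude

# two independent failure causes with illustrative hazard rates
overall, net, crude = competing_risk_probs([0.01, 0.03], t=10.0)
```

    Note that the crude probabilities sum to the overall failure probability, while each net probability is at least as large as the corresponding crude one because it ignores the competing causes.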

  20. Independent Orbiter Assessment (IOA): Assessment of the guidance, navigation, and control subsystem FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Drapela, L. J.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Guidance, Navigation, and Control System (GNC) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison is provided through additional analysis as required. The results of that comparison for the Orbiter GNC hardware are documented. The IOA product for the GNC analysis consisted of 141 failure mode worksheets that resulted in 24 potential critical items being identified. Comparison was made to the NASA baseline, which consisted of 148 FMEAs and 36 CIL items. This comparison produced agreement on all but 56 FMEAs; these disagreements produced no differences in CIL items.

  1. Failure-free survival after second-line systemic treatment of chronic graft-versus-host disease

    PubMed Central

    Storer, Barry E.; Lee, Stephanie J.; Carpenter, Paul A.; Sandmaier, Brenda M.; Flowers, Mary E. D.; Martin, Paul J.

    2013-01-01

    This study attempted to characterize causes of treatment failure, identify associated prognostic factors, and develop shorter-term end points for trials testing investigational products or regimens for second-line systemic treatment of chronic graft-versus-host disease (GVHD). The study cohort (312 patients) received second-line systemic treatment of chronic GVHD. The primary end point was failure-free survival (FFS) defined by the absence of third-line treatment, nonrelapse mortality, and recurrent malignancy during second-line treatment. Treatment change was the major cause of treatment failure. FFS was 56% at 6 months after second-line treatment. Lower steroid doses at 6 months correlated with subsequent withdrawal of immunosuppressive treatment. Multivariate analysis showed that high-risk disease at transplantation, lower gastrointestinal involvement at second-line treatment, and severe NIH global score at second-line treatment were associated with increased risks of treatment failure. These three factors were used to define risk groups, and success rates at 6 months were calculated for each risk group either without or with various steroid dose limits at 6 months as an additional criterion of success. These success rates could be used as the basis for a clinically relevant and efficient shorter-term end point in clinical studies that evaluate agents for second-line systemic treatment of chronic GVHD. PMID:23321253

  2. Robustness analysis of non-ordinary Petri nets for flexible assembly/disassembly processes based on structural decomposition

    NASA Astrophysics Data System (ADS)

    Hsieh, Fu-Shiung

    2011-03-01

    Design of robust supervisory controllers for manufacturing systems with unreliable resources has received significant attention recently. Robustness analysis provides an alternative way to analyse a perturbed system to quickly respond to resource failures. Although we have analysed the robustness properties of several subclasses of ordinary Petri nets (PNs), analysis for non-ordinary PNs has not been done. Non-ordinary PNs have weighted arcs and offer the advantage of compactly modelling operations that require multiple parts or resources. In this article, we consider a class of flexible assembly/disassembly manufacturing systems and propose a non-ordinary flexible assembly/disassembly Petri net (NFADPN) model for this class of systems. As this class of systems can be regarded as the integration and interaction of a set of assembly/disassembly subprocesses, a bottom-up approach is adopted to construct the NFADPN models. Due to the routing flexibility in NFADPN, there may exist different ways to accomplish the tasks; to characterise them, we propose the concept of completely connected subprocesses. As long as there exists a set of completely connected subprocesses for a certain product type, production of that product type can still be maintained without requiring the whole NFADPN to be live. To take advantage of the alternative routes without enforcing liveness for the whole system, we generalise the previously proposed concept of persistent production to NFADPN and propose a condition for persistent production based on the concept of completely connected subprocesses. We extend robustness analysis to NFADPN by exploiting its structure, identify several patterns of resource failures, and characterise the conditions to maintain operation in the presence of resource failures.

  3. Structural health monitoring of wind turbine blades : SE 265 Final Project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barkley, W. C.; Jacobs, Laura D.; Rutherford, A. C.

    2006-03-23

    ACME Wind Turbine Corporation has contacted our dynamic analysis firm regarding structural health monitoring of their wind turbine blades. ACME has had several failures in previous years. Examples are shown in Figure 1. These failures have resulted in economic loss for the company due to down time of the turbines (lost revenue) and repair costs. Blade failures can occur in several modes, which may depend on the type of construction and load history. Cracking and delamination are some typical modes of blade failure. ACME warranties its turbines and wishes to decrease the number of blade failures they have to repair and replace. The company wishes to implement a real time structural health monitoring system in order to better understand when blade replacement is necessary. Because of warranty costs incurred to date, ACME is interested in either changing the warranty period for the blades in question or predicting imminent failure before it occurs. ACME's current practice is to increase the number of physical inspections when blades are approaching the end of their fatigue lives. Implementation of an in situ monitoring system would eliminate or greatly reduce the need for such physical inspections. Another benefit of such a monitoring system is that the life of any given component could be extended since real conditions would be monitored. The SHM system designed for ACME must be able to operate while the wind turbine is in service. This means that wireless communication options will likely be implemented. Because blade failures occur due to cyclic stresses in the blade material, the sensing system will focus on monitoring strain at various points.

  4. [Digoxin as a cause of chromatopsia and depression in a patient with heart failure and hyperthyroidism].

    PubMed

    Chyrek, R; Jabłecka, A; Pupek-Musialik, D; Lowicki, Z

    2000-08-01

    A 67-year-old patient with chronic heart failure and persistent atrial fibrillation had been overdosed with glycosides for several months. Gastrointestinal and nervous system symptoms appeared after long-term therapy with toxic doses of glycosides. Depression was originally diagnosed on the basis of the central nervous system disturbances. Even though an overdose of glycosides was diagnosed, the blood serum glycoside level was within therapeutic limits. Based on a precise analysis of the data, it was concluded that the reason for the normal blood serum glycoside level in this case was coexisting hyperthyroidism.

  5. Multi-institutional application of Failure Mode and Effects Analysis (FMEA) to CyberKnife Stereotactic Body Radiation Therapy (SBRT).

    PubMed

    Veronese, Ivan; De Martin, Elena; Martinotti, Anna Stefania; Fumagalli, Maria Luisa; Vite, Cristina; Redaelli, Irene; Malatesta, Tiziana; Mancosu, Pietro; Beltramo, Giancarlo; Fariselli, Laura; Cantone, Marie Claire

    2015-06-13

    A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to assess the risks for patients undergoing Stereotactic Body Radiation Therapy (SBRT) treatments for lesions located in the spine and liver in two CyberKnife® Centres. The various sub-processes characterizing the SBRT treatment were identified to generate the process trees of both the treatment planning and delivery phases. This analysis led to the identification and subsequent scoring of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system. Novel solutions aimed at increasing patient safety were accordingly considered. The process tree characterising the SBRT treatment planning stage comprised a total of 48 sub-processes. Similarly, 42 sub-processes were identified in the stage of delivery to liver tumours and 30 in the stage of delivery to spine lesions. All the sub-processes were judged to be potentially prone to one or more failure modes. Nineteen failures (i.e., 5 in the treatment planning stage, 5 in the delivery to liver lesions, and 9 in the delivery to spine lesions) were considered of high concern in view of their high RPN and/or severity index values. The analysis of the potential failures, their causes, and their effects made it possible to improve the safety strategies already adopted in clinical practice with additional measures for optimizing the quality management workflow and increasing patient safety.

  6. A demonstration of an intelligent control system for a reusable rocket engine

    NASA Technical Reports Server (NTRS)

    Musgrave, Jeffrey L.; Paxson, Daniel E.; Litt, Jonathan S.; Merrill, Walter C.

    1992-01-01

    An Intelligent Control System for reusable rocket engines is under development at NASA Lewis Research Center. The primary objective is to extend the useful life of a reusable rocket propulsion system while minimizing between flight maintenance and maximizing engine life and performance through improved control and monitoring algorithms and additional sensing and actuation. This paper describes current progress towards proof-of-concept of an Intelligent Control System for the Space Shuttle Main Engine. A subset of identifiable and accommodatable engine failure modes is selected for preliminary demonstration. Failure models are developed retaining only first order effects and included in a simplified nonlinear simulation of the rocket engine for analysis under closed loop control. The engine level coordinator acts as an interface between the diagnostic and control systems, and translates thrust and mixture ratio commands dictated by mission requirements, and engine status (health) into engine operational strategies carried out by a multivariable control. Control reconfiguration achieves fault tolerance if the nominal (healthy engine) control cannot. Each of the aforementioned functionalities is discussed in the context of an example to illustrate the operation of the system in the context of a representative failure. A graphical user interface allows the researcher to monitor the Intelligent Control System and engine performance under various failure modes selected for demonstration.

  7. Spaceflight Ground Support Equipment Reliability & System Safety Data

    NASA Technical Reports Server (NTRS)

    Fernandez, Rene; Riddlebaugh, Jeffrey; Brinkman, John; Wilkinson, Myron

    2012-01-01

    Presented were the Reliability Analysis, consisting primarily of Failure Modes and Effects Analysis (FMEA), and the System Safety Analysis, consisting of Preliminary Hazards Analysis (PHA), performed to ensure that the CoNNeCT (Communications, Navigation, and Networking re-Configurable Testbed) Flight System was safely and reliably operated during its Assembly, Integration and Test (AI&T) phase. A tailored approach to the NASA Ground Support Equipment (GSE) standard, NASA-STD-5005C, involving the application of the appropriate requirements, S&MA discipline expertise, and a Configuration Management system (to retain a record of the analysis and documentation), was presented. Presented were System Block Diagrams of selected GSE and the corresponding FMEAs, as well as the PHAs. Also discussed are specific examples of the FMEAs and PHAs being used during the AI&T phase to drive modifications to the GSE (via "redlining" of test procedures and the placement of warning stickers to protect the flight hardware) before being interfaced to the Flight System. These modifications were necessary because failure modes and hazards were identified during the analysis that had not been properly mitigated. Strict Configuration Management was applied to changes in the GSE (whether due to upgrades or expired calibrations) by revisiting the FMEAs and PHAs to reflect the latest System Block Diagrams and Bill of Material. The CoNNeCT flight system has been successfully assembled, integrated, tested, and shipped to the launch site without incident. This demonstrates that the steps taken to safeguard the flight system when it was interfaced to the various GSE were successful.

  8. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
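    The modularization idea described above — compute each module's reliability separately, then combine the modules into a system estimate — can be sketched with the two standard composition rules for independent components. The code below is a generic Python illustration under that independence assumption, not RML syntax or the REST tool's actual interface; the module arrangement and reliability values are hypothetical.

```python
def series(*rels):
    """A series system works only if every module works: product of reliabilities."""
    p = 1.0
    for r in rels:
        p *= r
    return p

def parallel(*rels):
    """A parallel (redundant) group fails only if every module fails."""
    q = 1.0
    for r in rels:
        q *= 1.0 - r
    return 1.0 - q

# hypothetical system: two redundant processors in series with a data bus
system_reliability = series(parallel(0.95, 0.95), 0.999)
```

    Nesting these calls mirrors the one-to-one mapping between system components and reliability modules that the graphical interface exploits.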

  9. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.

  10. A Study of the Impact of Peak Demand on Increasing Vulnerability of Cascading Failures to Extreme Contingency Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vyakaranam, Bharat GNVSR; Vallem, Mallikarjuna R.; Nguyen, Tony B.

    The vulnerability of large power systems to cascading failures and major blackouts has become evident since the Northeast blackout in 1965. Based on analyses of the series of cascading blackouts in the past decade, the research community realized the urgent need to develop better methods, tools, and practices for performing cascading-outage analysis and for evaluating mitigations that are easily accessible by utility planning engineers. PNNL has developed the Dynamic Contingency Analysis Tool (DCAT) as an open-platform and publicly available methodology to help develop applications that aim to improve the capabilities of power planning engineers to assess the impact and likelihood of extreme contingencies and potential cascading events across their systems and interconnections. DCAT analysis will help identify potential vulnerabilities and allow study of mitigation solutions to reduce the risk of cascading outages in technically sound and effective ways. Using the DCAT capability, we examined the impacts of various load conditions to identify situations in which the power grid may encounter cascading outages that could lead to potential blackouts. This paper describes the usefulness of the DCAT tool and how it helps to understand potential impacts of load demand on cascading failures on the power system.

  11. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  12. Tapered Roller Bearing Damage Detection Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Kreider, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage of tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high bypass advanced gas turbine aircraft engines. A diagnostic tool was developed and evaluated experimentally by collecting oil debris data from failure progression tests conducted using health monitoring hardware. Failure progression tests were performed with tapered roller bearings under simulated engine load conditions. Tests were performed on one healthy bearing and three pre-damaged bearings. During each test, data from an on-line, in-line, inductance type oil debris sensor and three accelerometers were monitored and recorded for the occurrence of bearing failure. The bearing was removed and inspected periodically for damage progression throughout testing. Using data fusion techniques, two different monitoring technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting bearing surface fatigue pitting damage. The data fusion diagnostic tool was evaluated during bearing failure progression tests under simulated engine load conditions. This integrated system showed improved detection of fatigue damage and health assessment of the tapered roller bearings as compared to using individual health monitoring technologies.
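    One simple way to fuse two monitoring technologies into a single damage indicator, in the spirit of the data fusion described above, is a weighted combination of normalized per-sensor scores. This is a sketch of the general technique only; the weights, threshold, and score scaling below are illustrative assumptions, not values from the study.

```python
def fused_damage_level(oil_debris_score, vibration_score, w_oil=0.6, w_vib=0.4):
    """Weighted-average fusion of two normalized damage indicators.

    Both inputs are assumed scaled to [0, 1], where 0 is healthy and 1 is failed.
    The weights are hypothetical and would be tuned against seeded-fault data.
    """
    return w_oil * oil_debris_score + w_vib * vibration_score

ALARM_THRESHOLD = 0.5  # illustrative decision threshold
level = fused_damage_level(0.8, 0.5)
alarm = level > ALARM_THRESHOLD
```

    Combining the two indicators this way lets a strong oil-debris reading confirm an ambiguous vibration feature (and vice versa), which is the practical benefit the record reports over either technology alone.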

  13. The SAM framework: modeling the effects of management factors on human behavior in risk analysis.

    PubMed

    Murphy, D M; Paté-Cornell, M E

    1996-08-01

    Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.

  14. Operation reliability analysis of independent power plants of gas-transmission system distant production facilities

    NASA Astrophysics Data System (ADS)

    Piskunov, Maksim V.; Voytkov, Ivan S.; Vysokomornaya, Olga V.; Vysokomorny, Vladimir S.

    2015-01-01

    A new approach was developed to analyze the causes of failures in the operation of independent power supply sources (mini-CHP plants) at linear facilities of the gas-transmission system in the eastern part of Russia. The conditions triggering the maximum working-substance temperature at the condenser outlet were determined using mathematical simulation of the unsteady heat and mass transfer processes in the condenser of mini-CHP plants; under these conditions, the probability of failure of the independent power supply sources increases. The influence of environmental factors (in particular, ambient temperature), as well as of the output electric capacity of the power plant, on mini-CHP plant operating reliability was analyzed. Values of the mean time to failure and the failure density of power plants operating in different regions of Eastern Siberia and the Russian Far East were obtained from numerical simulation of the heat and mass transfer processes during condensation of the working substance.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, J.R.; Keywood, S.S.

    PTFE-based gaskets in chemical plant service typically fail in an extrusion mode, sometimes referred to as blowout. Test work previously published by Monsanto indicated that correctly installed PTFE-based gaskets have pressure performance far exceeding system pressure ratings. These results have since been confirmed by extensive testing at the Montreal-based Ecole Polytechnique Tightness Testing and Research Laboratory (TTRL), funded by a consortium of gasket users and manufacturers. With the knowledge that properly installed gaskets can withstand system pressures in excess of 1,000 psig [6,894 kPa], failures at two chemical plants were re-examined. This analysis indicates that extrusion type failures can be caused by excessive internal pressures, associated with sections of pipe having an external source of heat coincident with a blocked flow condition. This results in high system pressures which explain the extrusion type failures observed. The paper discusses details of individual failures and examines methods to prevent them. Other causes for extrusion failures are reviewed, with a recommendation that stronger gasket materials not be utilized to correct problems until it is verified that excessive pressure build-up is not the problem. Also summarized are the requirements for proper installation to achieve the potential blowout resistance found in these gaskets.

  16. [Analysis of Time-to-onset of Interstitial Lung Disease after the Administration of Small Molecule Molecularly-targeted Drugs].

    PubMed

    Komada, Fusao

    2018-01-01

    The aim of this study was to investigate the time-to-onset of drug-induced interstitial lung disease (DILD) following the administration of small molecule molecularly-targeted drugs, using the spontaneous adverse reaction reporting system of the Japanese Adverse Drug Event Report database. DILD datasets for afatinib, alectinib, bortezomib, crizotinib, dasatinib, erlotinib, everolimus, gefitinib, imatinib, lapatinib, nilotinib, osimertinib, sorafenib, sunitinib, temsirolimus, and tofacitinib were used to calculate the median onset times of DILD and the Weibull distribution parameters, and to perform a hierarchical cluster analysis. The median onset times of DILD for afatinib, bortezomib, crizotinib, erlotinib, gefitinib, and nilotinib were within one month. The median onset times of DILD for dasatinib, everolimus, lapatinib, osimertinib, and temsirolimus ranged from 1 to 2 months. The median onset times of DILD for alectinib, imatinib, and tofacitinib ranged from 2 to 3 months. The median onset times of DILD for sunitinib and sorafenib ranged from 8 to 9 months. Cluster analysis of the Weibull distributions for these drugs showed that there were 4 clusters. Cluster 1 described a subgroup with early to later onset DILD and early failure type profiles or a random failure type profile. Cluster 2 exhibited early failure type profiles or a random failure type profile with early onset DILD. Cluster 3 exhibited a random failure type profile or wear-out failure type profiles with later onset DILD. Cluster 4 exhibited an early failure type profile or a random failure type profile with the latest onset DILD.
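    The failure-type labels in this record follow the usual reading of the Weibull shape parameter: a shape below one indicates an early failure type (decreasing hazard), a shape near one a random failure type (roughly constant hazard), and a shape above one a wear-out failure type (increasing hazard). A minimal sketch of that classification, with an illustrative tolerance for "near one" that is not taken from the study:

```python
import math

def weibull_median(shape, scale):
    """Median of a Weibull(shape, scale) distribution: scale * ln(2)**(1/shape)."""
    return scale * math.log(2.0) ** (1.0 / shape)

def failure_type(shape, tol=0.05):
    """Classify the hazard trend implied by a fitted Weibull shape parameter."""
    if shape < 1.0 - tol:
        return "early failure (decreasing hazard)"
    if shape > 1.0 + tol:
        return "wear-out failure (increasing hazard)"
    return "random failure (roughly constant hazard)"

# shape = 1 reduces to the exponential case, so the median is scale * ln 2
median_days = weibull_median(1.0, 30.0)
```

    In a study like this one, the shape and scale would first be estimated from the reported onset times; the median formula then gives the per-drug median onset directly from the fitted parameters.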

  17. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Xiao, Y; Wang, J

    2014-06-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (the physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for the monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S, and D values. Results: 15 possible failure modes were identified, the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist of the FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with an RPN greater than 50 were considered highly-correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
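    The RPN scoring used in this record is simply the product of the three scores. A minimal sketch follows: the 1-10 scale per factor (giving the 1-1000 RPN range) and the threshold of 50 come from the abstract, while the factor names and scores are hypothetical examples.

```python
def rpn(occurrence, severity, detectability):
    """Risk probability number: product of O, S, and D, each scored 1-10."""
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("each FMEA score must lie between 1 and 10")
    return occurrence * severity * detectability

# hypothetical influencing factors with (O, S, D) scores
factors = {"output drift": (3, 8, 4), "laser misalignment": (2, 5, 3)}
highly_correlated = [name for name, s in factors.items() if rpn(*s) > 50]
```

    Here "output drift" scores 3 x 8 x 4 = 96 and would be flagged, while "laser misalignment" at 30 would not, matching the paper's use of RPN > 50 to mark highly-correlated factors.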

  18. Advances in Micromechanics Modeling of Composites Structures for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Moncada, Albert

    Although high-performance, lightweight composites are increasingly being used in applications including aircraft, rotorcraft, weapon systems, and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro-level damage; this limits the capability of data-driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used. 
The effect of variation in this arrangement within the RUC has been studied and results indicate this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data was verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states such as fiber-matrix debonding in composite structures with surface bonded piezoelectric sensors.

  19. Heating Analysis in Constant-pressure Hydraulic System based on Energy Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chao; Xu, Cong; Mao, Xuyao; Li, Bin; Hu, Junhua; Liu, Yiou

    2017-12-01

    Hydraulic systems are widely used in industrial applications, but heating has become an important factor restricting the wider adoption of hydraulic technology. High temperature seriously affects the operation of a hydraulic system and can even cause seizing and other serious failures. Based on an analysis of the heat damage in hydraulic systems, this paper gives the causes of the problem, and the application shows that energy analysis can accurately locate the main sources of heating in a hydraulic system, providing strong practical guidance.
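
    The energy-analysis idea can be sketched with a simple power balance: in a constant-pressure circuit, whatever hydraulic power is not delivered as useful work (e.g., flow spilled over the relief valve) is dissipated as heat. The pressure and flow figures below are assumed, not from the paper:

```python
# Illustrative energy balance for a constant-pressure hydraulic circuit.
# Heat generation is estimated as input power minus useful output power;
# throttling and relief-valve losses become heat. Values are assumed.
pump_pressure_pa = 10e6       # 10 MPa system pressure (assumed)
pump_flow_m3s = 1.0e-3        # 1 L/s pump delivery (assumed)
useful_flow_m3s = 0.7e-3      # flow doing work at the actuator (assumed)

input_power_w = pump_pressure_pa * pump_flow_m3s          # P_in = p * Q
useful_power_w = pump_pressure_pa * useful_flow_m3s
heat_w = input_power_w - useful_power_w                   # spill becomes heat
print(heat_w)
```

Locating the components with the largest such residuals is what lets energy analysis pinpoint the dominant heat sources.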

  20. Launch Vehicle Abort Analysis for Failures Leading to Loss of Control

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Hill, Ashley D.; Beard, Bernard B.

    2013-01-01

    Launch vehicle ascent is a time of high risk for an onboard crew. For a large fraction of possible failures, time is of the essence and a successful abort is possible if detection and action happen quickly enough. This paper focuses on abort determination based on data already available from the Guidance, Navigation, and Control system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. The two primary areas of focus are the derivation of abort triggers that ensure an abort occurs as quickly as possible when needed while avoiding false aborts, and the evaluation of the likelihood of successfully aborting off the failing launch vehicle.

  1. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
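
    The headline output of such a model is the incremental cost-effectiveness ratio (ICER) between the program and usual care. A minimal sketch of that calculation, with invented cohort means rather than tool outputs:

```python
# Minimal ICER sketch: incremental cost divided by incremental effectiveness.
# The mean discounted costs and QALYs below are assumed for illustration.
cost_program, cost_usual = 52_000.0, 48_000.0   # mean costs per patient (assumed)
qaly_program, qaly_usual = 4.1, 3.9             # mean QALYs per patient (assumed)

icer = (cost_program - cost_usual) / (qaly_program - qaly_usual)
print(icer)  # incremental cost per QALY gained
```

In a simulation tool of this kind, the cohort means would themselves be averages over the thousands of simulated patient pairs, and the ICER would be compared against a willingness-to-pay threshold.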

  2. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of Intensive Care Units (ICUs) in hospitals, this research tries to identify and analyze ICU process failure modes from the standpoint of a systematic approach to errors. Methods: In this descriptive research, data were gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis, however, was quantitative, based on the failures' Risk Priority Numbers (RPN) from the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. Then, with 90% reliability (RPN ≥ 100), 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed with ECM. Conclusions: Applying modified PFMEA to improve the process reliability of two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions and even participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162

  3. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    PubMed Central

    Magnezi, Racheli; Hemi, Asaf; Hemi, Rina

    2016-01-01

    Background Risk management in health care systems applies to all hospital employees and directors, as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives), to track failure modes and risks, and to offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered that these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and creates extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods A team of eight staff members, accompanied by the Head of the Endocrine Laboratory, performed the analysis. The failure mode and effects analysis model (FMEA) was used to analyze the laboratory testing procedure; it was designed to simplify the process steps and to identify and rank possible failures. Results A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. PMID:27980440

  4. Perceptions and experiences of heart failure patients and clinicians on the use of mobile phone-based telemonitoring.

    PubMed

    Seto, Emily; Leonard, Kevin J; Cafazzo, Joseph A; Barnsley, Jan; Masino, Caterina; Ross, Heather J

    2012-02-10

    Previous trials of heart failure telemonitoring systems have produced inconsistent findings, largely due to diverse interventions and study designs. The objectives of this study are (1) to provide in-depth insight into the effects of telemonitoring on self-care and clinical management, and (2) to determine the features that enable successful heart failure telemonitoring. Semi-structured interviews were conducted with 22 heart failure patients attending a heart function clinic who had used a mobile phone-based telemonitoring system for 6 months. The telemonitoring system required the patients to take daily weight and blood pressure readings, weekly single-lead ECGs, and to answer daily symptom questions on a mobile phone. Instructions were sent to the patient's mobile phone based on their physiological values. Alerts were also sent to a cardiologist's mobile phone, as required. All clinicians involved in the study were also interviewed post-trial (N = 5). The interviews were recorded, transcribed, and then analyzed using a conventional content analysis approach. The telemonitoring system improved patient self-care by instructing the patients in real-time how to appropriately modify their lifestyle behaviors. Patients felt more aware of their heart failure condition, less anxiety, and more empowered. Many were willing to partially fund the use of the system. The clinicians were able to manage their patients' heart failure conditions more effectively, because they had physiological data reported to them frequently to help in their decision-making (eg, for medication titration) and were alerted at the earliest sign of decompensation. Essential characteristics of the telemonitoring system that contributed to improved heart failure management included immediate self-care and clinical feedback (ie, teachable moments), how the system was easy and quick to use, and how the patients and clinicians perceived tangible benefits from telemonitoring. 
Some clinical concerns included ongoing costs of the telemonitoring system and increased clinical workload. A few patients did not want to be watched long-term while some were concerned they might become dependent on the system. The success of a telemonitoring system is highly dependent on its features and design. The essential system characteristics identified in this study should be considered when developing telemonitoring solutions.

  5. EVALUATION OF SAFETY IN A RADIATION ONCOLOGY SETTING USING FAILURE MODE AND EFFECTS ANALYSIS

    PubMed Central

    Ford, Eric C.; Gaudette, Ray; Myers, Lee; Vanderver, Bruce; Engineer, Lilly; Zellars, Richard; Song, Danny Y.; Wong, John; DeWeese, Theodore L.

    2013-01-01

    Purpose Failure mode and effects analysis (FMEA) is a widely used tool for prospectively evaluating safety and reliability. We report our experiences in applying FMEA in the setting of radiation oncology. Methods and Materials We performed an FMEA analysis for our external beam radiation therapy service, which consisted of the following tasks: (1) create a visual map of the process, (2) identify possible failure modes; assign risk probability numbers (RPN) to each failure mode based on tabulated scores for the severity, frequency of occurrence, and detectability, each on a scale of 1 to 10; and (3) identify improvements that are both feasible and effective. The RPN scores can span a range of 1 to 1000, with higher scores indicating the relative importance of a given failure mode. Results Our process map consisted of 269 different nodes. We identified 127 possible failure modes with RPN scores ranging from 2 to 160. Fifteen of the top-ranked failure modes were considered for process improvements, representing RPN scores of 75 and more. These specific improvement suggestions were incorporated into our practice with a review and implementation by each department team responsible for the process. Conclusions The FMEA technique provides a systematic method for finding vulnerabilities in a process before they result in an error. The FMEA framework can naturally incorporate further quantification and monitoring. A general-use system for incident and near miss reporting would be useful in this regard. PMID:19409731

  6. Stage Separation Failure: Model Based Diagnostics and Prognostics

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry; Hafiychuk, Vasyl; Kulikov, Igor; Smelyanskiy, Vadim; Patterson-Hine, Ann; Hanson, John; Hill, Ashley

    2010-01-01

    Safety of the next-generation space flight vehicles requires development of an in-flight Failure Detection and Prognostic (FD&P) system. Developing such a system is a challenging task that involves analysis of many difficult engineering problems across the board. In this paper we report progress in the development of FD&P for the re-contact fault between the upper stage nozzle and the inter-stage caused by a failure of first stage and upper stage separation. High-fidelity models and analytical estimates are applied to analyze the following sequence of events: (i) structural dynamics of the nozzle extension during the impact; (ii) structural stability of the deformed nozzle in the presence of the pressure and temperature loads induced by the hot gas flow during engine start-up; and (iii) the fault-induced thrust changes in the steady burning regime. The diagnostics are based on measurements of the impact torque. The prognostics are based on analysis of the correlation between the actuator signal and the fault-induced changes in nozzle structural stability and thrust.

  7. Impact of Distributed Energy Resources on the Reliability of Critical Telecommunications Facilities: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D. G.; Arent, D. J.; Johnson, L.

    2006-06-01

    This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources to supply backup power if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with an assessment of the uncertainty, or confidence level, in the probability of failure. A risk-based characterization of the final best configuration is presented.
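
    The Bayesian idea behind such an assessment can be sketched with a simple conjugate update: a Beta prior on a configuration's failure-on-demand probability, updated with observed demands and failures, yields a posterior that carries both the estimate and the confidence in it. The counts and prior below are assumed, not the paper's data:

```python
# Hedged sketch of a Beta-Binomial update for failure-on-demand probability.
# Prior and observation counts are invented for illustration.
alpha0, beta0 = 1.0, 9.0     # Beta prior (assumed: ~10% prior failure rate)
failures, demands = 2, 100   # observed backup-start failures (assumed)

alpha = alpha0 + failures
beta = beta0 + (demands - failures)
posterior_mean = alpha / (alpha + beta)       # point estimate
print(round(posterior_mean, 4))
```

The full hierarchical version in the paper additionally shares information across facilities; the width of the posterior (not shown here) is what quantifies the confidence level in the failure probability.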

  8. Application of Failure Mode and Effect Analysis (FMEA) and cause and effect analysis in conjunction with ISO 22000 to a snails (Helix aspersa) processing plant; A case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2009-08-01

    Failure Mode and Effect Analysis (FMEA) has been applied to the risk assessment of snail processing. A tentative approach to FMEA application in the snail industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the failure modes occurring in a food chain system (a snail processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points were identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work a comparison of ISO 22000 analysis with HACCP is carried out for snail processing and packaging. The main emphasis, however, was on quantifying the risk assessment by determining the RPN per identified processing hazard. Sterilization of tins, bioaccumulation of heavy metals, packaging of shells, and poisonous mushrooms were the processes identified as having the highest RPN (280, 240, 147, and 144, respectively), and corrective actions were undertaken. Following the application of corrective actions, a second calculation of RPN values was carried out, leading to considerably lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause and effect, or tree) diagram led to converging results, corroborating the validity of the conclusions derived from the risk assessment and FMEA. Therefore, the incorporation of FMEA within the ISO 22000 system of a snail processing industry is considered imperative.

  9. Testing and failure analysis to improve screening techniques for hermetically sealed metallized film capacitors for low energy applications

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Effective screening techniques are evaluated for detecting insulation resistance degradation and failure in hermetically sealed metallized film capacitors used in applications where low capacitor voltage and energy levels are common to the circuitry. A special test and monitoring system capable of rapidly scanning all test capacitors and recording faults and/or failures is examined. Tests include temperature cycling and storage as well as low, medium, and high voltage life tests. Polysulfone film capacitors are more heat stable and reliable than polycarbonate film units.

  10. Failure Modes Effects and Criticality Analysis, an Underutilized Safety, Reliability, Project Management and Systems Engineering Tool

    NASA Astrophysics Data System (ADS)

    Mullin, Daniel Richard

    2013-09-01

    The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single-point failures, system hazards, and critical components and functions. However, in the author's ten years' experience as a space systems safety and reliability engineer, the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components that can provide useful data to all project stakeholders are also often missing: the probability of occurrence, the probability of detection, the time to effect, the time to detect, and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible, in conjunction with writing the top-level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. 
Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and, if applied early, performed to completion, and updated along with the system design, can provide several benefits, including efficient project management with respect to cost and schedule, systems engineering and requirements management, assembly integration and test (AI&T), and operations.

  11. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and on process inputs and outputs, are used to generate these innovations. The thresholds used for failure detection are computed from bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with those of previous studies, which used thresholds that were selected empirically. Comparison of the two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method over previous techniques.

  12. Evaluating the risk of water distribution system failure: A shared frailty model

    NASA Astrophysics Data System (ADS)

    Clark, Robert M.; Thurnau, Robert C.

    2011-12-01

    Condition assessment (CA) modeling is drawing increasing interest as a technique that can assist in managing drinking water infrastructure. This paper develops a model based on the application of a Cox proportional hazards (PH)/shared frailty model and applies it to evaluating the risk of failure in drinking water networks, using data from the Laramie Water Utility (located in Laramie, Wyoming, USA). Using the risk model, a cost/benefit analysis incorporating the inspection value method (IVM) is used to assist in making improved repair, replacement, and rehabilitation decisions for selected drinking water distribution system pipes. A separate model is developed to predict failures in prestressed concrete cylinder pipe (PCCP). Various currently available inspection technologies are presented and discussed.
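
    The shared-frailty hazard at the heart of such a model can be sketched directly: pipes in the same group share a frailty multiplier z on the Cox baseline hazard. The covariates, coefficients, baseline hazard, and frailty value below are invented for illustration, not estimated from the Laramie data:

```python
import math

# Conceptual sketch of a Cox PH / shared-frailty hazard:
#   h_i(t) = z * h0(t) * exp(x . beta)
# where z is the frailty shared by pipes in the same group.
# All numeric values are assumed.

def hazard(t, z, x, beta, h0=lambda t: 0.01):
    # z: shared frailty; h0: baseline hazard; x, beta: covariates/coefficients
    return z * h0(t) * math.exp(sum(xi * bi for xi, bi in zip(x, beta)))

x = [60.0, 1.0]      # pipe age in years, corrosive-soil flag (assumed)
beta = [0.02, 0.5]   # assumed coefficients
print(hazard(10.0, z=1.3, x=x, beta=beta))
```

A frailty z above 1 marks a group of pipes that fails faster than its covariates alone would predict, which is the mechanism the model uses to capture unobserved group-level risk.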

  13. Influence of Finite Element Size in Residual Strength Prediction of Composite Structures

    NASA Technical Reports Server (NTRS)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid

    2012-01-01

    The sensitivity of failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center-notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors, consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user-defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. Predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship determined in the parametric study was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes converged to the value corresponding to the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large-scale structural components where failure is dominated by fiber failure in tension.

  14. Brief analysis of Jiangsu grid security and stability based on multi-infeed DC index in power system

    NASA Astrophysics Data System (ADS)

    Zhang, Wenjia; Wang, Quanquan; Ge, Yi; Huang, Junhui; Chen, Zhengfang

    2018-02-01

    The impact of multi-infeed HVDC on the secure and stable operation of the Jiangsu power grid has gradually increased. In this paper, an appraisal method for multi-infeed HVDC power grid security and stability is proposed using the Multi-Infeed Effective Short Circuit Ratio, the Multi-Infeed Interaction Factor, and the Commutation Failure Immunity Index. These indices are adopted in security and stability simulations of the Jiangsu multi-infeed HVDC system. The simulation results indicate that the Jiangsu power grid operates with a strong DC system; it has a high level of security and stability and meets the safe-operation requirements. The Jinpin-Suzhou DC system is located at the receiving end with huge capacity, which can easily lead to commutation failure of the transmission line. To resolve this problem, dynamic reactive power compensation can be applied in the power grid near the Jinpin-Suzhou DC system. Simulation results show this method is feasible for mitigating commutation failure.

  15. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed-unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
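
    The spares argument above can be sketched with a standard Poisson spares model: if a component fails at a constant rate, the probability that a given number of spares covers the mission is the Poisson CDF at that count. The failure rate and mission length below are assumed, not NASA figures:

```python
import math

# Hedged sketch: with component failures as a Poisson process at rate lam
# (failures/hour), k spares cover a mission of t hours with probability
# equal to the Poisson CDF at k. All numbers are assumed.

def p_mission_success(lam, t, spares):
    mu = lam * t    # expected number of failures over the mission
    return sum(math.exp(-mu) * mu ** n / math.factorial(n)
               for n in range(spares + 1))

lam = 1e-4            # assumed failure rate for one component, per hour
t = 2.5 * 365 * 24    # ~2.5-year Mars mission, in hours
print(round(p_mission_success(lam, t, spares=4), 4))
```

Repeating this calculation per component and multiplying the results is one way to see why ultra-low per-hour failure rates, not just more spares, are needed for a multi-year mission.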

  16. Controllability Analysis for Multirotor Helicopter Rotor Degradation and Failure

    NASA Astrophysics Data System (ADS)

    Du, Guang-Xun; Quan, Quan; Yang, Binxian; Cai, Kai-Yuan

    2015-05-01

    This paper considers the controllability analysis problem for a class of multirotor systems subject to rotor failure/wear. It is shown that classical controllability theories of linear systems are not sufficient to test the controllability of the considered multirotors. Owing to this, an easy-to-use measurement index is introduced to assess the available control authority. Based on it, a new necessary and sufficient condition for the controllability of multirotors is derived. Furthermore, a controllability test procedure is developed. The proposed controllability test method is applied to a class of hexacopters with different rotor configurations and different rotor efficiency parameters to show its effectiveness. The analysis results show that hexacopters with different rotor configurations have different fault-tolerant capabilities. It is therefore necessary to test the controllability of the multirotors before any fault-tolerant control strategies are employed.
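
    The classical test that the paper argues is insufficient is the Kalman rank condition on the controllability matrix. A minimal sketch of that baseline check, with a toy double-integrator system rather than a real hexacopter model:

```python
import numpy as np

# Kalman rank test: the pair (A, B) is controllable iff
# C = [B, AB, A^2 B, ..., A^(n-1) B] has full row rank n.
# The A, B matrices below are a toy example, not a multirotor model.

def controllability_rank(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return np.linalg.matrix_rank(C)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (toy)
B = np.array([[0.0], [1.0]])
print(controllability_rank(A, B))  # 2 -> full rank: classically controllable
```

The paper's point is that a rotor-failed multirotor can pass this rank test yet still lack usable control authority (e.g., because rotors can only push, not pull), which is why the additional authority index is needed.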

  17. Independent Orbiter Assessment (IOA): Assessment of the orbital maneuvering subsystem, volume 2

    NASA Technical Reports Server (NTRS)

    Haufler, W. A.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Orbital Maneuvering System (OMS) hardware and electrical power distribution and control (EPD and C), generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter OMS hardware and EPD and C systems. Volume 2 continues the presentation of IOA worksheets and contains the critical items list and the NASA FMEA to IOA worksheet cross reference and recommendations.

  18. An improved method for risk evaluation in failure modes and effects analysis of CNC lathe

    NASA Astrophysics Data System (ADS)

    Rachieru, N.; Belu, N.; Anghel, D. C.

    2015-11-01

    Failure mode and effects analysis (FMEA) is one of the most popular reliability analysis tools for identifying, assessing and eliminating potential failure modes in a wide range of industries. In general, failure modes in FMEA are evaluated and ranked through the risk priority number (RPN), which is obtained by multiplying crisp values of the risk factors: the occurrence (O), severity (S), and detection (D) of each failure mode. However, the crisp RPN method has been criticized for several deficiencies. In this paper, linguistic variables, expressed as Gaussian, trapezoidal or triangular fuzzy numbers, are used to assess the ratings and weights of the risk factors S, O and D. A new risk assessment system based on fuzzy set theory and fuzzy rule-base theory is applied to assess and rank risks associated with failure modes that could appear in the operation of the Turn 55 CNC lathe. Two case studies demonstrate the methodology, drawing a parallel between the RPNs determined by the traditional method and by fuzzy logic. The results show that the proposed approach can reduce duplicated RPNs and yield a more accurate, reasonable risk assessment. As a result, the stability of product and process can be assured.
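The crisp RPN scheme criticized above can be sketched in a few lines; the scales and example values here are illustrative, not taken from the Turn 55 study:

```python
# Classical FMEA risk priority number: RPN = S * O * D on 1-10 scales.
# Values below are hypothetical examples, not data from the paper.
def rpn(severity, occurrence, detection):
    """Crisp risk priority number from the three risk factors."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("risk factors must lie in [1, 10]")
    return severity * occurrence * detection

# Two failure modes with very different risk profiles collide on RPN = 80,
# one of the deficiencies the fuzzy approach is meant to address.
mode_a = rpn(10, 2, 4)   # severe but rare and fairly detectable
mode_b = rpn(2, 10, 4)   # mild but frequent
assert mode_a == mode_b == 80
```

The duplicated-RPN collision shown in the last two lines is exactly what a fuzzy rule base can break apart, since it can weight severity differently from occurrence.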

  19. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-01-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.

  20. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
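The AUC metric used to assess the classifier committee has a compact rank-based form: it equals the probability that a randomly chosen failed run is scored higher than a randomly chosen successful one. A minimal sketch, with hypothetical labels and scores rather than the POP2 ensemble data:

```python
# Rank-based computation of the area under the ROC curve (AUC).
# labels: 1 = failed run, 0 = successful run; scores: classifier outputs.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]  # failed runs
    neg = [s for y, s in zip(labels, scores) if y == 0]  # successful runs
    # Count pairwise "wins" of failed over successful; ties count one half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation set: good but imperfect separation.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2, 0.1]
assert abs(roc_auc(labels, scores) - 11 / 12) < 1e-9
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why the reported AUC > 0.96 indicates a highly discriminative committee.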

  1. Analytical investigation of solid rocket nozzle failure

    NASA Technical Reports Server (NTRS)

    Mccoy, K. E.; Hester, J.

    1985-01-01

    On April 5, 1983, an Inertial Upper Stage (IUS) spacecraft experienced loss of control during the burn of the second of two solid rocket motors. The anomaly investigation showed the cause to be a malfunction of the solid rocket motor. This paper presents a description of the IUS system, a failure analysis summary, an account of the thermal testing and computer modeling done at Marshall Space Flight Center, a comparison of analysis results with thermal data obtained from motor static tests, and a description of some of the design enhancements incorporated to prevent recurrence of the anomaly.

  2. Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization.

    PubMed

    Dobson, Ian; Carreras, Benjamin A; Lynch, Vickie E; Newman, David E

    2007-06-01

    We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.
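The claim that a power-law size distribution makes large blackouts dominate risk can be illustrated with a short calculation; the exponent and size decades are illustrative, not fitted blackout data:

```python
# For a power-law blackout-size pdf p(s) ~ s**(-alpha) with 1 < alpha < 2
# (alpha = 1.5 here is an illustrative choice), each successive size decade
# is rarer, yet its expected load loss (probability mass times size) grows.
def decade_mass(s_lo, alpha):
    # integral of s**(-alpha) over the decade [s_lo, 10*s_lo]
    a = 1.0 - alpha
    return (((10 * s_lo) ** a) - (s_lo ** a)) / a

def decade_risk(s_lo, alpha):
    # integral of s * s**(-alpha) over [s_lo, 10*s_lo]: expected loss there
    a = 2.0 - alpha
    return (((10 * s_lo) ** a) - (s_lo ** a)) / a

alpha = 1.5
decades = [1.0, 10.0, 100.0, 1000.0]          # blackout sizes, arbitrary units
masses = [decade_mass(s, alpha) for s in decades]
risks = [decade_risk(s, alpha) for s in decades]
assert all(m2 < m1 for m1, m2 in zip(masses, masses[1:]))  # big events rarer
assert all(r2 > r1 for r1, r2 in zip(risks, risks[1:]))    # but riskier
```

This is the quantitative sense in which the power law makes large blackouts consequential: frequency falls more slowly than size grows, so mitigation cannot focus on small events alone.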

  3. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct

    PubMed Central

    Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Background Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. Methods A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. Results A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, mainly driven by the severity score. Fourteen failure modes were of high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores by >3 and >2 points, respectively. Conclusions FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes. PMID:29089745

  4. Probabilistic Analysis of Space Shuttle Body Flap Actuator Ball Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Jett, Timothy R.; Predmore, Roamer E.; Zaretsky, Erin V.

    2007-01-01

    A probabilistic analysis, using the 2-parameter Weibull-Johnson method, was performed on experimental life test data from space shuttle actuator bearings. Experiments were performed on a test rig under simulated conditions to determine the life and failure mechanism of the grease lubricated bearings that support the input shaft of the space shuttle body flap actuators. The failure mechanism was wear that can cause loss of bearing preload. These tests established life and reliability data for both shuttle flight and ground operation. Test data were used to estimate the failure rate and reliability as a function of the number of shuttle missions flown. The Weibull analysis of the test data for a 2-bearing shaft assembly in each body flap actuator established a reliability level of 99.6 percent for a life of 12 missions. A probabilistic system analysis for four shuttles, each of which has four actuators, predicts a single bearing failure in one actuator of one shuttle after 22 missions (a total of 88 missions for a 4-shuttle fleet). This prediction is comparable with actual shuttle flight history in which a single actuator bearing was found to have failed by wear at 20 missions.
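The 2-parameter Weibull reliability model underlying such an analysis can be sketched as follows; the shape and characteristic-life values are illustrative assumptions, not the fitted shuttle parameters:

```python
import math

# 2-parameter Weibull survival model for a grease-lubricated bearing,
# with missions flown as the life variable. beta (shape) and eta
# (characteristic life) below are hypothetical, not the fitted values.
def weibull_reliability(missions, beta, eta):
    """Probability a single bearing survives the given number of missions."""
    return math.exp(-((missions / eta) ** beta))

def assembly_reliability(missions, beta, eta, n_bearings=2):
    """Series assembly of independent, identical bearings (2-bearing shaft)."""
    return weibull_reliability(missions, beta, eta) ** n_bearings

beta, eta = 1.5, 400.0          # hypothetical shape and characteristic life
r12 = assembly_reliability(12, beta, eta)
r24 = assembly_reliability(24, beta, eta)
assert r24 < r12 < 1.0          # reliability decays with missions flown
```

Raising the single-bearing survival probability to the power of the bearing count is the standard series-system assumption; fleet-level predictions then multiply across actuators and vehicles in the same way.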

  5. Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification

    PubMed Central

    Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang

    2016-01-01

    Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict its development and promotion. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Subsequently, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by it can be obtained by bow-tie analysis, and the BN was used to confirm the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, occurrence probabilities of failures are needed in PSA. In view of the insufficient failure data for biomass gasification, occurrence probabilities that cannot be obtained from standard reliability data sources were estimated by fuzzy methods based on expert judgment. An improved approach that uses expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. With these safety measures, the theoretical occurrence probabilities in one year of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values. PMID:27463975
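The expert-weighted aggregation of fuzzy numbers can be sketched for the triangular case; the expert opinions, weights, and centroid defuzzification step below are illustrative assumptions, not the paper's elicited data:

```python
# A triangular fuzzy number is written (a, m, b): lower bound, mode, upper
# bound. Experts give fuzzy failure probabilities; a weighted component-wise
# average aggregates them, and a centroid defuzzifies the result.
def aggregate(opinions, weights):
    """Expert-weighted average of triangular fuzzy numbers, component-wise."""
    total = sum(weights)
    return tuple(sum(w * o[i] for o, w in zip(opinions, weights)) / total
                 for i in range(3))

def defuzzify(tri):
    """Centroid of a triangular fuzzy number: (a + m + b) / 3."""
    return sum(tri) / 3.0

# Three hypothetical experts judge a failure probability; the senior expert
# carries more weight in the aggregation.
opinions = [(0.02, 0.05, 0.10), (0.01, 0.04, 0.08), (0.03, 0.06, 0.12)]
weights = [0.5, 0.3, 0.2]
agg = aggregate(opinions, weights)
p_failure = defuzzify(agg)     # crisp occurrence probability for the PSA
```

Trapezoidal numbers aggregate the same way with a fourth component; the centroid formula then averages all four corners.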

  6. Probabilistic Analysis of Space Shuttle Body Flap Actuator Ball Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Jett, Timothy R.; Predmore, Roamer E.; Zaretsky, Erwin V.

    2008-01-01

    A probabilistic analysis, using the 2-parameter Weibull-Johnson method, was performed on experimental life test data from space shuttle actuator bearings. Experiments were performed on a test rig under simulated conditions to determine the life and failure mechanism of the grease lubricated bearings that support the input shaft of the space shuttle body flap actuators. The failure mechanism was wear that can cause loss of bearing preload. These tests established life and reliability data for both shuttle flight and ground operation. Test data were used to estimate the failure rate and reliability as a function of the number of shuttle missions flown. The Weibull analysis of the test data for the four actuators on one shuttle, each with a 2-bearing shaft assembly, established a reliability level of 96.9 percent for a life of 12 missions. A probabilistic system analysis for four shuttles, each of which has four actuators, predicts a single bearing failure in one actuator of one shuttle after 22 missions (a total of 88 missions for a 4-shuttle fleet). This prediction is comparable with actual shuttle flight history in which a single actuator bearing was found to have failed by wear at 20 missions.

  7. Independent Orbiter Assessment (IOA): Assessment of the extravehicular mobility unit, volume 1

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort performed an independent analysis of the Extravehicular Mobility Unit (EMU) hardware and system, generating draft failure modes criticalities and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the most recent proposed Post 51-L NASA FMEA/CIL baseline. A resolution of each discrepancy from the comparison was provided through additional analysis as required. This report documents the results of that comparison for the Orbiter EMU hardware.

  8. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  9. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data, except in structural applications where interlaminar stresses are important, as these may cause failure mechanisms such as debonding or delamination.

  10. Electromechanical actuators affected by multiple failures: Prognostic method based on spectral analysis techniques

    NASA Astrophysics Data System (ADS)

    Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.

    2017-06-01

    The proposal of prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMAs) is beneficial for anticipating the incoming failure: an early and correct interpretation of the failure degradation pattern can trigger an early alert to the maintenance crew, who can properly schedule the servomechanism replacement. An innovative prognostic model-based approach, able to recognize progressive EMA degradations before their anomalous behavior becomes critical, is proposed: the Fault Detection and Identification (FDI) of the considered incipient failures is performed by analyzing proper system operational parameters, able to reveal the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters are correlated with the actual EMA health condition by means of failure maps created by a reference monitoring model-based algorithm. In this work, the proposed method has been tested on an EMA affected by combined progressive failures: in particular, a partial stator single-phase turn-to-turn short circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test bench has been conceived. Results show that the method exhibits adequate robustness and a high degree of confidence in its ability to identify a malfunction early, minimizing the risk of false alarms or unannounced failures.

  11. Root Cause Failure Analysis of Stator Winding Insulation failure on 6.2 MW hydropower generator

    NASA Astrophysics Data System (ADS)

    Adhi Nugroho, Agus; Widihastuti, Ida; Ary, As

    2017-04-01

    An insulation failure in the generator winding at the Wonogiri hydropower plant caused stator damage when a phase was short-circuited to ground, forcing the generator out of operation. The Wonogiri hydropower plant is one of the hydroelectric plants run by PT. Indonesia Power UBP Mrica, with a capacity of 2 × 6.2 MW. To prevent the damage from recurring on hydropower generators, an analysis was carried out using Root Cause Failure Analysis (RCFA), a systematic approach to identify the main or basic root cause of a problem or an unwanted condition. Several aspects were considered, such as loading pattern and operations, protection systems, generator insulation resistance, vibration, and the cleanliness of the air and the ambient air. Insulation damage caused by gradual, inhomogeneous cooling at the surface of the winding may lead to partial discharge; inhomogeneous cooling may arise when the cooling lattice is obstructed by dust and oil deposits. To avoid repeated defects and the unwanted conditions above, it is necessary to perform a major maintenance overhaul every 5000-6000 hours of operation.

  12. Hazards/Failure Modes and Effects Analysis MK 1 MOD 0 LSO-HUD Console System.

    DTIC Science & Technology

    1980-03-24

    [OCR-garbled front matter from the DTIC report; recoverable items: "Scope and Methodology of Analysis"; Figure 1: H/FMEA/(SSA) Work Sheet Format; Appendix A: Hazard/Failure Modes and Effects Analysis (H/FMEA) Work Sheets; a subsystem table listing Unit 1, Heads-Up Display Console, and Unit 2, Auxiliary.]

  13. Manned space flight nuclear system safety. Volume 3: Reactor system preliminary nuclear safety analysis. Part 2A: Accident model document, appendix

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The detailed abort sequence trees for the reference zirconium hydride (ZrH) reactor power module that have been generated for each phase of the reference Space Base program mission are presented. The trees are graphical representations of causal sequences. Each tree begins with the phase identification and the dichotomy between success and failure. The success branch shows the mission phase objective as being achieved. The failure branch is subdivided, as conditions require, into various primary initiating abort conditions.

  14. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
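A minimal sketch of the N-version voting scheme described above (N = 3); the versions and the seeded fault are hypothetical stand-ins:

```python
from collections import Counter

# Majority voting over the outputs of independently developed versions.
# The scheme masks a single faulty version provided failures are not
# correlated -- the correlation issue studied in the Knight and Leveson
# experiment.
def vote(outputs):
    """Return the majority output, or None if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

def version_ok(x):
    return x * x

def version_buggy(x):
    return x * x + 1   # illustrative seeded fault

outputs = [version_ok(4), version_buggy(4), version_ok(4)]
assert vote(outputs) == 16        # the single faulty version is outvoted
assert vote([1, 2, 3]) is None    # coincident disagreement: no majority
```

The `None` case is where the consistent-comparison problem bites: correct versions can legitimately disagree on borderline floating-point comparisons, defeating the voter even without a fault.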

  15. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such situations rarely occur. Therefore, in the present paper, a new algorithm is introduced to construct the membership and non-membership functions of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of fuzzy reliability are constructed for a series system and a parallel system. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
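The idea of propagating fuzzy failure rates through a system model can be sketched with ordinary triangular fuzzy numbers and alpha-cut interval arithmetic; all values are illustrative, and this deliberately simplifies the paper's intuitionistic formulation (which carries a separate non-membership function):

```python
import math

# Each component has a triangular fuzzy failure rate (a, m, b). At level
# alpha in [0, 1] the rate is the interval [a + alpha*(m-a), b - alpha*(b-m)];
# for a series system with constant rates, reliability over time t is
# exp(-sum(rates) * t), so interval bounds map to reliability bounds.
def alpha_cut(tri, alpha):
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def series_reliability_interval(rates, t, alpha):
    cuts = [alpha_cut(r, alpha) for r in rates]
    lo = math.exp(-sum(hi for _, hi in cuts) * t)   # worst case: high rates
    hi = math.exp(-sum(lo_ for lo_, _ in cuts) * t)  # best case: low rates
    return lo, hi

rates = [(0.001, 0.002, 0.004), (0.0005, 0.001, 0.002)]  # per hour, illustrative
lo, hi = series_reliability_interval(rates, t=100.0, alpha=0.0)
lo1, hi1 = series_reliability_interval(rates, t=100.0, alpha=1.0)
assert lo < lo1 and hi1 < hi      # the alpha=1 cut nests inside alpha=0
```

Stacking these intervals over a grid of alpha levels traces out the membership function of system reliability; the paper's contribution is doing this when components carry different intuitionistic fuzzy types, via non-linear programming.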

  16. Flight Test Results from the NF-15B Intelligent Flight Control System (IFCS) Project with Adaptation to a Simulated Stabilator Failure

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Williams-Hayes, Peggy S.

    2007-01-01

    Adaptive flight control systems have the potential to be more resilient to extreme changes in airplane behavior. Extreme changes could be a result of a system failure or of damage to the airplane. A direct adaptive neural-network-based flight control system was developed for the National Aeronautics and Space Administration NF-15B Intelligent Flight Control System airplane and subjected to an in-flight simulation of a failed (frozen, unmovable) stabilator. Formation flight handling qualities evaluations were performed with and without neural network adaptation. The results of these flight tests are presented. Comparison with simulation predictions and analysis of the performance of the adaptation system are discussed. The performance of the adaptation system is assessed in terms of its ability to decouple the roll and pitch response and reestablish good onboard model tracking. Flight evaluation with the simulated stabilator failure and adaptation engaged showed general improvement in the pitch response; however, a tendency for roll pilot-induced oscillation was experienced. A detailed discussion of the cause of the mixed results is presented.

  17. Flight Test Results from the NF-15B Intelligent Flight Control System (IFCS) Project with Adaptation to a Simulated Stabilator Failure

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Williams-Hayes, Peggy S.

    2010-01-01

    Adaptive flight control systems have the potential to be more resilient to extreme changes in airplane behavior. Extreme changes could be a result of a system failure or of damage to the airplane. A direct adaptive neural-network-based flight control system was developed for the National Aeronautics and Space Administration NF-15B Intelligent Flight Control System airplane and subjected to an in-flight simulation of a failed (frozen, unmovable) stabilator. Formation flight handling qualities evaluations were performed with and without neural network adaptation. The results of these flight tests are presented. Comparison with simulation predictions and analysis of the performance of the adaptation system are discussed. The performance of the adaptation system is assessed in terms of its ability to decouple the roll and pitch response and reestablish good onboard model tracking. Flight evaluation with the simulated stabilator failure and adaptation engaged showed general improvement in the pitch response; however, a tendency for roll pilot-induced oscillation was experienced. A detailed discussion of the cause of the mixed results is presented.

  18. Failure Assessment of Stainless Steel and Titanium Brazed Joints

    NASA Technical Reports Server (NTRS)

    Flom, Yury A.

    2012-01-01

    Following successful application of Coulomb-Mohr and interaction equations for the evaluation of safety margins in Albemet 162 brazed joints, two additional base metal/filler metal systems were investigated. Specimens consisting of stainless steel brazed with a silver-base filler metal and titanium brazed with 1100 Al alloy were tested to failure under the combined action of tensile, shear, bending and torsion loads. Finite Element Analysis (FEA), hand calculations and digital image comparison (DIC) techniques were used to estimate failure stresses and construct Failure Assessment Diagrams (FAD). This study confirms that the interaction equation R(sub sigma) + R(sub tau) = 1, where R(sub sigma) and R(sub tau) are the normal and shear stress ratios, can be used as a conservative lower-bound estimate of the failure criterion in stainless steel and titanium brazed joints.
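The interaction-equation criterion can be turned into a simple margin-of-safety check; the joint strengths below are hypothetical placeholders, not the measured braze allowables:

```python
# Failure criterion R_sigma + R_tau = 1, where R_sigma = sigma / sigma_ult
# and R_tau = tau / tau_ult are the normal and shear stress ratios. A load
# point is predicted safe when the sum of ratios is below 1 (inside the FAD).
def interaction_margin(sigma, tau, sigma_ult, tau_ult):
    """Margin of safety: positive means the load point lies inside the FAD."""
    r = sigma / sigma_ult + tau / tau_ult
    return 1.0 / r - 1.0 if r > 0 else float("inf")

sigma_ult, tau_ult = 200.0, 120.0   # MPa, illustrative joint strengths
assert interaction_margin(100.0, 30.0, sigma_ult, tau_ult) > 0   # safe
assert interaction_margin(180.0, 60.0, sigma_ult, tau_ult) < 0   # failure
```

Because the linear interaction line sits inside the measured failure envelope, a positive margin computed this way is conservative, which is the property the study set out to confirm.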

  19. Effect of Sensors on the Reliability and Control Performance of Power Circuits in the Web of Things (WoT)

    PubMed Central

    Bae, Sungwoo; Kim, Myungchin

    2016-01-01

    In order to realize a true WoT environment, a reliable power circuit is required to ensure interconnections among a range of WoT devices. This paper presents research on sensors and their effects on the reliability and response characteristics of power circuits in WoT devices. The presented research can be used in various power circuit applications, such as energy harvesting interfaces, photovoltaic systems, and battery management systems for the WoT devices. As power circuits rely on the feedback from voltage/current sensors, the system performance is likely to be affected by the sensor failure rates, sensor dynamic characteristics, and their interface circuits. This study investigated how the operational availability of the power circuits is affected by the sensor failure rates by performing a quantitative reliability analysis. In the analysis process, this paper also includes the effects of various reconstruction and estimation techniques used in power processing circuits (e.g., energy harvesting circuits and photovoltaic systems). This paper also reports how the transient control performance of power circuits is affected by sensor interface circuits. With the frequency domain stability analysis and circuit simulation, it was verified that the interface circuit dynamics may affect the transient response characteristics of power circuits. The verification results in this paper showed that the reliability and control performance of the power circuits can be affected by the sensor types, fault tolerant approaches against sensor failures, and the response characteristics of the sensor interfaces. The analysis results were also verified by experiments using a power circuit prototype. PMID:27608020

  20. Using Combined SFTA and SFMECA Techniques for Space Critical Software

    NASA Astrophysics Data System (ADS)

    Nicodemos, F. G.; Lahoz, C. H. N.; Abdala, M. A. D.; Saotome, O.

    2012-01-01

    This work addresses the combined Software Fault Tree Analysis (SFTA) and Software Failure Modes, Effects and Criticality Analysis (SFMECA) techniques applied to space-critical software of satellite launch vehicles. The combined approach is under research as part of the Verification and Validation (V&V) efforts to increase software dependability and for future application in other projects under development at Instituto de Aeronáutica e Espaço (IAE). The applicability of this approach was demonstrated on system software specification in a case study based on the Brazilian Satellite Launcher (VLS). The main goal is to identify possible failure causes and obtain compensating provisions that lead to the inclusion of new functional and non-functional system software requirements.

  1. Failure Time Analysis of Office System Use.

    ERIC Educational Resources Information Center

    Cooper, Michael D.

    1991-01-01

    Develops mathematical models to characterize the probability of continued use of an integrated office automation system and tests these models on longitudinal data collected from 210 individuals using the IBM Professional Office System (PROFS) at the University of California at Berkeley. Analyses using survival functions and proportional hazard…

  2. On the occurrence of rainstorm damage based on home insurance and weather data

    NASA Astrophysics Data System (ADS)

    Spekkers, M. H.; Clemens, F. H. L. R.; ten Veldhuis, J. A. E.

    2014-08-01

    Rainstorm damage caused by malfunctioning of urban drainage systems and water intrusion due to defects in the building envelope can be considerable. Little research on this topic has focused on the collection of damage data, the understanding of damage mechanisms and the deepening of data analysis methods. In this paper, the relative contribution of different failure mechanisms to the occurrence of rainstorm damage is investigated, as well as the extent to which these mechanisms relate to weather variables. For a case study in Rotterdam, the Netherlands, a property-level home insurance database of around 3100 water-related damage claims was analysed. Records include comprehensive transcripts of communication between insurer, insured and damage assessment experts, which allowed claims to be classified according to their actual damage cause. Results show that roof and wall leakage is the most frequent failure mechanism causing precipitation-related claims, followed by blocked roof gutters, melting snow and sewer flooding. Claims related to sewer flooding were less present in the data, but are associated with significantly larger claim sizes than claims in the majority class, i.e. roof and wall leakages. Rare events logistic regression analysis revealed that maximum rainfall intensity and rainfall volume are significant predictors of the occurrence probability of precipitation-related claims. Moreover, it was found that claims associated with rainfall intensities smaller than 7-8 mm in a 60 min window are mainly related to failure processes in the private domain, such as roof and wall leakages. For rainfall events that exceed the 7-8 mm h-1 threshold, failure of systems in the public domain, such as sewer systems, starts to contribute considerably to the overall occurrence probability of claims. The communication transcripts, however, lacked information to be conclusive about the extent to which sewer-related claims were caused by overloading of sewer systems or failure of system components.

  3. On the occurrence of rainstorm damage based on home insurance and weather data

    NASA Astrophysics Data System (ADS)

    Spekkers, M. H.; Clemens, F. H. L. R.; ten Veldhuis, J. A. E.

    2015-02-01

    Rainstorm damage caused by the malfunction of urban drainage systems and water intrusion due to defects in the building envelope can be considerable. Little research on this topic has focused on the collection of damage data, the understanding of damage mechanisms and the deepening of data analysis methods. In this paper, the relative contribution of different failure mechanisms to the occurrence of rainstorm damage is investigated, as well as the extent to which these mechanisms relate to weather variables. For a case study in Rotterdam, the Netherlands, a property level home insurance database of around 3100 water-related damage claims was analysed. The records include comprehensive transcripts of communication between insurer, insured and damage assessment experts, which allowed claims to be classified according to their actual damage cause. The results show that roof and wall leakage is the most frequent failure mechanism causing precipitation-related claims, followed by blocked roof gutters, melting snow and sewer flooding. Claims related to sewer flooding were less present in the data, but are associated with significantly larger claim sizes than claims in the majority class, i.e. roof and wall leakages. Rare events logistic regression analysis revealed that maximum rainfall intensity and rainfall volume are significant predictors for the occurrence probability of precipitation-related claims. Moreover, it was found that claims associated with rainfall intensities smaller than 7-8 mm in a 60-min window are mainly related to failure processes in the private domain, such as roof and wall leakages. For rainfall events that exceed the 7-8 mm h-1 threshold, the failure of systems in the public domain, such as sewer systems, starts to contribute considerably to the overall occurrence probability of claims. The communication transcripts, however, lacked information to be conclusive about the extent to which sewer-related claims were caused by overloading of sewer systems or by failure of system components.
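The rare events logistic regression described above can be sketched in a few lines. The coefficients below are invented for illustration, not the fitted values from the Rotterdam claim data, and the function name is hypothetical:

```python
import math

# Hypothetical coefficients for illustration only; these are NOT the
# fitted values from the Rotterdam claim data.
B0, B_INTENSITY, B_VOLUME = -4.0, 0.25, 0.05

def claim_probability(max_intensity_mm_h, volume_mm):
    """Logistic model for the occurrence probability of a
    precipitation-related claim, given rainfall covariates."""
    z = B0 + B_INTENSITY * max_intensity_mm_h + B_VOLUME * volume_mm
    return 1.0 / (1.0 + math.exp(-z))

# Below the ~7-8 mm/h intensity threshold, private-domain failures
# (roof and wall leakage) dominate; above it, public-domain failures
# (sewer systems) add to the occurrence probability.
p_low = claim_probability(5.0, 10.0)
p_high = claim_probability(20.0, 35.0)
```

Under any positive coefficients, heavier rainfall yields a higher predicted claim probability, which is the qualitative finding of the study.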

  4. Cost-Utility Analysis of the EVOLVO Study on Remote Monitoring for Heart Failure Patients With Implantable Defibrillators: Randomized Controlled Trial

    PubMed Central

    Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-01-01

    Background Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. Objective We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Methods Two hundred patients implanted with a wireless transmission–enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Results Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution. 
Conclusions Remote management of heart failure patients with implantable defibrillators appears to be cost-effective compared to the conventional method of in-person evaluations. Trial Registration ClinicalTrials.gov NCT00873899; http://clinicaltrials.gov/show/NCT00873899 (Archived by WebCite at http://www.webcitation.org/6H0BOA29f). PMID:23722666
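The dominance conclusion follows directly from the figures in the abstract: the remote arm is both less costly and more effective, so no cost-per-QALY ratio needs to be computed. A minimal sketch using those reported numbers (variable names are illustrative):

```python
# Figures reported in the EVOLVO abstract (annual patient costs in euros).
cost_remote_patient, cost_standard_patient = 291.36, 381.34
qaly_gain_remote = 0.065          # extra QALYs in the remote arm over 16 months
cost_saving_per_patient = 888.10  # euros saved per patient in the remote arm

# A strategy is "dominant" when it is both cheaper and more effective,
# so the incremental cost-effectiveness ratio is not needed.
remote_is_dominant = cost_saving_per_patient > 0 and qaly_gain_remote > 0
```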

  5. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.

  6. Investigating incidents of EHR failures in China: analysis of search engine reports.

    PubMed

    Lei, Jianbo; Guan, Pengcheng; Gao, Kaihua; Lu, Xueqing; Sittig, Dean

    2013-01-01

    As the healthcare industry becomes increasingly dependent on information technology (IT), the failure of computerized systems could have catastrophic effects on patient safety. We conducted an empirical study to analyze news articles available on the internet using Baidu and Google. 116 distinct EHR outage news reports were identified. We examined their characteristics, potential causes, and possible preventive strategies. Risk management strategies are discussed.

  7. Asymmetrical booster guidance and control system design study. Volume 3: Space shuttle vehicle SRB actuator failure study. [space shuttle development

    NASA Technical Reports Server (NTRS)

    Williams, F. E.; Lemon, R. S.

    1974-01-01

    The investigation of single actuator failures on the space shuttle solid rocket booster required the analysis of both square pattern and diamond pattern actuator configurations. It was determined that for failures occurring near or prior to the region of maximum dynamic pressure, control gain adjustments can be used to achieve virtually nominal mid-boost vehicle behavior. A distinct worst case failure condition was established near staging that could significantly delay staging. It is recommended that the square pattern be retained as a viable alternative to the baseline diamond pattern because the staging transient is better controlled, resulting in earlier staging.

  8. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.
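The traditional RPN approach whose shortcomings motivate the fuzzy method can be sketched in a few lines; the failure modes and ratings below are hypothetical:

```python
def rpn(occurrence, severity, detection):
    """Traditional risk priority number: the product of three ordinal
    ratings (each 1-10). Multiplying ordinal scales is the shortcoming
    that motivates fuzzy alternatives."""
    for rating in (occurrence, severity, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings are ordinal values from 1 to 10")
    return occurrence * severity * detection

# Two hypothetical failure modes, ranked by RPN.
modes = {"seal leak": rpn(7, 4, 3), "sensor drift": rpn(3, 8, 2)}
ranked = sorted(modes, key=modes.get, reverse=True)
```

The ranking looks precise, but because O, S, and D are ordinal, the numeric gap between two RPN values carries no quantitative meaning, which is exactly the critique the paper addresses.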

  9. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model. PMID:28895905

  10. Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie

    2006-01-01

    A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk-driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exist. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
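The Monte Carlo reliability estimate that a tool like SAFE performs can be illustrated with a toy series-system architecture; the subsystem names and reliabilities below are invented for this sketch, not heritage Saturn V/Apollo data:

```python
import random

# Toy series-system launch architecture; subsystem reliabilities are
# invented for this sketch, not heritage data.
SUBSYSTEM_RELIABILITY = {"stage1": 0.98, "stage2": 0.985, "avionics": 0.995}

def mission_succeeds(rng):
    # Series system: the mission succeeds only if every subsystem works.
    return all(rng.random() < r for r in SUBSYSTEM_RELIABILITY.values())

def estimate_reliability(trials=100_000, seed=1):
    """Monte Carlo estimate of mission reliability."""
    rng = random.Random(seed)
    return sum(mission_succeeds(rng) for _ in range(trials)) / trials
```

For independent subsystems the estimate converges on the analytic product of the three reliabilities (about 0.96); the value of the sampling approach is that it extends to time-dependent and dynamic failure behavior that a static product cannot capture.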

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneses, Esteban; Ni, Xiang; Jones, Terry R

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  12. Effect of Geometrical Imperfection on Buckling Failure of ITER VVPSS Tank

    NASA Astrophysics Data System (ADS)

    Jha, Saroj Kumar; Gupta, Girish Kumar; Pandey, Manish Kumar; Bhattacharya, Avik; Jogi, Gaurav; Bhardwaj, Anil Kumar

    2017-04-01

    The ‘Vacuum Vessel Pressure Suppression System’ (VVPSS) is part of the ITER machine and is designed to protect the ITER Vacuum Vessel and its connected systems from an over-pressure situation. It comprises a partially evacuated stainless steel tank approximately 46 m long, 6 m in diameter and 30 mm thick. It is to hold approximately 675 tonnes of water at room temperature to condense the steam resulting from an adverse water leakage into the Vacuum Vessel chamber. For any vacuum vessel, geometrical imperfection has a significant effect on buckling failure and structural integrity. The major geometrical imperfection in the VVPSS tank depends on form tolerances. To study the effect of geometrical imperfection on buckling failure of the VVPSS tank, finite element analysis (FEA) has been performed in line with ASME Section VIII Division 2 Part 5 [1], the ‘design by analysis’ method. Linear buckling analysis has been performed to obtain the buckled shape and displacement. Geometrical imperfection due to form tolerance is incorporated in the FEA model of the VVPSS tank by scaling the resulting buckled shape by a factor of ‘60’. This buckled-shape model is used as the input geometry for plastic collapse and buckling failure assessment. Plastic collapse and buckling failure of the VVPSS tank have been assessed using the elastic-plastic analysis method. This analysis has been performed for different values of form tolerance. The results of the analysis show that displacement and the load proportionality factor (LPF) vary with form tolerance; for higher values of form tolerance, the LPF reduces significantly, with high values of displacement.

  13. SU-E-T-117: Analysis of the ArcCHECK Dosimetry Gamma Failure Using the 3DVH System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S; Choi, W; Lee, H

    2015-06-15

    Purpose: To evaluate gamma analysis failure for VMAT patient-specific QA using the ArcCHECK cylindrical phantom. The 3DVH system (Sun Nuclear, FL) was used to analyze the dose difference statistics between the measured dose and the treatment planning system (TPS) calculated dose. Methods: Four cases of gamma analysis failure were selected retrospectively. Our institution's gamma analysis indexes were absolute dose, 3%/3 mm and a 90% pass rate in ArcCHECK dosimetry. The collapsed cone convolution superposition (CCCS) dose calculation algorithm for VMAT was used. Dose delivery was performed with an Elekta Agility. The A1SL (Standard Imaging, WI) chamber and cavity plug were used for point dose measurement. Delivery QA plans and images were used as 3DVH reference data instead of the patient plan and image. The measured data in the ‘.txt’ file were used for comparison at the diodes to acquire a global dose level. The ‘.acml’ file was used for AC-PDP and to calculate the point dose. Results: The global dose of 3DVH was calculated as 1.10 Gy, 1.13 Gy, 1.01 Gy and 0.2 Gy, respectively. The global dose of the 0.2 Gy case was induced by a distance discrepancy. The TPS calculated point dose was 2.33 Gy to 2.77 Gy and the 3DVH calculated dose was 2.33 Gy to 2.68 Gy. The maximum dose differences were −2.83% and −3.1% for TPS vs. measured dose and TPS vs. 3DVH calculated, respectively, in the same case. The difference between measured and 3DVH was 0.1% in that case. The 3DVH gamma pass rate was 98% to 99.7%. Conclusion: We found the TPS calculation error by 3DVH calculation using the ArcCHECK measured dose. It appears that our CCCS algorithm RTP system overestimated dose in the central region and underestimated scattering at the peripheral diode detector points. Relative gamma analysis and point dose measurement are recommended for VMAT DQA in gamma failure cases of ArcCHECK dosimetry.

  14. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criterion, the maximum strain criterion, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
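A ply-level failure initiation check of the kind listed above (here, the maximum stress criterion) can be sketched as follows; the stress allowables are invented for illustration and the function name is hypothetical:

```python
# Invented ply strength allowables in MPa: tension/compression limits
# for the fiber (11) and transverse (22) directions, plus in-plane shear.
ALLOWABLES = {"s11_t": 1500.0, "s11_c": 1200.0,
              "s22_t": 40.0, "s22_c": 200.0, "s12": 70.0}

def max_stress_failed(s11, s22, s12):
    """Maximum stress criterion: the ply fails if any in-plane stress
    component exceeds its corresponding allowable."""
    return (s11 > ALLOWABLES["s11_t"] or -s11 > ALLOWABLES["s11_c"]
            or s22 > ALLOWABLES["s22_t"] or -s22 > ALLOWABLES["s22_c"]
            or abs(s12) > ALLOWABLES["s12"])
```

In a ply-discounting scheme, a ply flagged as failed would have the affected local stiffness coefficients degraded before the next load increment.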

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrack, A.G.

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems, were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support system failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the ''Facility Commitments'' section. The purpose of the ''Assumptions'' section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).
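The basic gate algebra underlying such fault tree logic models, assuming independent basic events, is compact; the top event and probabilities below are illustrative, not DWPF values:

```python
# Fault tree gate evaluation for independent basic events.

def and_gate(probs):
    """AND gate: every input must fail (product of probabilities)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate: at least one input fails, i.e. 1 - prod(1 - p)."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Hypothetical top event: an ignition source present AND loss of
# either of two redundant ventilation trains.
p_top = and_gate([0.01, or_gate([1e-3, 1e-3])])
```

Real analyses add common-cause and human-error terms, but every tree ultimately reduces to compositions of these two gates.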

  16. Failure detection and isolation analysis of a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.; Landey, M.; Mckern, R.

    1981-01-01

    Techniques for failure detection and isolation (FDI) algorithms for a dual fail-operational redundant strapdown inertial navigation system are defined and developed. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were done to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff or landing (CTOL) operating environment. In addition, investigations of false alarms and missed alarms were included for the FDI techniques developed, along with analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.

  17. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend to even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.

  18. Model Based Autonomy for Robust Mars Operations

    NASA Technical Reports Server (NTRS)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It may also be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low-cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  19. Life Cost Based FMEA Manual: A Step by Step Guide to Carrying Out a Cost-based Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC

    2009-01-23

    Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may not have worked even before the customer took possession of it. The end results of failures, which may lead to unsafe conditions or major losses of the main function, are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards, such as those published by the Society of Automotive Engineers, the US Department of Defense, and the Automotive Industry Action Group, employ the Risk Priority Number (RPN) to measure the risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end effects' of potential failures in a subsystem or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally identifies the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group. One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks 'What is the chance of catching the problem before we give it to the customer?' The latter asks 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse FMEA users when they try to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order items such as hotels, restaurants, and movies (note that a 4-star hotel is not necessarily twice as good as a 2-star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank-ordering meaning, since each parameter has a different scale. The RPN is a product of 3 independent ordinal variables; it can indicate that some failure types are 'worse' than others, but gives no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the irrational logic of multiplying 3 ordinal indices, a new methodology, Life Cost-Based FMEA, was created to overcome these shortcomings. Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others. Thus, failure cost can be estimated in its simplest form as: Expected Failure Cost = sum_{i=1..n} p_i * c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that particular failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development team. However, FMEA is a long process and can become tedious; it won't be effective if too many people participate. An ideal team should have 3 to 4 people from the design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks working full time. Thus, it is important to agree on the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
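The expected failure cost formula is straightforward to evaluate; the scenarios, probabilities, and costs below are hypothetical:

```python
# Hypothetical failure scenarios as (name, probability p_i, cost c_i).
scenarios = [
    ("connector corrosion", 0.02, 1500.0),
    ("firmware lockup",     0.05,  400.0),
    ("bearing wear-out",    0.01, 9000.0),
]

# Expected Failure Cost = sum of p_i * c_i over all n scenarios.
expected_failure_cost = sum(p * c for _, p, c in scenarios)
```

Because the result is in currency units rather than an ordinal product, scenarios can be compared and summed meaningfully, which is the central advantage of Life Cost-Based FMEA over the RPN.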

  20. Urine monitoring system failure analysis and operational verification test report

    NASA Technical Reports Server (NTRS)

    Glanfield, E. J.

    1978-01-01

    Failure analysis and testing of a prototype urine monitoring system (UMS) are reported. System performance was characterized by a regression formula developed from volume measurement test data. When the volume measurement data were input to the formula, the standard error of the estimate calculated using the regression formula was found to be within 1.524% of the mean of the input mass. System repeatability was found to be somewhat dependent upon the residual volume of the system and the evaporation of fluid from the separator. The evaporation rate was determined to be approximately 1 cc/minute. The residual volume in the UMS was determined by measuring the concentration of LiCl in the flush water. Observed results indicated residual levels in the range of 9-10 ml; however, results obtained during the flushing efficiency test indicated a residual level of approximately 20 ml. It is recommended that the phase separator pumpout time be extended or the design modified to minimize the residual level.

  1. An Accident Precursor Analysis Process Tailored for NASA Space Systems

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Stamatelatos, Michael; Dezfuli, Homayoon; Maggio, Gaspare

    2010-01-01

    Accident Precursor Analysis (APA) serves as the bridge between existing risk modeling activities, which are often based on historical or generic failure statistics, and system anomalies, which provide crucial information about the failure mechanisms that are actually operative in the system and which may differ in frequency or type from those in the various models. These discrepancies between the models (perceived risk) and the system (actual risk) provide the leading indication of an underappreciated risk. This paper presents an APA process developed specifically for NASA Earth-to-Orbit space systems. The purpose of the process is to identify and characterize potential sources of system risk as evidenced by anomalous events which, although not necessarily presenting an immediate safety impact, may indicate that an unknown or insufficiently understood risk-significant condition exists in the system. Such anomalous events are considered accident precursors because they signal the potential for severe consequences that may occur in the future, due to causes that are discernible from their occurrence today. Their early identification allows them to be integrated into the overall system risk model used to inform decisions relating to safety.

  2. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
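
The paper's exact calculations are not reproduced here, but its core claim — that estimating parameters from a small sample inflates the realized failure frequency above its nominal level — can be illustrated with a quick Monte Carlo sketch (lognormal risk factor, threshold set at the fitted 99% quantile):

```python
# Monte Carlo illustration (not the paper's exact calculation) of the claim
# that parameter uncertainty increases expected failure frequency. A
# threshold is set at the nominal 99% quantile of a lognormal fitted to a
# small sample; under the true distribution, the average exceedance
# probability comes out above the nominal 1%.
import math
import random
import statistics

random.seed(1)
mu_true, sigma_true = 0.0, 1.0   # true (unknown) lognormal log-parameters
n_sample, n_rep = 20, 2000
z99 = 2.3263478740408408         # standard normal 99% quantile

exceed = []
for _ in range(n_rep):
    logs = [random.gauss(mu_true, sigma_true) for _ in range(n_sample)]
    mu_hat = statistics.mean(logs)
    s_hat = statistics.stdev(logs)
    t = mu_hat + z99 * s_hat     # log-threshold from the fitted parameters
    # True exceedance probability P(log X > t) under N(mu_true, sigma_true):
    p = 0.5 * math.erfc((t - mu_true) / (sigma_true * math.sqrt(2)))
    exceed.append(p)

print(f"nominal 0.0100, actual {statistics.mean(exceed):.4f}")
```

With n = 20 observations the realized frequency lands near 1.7-1.8%, not the nominal 1%, which is the effect the article quantifies exactly.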

  3. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1984-01-01

    The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. The primary activities are: (1) continued development and testing of our fault-tolerant Ada testbed; (2) consideration of desirable language changes to allow Ada to provide useful semantics for failure; and (3) analysis of the inadequacies of existing software fault tolerance strategies.

  4. Hypervelocity impact testing of the Space Station utility distribution system carrier

    NASA Technical Reports Server (NTRS)

    Lazaroff, Scott

    1993-01-01

    A two-phase, joint JSC and McDonnell Douglas Aerospace-Huntington Beach hypervelocity impact (HVI) test program was initiated to develop an improved understanding of how meteoroid and orbital debris (M/OD) impacts affect the Space Station Freedom (SSF) avionic and fluid lines routed in the Utility Distribution System (UDS) carrier. This report documents the first phase of the test program, which covers nonpowered avionic line segment and pressurized fluid line segment HVI testing. From these tests, a better estimate of avionic line failures is approximately 15 failures per year, a figure that could drop to around 1 or 2 avionic line failures per year depending upon the results of the second-phase testing of the powered avionic line at White Sands. For the fluid lines, the initial McDonnell Douglas analysis calculated 1 to 2 line failures over a 30-year period. The data obtained from these tests indicate that the number of predicted fluid line failures increases slightly, to as many as 3 in the first 10 years and up to 15 over the entire 30-year life of SSF.

  5. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, estimates the probability of failure of components under varying loading and environmental conditions. The code performs sensitivity analysis of all the input variables and presents their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  6. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, estimates the probability of failure of components under varying loading and environmental conditions. The code performs sensitivity analysis of all the input variables and presents their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  7. Prediction of hospital failure: a post-PPS analysis.

    PubMed

    Gardiner, L R; Oswald, S L; Jahera, J S

    1996-01-01

    This study investigates the ability of discriminant analysis to provide accurate predictions of hospital failure. Using data from the period following the introduction of the Prospective Payment System, we developed discriminant functions for each of two hospital ownership categories: not-for-profit and proprietary. The resulting discriminant models contain six and seven variables, respectively. For each ownership category, the variables represent four major aspects of financial health (liquidity, leverage, profitability, and efficiency) plus county market share and length of stay. The proportion of closed hospitals misclassified as open one year before closure does not exceed 0.05 for either ownership type. Our results show that discriminant functions based on a small set of financial and nonfinancial variables provide the capability to predict hospital failure reliably for both not-for-profit and proprietary hospitals.
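
The study's actual variables and coefficients are not reproduced in the abstract; the sketch below shows the general two-group discriminant mechanics on two hypothetical financial ratios:

```python
# A minimal two-group Fisher linear discriminant, sketching the kind of
# model the study describes (the variables, data, and coefficients below
# are hypothetical, for illustration only).

def fisher_discriminant(group0, group1):
    """Return weights w and midpoint threshold for two 2-D groups."""
    def mean(g):
        n = len(g)
        return [sum(x[0] for x in g) / n, sum(x[1] for x in g) / n]

    def scatter(g, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in g:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(group0), mean(group1)
    s0, s1 = scatter(group0, m0), scatter(group1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    mid = [(m0[0] + m1[0]) / 2, (m0[1] + m1[1]) / 2]
    threshold = w[0] * mid[0] + w[1] * mid[1]
    return w, threshold

# Hypothetical (liquidity ratio, profit margin) pairs
closed = [(0.8, -0.05), (0.9, -0.02), (1.0, 0.00), (0.7, -0.08)]
open_ = [(1.6, 0.06), (1.8, 0.04), (1.5, 0.08), (1.7, 0.05)]
w, thr = fisher_discriminant(closed, open_)
score = lambda x: w[0] * x[0] + w[1] * x[1]
print("classified open:", score((1.7, 0.06)) > thr)   # True
print("classified open:", score((0.8, -0.04)) > thr)  # False
```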

  8. Research on Application of FMECA in Missile Equipment Maintenance Decision

    NASA Astrophysics Data System (ADS)

    Kun, Wang

    2018-03-01

    Failure mode, effects and criticality analysis (FMECA) is a method widely used in engineering. Studying the application of FMECA technology in military equipment maintenance decision-making can help us build a better equipment maintenance support system and increase the operational efficiency of weapons and equipment. Through FMECA of equipment, known and potential failure modes and their causes are identified, and their influence on equipment performance, operational success, and personnel safety is determined. Furthermore, according to the combined effects of the severity of effects and the failure probability, possible measures for prevention and correction are put forward. By replacing or adjusting the corresponding parts, a corresponding maintenance strategy is decided for preventive maintenance of equipment, which helps improve equipment reliability.
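
FMECA prioritization is usually operationalized through the Risk Priority Number; a minimal sketch, with hypothetical failure modes and 1-10 scores:

```python
# Risk Priority Number (RPN) ranking, the usual FMECA prioritization step:
# RPN = severity * occurrence * detection, each scored 1-10. The failure
# modes and scores below are hypothetical, for illustration only.

def rank_by_rpn(modes):
    """Return (name, RPN) pairs sorted by descending RPN."""
    scored = [(name, s * o * d) for name, s, o, d in modes]
    return sorted(scored, key=lambda x: x[1], reverse=True)

modes = [
    # (failure mode, severity, occurrence, detection)
    ("seal degradation", 7, 6, 6),
    ("connector corrosion", 8, 4, 5),
    ("software watchdog miss", 9, 2, 8),
]

for name, rpn in rank_by_rpn(modes):
    print(f"{name}: RPN = {rpn}")
```

The highest-RPN modes are the ones targeted first for preventive maintenance or design correction.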

  9. An Incremental Life-cycle Assurance Strategy for Critical System Certification

    DTIC Science & Technology

    2014-11-04

    for Safe Aircraft Operation Embedded software systems introduce a new class of problems not addressed by traditional system modeling & analysis...Platform Runtime Architecture Application Software Embedded SW System Engineer Data Stream Characteristics Latency jitter affects control behavior...do system level failures still occur despite fault tolerance techniques being deployed in systems? Embedded software system as major source of

  10. Profile, risk factors and outcome of acute kidney injury in paediatric acute-on-chronic liver failure.

    PubMed

    Lal, Bikrant B; Alam, Seema; Sood, Vikrant; Rawat, Dinesh; Khanna, Rajeev

    2018-01-11

    There are no studies on acute kidney injury in paediatric acute-on-chronic liver failure. This study was planned with the aim of describing the clinical presentation and outcome of acute kidney injury among paediatric acute-on-chronic liver failure patients. Data of all children 1-18 years of age presenting with acute-on-chronic liver failure (Asian Pacific Association for the Study of the Liver definition) were reviewed. Acute kidney injury was defined as per Kidney Disease: Improving Global Outcomes guidelines. Poor outcome was defined as death or need for liver transplant within 3 months of development of acute kidney injury. A total of 84 children with acute-on-chronic liver failure presented to us in the study period. Acute kidney injury developed in 22.6% of patients with acute-on-chronic liver failure. The median duration from acute-on-chronic liver failure to development of acute kidney injury was 4 weeks (range: 2-10 weeks). The causes of acute kidney injury were hepatorenal syndrome (31.6%), sepsis (31.6%), nephrotoxic drugs (21%), dehydration (10.5%) and bile pigment related acute tubular necrosis in one patient. On univariate analysis, higher baseline bilirubin, higher international normalized ratio, higher paediatric end stage liver disease score, presence of systemic inflammatory response syndrome and presence of spontaneous bacterial peritonitis had significant association with presence of acute kidney injury. On logistic regression analysis, presence of systemic inflammatory response syndrome (adjusted OR: 8.659, 95% CI: 2.18-34.37, P = .002) and higher baseline bilirubin (adjusted OR: 1.07, 95% CI: 1.008-1.135, P = .025) were independently associated with presence of acute kidney injury. Of the patients with acute kidney injury, 5 (26.3%) survived with native liver, 10 (52.6%) died and 4 (21.1%) underwent liver transplantation. Acute kidney injury developed in 22.6% of children with acute-on-chronic liver failure. Bilirubin more than 17.7 mg/dL and presence of systemic inflammatory response syndrome were high risk factors for acute kidney injury. Development of acute kidney injury in a child with acute-on-chronic liver failure suggests poor outcome and need for early intervention. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
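
To see how the reported adjusted odds ratios combine, a hedged sketch of the implied logistic model (the intercept below is hypothetical, since the abstract does not report one, so the absolute probabilities are illustrative only):

```python
# How the reported adjusted odds ratios combine in a logistic model: each
# one-unit rise in bilirubin multiplies the odds of AKI by 1.07, and SIRS
# multiplies them by 8.659. The intercept is hypothetical (not reported),
# so the absolute probabilities are illustrative only.
import math

OR_SIRS = 8.659    # reported adjusted OR for SIRS
OR_BILI = 1.07     # reported adjusted OR per mg/dL of bilirubin
INTERCEPT = -4.0   # hypothetical baseline log-odds

def p_aki(bilirubin_mg_dl, sirs):
    """Probability of AKI under the implied logistic model."""
    log_odds = (INTERCEPT
                + math.log(OR_BILI) * bilirubin_mg_dl
                + math.log(OR_SIRS) * sirs)
    return 1.0 / (1.0 + math.exp(-log_odds))

low = p_aki(10.0, sirs=0)
high = p_aki(25.0, sirs=1)
print(f"low-risk child:  {low:.2f}")
print(f"high-risk child: {high:.2f}")
```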

  11. Application of ISO 22000 and Failure Mode and Effect Analysis (FMEA) for industrial processing of salmon: a case study.

    PubMed

    Arvanitoyannis, Ioannis S; Varzakas, Theodoros H

    2008-05-01

    The Failure Mode and Effect Analysis (FMEA) model was applied for risk assessment of salmon manufacturing. A tentative approach of FMEA application to the salmon industry was attempted in conjunction with ISO 22000. Preliminary Hazard Analysis was used to analyze and predict the occurring failure modes in a food chain system (salmon processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points were identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). In this work, a comparison of ISO 22000 analysis with HACCP is carried out over salmon processing and packaging. However, the main emphasis was put on the quantification of risk assessment by determining the RPN per identified processing hazard. Fish receiving, casing/marking, blood removal, evisceration, filet-making, cooling/freezing, and distribution were the processes identified as the ones with the highest RPN (252, 240, 210, 210, 210, 210, and 200, respectively) and corrective actions were undertaken. After the application of corrective actions, a second calculation of RPN values was carried out, resulting in substantially lower values (below the upper acceptable limit of 130). It is noteworthy that the application of the Ishikawa (cause and effect, or tree) diagram led to converging results, thus corroborating the validity of conclusions derived from risk assessment and FMEA. Therefore, the incorporation of FMEA analysis within the ISO 22000 system of a salmon processing industry is anticipated to prove advantageous to industrialists, state food inspectors, and consumers.

  12. Fretting and Corrosion Between a Metal Shell and Metal Liner May Explain the High Rate of Failure of R3 Modular Metal-on-Metal Hips.

    PubMed

    Ilo, Kevin C; Derby, Emma J; Whittaker, Robert K; Blunn, Gordon W; Skinner, John A; Hart, Alister J

    2017-05-01

    The R3 acetabular system used with its metal liner has higher revision rates when compared to its ceramic and polyethylene liners. In June 2012, the Medicines and Healthcare products Regulatory Agency issued an alert regarding the metal liner of the R3 acetabular system. Six retrieved R3 acetabular systems with metal liners underwent detailed visual analysis using macroscopic and microscopic techniques. Visual analysis discovered corrosion on the backside of the metal liners. There was a distinct border to the areas of corrosion that conformed to antirotation tab insertions on the inner surface of the acetabular shell, which are intended for the polyethylene liner. Scanning electron microscopy indicated evidence of crevice corrosion, and energy-dispersive X-ray analysis confirmed corrosion debris rich in titanium. The high failure rate of the metal liner option of the R3 acetabular system may be attributed to corrosion on the backside of the liner, which appears to result from the geometry and design characteristics of the acetabular shell. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. URBAN-NET: A Network-based Infrastructure Monitoring and Analysis System for Emergency Management and Public Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chen, Liangzhe; Duan, Sisi

    Critical Infrastructures (CIs) such as energy, water, and transportation are complex networks that are crucial for sustaining day-to-day commodity flows vital to national security, economic stability, and public safety. The nature of these CIs is such that failures caused by an extreme weather event or a man-made incident can trigger widespread cascading failures, sending ripple effects at regional or even national scales. To minimize such effects, it is critical for emergency responders to identify existing or potential vulnerabilities within CIs during such stressor events in a systematic and quantifiable manner and take appropriate mitigating actions. We present here a novel critical infrastructure monitoring and analysis system named URBAN-NET. The system includes a software stack and tools for monitoring CIs, pre-processing data, interconnecting multiple CI datasets as a heterogeneous network, identifying vulnerabilities through graph-based topological analysis, and predicting consequences based on what-if simulations along with visualization. As a proof-of-concept, we present several case studies to show the capabilities of our system. We also discuss remaining challenges and future work.
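
The abstract does not detail the graph algorithms used; one common topological vulnerability screen consistent with its description is articulation-point detection, sketched here on a toy network:

```python
# Graph-based vulnerability screening in the spirit of URBAN-NET's
# topological analysis (a generic sketch, not the project's actual code):
# articulation points are nodes whose failure disconnects the network.

def articulation_points(adj):
    """Tarjan-style DFS over an undirected adjacency dict."""
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
        if parent is None and children > 1:
            aps.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return aps

# Hypothetical multi-infrastructure network: substation S2 bridges the
# water pumps to the rest of the grid, so its failure cascades.
network = {
    "S1": ["S2"], "S2": ["S1", "S3", "W1"], "S3": ["S2", "W2"],
    "W1": ["S2"], "W2": ["S3"],
}
print(sorted(articulation_points(network)))  # ['S2', 'S3']
```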

  14. Electrical deaths in the US construction: an analysis of fatality investigations.

    PubMed

    Zhao, Dong; Thabet, Walid; McCoy, Andrew; Kleiner, Brian

    2014-01-01

    Electrocution is among the 'fatal four' in US construction according to the Occupational Safety and Health Administration. Learning from failures is believed to be an effective path to success, with deaths being the most serious system failures. This paper examined the failures in electrical safety by analysing all electrical fatality investigations (N = 132) occurring between 1989 and 2010 from the Fatality Assessment and Control Evaluation programme completed by the National Institute for Occupational Safety and Health. Results reveal the features of the electrical fatalities in construction and disclose the most common electrical safety challenges on construction sites. This research also suggests that sociotechnical system breakdowns and the limited effectiveness of current safety training programmes may significantly contribute to workers' unsafe behaviours and electrical fatality occurrences.

  15. Independent Orbiter Assessment (IOA): Assessment of the life support and airlock support systems, volume 2

    NASA Technical Reports Server (NTRS)

    Barickman, K.

    1988-01-01

    The McDonnell Douglas Astronautics Company (MDAC) was selected in June 1986 to perform an Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL). The IOA effort first completed an analysis of the Life Support and Airlock Support Systems (LSS and ALSS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. The discrepancies were flagged for potential future resolution. This report documents the results of that comparison for the Orbiter LSS and ALSS hardware. Volume 2 continues the presentation of IOA worksheets and contains the critical items list and NASA FMEA to IOA worksheet cross reference and recommendations.

  16. Nonlinear analysis for the response and failure of compression-loaded angle-ply laminates with a hole

    NASA Technical Reports Server (NTRS)

    Mathison, Steven R.; Herakovich, Carl T.; Pindera, Marek-Jerzy; Shuart, Mark J.

    1987-01-01

    The objective was to determine the effect of nonlinear material behavior on the response and failure of unnotched and notched angle-ply laminates under uniaxial compressive loading. The endochronic theory was chosen as the constitutive theory to model the AS4/3502 graphite-epoxy material system. Three-dimensional finite element analysis incorporating the endochronic theory was used to determine the stresses and strains in the laminates. An incremental/iterative initial strain algorithm was used in the finite element program. To increase computational efficiency, a 180 deg rotational symmetry relationship was utilized and the finite element program was vectorized to run on a supercomputer. Laminate response was compared to experimentation revealing excellent agreement for both the unnotched and notched angle-ply laminates. Predicted stresses in the region of the hole were examined and are presented, comparing linear elastic analysis to the inelastic endochronic theory analysis. A failure analysis of the unnotched and notched laminates was performed using the quadratic tensor polynomial. Predicted fracture loads compared well with experimentation for the unnotched laminates, but were very conservative in comparison with experiments for the notched laminates.

  17. COMCAN; COMCAN2A; system safety common cause analysis. [IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Wilson, J.R.

    COMCAN2A and COMCAN are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions). IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176); OS/360 (IBM360) and NOS/BE 1.4 (CDC CYBER176), NOS 1.3 (CDC CYBER175); 140K bytes of memory for COMCAN and 242K (octal) words of memory for COMCAN2A.
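
The screening rule described above — flagging minimal cut sets whose components all share a location — can be sketched generically (this is a sketch of the rule, not the FORTRAN codes themselves):

```python
# Common cause candidate screening, as the COMCAN description defines it:
# a minimal cut set whose components all share a location (or a common
# link) is flagged as a common cause candidate. The components, rooms,
# and cut sets below are hypothetical, for illustration only.

def common_cause_candidates(cut_sets, locations):
    """Return (cut set, shared locations) pairs where all members share a location."""
    candidates = []
    for cs in cut_sets:
        shared = set.intersection(*(locations[c] for c in cs))
        if shared:
            candidates.append((cs, shared))
    return candidates

# Hypothetical system: pump/valve pairs and the rooms they occupy
locations = {
    "pump_A": {"room1"}, "valve_A": {"room1"},
    "pump_B": {"room2"}, "valve_B": {"room3"},
}
cut_sets = [("pump_A", "valve_A"), ("pump_B", "valve_B")]

for cs, shared in common_cause_candidates(cut_sets, locations):
    print(cs, "shares", shared)  # ('pump_A', 'valve_A') shares {'room1'}
```

Only the first cut set is flagged: a single secondary cause in room1 (e.g., a fire) could fail both of its components at once.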

  18. [Applying healthcare failure mode and effect analysis to improve the surgical specimen transportation process and rejection rate].

    PubMed

    Hu, Pao-Hsueh; Hu, Hsiao-Chen; Huang, Hui-Ju; Chao, Hui-Lin; Lei, Ei-Fang

    2014-04-01

    Because surgical pathology specimens are crucial to the diagnosis and treatment of disease, it is critical that they be collected and transported safely and securely. Following recent near-miss events in our department, we used healthcare failure mode and effect analysis (HFMEA) to identify 14 potential perils in the specimen collection and transportation process. Improvement and prevention strategies were developed accordingly to improve quality of care. Applying HFMEA may improve the surgical specimen transportation process and reduce the rate of surgical specimen rejection. The interventions were to: rectify standard operating procedures for surgical pathology specimen collection and transportation; create educational videos and posters; rectify methods of specimen verification; and organize and create an online, real-time management system for specimen tracking and specimen rejection. Implementation of the new surgical specimen transportation process effectively eliminated the 14 identified potential perils. In addition, the specimen rejection rate fell from 0.86% to 0.03%. This project was applied to improve the specimen transportation process, enhance interdisciplinary cooperation, and improve the patient-centered healthcare system. The creation and implementation of an online information system significantly facilitates specimen tracking, hospital cost reductions, and patient safety improvements. The success in our department is currently being replicated across all departments in our hospital that transport specimens. Our experience and strategy may be applied to inter-hospital specimen transportation in the future.

  19. Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines

    NASA Astrophysics Data System (ADS)

    Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath

    2017-04-01

    Cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users/applications. These transfer lines are composed of various critical sections, i.e., tee sections, elbows, flexible components, etc. The mechanical sustainability of these transfer lines under failure circumstances is a primary requirement for safe operation of the system and its applications. The transfer lines need to be designed for multiple design constraints, such as line layout, support locations, and space restrictions. The transfer lines are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, and loads from a leak in the insulation vacuum [1]. Analytical calculations and flexibility analysis using professional software were performed for a typical transfer line without any flexible component, and the results were analysed for functional and mechanical load conditions. The failure modes were identified along the critical sections. The same transfer line was then refurbished with flexible components and analysed again for failure modes. The flexible components provide additional flexibility to the transfer line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility analysis software. The optimization of the flexible components' size and selection was performed, and components were selected to meet the design requirements as per code.

  20. Geomorphic and hydraulic controls on large-scale riverbank failure on a mixed bedrock-alluvial river system, the River Murray, South Australia: a bathymetric analysis.

    NASA Astrophysics Data System (ADS)

    De Carli, E.; Hubble, T.

    2014-12-01

    During the peak of the Millennium Drought (1997-2010), pool levels in the lower River Murray in South Australia dropped 1.5 metres below sea level, resulting in large-scale mass failure of the alluvial banks. The largest of these failures occurred without signs of prior instability at Long Island Marina, where a 270-metre length of populated and vegetated riverbank collapsed in a series of rotational failures. Analysis of long-reach bathymetric surveys of the river channel revealed a strong relationship between geomorphic and hydraulic controls on channel width and downstream alluvial failure. As the entrenched channel planform meanders within and encroaches upon its bedrock valley confines, the channel width is 'pinched' and decreases by up to half, resulting in a deepening thalweg and channel bed incision. The authors posit that flow and shear velocities increase at these geomorphically controlled 'pinch-points', resulting in complex and variable hydraulic patterns such as erosional scour eddies, which act to scour the toe of the slope, over-steepening and destabilising the alluvial margins. Analysis of bathymetric datasets between 2009 and 2014 revealed signs of active incision and erosional scour of the channel bed. This is counter to conceptual models which deem the backwater zone of a river to be one of decelerating flow and thus sediment deposition. Complex and variable flow patterns have been observed in other mixed bedrock-alluvial river systems, and signs of active incision have been observed in the backwater zone of the Mississippi River, United States. The incision and widening of the lower River Murray suggest the channel is in an erosional phase of channel readjustment, which has implications for riverbank collapse on the alluvial margins. The prevention of seawater ingress by barrage construction at the Murray mouth and Southern Ocean confluence allowed pool levels to drop significantly during the Millennium Drought, reducing lateral confining support to the over-steepened channel margins and triggering large-scale riverbank failure.
