Sample records for system component failure

  1. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In traditional reliability evaluation of machine center components, failure propagation is overlooked, so the component reliability model exhibits deviation and the evaluation result is biased low. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established from cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, with the following results: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation in the machine center system.
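
    As a rough illustration of the influence-assessment step described above, the sketch below runs PageRank on the transpose of a cascading-failure adjacency matrix, so that a component whose failures propagate widely receives a high failure-influence score. The 4-component graph and the damping factor are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical cascading-failure graph: A[i, j] = 1 if a failure of
    # component i can propagate to component j.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)

    def pagerank(adj, damping=0.85, tol=1e-10):
        """Power iteration on the column-stochastic matrix built from adj.T."""
        n = adj.shape[0]
        M = adj.T.copy()
        M[:, M.sum(axis=0) == 0] = 1.0 / n   # dangling components: uniform jump
        M = M / M.sum(axis=0)
        r = np.full(n, 1.0 / n)
        while True:
            r_next = damping * M @ r + (1.0 - damping) / n
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    print(pagerank(A))   # higher score = larger failure-influence degree
    ```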

  2. Space tug propulsion system failure mode, effects and criticality analysis

    NASA Technical Reports Server (NTRS)

    Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.

    1972-01-01

    For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
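
    The criticality number defined in this abstract lends itself to a one-line computation: one plausible reading is the component's failure probability per mission times the conditional probability that the failure causes mission loss, scaled by one million. The sketch below uses invented numbers, not values from the study.

    ```python
    def criticality_number(p_fail_per_mission, p_loss_given_failure):
        """Expected mission losses per one million missions due to this component."""
        return 1e6 * p_fail_per_mission * p_loss_given_failure

    # Hypothetical valve: 1% chance of failing per mission, and 30% of those
    # failures lead to loss of mission or inability to return the Tug.
    print(criticality_number(0.01, 0.30))   # -> 3000.0, same order as the report's maximum of 2990
    ```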

  3. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks on the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
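
    A minimal sketch of the decomposition step, assuming a small made-up component-by-failure-mode count matrix: PCA projects the components into a low-dimensional coordinate system in which similar failure signatures cluster together. The matrix values are illustrative, not data from the NASA/NTSB reports.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Rows: components; columns: failure modes (hypothetical counts from reports).
    X = np.array([[5, 0, 2, 1],    # gearbox
                  [4, 1, 2, 0],    # transmission
                  [0, 6, 0, 3],    # rotor blade
                  [1, 5, 1, 2]])   # hub

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)          # components in transformed coordinates
    print(scores)                          # nearby rows -> similar failure behavior
    print(pca.explained_variance_ratio_)   # structure retained in 2 dimensions
    ```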

  4. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic, and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (e.g., the Quantitative Risk Assessment Software (QRAS)). Hypothetical PSA results for a number of structural components, along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure, were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) are also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, and the inclusion of new sensor-based fault detection were evaluated in determining overall turbine engine reliability.
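
    The combination step this abstract describes can be approximated by Monte Carlo sampling: draw a time to failure for each component from its own distribution, discard failures that a mitigation (e.g., fault detection) would stop from propagating, and take the earliest surviving failure as the loss-of-mission time. The distributions and mitigation probabilities below are hypothetical stand-ins for the paper's PSA results.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000          # simulated missions
    T = 5_000.0          # mission length, hours

    # Hypothetical components: (Weibull shape, scale in hours, P(mitigation fails)).
    components = [(2.0, 20_000.0, 0.10),   # turbine blade fatigue
                  (1.5, 30_000.0, 0.25),   # bearing creep
                  (1.0, 50_000.0, 0.50)]   # electronic controller

    lom_time = np.full(N, np.inf)
    for shape, scale, p_unmitigated in components:
        t_fail = scale * rng.weibull(shape, N)
        propagates = rng.random(N) < p_unmitigated   # mitigation blocks the rest
        t_fail[~propagates] = np.inf
        lom_time = np.minimum(lom_time, t_fail)

    print("P(loss of mission within", T, "h) =", np.mean(lom_time <= T))
    ```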

  5. Control system failure monitoring using generalized parity relations. M.S. Thesis Interim Technical Report

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan Mauritz

    1991-01-01

    Many applications require that a control system be tolerant of the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
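
    A minimal sketch of a single-sensor parity relation identified directly from input-output data, in the spirit of the claim above: stack lagged outputs and inputs into a regressor matrix and take the smallest right singular vector as the parity vector; its residual stays near zero until a simulated sensor failure is injected. The second-order plant, window length, and bias failure are all hypothetical, not the Mini-Mast data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical stable plant: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1]
    u = rng.standard_normal(600)
    y = np.zeros(600)
    for k in range(2, 600):
        y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1]

    # Regressors [y[k], y[k-1], y[k-2], u[k-1]]; the parity vector spans the null space.
    H = np.column_stack([y[2:500], y[1:499], y[0:498], u[1:499]])
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    parity = Vt[-1]                      # smallest singular direction ~ parity relation

    # Inject a sensor bias failure at k = 550 and watch the residual jump.
    y_meas = y.copy()
    y_meas[550:] += 2.0
    ks = np.arange(500, 600)
    r = (parity[0] * y_meas[ks] + parity[1] * y_meas[ks-1]
         + parity[2] * y_meas[ks-2] + parity[3] * u[ks-1])
    print("max |residual| before failure:", np.abs(r[:50]).max())   # ~ 0
    print("max |residual| after  failure:", np.abs(r[50:]).max())   # clearly nonzero
    ```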

  6. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failures and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of the relation between faults in some components and subsequent failures in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components and between failures and adverse environmental conditions.

  7. (n, N) type maintenance policy for multi-component systems with failure interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul

    2015-04-01

    This paper studies maintenance policies for multi-component systems in which failure interactions and opportunistic maintenance (OM) are involved. This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on a maintenance policy.
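
    To make the (n, N) structure concrete, the sketch below simulates one common reading of such a policy for two components with made-up lifetimes: each component is preventively replaced at age N, and when the other component's failure creates a maintenance opportunity, it is opportunistically replaced as well if its age has reached n. The lifetimes and thresholds are hypothetical, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, N = 40.0, 60.0                      # opportunistic and preventive age thresholds
    HORIZON = 100_000.0

    def lifetime():
        return rng.weibull(2.0) * 70.0     # hypothetical Weibull(2) lifetimes

    t, ages = 0.0, np.zeros(2)
    lives = np.array([lifetime(), lifetime()])
    counts = {"failure": 0, "preventive": 0, "opportunistic": 0}

    while t < HORIZON:
        # time until each component either fails or hits the preventive age N
        dt_events = np.minimum(lives - ages, N - ages)
        i = int(np.argmin(dt_events))
        dt = dt_events[i]
        t += dt
        ages += dt
        failed = ages[i] >= lives[i] - 1e-12
        counts["failure" if failed else "preventive"] += 1
        ages[i], lives[i] = 0.0, lifetime()
        j = 1 - i
        if failed and ages[j] >= n:        # opportunity: replace the other one too
            counts["opportunistic"] += 1
            ages[j], lives[j] = 0.0, lifetime()

    print(counts)
    ```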

  8. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
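
    For a constant failure rate observed over total test time T with x failures, a standard two-sided confidence interval (one plausible form of the procedures described, for a time-terminated test) uses chi-square quantiles. A sketch with made-up counts:

    ```python
    from scipy.stats import chi2

    def failure_rate_ci(failures, total_time, conf=0.90):
        """Two-sided CI for a constant (exponential) failure rate, time-terminated test."""
        a = 1.0 - conf
        lower = chi2.ppf(a / 2, 2 * failures) / (2 * total_time) if failures > 0 else 0.0
        upper = chi2.ppf(1 - a / 2, 2 * failures + 2) / (2 * total_time)
        return lower, upper

    # Hypothetical thermal-vacuum test: 4 failures in 2,000 component-hours.
    print(failure_rate_ci(4, 2000.0))
    ```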

  9. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    NASA Technical Reports Server (NTRS)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively nominal way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does possess all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. Then, utilizing a heuristic developed for review of the PRACA data, we determine which reports identify a credible failure. These data are then used to determine inter-arrival times and estimate a reliability metric for a repairable component or LRU. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for performing repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.

  10. Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations

    NASA Technical Reports Server (NTRS)

    Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor

    2014-01-01

    One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases when the design service life ends and the wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification, and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant-mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration.
    Midway through the design life, when benefits justify additional investments, supplementary life tests may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components; a 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, cherry-picking candidate components based on failure mode effects analysis, system-level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When the specific failure mechanism being considered and the underlying relationship of that mechanism to the stresses applied in the test can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which applies to chemically driven failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between the times to failure at a high test temperature and those at the temperatures expected in actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items in a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities for extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
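
    The Arrhenius relationship invoked above is compactly expressed as an acceleration factor between use and test temperatures. A sketch with illustrative numbers (the activation energy and temperatures are hypothetical, not values from the ISS program):

    ```python
    import math

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

    def arrhenius_acceleration_factor(ea_ev, t_use_k, t_test_k):
        """AF = exp[(Ea/k) * (1/T_use - 1/T_test)]; time to failure at T_use is
        AF times the time to failure observed at the hotter test temperature."""
        return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

    # Hypothetical: Ea = 0.7 eV, use at 30 C (303 K), accelerated test at 80 C (353 K).
    af = arrhenius_acceleration_factor(0.7, 303.0, 353.0)
    print(af)    # ~45: one test year stands in for ~45 use years for this mechanism
    ```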

  11. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are the sum of the products of the expected number of failures in the time intervals covering the early-life failure region and the expected losses given failure characterizing those intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike those models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year).
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
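
    The linear-combination statement above has a direct numerical reading: with conditional probabilities p_k that failure is initiated by mode k and expected losses C_k given that mode, the expected loss given failure is the sum of p_k * C_k. A toy calculation (numbers invented):

    ```python
    # Mutually exclusive failure modes: (P(mode k initiates failure), E[loss | mode k]).
    modes = [(0.6, 10_000.0),   # seal leak
             (0.3, 50_000.0),   # bearing seizure
             (0.1, 200_000.0)]  # casing rupture

    expected_loss_given_failure = sum(p * c for p, c in modes)
    print(expected_loss_given_failure)   # 0.6*10k + 0.3*50k + 0.1*200k = 41,000.0
    ```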

  12. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.

  13. Health monitoring display system for a complex plant

    DOEpatents

    Ridolfo, Charles F [Bloomfield, CT]; Harmon, Daryl L [Enfield, CT]; Colin, Dreyfuss [Enfield, CT]

    2006-08-08

    A single-page, enterprise-wide-level display provides a comprehensive, readily understood representation of the overall health status of a complex plant. Color-coded failure domains allow rapid, intuitive recognition of component failure status. A three-tier hierarchy of displays provides details on the health status of the components and systems shown on the enterprise-wide-level display, supporting a logical drill-down from the health status of sub-components on Tier 1, to expected faults of the sub-components on Tier 2, to specific information on expected sub-component failures on Tier 3.

  14. Sensitivity analysis by approximation formulas - Illustrative examples [reliability analysis of six-component architectures]

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.

  15. Virtually-synchronous communication based on a weak failure suspector

    NASA Technical Reports Server (NTRS)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: at the lowest level, the FS component (1); on top of it, a component (2a) that defines new views; and a component (2b) that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.

  16. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inversely related to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
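
    The redundancy arithmetic in this abstract is worth making explicit: with independent identical units each failing with probability 0.1 over the mission, three units all fail with probability 0.1 cubed = 0.001, meeting the one-in-a-thousand goal. A simple beta-factor common-cause model (one of several such models, with an invented coupling fraction) shows how even a few percent of coupled failures defeats the extra redundancy:

    ```python
    def p_all_fail(p_unit, n_units, beta=0.0):
        """P(all n redundant units fail). A fraction beta of unit failures is
        assumed to disable every unit at once (beta-factor common-cause model)."""
        independent = ((1 - beta) * p_unit) ** n_units
        common_cause = beta * p_unit
        return independent + common_cause

    print(p_all_fail(0.1, 3))              # 0.001: meets 1-in-1000 with independence
    print(p_all_fail(0.1, 3, beta=0.05))   # ~0.0059: the common-cause floor dominates
    ```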

  17. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…

  18. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.

  19. Failure and recovery in dynamical networks.

    PubMed

    Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J

    2017-02-03

    Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
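
    A minimal simulation sketch of the three processes named above, on a made-up random graph with invented rates: each step, healthy nodes fail internally with a small probability, nodes with enough failed neighbors fail externally, and failed nodes recover spontaneously. This is an illustrative toy, not the paper's model or its phase-diagram analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N_NODES, STEPS = 200, 400
    P_INT, P_EXT, P_REC = 0.002, 0.2, 0.05   # invented rates for processes (i)-(iii)
    K_THRESH = 2                             # failed neighbors needed to induce failure

    # Hypothetical random graph: each node wired to 4 random partners.
    adj = np.zeros((N_NODES, N_NODES))
    for i in range(N_NODES):
        for j in rng.choice(N_NODES, size=4, replace=False):
            if i != j:
                adj[i, j] = adj[j, i] = 1.0

    failed = np.zeros(N_NODES, dtype=bool)
    history = []
    for _ in range(STEPS):
        n_failed_nbrs = adj @ failed.astype(float)
        internal = rng.random(N_NODES) < P_INT                                   # (i)
        external = (n_failed_nbrs >= K_THRESH) & (rng.random(N_NODES) < P_EXT)   # (ii)
        recovered = failed & (rng.random(N_NODES) < P_REC)                       # (iii)
        failed = (failed | internal | external) & ~recovered
        history.append(failed.mean())

    print("mean failed fraction (after burn-in):", np.mean(history[100:]))
    ```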

  20. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  21. Advanced Self-Calibrating, Self-Repairing Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)

    2002-01-01

    An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.

  22. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised with large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
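
    As an illustration of the fault-tree machinery involved (not the paper's actual tree), the sketch below evaluates a tiny tree for "ERP usage failure" with one OR gate and one AND gate, assuming independent basic events with invented probabilities.

    ```python
    # Tiny illustrative fault tree, independent basic events (probabilities invented):
    # TOP = inadequate_training OR (data_migration_error AND no_fallback_process)
    p_training, p_migration, p_no_fallback = 0.05, 0.10, 0.30

    def or_gate(*ps):
        q = 1.0
        for p in ps:
            q *= (1.0 - p)      # survive only if every input event is absent
        return 1.0 - q

    def and_gate(*ps):
        q = 1.0
        for p in ps:
            q *= p              # all input events must occur
        return q

    p_top = or_gate(p_training, and_gate(p_migration, p_no_fallback))
    print(p_top)   # 1 - (1-0.05)*(1-0.03) = 0.0785
    ```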

  23. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurements in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the squared prediction error (SPE) and Hotelling's T-squared statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
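
    A minimal sketch of the detection scheme under simple assumptions: fit PCA to normal multi-sensor data, then flag a sample whose squared prediction error (the residual outside the retained subspace) exceeds a threshold set from training data. The simulated sensors and the bias failure are invented; a Hotelling's T-squared limit can be added analogously.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)

    # Simulated healthy data: 3 correlated temperature sensors (hypothetical).
    t = rng.standard_normal(500)
    X_train = np.column_stack([t, 0.9 * t, 1.1 * t]) + 0.05 * rng.standard_normal((500, 3))

    pca = PCA(n_components=1).fit(X_train)

    def spe(X):
        """Squared prediction error: residual after projection onto the PCA subspace."""
        X_hat = pca.inverse_transform(pca.transform(X))
        return ((X - X_hat) ** 2).sum(axis=1)

    threshold = np.quantile(spe(X_train), 0.99)   # simple empirical control limit

    # New sample with sensor 2 biased by +1.0 (simulated sensor failure).
    x_fault = np.array([[0.2, 0.18 + 1.0, 0.22]])
    print(spe(x_fault)[0] > threshold)            # True: failure indication
    ```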

  24. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  25. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.

  26. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
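
    The spares logic in this abstract can be made concrete with a standard Poisson provisioning calculation: if a component fails at rate lambda and the mission lasts t hours, carry the smallest number of spares s such that the probability of more than s failures stays below the allocated risk. A sketch with invented rate and allocation:

    ```python
    from math import exp, factorial

    def spares_needed(failure_rate, mission_hours, max_risk):
        """Smallest s with P(Poisson(rate*t) > s) <= max_risk."""
        mu = failure_rate * mission_hours
        s, cdf = 0, exp(-mu)
        while 1.0 - cdf > max_risk:
            s += 1
            cdf += exp(-mu) * mu ** s / factorial(s)
        return s

    # Hypothetical pump: 1 failure per 10,000 h, 3-year mission (~26,280 h),
    # allocated failure probability 1e-3 for this assembly.
    print(spares_needed(1e-4, 26_280, 1e-3))   # -> 9 spares
    ```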

  27. Shuttle/ISS EMU Failure History and the Impact on Advanced EMU Portable Life Support System (PLSS) Design

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2015-01-01

    As the Shuttle/ISS EMU Program exceeds 35 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.

  28. Autonomous Component Health Management with Failed Component Detection, Identification, and Avoidance

    NASA Technical Reports Server (NTRS)

    Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.

    2004-01-01

    This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a 3rd order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in realtime; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.

  29. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular in system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
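
    In its simplest conjugate form, the Bayesian updating that DATMAN automates is a gamma-Poisson update of a constant failure rate: a Gamma(alpha, beta) prior combined with x new failures over exposure time t yields a Gamma(alpha + x, beta + t) posterior. A sketch with invented prior parameters and data (not DATMAN's actual interface):

    ```python
    # Conjugate gamma-Poisson update of a constant failure rate (illustrative numbers).
    alpha0, beta0 = 2.0, 10_000.0      # prior: ~2 failures per 10,000 h of evidence
    x, t = 3, 5_000.0                  # new data: 3 failures in 5,000 h

    alpha1, beta1 = alpha0 + x, beta0 + t
    print("prior mean rate    :", alpha0 / beta0)    # 2.0e-4 per hour
    print("posterior mean rate:", alpha1 / beta1)    # 5/15000 ~ 3.3e-4 per hour
    ```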

  30. Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2016-02-01

    This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on operating experience failure reports from fiscal year 1998 through 2014 for component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run for one hour or less, and failure to run for more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates of reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems in the industry-wide frequency of start demands and in run hours per reactor year for runs of one hour or less.
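
    One plausible reading of the eight-hour unreliability estimate combines a failure-to-start probability with a failure-to-run rate over an eight-hour run: U(8) = p_FTS + (1 - p_FTS)(1 - exp(-lambda_FTR * 8)). The numbers below are invented, not values from the ICES data.

    ```python
    from math import exp

    def unreliability_8h(p_fts, lam_ftr, hours=8.0):
        """Fail to start, or start and then fail during the run (assumed model)."""
        return p_fts + (1.0 - p_fts) * (1.0 - exp(-lam_ftr * hours))

    # Hypothetical standby motor-driven pump: P(FTS)=1e-3, failure-to-run rate 1e-4/h.
    print(unreliability_8h(1e-3, 1e-4))   # ~1.8e-3
    ```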

  31. Design of Critical Components

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.

    2001-01-01

    Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or for accumulated operating time in the system. Issues of liability and the cost of component removal become of paramount importance. Deterministic design with factors of safety and probabilistic design each address the problem but lack the essential characteristics for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost. Many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific element of that class cannot be predicted.

  32. Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    2005-01-01

    The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.

  33. A Study to Compare the Failure Rates of Current Space Shuttle Ground Support Equipment with the New Pathfinder Equipment and Investigate the Effect that the Proposed GSE Infrastructure Upgrade Might Have to Reduce GSE Infrastructure Failures

    NASA Technical Reports Server (NTRS)

    Kennedy, Barbara J.

    2004-01-01

    The purposes of this study are to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder". The study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B hypergolic fuel facility and hypergolic oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staff for each infrastructure system is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.

  34. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can detect and uniquely identify a component failure in a linear time-invariant system, under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  35. Defense strategies for asymmetric networked systems under composite utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.

  36. Modeling joint restoration strategies for interdependent infrastructure systems.

    PubMed

    Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) describing their interaction process are presented. Both models consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.

  17. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, directly supply water or oxygen, or, if necessary, bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
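
    The spares sizing described above can be illustrated with a small calculation. The sketch below assumes component failures follow a Poisson process (constant failure rate), so the spares count needed is the smallest one whose Poisson cumulative probability meets the reliability goal; the rate, mission length, and function names are invented for illustration, not drawn from the paper.

        # Minimal sketch: spares needed to meet a reliability goal, assuming a
        # Poisson failure count (constant failure rate).  All numbers are
        # illustrative placeholders, not values from the paper.
        from math import exp, factorial

        def prob_enough_spares(rate_per_hr, mission_hr, spares):
            """P(failures during the mission <= spares) for a Poisson count."""
            mu = rate_per_hr * mission_hr          # expected failures
            return sum(mu**k * exp(-mu) / factorial(k) for k in range(spares + 1))

        def spares_needed(rate_per_hr, mission_hr, goal):
            """Smallest spare count whose coverage probability meets the goal."""
            n = 0
            while prob_enough_spares(rate_per_hr, mission_hr, n) < goal:
                n += 1
            return n

        # Example: ~2.5-year mission (21,900 h), assumed 1e-4 failures/h pump.
        print(spares_needed(1e-4, 21900, 0.99))    # -> 6 with these assumptions

    With these assumed numbers, six spares cover roughly 2.2 expected failures at 99% confidence; underestimating the failure rate shrinks this count and undersizes the inventory, which is exactly the risk the abstract notes.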

  18. Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2011-01-01

    As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still successfully supporting the needs of the International Space Station (ISS), a critical benefit of such a long-running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.

  19. Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2015-01-01

    As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long-running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.

  20. Failure detection and identification

    NASA Technical Reports Server (NTRS)

    Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.

    1989-01-01

    Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.
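
    The residual idea at the heart of such detection filters is easy to demonstrate in simulation, though the sketch below uses a plain Luenberger observer rather than the authors' unobservability-subspace construction: the output residual stays near zero in normal operation and, when a component fails, grows along that failure's signature direction. The plant matrices, observer gain, and fault direction are illustrative assumptions.

        # Hedged sketch: observer residual near zero when healthy, growing along
        # the failure's signature direction after a fault.  Matrices, gain, and
        # fault direction are assumptions, not the paper's geometric design.
        import numpy as np

        A = np.array([[0.9, 0.1], [0.0, 0.8]])    # assumed discrete-time plant
        C = np.eye(2)                              # full state measurement (assumed)
        L = np.array([[0.5, 0.0], [0.0, 0.5]])     # observer gain; A - L C is stable
        f_dir = np.array([0.0, 1.0])               # assumed failure signature

        x = np.zeros(2)
        x_hat = np.zeros(2)
        for k in range(40):
            fault = 0.3 * f_dir if k >= 20 else np.zeros(2)  # component fails at k=20
            x = A @ x + fault
            r = C @ x - C @ x_hat                  # residual used for detection
            x_hat = A @ x_hat + L @ r              # observer update
            if np.linalg.norm(r) > 0.05:
                print(f"failure detected at step {k}; "
                      f"residual direction {(r / np.linalg.norm(r)).round(2)}")
                break

    Because the residual direction here aligns with the assumed signature, the same residual that detects the failure also identifies which component produced it.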

  1. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
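
    The flavor of such a failure-and-repair simulation can be sketched in a few lines. The exponential distributions, rates, and function name below are placeholder assumptions for illustration; they are not the validated PV-RPM distributions or the SNL plant data.

        # Illustrative Monte Carlo of component failure and repair over a
        # simulation period, in the spirit of PV-RPM's probabilistic failure
        # and repair distributions.  Rates and distributions are assumed.
        import random

        def mean_failures(years, mtbf_yr, mttr_days, trials=10_000, seed=1):
            """Average failure count per simulation period for one component."""
            rng = random.Random(seed)
            total = 0
            for _ in range(trials):
                t = 0.0
                while True:
                    t += rng.expovariate(1.0 / mtbf_yr)      # time to next failure
                    if t > years:
                        break
                    total += 1
                    t += rng.expovariate(365.0 / mttr_days)  # repair time, in years
            return total / trials

        # Example: an inverter with an assumed 8-year MTBF and 14-day MTTR over
        # a 25-year simulation period -> roughly 25/8, i.e. about 3 failures.
        print(mean_failures(25, 8.0, 14.0))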

  2. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    PubMed

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
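
    A heavily simplified, pure-strategy version of this game can be brute-forced to show how equilibrium attack and reinforcement levels emerge from the cost terms. The model below (uniform targeting, reinforced components always survive an attack, linear costs) is an invented toy, far simpler than the paper's sum-form, product-form, and composite utilities with correlation functions.

        # Toy pure-strategy attacker-provider game: attacker picks how many
        # components to attack, provider how many to reinforce; an attacked
        # component survives only if reinforced (uniform random targeting).
        # Utilities are bare "damage/survival minus linear cost" assumptions.
        N = 10                       # components in the system
        C_ATK, C_DEF = 0.4, 0.3      # assumed per-component attack/reinforce costs

        def operational(x, y):
            return N - x * (1 - y / N)       # attacked & unreinforced are lost

        def u_attacker(x, y):
            return (N - operational(x, y)) - C_ATK * x

        def u_provider(x, y):
            return operational(x, y) - C_DEF * y

        # Brute-force pure-strategy Nash equilibria via best-response checks.
        for x in range(N + 1):
            for y in range(N + 1):
                a_best = max(u_attacker(xx, y) for xx in range(N + 1))
                p_best = max(u_provider(x, yy) for yy in range(N + 1))
                if (u_attacker(x, y) >= a_best - 1e-9
                        and u_provider(x, y) >= p_best - 1e-9):
                    print(f"equilibrium: attack {x}, reinforce {y}, "
                          f"expected operational {operational(x, y):.1f}")

    With these toy costs the search finds a single equilibrium (attack 3, reinforce 6), illustrating how the per-component cost terms alone pin down the equilibrium effort levels and the expected number of operational components.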

  3. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    PubMed Central

    Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.

    2018-01-01

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588

  4. Logic analysis of complex systems by characterizing failure phenomena to achieve diagnosis and fault-isolation

    NASA Technical Reports Server (NTRS)

    Wong, J. T.; Andre, W. L.

    1981-01-01

    A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure: a partially ordered set. It is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.

  5. An overview of fatigue failures at the Rocky Flats Wind System Test Center

    NASA Technical Reports Server (NTRS)

    Waldon, C. A.

    1981-01-01

    Potential small wind energy conversion system (SWECS) design problems were identified to improve product quality and reliability. Mass-produced components such as gearboxes, generators, and bearings are generally reliable due to their widespread, uniform use in other industries. The likelihood of failure increases, though, at the interfaces of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.

  6. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
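
    The levelized comparison reduces to discounting the expected annual avoided losses and netting out the cost of the diagnostics hardware. The sketch below shows the arithmetic with invented failure rates, costs, and risk-reduction factors; none of the numbers come from the report.

        # Back-of-the-envelope levelized cost-benefit: expected annual accident
        # loss avoided by diagnostics, discounted over the vehicle life.  All
        # inputs are made-up placeholders, not the report's data.
        def levelized_benefit(annual_failure_rate, loss_per_failure,
                              risk_reduction, diagnostics_cost,
                              years, discount_rate):
            """Net present value of adding diagnostics (positive = worth it)."""
            avoided_per_year = annual_failure_rate * loss_per_failure * risk_reduction
            npv_savings = sum(avoided_per_year / (1 + discount_rate) ** t
                              for t in range(1, years + 1))
            return npv_savings - diagnostics_cost

        # Example: 0.2 brake failures/yr, $500k expected loss per failure,
        # diagnostics assumed to prevent 60% of them, $150k cost, 10-yr life.
        print(f"NPV = ${levelized_benefit(0.2, 500_000, 0.6, 150_000, 10, 0.07):,.0f}")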

  7. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
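
    A functional fault model's propagation step is essentially graph reachability from failure modes to observation points. The toy model below invents a few component names and edges to show the mechanism; it is not drawn from the AGSM/IHM models.

        # Toy functional fault model (FFM): a directed graph from failure modes
        # through effects to observation points, diagnosed by propagation.
        # Names and edges are invented for illustration.
        from collections import deque

        propagates_to = {                       # edge: effect flows downstream
            "valve_stuck":   ["low_flow"],
            "pump_degraded": ["low_flow"],
            "low_flow":      ["low_pressure_sensor", "high_temp_sensor"],
        }

        def reachable_observations(failure_mode):
            """All observation points a failure mode's effect can reach."""
            seen, queue = set(), deque([failure_mode])
            while queue:
                node = queue.popleft()
                for nxt in propagates_to.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return {n for n in seen if n.endswith("_sensor")}

        # Both failure modes reach the same sensors, so an FFM-based diagnostic
        # would report them together as an ambiguity group.
        print(reachable_observations("valve_stuck"))
        print(reachable_observations("pump_degraded"))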

  8. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention is paid to their configurations, modeling of mean time to system failure, availability, cost benefit, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure, steady-state availability, and cost-benefit analysis were derived as the basis for the comparison. A ranking method was used to determine the optimal configuration of the systems. The analytical and numerical solutions of system availability and mean time to system failure show that configuration I is the optimal configuration.
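
    For exponential component lifetimes, such configuration comparisons can be cross-checked by Monte Carlo. The sketch below compares a two-unit parallel layout against a series connection of two parallel pairs; the structure functions are illustrative stand-ins for the paper's configurations, not reproductions of them.

        # Monte Carlo comparison of mean time to system failure (MTTF) assuming
        # exponential component lifetimes.  Structure functions are assumed
        # stand-ins, not the paper's exact configurations.
        import random

        def mttf(structure, n_comp, rate=1.0, trials=100_000, seed=7):
            rng = random.Random(seed)
            total = 0.0
            for _ in range(trials):
                lives = [rng.expovariate(rate) for _ in range(n_comp)]
                total += structure(lives)          # system lifetime this trial
            return total / trials

        parallel2 = lambda t: max(t)                       # two units in parallel
        series_parallel4 = lambda t: min(max(t[0], t[1]),  # two parallel pairs
                                         max(t[2], t[3]))  # connected in series

        print("2-unit parallel MTTF:    ", mttf(parallel2, 2))         # -> ~1.5
        print("2x2 series-parallel MTTF:", mttf(series_parallel4, 4))  # -> ~0.92

    For unit failure rates the parallel pair's MTTF is 1.5/λ while the 2x2 series-parallel arrangement drops to about 0.92/λ, consistent with the paper's finding that the simple parallel configuration is optimal.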

  9. Modeling joint restoration strategies for interdependent infrastructure systems

    PubMed Central

    Simonovic, Slobodan P.

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models consider the failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300

  10. Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report

    DOT National Transportation Integrated Search

    2017-09-22

    The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...

  11. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development, hard failure data is not yet available, or manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
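
    The handbook-style prediction underlying this approach is a simple rollup: sum the constant part failure rates for a series system and convert to mission reliability. The part names and rates below are placeholders, not values from MIL-HDBK-217 or any actual handbook.

        # Standards-handbook style prediction sketch: sum constant part failure
        # rates (failures per million hours) for a series system and convert to
        # mission reliability.  Part names and rates are invented placeholders.
        from math import exp

        parts_fpmh = {               # assumed failure rates, failures per 1e6 h
            "microcontroller": 0.8,
            "dc_dc_converter": 1.5,
            "connector": 0.2,
            "relay": 2.1,
        }

        lambda_total = sum(parts_fpmh.values()) / 1e6   # failures per hour
        mission_hours = 5_000
        reliability = exp(-lambda_total * mission_hours)
        print(f"system failure rate = {lambda_total:.2e}/h, "
              f"R({mission_hours} h) = {reliability:.4f}")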

  12. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

    This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters, and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.

  13. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method with three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
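
    A circle-shaped SLF is straightforward to prototype: fail every component within a radius of an impact point and score the damage on the surviving network. The coordinates, links, and radius below are invented for illustration; the article's railway analysis is far richer.

        # Sketch of a circle-shaped spatially localized failure (SLF): every
        # component within radius r of an impact point fails; damage is scored
        # by the largest surviving connected component.  Data are invented.
        from math import dist

        nodes = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (2, 1), "e": (3, 1)}
        edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]

        def surviving_giant_component(center, radius):
            alive = {n for n, xy in nodes.items() if dist(xy, center) > radius}
            adj = {n: set() for n in alive}
            for u, v in edges:
                if u in alive and v in alive:
                    adj[u].add(v)
                    adj[v].add(u)
            best, seen = 0, set()
            for start in alive:                 # DFS over surviving subgraph
                if start in seen:
                    continue
                size, stack = 0, [start]
                seen.add(start)
                while stack:
                    n = stack.pop()
                    size += 1
                    for m in adj[n] - seen:
                        seen.add(m)
                        stack.append(m)
                best = max(best, size)
            return best

        print(surviving_giant_component(center=(2, 0), radius=1.1))  # kills b, c, d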

  14. A failure management prototype: DR/Rx

    NASA Technical Reports Server (NTRS)

    Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.

    1991-01-01

    This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.

  15. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  16. Catastrophic Fault Recovery with Self-Reconfigurable Chips

    NASA Technical Reports Server (NTRS)

    Zheng, Will Hua; Marzwell, Neville I.; Chau, Savio N.

    2006-01-01

    Mission-critical systems typically employ multi-string redundancy to cope with possible hardware failures. Such systems are only as fault tolerant as the number of redundant strings they carry. Once a particular critical component exhausts its redundant spares, the multi-string architecture cannot tolerate any further hardware failure. This paper aims at addressing such catastrophic faults through the use of 'Self-Reconfigurable Chips' as a last-resort effort to 'repair' a faulty critical component.

  17. New understandings of failure modes in SSL luminaires

    NASA Astrophysics Data System (ADS)

    Shepherd, Sarah D.; Mills, Karmann C.; Yaga, Robert; Johnson, Cortina; Davis, J. Lynn

    2014-09-01

    As SSL products are being rapidly introduced into the market, there is a need to develop standard screening and testing protocols that can be performed quickly and provide data surrounding product lifetime and performance. These protocols, derived from standard industry tests, are known as ALTs (accelerated life tests) and can be performed in a timeframe of weeks to months instead of years. Accelerated testing utilizes a combination of elevated temperature and humidity conditions as well as electrical power cycling to control aging of the luminaires. In this study, we report on the findings of failure modes for two different luminaire products exposed to temperature-humidity ALTs. LEDs are typically considered the determining component for the rate of lumen depreciation. However, this study has shown that each luminaire component can independently or jointly influence system performance and reliability. Material choices, luminaire designs, and driver designs all have significant impacts on the system reliability of a product. From recent data, it is evident that the most common failure modes are not within the LED, but instead occur within resistors, capacitors, and other electrical components of the driver. Insights into failure modes and rates as a result of ALTs are reported with emphasis on component influence on overall system reliability.

  18. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component in the transmission and distribution process of oil and gas. The distribution process is sometimes performed via pipelines that cross various types of environmental conditions. Therefore, in the transmission and distribution of oil and gas, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some equipment components of a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline; it therefore requires care and periodic inspections or checks. Every production facility in an industry has a level of risk for damage, determined by the likelihood and consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, associated with the likelihood of failure and the consequences of failure of an equipment component. The result is then used to determine the next inspection plan. Nine pipeline components were observed, including straight inlet pipes, connection tees, and straight outlet pipes. The risk assessment levels of the nine pipeline components are presented in a risk matrix; the components are found to be at medium risk levels. The failure mechanism used in this research is thinning. Based on the corrosion rate calculation, the remaining age of the pipeline components can be obtained, so the remaining lifetime of the components is known; the calculated remaining lifetimes vary for each component. The next step is planning the inspection of pipeline components by external NDT methods.
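
    The thinning calculation reduces to two ratios: a corrosion rate from wall-thickness loss over the service period, and a remaining life from the margin above the minimum required thickness. The sketch below uses invented numbers rather than the paper's inspection data.

        # Thinning-mechanism remaining-life estimate in the spirit of API 581:
        # corrosion rate from wall-thickness loss, remaining life until the
        # minimum required thickness is reached.  Numbers are invented.
        def corrosion_rate(t_initial_mm, t_actual_mm, years_in_service):
            return (t_initial_mm - t_actual_mm) / years_in_service   # mm/yr

        def remaining_life(t_actual_mm, t_required_mm, rate_mm_per_yr):
            return (t_actual_mm - t_required_mm) / rate_mm_per_yr    # years

        rate = corrosion_rate(t_initial_mm=12.7, t_actual_mm=11.9,
                              years_in_service=8)
        print(f"corrosion rate = {rate:.3f} mm/yr, "
              f"remaining life = {remaining_life(11.9, 9.5, rate):.1f} yr")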

  19. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves the resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  20. Solving Component Structural Dynamic Failures Due to Extremely High Frequency Structural Response on the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Frady, Greg; Nesman, Thomas; Zoladz, Thomas; Szabo, Roland

    2010-01-01

    For many years, the ability to determine the root cause of component failures has been limited by the available analytical tools and the state of the art in data acquisition systems. With this limited capability, many anomalies were resolved by adding material to the design to increase robustness, without the ability to determine whether the design solution was satisfactory until after a series of expensive test programs was complete. The risk of failure and of multiple design, test, and redesign cycles was high. During the Space Shuttle Program, many crack investigations in high energy density turbomachines, like the SSME turbopumps, and in high energy flows in the main propulsion system led to the discovery of numerous root-cause failures and anomalies due to the coexistence of acoustic forcing functions, structural natural modes, and a high energy excitation, such as an edge tone or shedding flow, leading the technical community to understand many of the primary contributors to extremely high frequency, high-cycle fatigue fluid-structure interaction anomalies. These contributors have been identified using advanced analysis tools and verified during component ground tests, systems tests, and flight. The structural dynamics and fluid dynamics communities have developed a special sensitivity to fluid-structure interaction problems and have been able to adjust and solve these problems in a time-effective manner to meet the budget and schedule deadlines of operational vehicle programs, such as the Space Shuttle Program.

  1. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, in practical problems, such a situation rarely occurs. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of fuzzy reliability are constructed for a series system and a parallel system. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
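
    The general alpha-cut idea behind such constructions can be shown for ordinary (non-intuitionistic) triangular fuzzy failure rates: each alpha level turns the rates into intervals, and monotonicity of R = exp(-λt) gives the reliability bounds directly. This is only a simplified illustration; the paper's algorithm handles intuitionistic fuzzy numbers (membership and non-membership) via non-linear programming.

        # Alpha-cut sketch of fuzzy reliability for a series system with
        # triangular fuzzy failure rates.  Rates and times are assumed values;
        # this is not the paper's intuitionistic-fuzzy algorithm.
        from math import exp

        def tri_alpha_cut(a, b, c, alpha):
            """Interval [lo, hi] of triangular fuzzy number (a, b, c) at alpha."""
            return a + alpha * (b - a), c - alpha * (c - b)

        rates = [(0.001, 0.002, 0.004), (0.0005, 0.001, 0.002)]  # per hour
        t = 100.0

        for alpha in (0.0, 0.5, 1.0):
            lam_lo = sum(tri_alpha_cut(*r, alpha)[0] for r in rates)
            lam_hi = sum(tri_alpha_cut(*r, alpha)[1] for r in rates)
            # reliability decreases in the failure rate, so the bounds swap
            print(f"alpha={alpha}: R in [{exp(-lam_hi*t):.3f}, {exp(-lam_lo*t):.3f}]")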

  2. Game-theoretic strategies for asymmetric networked systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure consisting of a network of systems, each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function, which specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.

  3. Qualification and issues with space flight laser systems and components

    NASA Astrophysics Data System (ADS)

    Ott, Melanie N.; Coyle, D. B.; Canham, John S.; Leidecker, Henning W., Jr.

    2006-02-01

    The art of flight quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow the qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here will be lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.

  4. Qualification and Issues with Space Flight Laser Systems and Components

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.; Coyle, D. Barry; Canham, John S.; Leidecker, Henning W.

    2006-01-01

    The art of flight quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow the qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here will be lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.

  5. Qualification and Issues with Space Flight Laser Systems and Components

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.; Coyle, D. Barry; Canham, John S.; Leidecker, Henning W.

    2006-01-01

    The art of flight quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow the qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here will be lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.

  6. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends, in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  7. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  8. Auxiliary feedwater system risk-based inspection guide for the Salem Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, R.; Gore, B.F.; Vo, T.V.

    In a study for the US Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory has developed and applied a methodology for deriving plant-specific risk-based inspection guidance for the auxiliary feedwater (AFW) system at pressurized water reactors that have not undergone probabilistic risk assessment (PRA). This methodology uses existing PRA results and plant operating experience information. Existing PRA-based inspection guidance information recently developed for the NRC for various plants was used to identify generic component failure modes. This information was then combined with plant-specific and industry-wide component information and failure data to identify failure modes and failure mechanisms for the AFW system at the selected plants. Salem was selected as the fifth plant for study. The product of this effort is a prioritized listing of AFW failures which have occurred at the plant and at other PWRs. This listing is intended for use by NRC inspectors in the preparation of inspection plans addressing AFW risk-important components at the Salem plant. 23 refs., 1 fig., 1 tab.

  9. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management, and philosophies of prognostics used in other domains, will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of the constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning, will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.
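
    A minimal point-estimate prognostic of the kind described can be sketched by fitting a trend to a monitored health index and extrapolating to a failure threshold. The data, threshold, and degradation slope below are synthetic; a real prognostic system would also quantify the uncertainty around the estimate.

        # Minimal prognostics sketch: fit a linear trend to a health index and
        # extrapolate to a failure threshold for remaining useful life (RUL).
        # Data and threshold are synthetic placeholders.
        import numpy as np

        hours = np.arange(0, 1000, 100)
        rng = np.random.default_rng(0)
        health = 1.0 - 0.0004 * hours + rng.normal(0, 0.01, hours.size)
        FAILURE_THRESHOLD = 0.5            # assumed end-of-life health index

        slope, intercept = np.polyfit(hours, health, 1)
        t_fail = (FAILURE_THRESHOLD - intercept) / slope   # trend hits threshold
        rul = t_fail - hours[-1]
        print(f"estimated failure at {t_fail:.0f} h, RUL ~ {rul:.0f} h")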

  10. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the Moon and Mars.

  11. Failure Analysis in Platelet Molded Composite Systems

    NASA Astrophysics Data System (ADS)

    Kravchenko, Sergii G.

    Long-fiber discontinuous composite systems in the form of chopped prepreg tapes provide an advanced, structural-grade molding compound allowing for fabrication of complex three-dimensional components. Understanding the process-structure-property relationship is essential for application of prepreg platelet molded components, especially because of their possibly irregular, disordered, heterogeneous morphology. Herein, a structure-property relationship was analyzed in composite systems of many platelets. Regular and irregular morphologies were considered. Platelet-based systems with more ordered morphology possess superior mechanical performance. While regular morphologies allow for a careful inspection of failure mechanisms derived from the morphological characteristics, irregular morphologies are representative of the composite architectures resulting from uncontrolled deposition and molding with chopped prepregs. Progressive failure analysis (PFA) was used to study the damaged deformation up to ultimate failure in a platelet-based composite system. Computational damage mechanics approaches were utilized to conduct the PFA. The developed computational models provided understanding of how the details of the composite structure, meaning the platelet geometry and system morphology (geometrical arrangement and orientation distribution of platelets), define the effective mechanical properties of a platelet-molded composite system: its stiffness, strength, and variability in properties.

  12. Risk measures for power failures in transmission systems

    NASA Astrophysics Data System (ADS)

    Cassidy, Alex; Feinstein, Zachary; Nehorai, Arye

    2016-11-01

    We present a novel framework for evaluating the risk of failures in power transmission systems. We use the concept of systemic risk measures from the financial mathematics literature with models of power system failures in order to quantify the risk of the entire power system for design and comparative purposes. The proposed risk measures provide the collection of capacity vectors for the components in the system that lead to acceptable outcomes. Keys to the formulation of our measures of risk are two elements: a model of system behavior that provides the (distribution of) outcomes based on component capacities and an acceptability criterion that determines whether a (random) outcome is acceptable from an aggregated point of view. We examine the effects of altering the line capacities on energy not served under a variety of networks, flow manipulation methods, load shedding schemes, and load profiles using Monte Carlo simulations. Our results provide a quantitative comparison of the performance of these schemes, measured by the required line capacity. These results provide more complete descriptions of the risks of power failures than the previous, one-dimensional metrics.
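
    The acceptability test at the heart of such a risk measure can be sketched with a one-node stand-in for the grid: Monte Carlo the random load and accept a capacity vector if the probability of unserved energy stays below a limit. The load distribution, limits, and the reduction of the network to total capacity versus total load are invented simplifications of the paper's power-flow and load-shedding models.

        # Sketch of the acceptability criterion behind a systemic risk measure:
        # a capacity vector is acceptable if P(energy not served > 0) is small.
        # The "grid" is reduced to total capacity vs. random aggregate load,
        # an invented stand-in for the paper's network flow models.
        import random

        def acceptable(capacities, trials=20_000, prob_limit=0.05, seed=3):
            rng = random.Random(seed)
            cap = sum(capacities)
            shortfalls = sum(1 for _ in range(trials)
                             if rng.gauss(8.0, 1.5) > cap)   # load exceeds capacity
            return shortfalls / trials <= prob_limit

        print(acceptable([3.0, 3.0, 3.0]))  # total 9:  ~25% shortfall -> False
        print(acceptable([4.0, 3.0, 3.0]))  # total 10: ~9% shortfall  -> False
        print(acceptable([4.0, 4.0, 3.0]))  # total 11: ~2% shortfall  -> True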

  13. Prognostics using Engineering and Environmental Parameters as Applied to State of Health (SOH) Radionuclide Aerosol Sampler Analyzer (RASA) Real-Time Monitoring

    NASA Astrophysics Data System (ADS)

    Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.

    2006-05-01

    The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate them, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires IMS radionuclide systems to be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring the RASA and predicting its failures two (2) days into the future. The predictions, using time series methods, are input to an expert decision system called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79). Future work shall include the incorporation of other RASA systems, as well as the environmental conditions that play a significant role in performance. Similarly, SHADES currently accommodates specific component behaviors at this one site. Future work shall also include important environmental variables that form an important part of the prediction algorithms.

  14. Fundamental Technology Development for Gas-Turbine Engine Health Management

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Simon, Donald L.; Hunter, Gary W.; Arnold, Steven M.; Reveley, Mary S.; Anderson, Lynn M.

    2007-01-01

    Integrated vehicle health management technologies promise to dramatically improve the safety of commercial aircraft by reducing system and component failures as causal and contributing factors in aircraft accidents. To realize this promise, fundamental technology development is needed to produce reliable health management components. These components include diagnostic and prognostic algorithms, physics-based and data-driven lifing and failure models, sensors, and a sensor infrastructure including wireless communications, power scavenging, and electronics. In addition, system assessment methods are needed to effectively prioritize development efforts. Development work is needed throughout the vehicle, but particular challenges are presented by the hot, rotating environment of the propulsion system. This presentation describes current work in the field of health management technologies for propulsion systems for commercial aviation.

  15. Performance-based maintenance of gas turbines for reliable control of degraded power systems

    NASA Astrophysics Data System (ADS)

    Mo, Huadong; Sansavini, Giovanni; Xie, Min

    2018-03-01

    Maintenance actions are necessary for ensuring the proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model for control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds the preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs as compared to CBM and pre-scheduled maintenance.
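
    The maintenance logic can be prototyped directly on simulated Wiener degradation paths: inspect periodically, preventively replace when the path exceeds the PM threshold, and charge the higher corrective cost when it crosses the failure threshold between inspections. The drift, thresholds, and costs below are assumptions for illustration, and the sketch monitors the degradation itself rather than the paper's control-performance measure.

        # Monte Carlo sketch of condition-based maintenance on a Wiener-process
        # degradation path.  Drift, variance, thresholds, and costs are assumed
        # placeholders, not the paper's calibrated values.
        import random

        DRIFT, SIGMA, DT = 0.02, 0.05, 1.0   # Wiener parameters per time step
        PM_LIMIT, FAIL_LIMIT = 1.0, 1.5      # preventive / failure thresholds
        INSPECT_EVERY = 10                   # steps between inspections
        C_PM, C_FAIL = 1.0, 10.0             # relative maintenance costs

        def run_once(rng, horizon=500):
            x, cost = 0.0, 0.0
            for step in range(1, horizon + 1):
                x += DRIFT * DT + SIGMA * rng.gauss(0, 1) * DT ** 0.5
                if x >= FAIL_LIMIT:          # failure between inspections
                    cost += C_FAIL
                    x = 0.0                  # corrective replacement
                elif step % INSPECT_EVERY == 0 and x >= PM_LIMIT:
                    cost += C_PM
                    x = 0.0                  # preventive replacement
            return cost

        rng = random.Random(42)
        avg = sum(run_once(rng) for _ in range(2000)) / 2000
        print(f"expected maintenance cost over horizon: {avg:.2f}")

    Sweeping PM_LIMIT and INSPECT_EVERY in this toy reproduces the qualitative trade-off the abstract describes: tighter thresholds and more frequent inspections buy fewer failures at the price of more preventive replacements.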

  16. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  17. Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)

    1999-01-01

    Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.

  18. Fuel Cell Balance-of-Plant Reliability Testbed Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sproat, Vern; LaHurd, Debbie

    Reliability of the fuel cell system balance-of-plant (BoP) components is a critical factor that needs to be addressed prior to fuel cells becoming fully commercialized. Failure or performance degradation of BoP components has been identified as a life-limiting factor in fuel cell systems. The goal of this project is to develop a series of test beds that will test system components such as pumps, valves, sensors, fittings, etc., under operating conditions anticipated in real Polymer Electrolyte Membrane (PEM) fuel cell systems. Results will be made generally available to begin removing reliability as a roadblock to the growth of the PEM fuel cell industry. Stark State College students participating in the project, in conjunction with their coursework, have been exposed to technical knowledge and training in the handling and maintenance of hydrogen, fuel cells and system components as well as component failure modes and mechanisms. Three test beds were constructed. Testing was completed on gas flow pumps, tubing, and pressure and temperature sensors and valves.

  19. An Automated Tool for Supporting FMEAs of Digital Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue,M.; Chu, T.-L.; Martinez-Guridi, G.

    2008-09-07

    Although designs of digital systems can be very different from each other, they typically use many of the same types of generic digital components. Determining the impacts of the failure modes of these generic components on a digital system can be used to support development of a reliability model of the system. A novel approach was proposed for such a purpose by decomposing the system into a level of the generic digital components and propagating failure modes to the system level, which generally is time-consuming and difficult to implement. To overcome the associated issues of implementing the proposed FMEA approach, an automated tool for a digital feedwater control system (DFWCS) has been developed in this study. The automated FMEA tool is in essence a simulation platform developed by using or recreating the original source code of the different software modules, interfaced by the input and output variables that represent physical signals exchanged between modules, the system, and the controlled process. For any given failure mode, its impacts on associated signals are determined first, and the variables that correspond to these signals are modified accordingly by the simulation. Criteria are also developed, as part of the simulation platform, to determine whether the system has lost its automatic control function, which is defined as a system failure in this study. The conceptual development of the automated FMEA support tool can be generalized and applied to support FMEAs for reliability assessment of complex digital systems.
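
    A minimal sketch of the signal-injection idea behind such a platform, using a toy proportional controller; the failure modes, the failure criterion, and all names are hypothetical rather than taken from the DFWCS tool.

      def run_step(setpoint, sensor_reading, state):
          """One control step: proportional correction toward the setpoint."""
          state["output"] += 0.5 * (setpoint - sensor_reading)
          return state

      FAILURE_MODES = {
          "NO_FAULT": lambda true_value: true_value,
          "SENSOR_STUCK_LOW": lambda true_value: 0.0,     # signal overridden to zero
          "SENSOR_STUCK_HIGH": lambda true_value: 100.0,  # signal pegged high
      }

      def system_failed(state, setpoint, tolerance=20.0):
          """Failure criterion: loss of automatic control of the process value."""
          return abs(state["output"] - setpoint) > tolerance

      for mode, corrupt in FAILURE_MODES.items():
          state = {"output": 50.0}
          for _ in range(30):
              reading = corrupt(state["output"])  # failure mode modifies the signal
              state = run_step(50.0, reading, state)
          print(mode, "-> system failure" if system_failed(state, 50.0) else "-> tolerated")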

  20. Product component genealogy modeling and field-failure prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Caleb; Hong, Yili; Meeker, William Q.

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  1. Product component genealogy modeling and field-failure prediction

    DOE PAGES

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
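
    A minimal sketch of why part-number genealogy matters for field-failure prediction, assuming Weibull lifetimes that differ by generation; the parameters and counts are invented, and this is not the authors' estimation procedure.

      import numpy as np

      # Assumed Weibull (shape, scale-in-days) lifetimes per part-number generation
      GENERATIONS = {"gen1": (1.5, 800.0), "gen2": (1.5, 1200.0)}  # gen2: improved design

      def expected_failure_fraction(install_counts, horizon=365.0):
          """Expected fraction of fielded units failing within the horizon,
          accounting for which generation of the part each unit carries."""
          total = sum(install_counts.values())
          expected = 0.0
          for gen, n in install_counts.items():
              shape, scale = GENERATIONS[gen]
              expected += n * (1.0 - np.exp(-(horizon / scale) ** shape))  # Weibull CDF
          return expected / total

      # Pooling every unit as old stock vs. using the genealogy information:
      print("pooled   :", round(expected_failure_fraction({"gen1": 1000, "gen2": 0}), 3))
      print("genealogy:", round(expected_failure_fraction({"gen1": 400, "gen2": 600}), 3))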

  2. EEMD-based wind turbine bearing failure detection using the generator stator current homopolar component

    NASA Astrophysics Data System (ADS)

    Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed

    2013-12-01

    Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current, and attempts to highlight the use of the ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.
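
    A minimal sketch of the processing chain suggested above: form the homopolar (zero-sequence) component of the stator currents, then decompose it with ensemble empirical mode decomposition. The signal model is synthetic, and the PyEMD package is assumed as one available EEMD implementation.

      import numpy as np
      from PyEMD import EEMD  # pip install EMD-signal; assumed EEMD implementation

      def homopolar(ia, ib, ic):
          """Zero-sequence (homopolar) component of the three phase currents."""
          return (ia + ib + ic) / 3.0

      fs = 1000.0
      t = np.arange(0, 2.0, 1.0 / fs)
      # Balanced 50 Hz phases plus a small common-mode bearing-defect signature
      fault = 0.05 * np.sin(2 * np.pi * 87.0 * t)   # assumed defect frequency
      ia = np.sin(2 * np.pi * 50 * t) + fault
      ib = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3) + fault
      ic = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3) + fault

      h = homopolar(ia, ib, ic)        # the balanced 50 Hz terms cancel out
      imfs = EEMD().eemd(h)            # intrinsic mode functions of the residue
      print(len(imfs), "IMFs; inspect their spectra around the defect frequency")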

  3. An application of artificial intelligence theory to reconfigurable flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.

    1987-01-01

    Artificial intelligence techniques were used, along with statistical hypothesis testing and modern control theory, to help the pilot cope with the issues of information, knowledge, and capability in the event of a failure. An intelligent flight control system is being developed which utilizes knowledge of cause and effect relationships between all aircraft components. It will screen the information available to the pilot, supplement his knowledge, and most importantly, utilize the remaining flight capability of the aircraft following a failure. The list of failure types the control system will accommodate includes sensor failures, actuator failures, and structural failures.

  4. Modelling Wind Turbine Failures based on Weather Conditions

    NASA Astrophysics Data System (ADS)

    Reder, Maik; Melero, Julio J.

    2017-11-01

    A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches, these costs can be decreased significantly. Here, wind turbine (WT) failure models in particular can help to understand the components’ degradation processes and enable operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, recent research shows that the on-site weather conditions also affect the turbine failure behaviour significantly. This study presents a novel approach to model WT failures based on the environmental conditions to which the turbines are exposed. The results focus on general WT failures, as well as on four main components: the gearbox, generator, pitch system, and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
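
    A minimal sketch of weather-driven failure-count modelling with a penalised likelihood, on synthetic data; sklearn's PoissonRegressor (an L2 penalty) stands in for the paper's penalised estimation, and all covariates and coefficients are invented.

      import numpy as np
      from sklearn.linear_model import PoissonRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      n = 500
      # Hypothetical monthly site conditions: wind speed, temperature, humidity
      X = np.column_stack([rng.normal(8, 2, n), rng.normal(10, 8, n), rng.uniform(40, 95, n)])
      true_rate = np.exp(-2.0 + 0.15 * X[:, 0] + 0.01 * X[:, 2])  # wind and humidity matter
      y = rng.poisson(true_rate)                                   # observed failure counts

      model = PoissonRegressor(alpha=1.0)  # penalty strength; tune in practice
      model.fit(StandardScaler().fit_transform(X), y)
      print(dict(zip(["wind", "temp", "humidity"], model.coef_.round(3))))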

  5. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1982-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.

  6. Number and placement of control system components considering possible failures. [for large space structures

    NASA Technical Reports Server (NTRS)

    Vander Velde, W. E.; Carignan, C. R.

    1984-01-01

    One of the first questions facing the designer of the control system for a large space structure is how many components - actuators and sensors - to specify and where to place them on the structure. This paper presents a methodology which is intended to assist the designer in making these choices. A measure of controllability is defined which is a quantitative indication of how well the system can be controlled with a given set of actuators. Similarly, a measure of observability is defined which is a quantitative indication of how well the system can be observed with a given set of sensors. Then the effect of component unreliability is introduced by computing the average expected degree of controllability (observability) over the operating lifetime of the system, accounting for the likelihood of various combinations of component failures. The problem of component location is resolved by optimizing this performance measure over the admissible set of locations. The variation of this optimized performance measure with the number of actuators (sensors) is helpful in deciding how many components to use.
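
    A minimal sketch of an expected degree-of-controllability computation, assuming a toy two-state structural mode, a Gramian-trace controllability measure, and independent actuator failures; these are illustrative stand-ins for the paper's definitions.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov
      from itertools import combinations

      A = np.array([[0.0, 1.0], [-4.0, -0.1]])  # one lightly damped structural mode

      def degree_of_controllability(B):
          if B.size == 0:
              return 0.0
          # Controllability Gramian W solves A W + W A^T + B B^T = 0
          W = solve_continuous_lyapunov(A, -B @ B.T)
          return np.trace(W)

      def expected_doc(B_full, p_fail):
          """Average the measure over all actuator-failure states, weighted by probability."""
          m = B_full.shape[1]
          total = 0.0
          for k in range(m + 1):
              for alive in combinations(range(m), k):
                  prob = p_fail ** (m - k) * (1 - p_fail) ** k
                  total += prob * degree_of_controllability(B_full[:, list(alive)])
          return total

      B = np.array([[0.0, 0.0], [1.0, 0.7]])  # two candidate actuator placements
      print(expected_doc(B, p_fail=0.1))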

  7. Modular space vehicle boards, control software, reprogramming, and failure recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin

    A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.

  8. Generic Sensor Failure Modeling for Cooperative Systems.

    PubMed

    Jäger, Georg; Zug, Sebastian; Casimiro, António

    2018-03-20

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data of a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.

  9. Dynamic Considerations for Control of Closed Life Support Systems

    NASA Technical Reports Server (NTRS)

    Babcock, P. S.; Auslander, D. M.; Spear, R. C.

    1985-01-01

    The reliability of closed life support systems depends on their ability to continue supplying the crew's needs during perturbations and equipment failures. The dynamic considerations interact with the basic static design through the sizing of storages, the specification of excess capacities in processors, and the choice of system initial state. A very simple system flow model was used to examine the possibilities for system failures even when there is sufficient storage to buffer the immediate effects of the perturbation. Two control schemes are shown which have different dynamic consequences in response to component failures.

  10. Defense strategies for cloud computing multi-site server infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels using: (a) aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.

  11. CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.

    1987-01-01

    To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focusing the level of component description on operating modes and failure modes, and by specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
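
    A minimal sketch of a component modeled by operating modes, a failure mode, and qualitative variable ranges, in the spirit of the state transition diagrams described above; the valve model is invented, not taken from CONFIG's libraries.

      class Valve:
          # (mode, event) -> next mode; unlisted pairs leave the mode unchanged
          TRANSITIONS = {
              ("open",   "cmd_close"): "closed",
              ("closed", "cmd_open"):  "open",
              ("open",   "overpressure"): "stuck_open",   # failure mode
          }

          def __init__(self):
              self.mode = "closed"

          def event(self, name):
              self.mode = self.TRANSITIONS.get((self.mode, name), self.mode)

          def qualitative_flow(self, upstream):
              """Flow is only qualitative: 'zero' or the upstream range."""
              return upstream if self.mode in ("open", "stuck_open") else "zero"

      v = Valve()
      for ev in ("cmd_open", "overpressure", "cmd_close"):
          v.event(ev)   # the stuck-open failure ignores the later close command
          print(ev, "->", v.mode, "| flow:", v.qualitative_flow("high"))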

  12. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.

  13. On defense strategies for system of systems using aggregated correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.

    2017-04-01

    We consider a System of Systems (SoS) wherein each system Si, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of the SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
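
    A minimal sketch of such an attacker/provider game on a single system, with an invented product-form survival model, linear costs, and a brute-force search for pure-strategy Nash equilibria; none of the specifics follow the paper's derivation.

      N = 10                       # discrete components in the system
      S_WEAK, S_HARD = 0.5, 0.9    # per-attack survival: unreinforced vs reinforced

      def survival(x, y):
          """Product-form survival; the attacker targets unreinforced components first."""
          weak_hits = min(x, N - y)
          hard_hits = max(0, x - (N - y))
          return S_WEAK ** weak_hits * S_HARD ** hard_hits

      def u_attacker(x, y, c_a=0.05):
          return (1.0 - survival(x, y)) - c_a * x   # benefit of failure minus attack cost

      def u_provider(x, y, c_d=0.03):
          return survival(x, y) - c_d * y           # survival minus reinforcement cost

      def pure_nash():
          for x in range(N + 1):
              for y in range(N + 1):
                  if u_attacker(x, y) >= max(u_attacker(a, y) for a in range(N + 1)) and \
                     u_provider(x, y) >= max(u_provider(x, d) for d in range(N + 1)):
                      yield x, y

      print(list(pure_nash()))   # equilibrium counts of components attacked/reinforced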

  14. Diagnostic reasoning techniques for selective monitoring

    NASA Technical Reports Server (NTRS)

    Homem-De-mello, L. S.; Doyle, R. J.

    1991-01-01

    An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
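
    A minimal sketch of the Bayes-rule revision step, assuming a single-failure hypothesis space and a hypothetical two-component detection model; all probabilities are invented.

      # A priori component failure probabilities (hypothetical)
      PRIOR = {"pump": 0.02, "valve": 0.05}

      # P(assertion violated | component failed) -- hypothetical detection model
      LIKELIHOOD = {
          "low_flow": {"pump": 0.9, "valve": 0.6},
          "high_pressure": {"pump": 0.1, "valve": 0.8},
      }

      def posterior(violation):
          """Revised failure probabilities after one assertion is found violated,
          normalizing over the single-failure hypotheses only."""
          like = LIKELIHOOD[violation]
          evidence = sum(like[c] * PRIOR[c] for c in PRIOR)  # total probability
          return {c: like[c] * PRIOR[c] / evidence for c in PRIOR}

      print(posterior("low_flow"))  # ranks which component most likely failed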

  15. Component Repair Times Obtained from MSPI Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eide, Steven A.; Cadwallader, Lee

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train unavailability (UA) and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a “white” (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant’s insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs. Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of interest, only unplanned outages associated with failures of the component of interest, and only unplanned outages associated with PRA failures of the component of interest. The paper describes how component repair times can be generated from each set and which approach is most applicable. Repair time information is summarized for MSPI pumps and diesel generators using data over 2003–2007. Also, trend information over 2003–2012 is presented to indicate whether the 2003–2007 repair time information is still considered applicable. For certain types of pumps, mean repair times are significantly higher than the typically assumed 24 h duration.
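
    A minimal sketch of the summary statistics involved, computed from a hypothetical sample of unplanned-outage durations rather than actual MSPI data.

      import numpy as np

      # Hypothetical repair durations for one component type, in hours
      repair_hours = np.array([4.0, 9.5, 12.0, 18.0, 26.0, 30.0, 41.0, 55.0, 120.0])

      print("median:", np.median(repair_hours), "h;  mean:", round(repair_hours.mean(), 1), "h")

      # Non-restoration curve: probability the repair is still ongoing at time t
      for t in (8, 24, 72):
          print(f"P(not restored within {t} h) = {(repair_hours > t).mean():.2f}")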

  16. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreases by 38.2% over each 30-day period. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both the maintenance of power system models and preventive maintenance models.
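
    A minimal sketch of fitting a Dagum time-between-failures model; scipy's burr distribution (Burr Type III) has the Dagum CDF (1 + (t/scale)**(-c))**(-d), and the sample below is synthetic rather than the station data from the paper.

      import numpy as np
      from scipy.stats import burr

      rng = np.random.default_rng(3)
      tbf = burr.rvs(c=2.5, d=0.8, scale=30.0, size=200, random_state=rng)  # toy TBF, days

      c, d, loc, scale = burr.fit(tbf, floc=0)   # fix the location at zero
      reliability_30d = burr.sf(30.0, c, d, loc=loc, scale=scale)
      print(f"fitted c={c:.2f}, d={d:.2f}, scale={scale:.1f}; R(30 days)={reliability_30d:.3f}")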

  17. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreases by 38.2% over each 30-day period. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both the maintenance of power system models and preventive maintenance models. PMID:23936346

  18. Guest Editor's Introduction: Special section on dependable distributed systems

    NASA Astrophysics Data System (ADS)

    Fetzer, Christof

    1999-09-01

    We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money, not only with respect to the business lost during the outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall in the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware is becoming more reliable, hardware-related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer, since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time-consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components.
However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, leader election, and membership detection problems are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel bridge this difference by showing how to implement the group membership detection problem with a negligible probability of failure [1] in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section, Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system.
References: [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34 (2) 56-78. [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI. [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer).
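
    As a minimal illustration of the (L)/(S) trade-off Raynal and Tronel address, a heartbeat timeout can be tuned so that the probability of falsely suspecting a live process, an (S) violation, becomes negligible; the exponential delay model and parameters below are assumptions, not their protocol.

      import math

      def timeout_for_target(p_violation, mean_delay):
          """Timeout T with P(heartbeat arrives later than T) <= p_violation,
          assuming exponentially distributed transmission delays."""
          return -mean_delay * math.log(p_violation)

      mean_delay = 0.05   # seconds; assumed mean network delay
      for p in (1e-3, 1e-6, 1e-9):
          print(f"target P(false suspicion)={p:g}: "
                f"timeout {timeout_for_target(p, mean_delay):.2f} s")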

  19. Failure Prevention of Hydraulic System Based on Oil Contamination

    NASA Astrophysics Data System (ADS)

    Singh, M.; Lathkar, G. S.; Basu, S. K.

    2012-07-01

    Oil contamination is the major source of failure and wear of hydraulic system components. As per the literature survey, approximately 70% of hydraulic system failures are caused by oil contamination. Hence, to operate a hydraulic system reliably, the hydraulic oil should be in perfect condition. This requires a proper `Contamination Management System' which involves monitoring various parameters such as oil viscosity, oil temperature, and contamination level. A study has been carried out on a vehicle-mounted hydraulically operated system used for articulation of a heavy article, after making the platform level with outrigger cylinders. It is observed that proper monitoring of the contamination level yields a considerable increase in reliability, economy of operation, and service life. This also prevents frequent failures of the hydraulic system.

  20. Improvements in High Speed, High Resolution Dynamic Digital Image Correlation for Experimental Evaluation of Composite Drive System Components

    NASA Technical Reports Server (NTRS)

    Kohlman, Lee W.; Ruggeri, Charles R.; Roberts, Gary D.; Handschuh, Robert Frederick

    2013-01-01

    Composite materials have the potential to reduce the weight of rotating drive system components. However, these components are more complex to design and evaluate than static structural components in part because of limited ability to acquire deformation and failure initiation data during dynamic tests. Digital image correlation (DIC) methods have been developed to provide precise measurements of deformation and failure initiation for material test coupons and for structures under quasi-static loading. Attempts to use the same methods for rotating components (presented at the AHS International 68th Annual Forum in 2012) are limited by high speed camera resolution, image blur, and heating of the structure by high intensity lighting. Several improvements have been made to the system resulting in higher spatial resolution, decreased image noise, and elimination of heating effects. These improvements include the use of a high intensity synchronous microsecond pulsed LED lighting system, different lenses, and changes in camera configuration. With these improvements, deformation measurements can be made during rotating component tests with resolution comparable to that which can be achieved in static tests.

  1. Improvements in High Speed, High Resolution Dynamic Digital Image Correlation for Experimental Evaluation of Composite Drive System Components

    NASA Technical Reports Server (NTRS)

    Kohlman, Lee; Ruggeri, Charles; Roberts, Gary; Handschuh, Robert

    2013-01-01

    Composite materials have the potential to reduce the weight of rotating drive system components. However, these components are more complex to design and evaluate than static structural components in part because of limited ability to acquire deformation and failure initiation data during dynamic tests. Digital image correlation (DIC) methods have been developed to provide precise measurements of deformation and failure initiation for material test coupons and for structures under quasi-static loading. Attempts to use the same methods for rotating components (presented at the AHS International 68th Annual Forum in 2012) are limited by high speed camera resolution, image blur, and heating of the structure by high intensity lighting. Several improvements have been made to the system resulting in higher spatial resolution, decreased image noise, and elimination of heating effects. These improvements include the use of a high intensity synchronous microsecond pulsed LED lighting system, different lenses, and changes in camera configuration. With these improvements, deformation measurements can be made during rotating component tests with resolution comparable to that which can be achieved in static tests.

  2. System safety in Stirling engine development

    NASA Technical Reports Server (NTRS)

    Bankaitis, H.

    1981-01-01

    The DOE/NASA Stirling Engine Project Office has required that contractors make safety considerations an integral part of all phases of the Stirling engine development program. As an integral part of each engine design subtask, analyses are evolved to determine possible modes of failure. The accepted system safety analysis techniques (Fault Tree, FMEA, Hazards Analysis, etc.) are applied in various degrees of extent at the system, subsystem and component levels. The primary objectives are to identify critical failure areas, to enable removal of susceptibility to such failures or their effects from the system and to minimize risk.

  3. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    NASA Astrophysics Data System (ADS)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-06-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software tools. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed in the component for the reduction and removal of rib material to change the center of gravity and centroid point, using SystemC for mixed-level simulation and faster topological changes. The design process in SystemC can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a slot of 120 × 4.75 × 32.5 mm at the center, showed greater effectiveness than the original component.

  4. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    NASA Astrophysics Data System (ADS)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-02-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software tools. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed in the component for the reduction and removal of rib material to change the center of gravity and centroid point, using SystemC for mixed-level simulation and faster topological changes. The design process in SystemC can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a slot of 120 × 4.75 × 32.5 mm at the center, showed greater effectiveness than the original component.

  5. 24 CFR 572.125 - Replacement reserves.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... drawn down under the Cash and Management Information System when specifically needed to assist a... prevent severe financial hardship to families caused by the failure of a major system or component of the... families; and (3) The condition and age of the properties and each of their major systems and components...

  6. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1983-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433

  7. Extended Aging Theories for Predictions of Safe Operational Life of Critical Airborne Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Chen, Tony

    2006-01-01

    The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.

  8. Failure Behavior of Elbows with Local Wall Thinning

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Ho; Lee, Jeong-Keun; Park, Jai-Hak

    Wall thinning due to corrosion is one of the major aging phenomena in carbon steel pipes in most plant industries, and it reduces the load-carrying capacity of the piping components. A failure test system was set up for real-scale elbows containing various simulated wall-thinning defects, and monotonic in-plane bending tests were performed under internal pressure to investigate their failure behavior. The failure behavior of wall-thinned elbows was characterized by the circumferential angle of the thinned region and the loading conditions on the piping system.

  9. Generic Sensor Failure Modeling for Cooperative Systems

    PubMed Central

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application’s fault tolerance and thereby promises maintainability of such a system’s safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data of a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435

  10. 40 CFR 86.1803-01 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operator prior to procurement. Auxiliary Emission Control Device (AECD) means any element of design which... components are those components which are designed primarily for emission control, or whose failure may... of design means any control system (i.e., computer software, electronic control system, emission...

  11. 40 CFR 86.1803-01 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... prior to procurement. Auxiliary Emission Control Device (AECD) means any element of design which senses... components are those components which are designed primarily for emission control, or whose failure may... of design means any control system (i.e., computer software, electronic control system, emission...

  12. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value belongs to non-routine components. Maintenance components are components procured based on maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by reevaluating the components required by maintenance activities. The roller mill system was chosen as the case because it has the highest unscheduled downtime record. The components required for each maintenance activity are determined from the component failure distributions, so the number of components needed can be predicted. Moreover, those components can be reclassified from non-routine to routine components, so that procurement can be carried out regularly. Based on the analysis conducted, the failures addressed by almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components. The reliability and required number of those components were then calculated for a one-year operation period. Based on these findings, it is suggested to replace all of the relevant components during overhaul to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by maintenance activities, so that the procurement value decreases and system reliability increases.
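
    A minimal sketch of turning a component failure distribution into a routine stock level, assuming exponential component lifetimes and Poisson spares demand; the MTBF, fleet size, and service level below are illustrative, not taken from the paper.

      from scipy.stats import poisson

      def spares_needed(mtbf_hours, n_installed, horizon_hours, service_level=0.95):
          """Smallest stock s with P(demand <= s) >= service_level over the horizon."""
          expected_demand = n_installed * horizon_hours / mtbf_hours
          return int(poisson.ppf(service_level, expected_demand))

      # e.g. 4 identical roller-mill bearings, MTBF 6000 h, one year of operation
      print(spares_needed(mtbf_hours=6000, n_installed=4, horizon_hours=8000))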

  13. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a technique that is system function-based, for improving a preventive maintenance (PM) program, which is applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period compared with a PM period and, also, compared with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement compared with the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.

  14. Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR≤1H), failure to run more than one hour (FTR>1H), and normally running systems FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.
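
    A minimal sketch of the kind of demand-based reliability estimate such studies report, using a Jeffreys Beta posterior for the failure-to-start probability; the counts are invented, and this is offered as a typical method rather than the report's exact procedure.

      from scipy.stats import beta

      failures, demands = 3, 450   # hypothetical FTS events and start demands

      # Jeffreys posterior: Beta(f + 0.5, d - f + 0.5)
      post = beta(failures + 0.5, demands - failures + 0.5)
      print(f"mean p(FTS) = {post.mean():.2e}")
      print(f"90% interval: ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e})")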

  15. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.

  16. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  17. Wound Trauma Mediated Inflammatory Signaling Attenuates a Tissue Regenerative Response in MRL/MpJ Mice

    DTIC Science & Technology

    2010-01-01

    multi-system organ failure, and remote organ injury at sites such as the lung, liver, small intestines, and brain, representing major causes of...inflammatory components. The development of systemic inflammation following severe thermal injury has been implicated in immune dysfunction, delayed wound...healing, multi-system organ failure and increased mortality. Methods: In this study, we examined the impact of thermal injury-induced systemic

  18. 40 CFR 86.1803-01 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procurement. Auxiliary Emission Control Device (AECD) means any element of design which senses temperature... components are those components which are designed primarily for emission control, or whose failure may... system as a means of providing electrical energy. Element of design means any control system (i.e...

  19. Cause and Effects of Fluorocarbon Degradation in Electronics and Opto-Electronic Systems

    NASA Technical Reports Server (NTRS)

    Predmore, Roamer E.; Canham, John S.

    2002-01-01

    Trace degradation of fluorocarbon or halocarbon materials must be addressed in their application in sensitive systems. As the dimensions and/or tolerances of components in a system decrease, the sensitivity of the system to trace fluorocarbon or halocarbon degradation products increases. Trace quantities of highly reactive degradation products from fluorocarbons have caused a number of failures of flight hardware. It is of utmost importance that the risk of system failure resulting from trace amounts of reactive fluorocarbon degradation products be addressed in designs containing fluorocarbon or halocarbon materials. Thermal, electrical, and mechanical energy input into the system can multiply the risk of failure.

  20. Vibration detection of component health and operability

    NASA Technical Reports Server (NTRS)

    Baird, B. C.

    1975-01-01

    In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance in the shuttle orbiter program environmental control system components, some means of detecting incipient failure in these components is required. The utilization was investigated of vibrational/acoustic phenomena as one of the principal physical parameters on which to base the design of this instrumentation. Baseline vibration/acoustic data was collected from three aircraft type fans and two aircraft type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high frequency bandpass acoustic signal, and amplitude distribution of the high frequency bandpass acoustic signal. A total of eight bearing defects and two unbalancings was introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.

  1. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(lambda/2) = 2/lambda. Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
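
    The arithmetic above is easy to tabulate. The sketch below, with hypothetical failure-mode rates (each half the previous one), shows how the cumulative test time doubles for every halving of the residual failure rate, under the paper's strong assumption that each observed failure cause is removed:

    ```python
    # Illustrative reliability-growth bookkeeping; rates are invented.
    def time_to_observe(rate_per_hour: float) -> float:
        """Expected operating hours before a failure mode with the given
        rate occurs once and can then be diagnosed and removed (1/lambda)."""
        return 1.0 / rate_per_hour

    rates = [1e-2, 5e-3, 2.5e-3, 1.25e-3]  # correctable modes, failures/hour

    total_hours = 0.0
    residual = sum(rates)
    for r in rates:
        total_hours += time_to_observe(r)  # test time to see and fix this mode
        residual -= r
        print(f"after {total_hours:8.0f} test hours, "
              f"residual rate = {residual:.2e}/h")
    ```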

  2. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  3. Acoustic emissions (AE) monitoring of large-scale composite bridge components

    NASA Astrophysics Data System (ADS)

    Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.

    2008-03-01

    Acoustic Emissions (AE) monitoring has been successfully used with composite structures both to locate damage and to give a measure of damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite-to-metallic bonded/bolted joint. The bonded joints consist of double-lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system is used to monitor the bridge component during failure loading to assess the failure progression, using time of arrival to give insight into the origins of the failures. Also, a feature in the AE data called Cumulative Acoustic Emission (CAE) counts is used to give an estimate of the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data is used to interpret the source and location of the damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results. A summary of lessons learned for AE testing of large composite structures, as well as insight into failure progression and location, is presented.

  4. Failure detection and recovery in the assembly/contingency subsystem

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1993-01-01

    The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system, or in the external devices through which it communicates with ground-based systems, will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements, as outlined in the current ACS software requirements specification document, are reviewed. The activities carried out in this review include: (1) an informal, but thorough, end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design, and the specifications themselves, in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) will initiate recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.

  5. Independent Orbiter Assessment (IOA): Analysis of the active thermal control subsystem

    NASA Technical Reports Server (NTRS)

    Sinclair, S. K.; Parkman, W. E.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Active Thermal Control Subsystem (ATCS) are documented. The major purpose of the ATCS is to remove the heat generated during normal Shuttle operations from the Orbiter systems and subsystems. The four major components of the ATCS contributing to heat removal are: Freon Coolant Loops; Radiator and Flow Control Assembly; Flash Evaporator System; and Ammonia Boiler System. In order to perform the analysis, the IOA process utilized available ATCS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 310 failure modes analyzed, 101 were determined to be PCIs.

  6. Vapor Compression Distillation Subsystem (VCDS) Component Enhancement, Testing and Expert Fault Diagnostics Development, Volume 2

    NASA Technical Reports Server (NTRS)

    Mallinak, E. S.

    1987-01-01

    A wide variety of Space Station functions will be managed via computerized controls. Many of these functions are at the same time very complex and very critical to the operation of the Space Station. The Environmental Control and Life Support System is one group of very complex and critical subsystems which directly affects the ability of the crew to perform their mission. Failures of the Environmental Control and Life Support Subsystems are to be avoided and, in the event of failure, repair must be effected as rapidly as possible. Due to the complex and diverse nature of the subsystems, it is not possible to train the Space Station crew to be experts in the operation of all of the subsystems. By applying the concepts of computer-based expert systems, it may be possible to provide the necessary expertise for these subsystems in dedicated controllers. In this way, an expert system could avoid failures and extend the operating time of the subsystems even in the event of failure of some components, and could reduce the time to repair by being able to pinpoint the cause of a failure when one cannot be avoided.

  7. An approximation formula for a class of fault-tolerant computers

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1986-01-01

    An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
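
    As an illustration of the kind of model the paper approximates (not the paper's formula; all rates and the coverage factor are assumed), a minimal three-state Markov model of a duplex system with reconfiguration, evaluated by matrix exponential:

    ```python
    # States: 0 = both units good, 1 = one unit failed and the system
    # reconfigured, 2 = system failed (absorbing). Rates are hypothetical.
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4  # per-unit failure rate, 1/h (assumed)
    c = 0.999   # probability a fault is detected and reconfigured (assumed)

    # Generator matrix: row i holds the rates out of state i (rows sum to 0).
    Q = np.array([
        [-2 * lam, 2 * lam * c, 2 * lam * (1 - c)],  # covered vs. uncovered
        [0.0, -lam, lam],                            # second fault is fatal
        [0.0, 0.0, 0.0],                             # absorbing failure state
    ])

    t = 10.0                       # mission time, hours
    p0 = np.array([1.0, 0.0, 0.0])
    p_t = p0 @ expm(Q * t)         # state probabilities at time t
    print(f"P(system failure by t={t} h) = {p_t[2]:.3e}")
    ```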

  8. Model-OA wind turbine generator - Failure modes and effects analysis

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lali, Vincent R.

    1990-01-01

    The results of a failure modes and effects analysis (FMEA) conducted for wind-turbine generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D (remote). Many changes were made to the hardware as a result of this analysis, the most significant being the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.

  9. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter, prevent the escape of emissions exceeding permissible limits, and extend component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  10. Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines

    NASA Astrophysics Data System (ADS)

    Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath

    2017-04-01

    Cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users and applications. These transfer lines are composed of various critical sections, i.e., tee sections, elbows, flexible components, etc. The mechanical sustainability (under failure circumstances) of these transfer lines is a primary requirement for safe operation of the system and applications. The transfer lines need to be designed for multiple design constraints, such as line layout, support locations, and space restrictions. The transfer lines are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, and loads from a leak in the insulation vacuum [1]. Analytical calculations and flexibility analysis using professional software were performed for a typical transfer line without any flexible components, and the results were analysed for functional and mechanical load conditions. The failure modes were identified along the critical sections. The same transfer line was then refurbished with the flexible components and analysed for failure modes. The flexible components provide additional flexibility to the transfer line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility analysis software. The optimization of the flexible components' size and selection was performed, and components were selected to meet the design requirements per code.

  11. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
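
    The core of an accrual detector is a suspicion level computed from the fitted distribution of heartbeat inter-arrival times rather than a fixed timeout. A minimal sketch of that idea with a Weibull model follows; the crude parameter-estimation step and all numbers are assumptions, not the paper's estimator:

    ```python
    # Accrual-style suspicion from a Weibull fit of heartbeat gaps.
    import math

    def weibull_cdf(t: float, shape: float, scale: float) -> float:
        return 1.0 - math.exp(-((t / scale) ** shape))

    def suspicion(t_silent: float, shape: float, scale: float) -> float:
        """-log10 of the probability that a live process would still be
        silent after t_silent seconds; grows as a crash looks more likely."""
        p_late = max(1.0 - weibull_cdf(t_silent, shape, scale), 1e-300)
        return -math.log10(p_late)

    samples = [0.98, 1.02, 1.10, 0.95, 1.30, 1.05]  # recent gaps (s), invented
    shape = 2.0                                      # fixed shape (assumed)
    scale = (sum(samples) / len(samples)) / math.gamma(1.0 + 1.0 / shape)

    for t in (1.0, 2.0, 4.0):
        print(f"silent {t:.1f}s -> suspicion {suspicion(t, shape, scale):.2f}")
    ```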

  12. On-Line Thermal Barrier Coating Monitoring for Real-Time Failure Protection and Life Maximization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis H. LeMieux

    2004-10-01

    Under the sponsorship of the U.S. Department of Energy's National Energy Laboratory, Siemens Westinghouse Power Corporation proposes a four-year program titled ''On-Line Thermal Barrier Coating (TBC) Monitor for Real-Time Failure Protection and Life Maximization'' to develop, build, and install the first generation of an on-line TBC monitoring system for use on land-based advanced gas turbines (AGT). Federal deregulation in electric power generation has accelerated power plant owners' demand for improved reliability, availability, and maintainability (RAM) of land-based advanced gas turbines. As a result, firing temperatures have been increased substantially in advanced turbine engines, and TBCs have been developed for maximum protection and life of all critical engine components operating at these higher temperatures. Losing TBC protection can therefore accelerate the degradation of substrate component materials and eventually lead to premature failure of critical components and costly unscheduled power outages. This program seeks to substantially improve the operating life of high-cost gas turbine components using TBCs, thereby lowering the cost of maintenance and leading to a lower cost of electricity. Siemens Westinghouse Power Corporation has teamed with Indigo Systems, a supplier of state-of-the-art infrared camera systems, and Wayne State University, a leading research organization.

  13. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses with an example.

  14. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses with an example.

  15. In Situ, On-Demand Lubrication System Developed for Space Mechanisms

    NASA Technical Reports Server (NTRS)

    Marchetti, Mario; Pepper, Stephen V.; Jansen, Mark J.; Predmore, Roamer E.

    2003-01-01

    Many moving mechanical assemblies (MMA) for space mechanisms rely on liquid lubricants to provide reliable, long-term performance. The proper performance of the MMA is critical in assuring a successful mission. Historically, mission lifetimes were short and MMA duty cycles were minimal. As mission lifetimes were extended, other components, such as batteries and computers, failed before lubricated systems. However, improvements in these ancillary systems over the last decade have left the tribological systems of the MMAs as the limiting factor in determining spacecraft reliability. Typically, MMAs are initially lubricated with a very small charge that is supposed to last the entire mission lifetime, often well in excess of 5 years. In many cases, the premature failure of a lubricated component can result in mission failure.

  16. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.

  17. Probability of loss of assured safety in temperature dependent systems with multiple weak and strong links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig

    2004-12-01

    Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL; (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system; (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system; and (iv) multiple WLs, multiple SLs, and multiple sublinks in each SL, with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail, and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.
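
    For the simplest configuration, (i) one WL and one SL under the same monotonically rising temperature, PLOAS reduces to the probability that the SL's failure temperature falls below the WL's, which simple random sampling estimates directly. A sketch under that assumption (the failure-temperature distributions are invented):

    ```python
    import random

    random.seed(1)

    def wl_fail_temp() -> float:  # weak link, designed to fail early (assumed)
        return random.gauss(mu=300.0, sigma=20.0)

    def sl_fail_temp() -> float:  # strong link, designed to fail late (assumed)
        return random.gauss(mu=500.0, sigma=40.0)

    N = 1_000_000
    # Loss of assured safety: the strong link fails before the weak link.
    losses = sum(sl_fail_temp() < wl_fail_temp() for _ in range(N))
    print(f"estimated PLOAS = {losses / N:.2e}")
    ```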

  18. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time is the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
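
    The time-stepping scheme is straightforward to sketch. Below, a life-fraction sum stands in for the modified Monkman-Grant criterion, and a power-law rupture life with a relaxing stress history stands in for the paper's material model and FEA output; all are assumptions for illustration only:

    ```python
    def rupture_time_hours(stress_mpa: float) -> float:
        """Assumed power-law rupture life, t_r = A * sigma**(-n)."""
        A, n = 1.0e12, 4.0
        return A * stress_mpa ** (-n)

    def creep_life(stress_history, dt_hours: float):
        """March through time steps, holding stress constant within each,
        and declare rupture when accumulated damage reaches unity."""
        damage = 0.0
        for step, sigma in enumerate(stress_history):
            damage += dt_hours / rupture_time_hours(sigma)  # damage increment
            if damage >= 1.0:
                return (step + 1) * dt_hours                # rupture life, h
        return None                                         # survives window

    # Stress relaxing from 120 MPa toward 80 MPa (invented history).
    history = [80.0 + 40.0 * 0.999 ** k for k in range(200_000)]
    print(creep_life(history, dt_hours=1.0))
    ```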

  19. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time is the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.

  20. Fuzzy-based failure mode and effect analysis (FMEA) of a hybrid molten carbonate fuel cell (MCFC) and gas turbine system for marine propulsion

    NASA Astrophysics Data System (ADS)

    Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun

    2017-10-01

    This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants in the FMEA rely on their subjective and qualitative experiences, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system has 35 components and 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machines. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
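
    A minimal sketch of a fuzzy risk priority number alongside the crisp RPN = S x O x D; the triangular membership parameters and the failure mode scored below are invented, not the paper's data:

    ```python
    def tri_mult(a, b):
        """Approximate product of triangular fuzzy numbers (low, modal, high)."""
        return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

    def centroid(t):
        """Defuzzify a triangular fuzzy number by its centroid."""
        return sum(t) / 3.0

    # Expert scores as triangular fuzzy numbers on a 1-10 scale (invented).
    severity   = (6, 7, 8)
    occurrence = (3, 4, 6)
    detection  = (4, 5, 7)

    fuzzy_rpn = tri_mult(tri_mult(severity, occurrence), detection)
    print("crisp RPN:", severity[1] * occurrence[1] * detection[1])
    print("fuzzy RPN:", fuzzy_rpn, "-> defuzzified", round(centroid(fuzzy_rpn), 1))
    ```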

  1. LDEF electronic systems: Successes, failures, and lessons

    NASA Technical Reports Server (NTRS)

    Miller, Emmett; Porter, Dave; Smith, Dave; Brooks, Larry; Levorsen, Joe; Mulkey, Owen

    1991-01-01

    Following the Long Duration Exposure Facility (LDEF) retrieval, the Systems Special Investigation Group (SIG) participated in an extensive series of tests of various electronic systems, including the NASA-provided data and initiate systems and some experiment systems. Overall, these were found to have performed remarkably well, even though most were designed and tested under limited budgets and used at least some non-space-qualified components. However, several anomalies were observed, including a few which resulted in some loss of data. The postflight test program objectives, observations, and lessons learned from these examinations are discussed. Not all analyses are yet complete, but observations to date are summarized, including the Boeing experiment component studies and failure analysis results related to the Interstellar Gas Experiment. Based upon these observations, suggestions for avoiding similar problems on future programs are presented.

  2. Risk assessment for Industrial Control Systems quantifying availability using mean failure cost (MFC)

    DOE PAGES

    Chen, Qian; Abercrombie, Robert K; Sheldon, Frederick T.

    2015-09-23

    Industrial Control Systems (ICS) are commonly used in industries such as oil and natural gas, transportation, electric, water and wastewater, chemical, pharmaceutical, pulp and paper, food and beverage, as well as discrete manufacturing (e.g., automotive, aerospace, and durable goods). SCADA systems are generally used to control dispersed assets using centralized data acquisition and supervisory control. Originally, ICS implementations were susceptible primarily to local threats because most of their components were located in physically secure areas (i.e., ICS components were not connected to IT networks or systems). The trend toward integrating ICS with IT networks (e.g., for efficiency and the Internet of Things) provides significantly less isolation for ICS from the outside world, thus creating greater risk due to external threats. Nevertheless, the availability of ICS/SCADA systems is critical to assuring safety, security, and profitability. Such systems form the backbone of our national cyber-physical infrastructure. Herein, we extend the concept of mean failure cost (MFC) to quantify availability in a way that harmonizes well with ICS security risk assessment. This new measure is based on the classic formulation of availability combined with mean failure cost. Finally, the metric offers a computational basis to estimate the availability of a system in terms of the loss that each stakeholder stands to sustain as a result of security violations or breakdowns (e.g., deliberate malicious failures).
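
    For context, the classic MFC formulation the paper builds on expresses each stakeholder's expected loss per unit time as a chain of matrix products, MFC = ST x DP x IM x PT. The sketch below uses that standard form with entirely hypothetical matrices:

    ```python
    import numpy as np

    ST = np.array([[900.0, 400.0],    # stakes ($/h) of 2 stakeholders in
                   [100.0, 800.0]])   # the failure of 2 requirements
    DP = np.array([[0.8, 0.2, 0.1],   # P(requirement fails | component fails),
                   [0.1, 0.9, 0.3]])  # for 3 ICS components
    IM = np.array([[0.5, 0.0],        # P(component fails | threat occurs),
                   [0.2, 0.6],
                   [0.1, 0.4]])       # for 2 threat types
    PT = np.array([1e-3, 5e-4])       # P(threat materializes) per hour

    mfc = ST @ DP @ IM @ PT           # expected loss per stakeholder
    for i, cost in enumerate(mfc):
        print(f"stakeholder {i}: mean failure cost = ${cost:.4f}/h")
    ```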

  3. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.

  4. Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2008-01-01

    Tantalum capacitors manufactured per military specifications are established-reliability components and have less than 0.001% failures per 1000 hours for grades D or S, positioning these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they might have catastrophic consequences for the system. To reduce this risk, further development of the screening and qualification system, with special attention to possible deficiencies in the existing procedures, is necessary. The purpose of this work is to evaluate the effect of surge current stress testing on the reliability of the parts at both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed over a range of voltage and temperature conditions exceeding the specified limits. A model to estimate the probability of post-surge-current-testing screening failures, and measures to improve the effectiveness of the screening process, have been suggested.

  5. Extending the life and recycle capability of earth storable propellant systems.

    NASA Technical Reports Server (NTRS)

    Schweickert, T. F.

    1972-01-01

    Rocket propulsion systems for reusable vehicles will be required to operate reliably for a large number of missions with a minimum of maintenance and a fast turnaround. For the space shuttle reaction control system to meet these requirements, current and prior related system failures were examined for their impact on reuse and, where warranted, component design and/or system configuration changes were defined for improving system service life. It was found necessary to change the pressurization component arrangement used on many single-use applications in order to eliminate a prevalent check valve failure mode and to incorporate redundant expulsion capability in propellant tank designs to achieve the necessary system reliability. Material flaws in pressurant and propellant tanks were noted to have a significant effect on tank cycle life. Finally, maintenance considerations dictated a modularized systems approach, allowing the system to be removed from the vehicle for service and repair at a remote site.

  6. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
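
    The MTTF decomposition is easy to sketch for an equal load-sharing system of exponential components: while k components survive, each sees some rate lam(k), the time to the next failure is exponential with rate k*lam(k), and the system MTTF is the sum of those stage means. The load-to-rate rule below is an assumption for illustration, not the paper's model:

    ```python
    def mttf_load_sharing(n: int, base_rate: float) -> float:
        """base_rate: per-component failure rate when all n share the load."""
        total = 0.0
        for k in range(n, 0, -1):           # k components still working
            rate_each = base_rate * n / k   # assumed: rate scales with load share
            total += 1.0 / (k * rate_each)  # mean time between successive failures
        return total

    print(mttf_load_sharing(n=4, base_rate=1e-3))  # hours, illustrative
    ```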

  7. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  8. 10 CFR 55.41 - Written examination: Operators.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... elements, control rods, core instrumentation, and coolant flow. (3) Mechanical components and design..., and functions of reactivity control mechanisms and instrumentation. (7) Design, components, and functions of control and safety systems, including instrumentation, signals, interlocks, failure modes, and...

  9. 10 CFR 55.41 - Written examination: Operators.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... elements, control rods, core instrumentation, and coolant flow. (3) Mechanical components and design..., and functions of reactivity control mechanisms and instrumentation. (7) Design, components, and functions of control and safety systems, including instrumentation, signals, interlocks, failure modes, and...

  10. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  11. Mechanical systems readiness assessment and performance monitoring study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The problem of mechanical devices which lack the real-time readiness assessment and performance monitoring capability required for future space missions is studied. The results of a test program to establish the feasibility of implementing structure borne acoustics, a nondestructive test technique, are described. The program included the monitoring of operational acoustic signatures of five separate mechanical components, each possessing distinct sound characteristics. Acoustic signatures were established for normal operation of each component. Critical failure modes were then inserted into the test components, and faulted acoustic signatures obtained. Predominant features of the sound signature were related back to operational events occurring within the components both for normal and failure mode operations. All of these steps can be automated. The structure borne acoustics technique lends itself to reducing checkout time, simplifying maintenance procedures, and reducing manual involvement in the checkout, operation, maintenance, and fault diagnosis of mechanical systems.

  12. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with a focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.

  13. Failure-Time Distribution Of An m-Out-of-n System

    NASA Technical Reports Server (NTRS)

    Scheuer, Ernest M.

    1988-01-01

    Formulas for reliability are extended to more general cases. They are useful in analyses of the reliabilities of practical systems and structures, especially redundant systems of identical components among which operating loads are distributed equally.
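
    For identical, independent components, the basic case is the standard binomial formula: an m-out-of-n system works if at least m of its n components work. A short sketch (the component reliability value is invented):

    ```python
    from math import comb

    def r_m_of_n(m: int, n: int, p: float) -> float:
        """System reliability: at least m of n components up, each with
        reliability p over the mission."""
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
                   for i in range(m, n + 1))

    print(r_m_of_n(m=2, n=3, p=0.95))  # classic 2-of-3 majority vote
    ```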

  14. Operation of U.S. Geological Survey unmanned digital magnetic observatories

    USGS Publications Warehouse

    Wilson, L.R.

    1990-01-01

    The precision and continuity of data recorded by unmanned digital magnetic observatories depend on the type of data acquisition equipment used and operating procedures employed. Three generations of observatory systems used by the U.S. Geological Survey are described. A table listing the frequency of component failures in the current observatory system has been compiled for a 54-month period of operation. The cause of component failure was generally mechanical or due to lightning. The average percentage data loss per month for 13 observatories operating a combined total of 637 months was 9%. Frequency distributions of data loss intervals show the highest frequency of occurrence to be intervals of less than 1 h. Installation of the third generation system will begin in 1988. The configuration of the third generation observatory system will eliminate most of the mechanical problems, and its components should be less susceptible to lightning. A quasi-absolute coil-proton system will be added to obtain baseline control for component variation data twice daily. Observatory data, diagnostics, and magnetic activity indices will be collected at 12-min intervals via satellite at Golden, Colorado. An improvement in the quality and continuity of data obtained with the new system is expected. © 1990.

  15. Reliability and Maintainability Data for Lead Lithium Cooling Systems

    DOE PAGES

    Cadwallader, Lee

    2016-11-16

    This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.

  16. Failure modes and effects analysis automation

    NASA Technical Reports Server (NTRS)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

    A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge-based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single-user workstations and, although not designed to replace conventional FMEA, it is expected to decrease the time required to perform the analysis by many man-years.

  17. Self-monitoring fiber reinforced polymer strengthening system for civil engineering infrastructures

    NASA Astrophysics Data System (ADS)

    Jiang, Guoliang; Dawood, Mina; Peters, Kara; Rizkalla, Sami

    2008-03-01

    Fiber reinforced polymer (FRP) materials are currently used for strengthening civil engineering infrastructures. The strengthening system is dependent on the bond characteristics of the FRP to the external surface of the structure to be effective in resisting the applied loads. This paper presents an innovative self-monitoring FRP strengthening system. The system consists of two components, which can be embedded in FRP materials to monitor the global and local behavior of the strengthened structure, respectively. The first component of the system is designed to evaluate the applied load acting on a structure based on the elongation of the FRP layer along the entire span of the structure. Success of the global system has been demonstrated using a full-scale prestressed concrete bridge girder which was loaded up to failure. The test results indicate that this type of sensor can be used to determine the load prior to failure within 15 percent of the measured value. The second sensor component consists of fiber Bragg grating sensors. The sensors were used to monitor the behavior of steel double-lap shear splices tested under tensile loading up to failure. The measurements were used to identify abnormal structural behavior such as epoxy cracking and FRP debonding. Test results were also compared to numerical values obtained from a three-dimensional shear-lag model which was developed to predict the sensor response.

  18. A Summary of Taxonomies of Digital System Failure Modes Provided by the DigRel Task Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu T. L.; Yue M.; Postma, W.

    2012-06-25

    Recently, the CSNI directed WGRisk to set up a task group called DIGREL to initiate a new task on developing a taxonomy of failure modes of digital components for the purposes of PSA. It is an important step towards standardized digital I&C reliability assessment techniques for PSA. The objective of this paper is to provide a comparison of the failure mode taxonomies provided by the participants. The failure modes are classified in terms of their levels of detail. Software and hardware failure modes are discussed separately.

  19. Space System Survivability

    NASA Astrophysics Data System (ADS)

    Kuller, W. G.; Hanifen, D. W.

    1982-07-01

    Exoatmospheric detonations of nuclear weapons produce a broad spectrum of effects which can prevent operational space missions from being successfully accomplished. The spacecraft may be exposed to the prompt radiation from the detonations which can cause upset or burnout of critical mission components through Transient Radiation Effects on Electronics (TREE) or System Generated Electromagnetic Pulse (SGEMP). Continual exposure to the trapped radiation environment may cause component failure due to total dose or Electron Caused EMP (ECEMP). Satellite links to ground and airborne terminals are subject to serious degradation due to signal absorption and scintillation. The ground data stations and lines of communications are subject to failure from the broad range effects of high-altitude EMP.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneses, Esteban; Ni, Xiang; Jones, Terry R

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  1. Digital Systems Validation Handbook. Volume 2

    DTIC Science & Technology

    1989-02-01

    TABLE 7.2-3. FAILURE RATES FOR MAJOR RDFCS COMPONENTS (unit failure rate*): Pitch Angle Gyro, 303; Roll Angle Gyro, 303; Yaw Rate Gyro, 200 ... Airplane Weight: 314,500 lb; Altitude: 35 ft; Angle of Attack: 10.91 deg; Indicated Air Speed: 168 kts; Flap Deployment: 22 deg ... Transition capability was added to go ... various pieces of information into the form needed by the FCCs. For example, roll angle and pitch angle are converted to three-wire AC signals, properly

  2. Meteorological Satellites (METSAT) and Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL)

    NASA Technical Reports Server (NTRS)

    1996-01-01

    This Failure Modes and Effects Analysis (FMEA) is for the Advanced Microwave Sounding Unit-A (AMSU-A) instruments that are being designed and manufactured for the Meteorological Satellites Project (METSAT) and the Earth Observing System (EOS) integrated programs. The FMEA analyzes the design of the METSAT and EOS instruments as they currently exist. This FMEA is intended to identify METSAT and EOS failure modes and their effect on spacecraft-instrument and instrument-component interfaces. The prime objective of this FMEA is to identify potential catastrophic and critical failures so that susceptibility to the failures and their effects can be eliminated from the METSAT/EOS instruments.

  3. a New Method for Fmeca Based on Fuzzy Theory and Expert System

    NASA Astrophysics Data System (ADS)

    Byeon, Yoong-Tae; Kim, Dong-Jin; Kim, Jin-O.

    2008-10-01

    Failure Mode, Effects and Criticality Analysis (FMECA) is one of the most widely used methods in modern engineering to investigate the potential failure modes of a system and their severity upon it. FMECA evaluates the criticality and severity of each failure mode and visualizes the risk-level matrix by assigning those indices to the column and row variables, respectively. Generally, those indices are determined subjectively by experts and operators; however, this process inevitably involves uncertainty. In this paper, a method for eliciting expert opinions that accounts for their uncertainty is proposed to evaluate criticality and severity. In addition, a fuzzy expert system is constructed in order to determine the crisp value of the risk level for each failure mode. Finally, an illustrative example system is analyzed in a case study. The results are worth considering when deciding the proper policies for each component of the system.

  4. Predicted performance of an integrated modular engine system

    NASA Technical Reports Server (NTRS)

    Binder, Michael; Felder, James L.

    1993-01-01

    Space vehicle propulsion systems are traditionally comprised of a cluster of discrete engines, each with its own set of turbopumps, valves, and a thrust chamber. The Integrated Modular Engine (IME) concept proposes a vehicle propulsion system comprised of multiple turbopumps, valves, and thrust chambers which are all interconnected. The IME concept has potential advantages in fault-tolerance, weight, and operational efficiency compared with the traditional clustered engine configuration. The purpose of this study is to examine the steady-state performance of an IME system with various components removed to simulate fault conditions. An IME configuration for a hydrogen/oxygen expander cycle propulsion system with four sets of turbopumps and eight thrust chambers has been modeled using the Rocket Engine Transient Simulator (ROCETS) program. The nominal steady-state performance is simulated, as well as turbopump, thrust chamber, and duct failures. The impact of component failures on system performance is discussed in the context of the system's fault tolerant capabilities.

  5. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.

  6. The challenge of measuring emergency preparedness: integrating component metrics to build system-level measures for strategic national stockpile operations.

    PubMed

    Jackson, Brian A; Faith, Kay Sullivan

    2013-02-01

    Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems-failure mode and effects analysis-to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing to provide a view of likely future response performance.
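    As an illustration of how component-level failure-mode estimates can roll up into a system-level reliability figure, here is a minimal Monte Carlo sketch of a series response chain; the component names and failure probabilities are hypothetical, not taken from the SNS assessments.

    ```python
    import random

    # Hypothetical probability that each response component fails during one operation.
    component_pof = {"request": 0.02, "delivery": 0.05,
                     "distribution": 0.04, "dispensing": 0.08}

    def mission_succeeds():
        # The response works only if every component in the chain works (series system).
        return all(random.random() >= p for p in component_pof.values())

    trials = 100_000
    reliability = sum(mission_succeeds() for _ in range(trials)) / trials
    print(f"simulated system reliability: {reliability:.3f}")

    # Analytic check for an independent series system: product of (1 - p_i).
    analytic = 1.0
    for p in component_pof.values():
        analytic *= 1 - p
    print(f"analytic series reliability:  {analytic:.3f}")
    ```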

  7. Software Considerations for Subscale Flight Testing of Experimental Control Laws

    NASA Technical Reports Server (NTRS)

    Murch, Austin M.; Cox, David E.; Cunningham, Kevin

    2009-01-01

    The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.

  8. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

    This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.

  9. COMCAN; COMCAN2A; system safety common cause analysis. [IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Wilson, J.R.

    COMCAN2A and COMCAN are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions). IBM360; CDC CYBER176,175; FORTRAN IV (30%) and BAL (70%) (IBM360), FORTRAN IV (97%) and COMPASS (3%) (CDC CYBER176); OS/360 (IBM360) and NOS/BE 1.4 (CDC CYBER176), NOS 1.3 (CDC CYBER175); 140K bytes of memory for COMCAN and 242K (octal) words of memory for COMCAN2A.

  10. On-Line Thermal Barrier Coating Monitoring for Real-Time Failure Protection and Life Maximization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis H. LeMieux

    2005-04-01

    Under the sponsorship of the U. S. Department of Energy's National Energy Laboratory, Siemens Westinghouse Power Corporation proposes a four year program titled, ''On-Line Thermal Barrier Coating (TBC) Monitor for Real-Time Failure Protection and Life Maximization'', to develop, build and install the first generation of an on-line TBC monitoring system for use on land-based advanced gas turbines (AGT). Federal deregulation in electric power generation has accelerated power plant owners' demand for improved reliability, availability, and maintainability (RAM) of the land-based advanced gas turbines. As a result, firing temperatures have been increased substantially in the advanced turbine engines, and the TBCs have been developed for maximum protection and life of all critical engine components operating at these higher temperatures. Losing TBC protection can therefore accelerate the degradation of substrate component materials and eventually lead to a premature failure of critical components and costly unscheduled power outages. This program seeks to substantially improve the operating life of high cost gas turbine components using TBC; thereby, lowering the cost of maintenance leading to lower cost of electricity. Siemens Westinghouse Power Corporation has teamed with Indigo Systems, a supplier of state-of-the-art infrared camera systems, and Wayne State University, a leading research organization in the field of infrared non-destructive examination (NDE), to complete the program.

  11. ON-LINE THERMAL BARRIER COATING MONITORING FOR REAL-TIME FAILURE PROTECTION AND LIFE MAXIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis H. LeMieux

    2003-10-01

    Under the sponsorship of the U. S. Department of Energy's National Energy Laboratory, Siemens Westinghouse Power Corporation proposes a four year program titled, ''On-Line Thermal Barrier Coating (TBC) Monitor for Real-Time Failure Protection and Life Maximization,'' to develop, build and install the first generation of an on-line TBC monitoring system for use on land-based advanced gas turbines (AGT). Federal deregulation in electric power generation has accelerated power plant owner's demand for improved reliability, availability, and maintainability (RAM) of the land-based advanced gas turbines. As a result, firing temperatures have been increased substantially in the advanced turbine engines, and the TBCs have been developed for maximum protection and life of all critical engine components operating at these higher temperatures. Losing TBC protection can, therefore, accelerate the degradation of substrate component materials and eventually lead to a premature failure of critical components and costly unscheduled power outages. This program seeks to substantially improve the operating life of high cost gas turbine components using TBC; thereby, lowering the cost of maintenance leading to lower cost of electricity. Siemens Westinghouse Power Corporation has teamed with Indigo Systems, a supplier of state-of-the-art infrared camera systems, and Wayne State University, a leading research organization in the field of infrared non-destructive examination (NDE), to complete the program.

  12. ON-LINE THERMAL BARRIER COATING MONITORING FOR REAL-TIME FAILURE PROTECTION AND LIFE MAXIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis H. LeMieux

    2003-07-01

    Under the sponsorship of the U. S. Department of Energy's National Energy Laboratory, Siemens Westinghouse Power Corporation proposes a four year program titled, ''On-Line Thermal Barrier Coating (TBC) Monitor for Real-Time Failure Protection and Life Maximization,'' to develop, build and install the first generation of an on-line TBC monitoring system for use on land-based advanced gas turbines (AGT). Federal deregulation in electric power generation has accelerated power plant owner's demand for improved reliability, availability, and maintainability (RAM) of the land-based advanced gas turbines. As a result, firing temperatures have been increased substantially in the advanced turbine engines, and the TBCs have been developed for maximum protection and life of all critical engine components operating at these higher temperatures. Losing TBC protection can, therefore, accelerate the degradation of substrate component materials and eventually lead to a premature failure of critical components and costly unscheduled power outages. This program seeks to substantially improve the operating life of high cost gas turbine components using TBC; thereby, lowering the cost of maintenance leading to lower cost of electricity. Siemens Westinghouse Power Corporation has teamed with Indigo Systems, a supplier of state-of-the-art infrared camera systems, and Wayne State University, a leading research organization in the field of infrared non-destructive examination (NDE), to complete the program.

  13. On-Line Thermal Barrier Coating Monitoring for Real-Time Failure Protection and Life Maximization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis H. LeMieux

    2005-10-01

    Under the sponsorship of the U. S. Department of Energy's National Energy Laboratory, Siemens Power Generation, Inc. proposed a four year program titled, ''On-Line Thermal Barrier Coating (TBC) Monitor for Real-Time Failure Protection and Life Maximization'', to develop, build and install the first generation of an on-line TBC monitoring system for use on land-based advanced gas turbines (AGT). Federal deregulation in electric power generation has accelerated power plant owners' demand for improved reliability, availability, and maintainability (RAM) of the land-based advanced gas turbines. As a result, firing temperatures have been increased substantially in the advanced turbine engines, and the TBCs have been developed for maximum protection and life of all critical engine components operating at these higher temperatures. Losing TBC protection can therefore accelerate the degradation of substrate component materials and eventually lead to a premature failure of critical components and costly unscheduled power outages. This program seeks to substantially improve the operating life of high cost gas turbine components using TBC; thereby, lowering the cost of maintenance leading to lower cost of electricity. Siemens Power Generation, Inc. has teamed with Indigo Systems, a supplier of state-of-the-art infrared camera systems, and Wayne State University, a leading research organization in the field of infrared non-destructive examination (NDE), to complete the program.

  14. Patient Litter System Response in a Full-Scale CH-46 Crash Test.

    PubMed

    Weisenbach, Charles A; Rooks, Tyler; Bowman, Troy; Fralish, Vince; McEntire, B Joseph

    2017-03-01

    U.S. Military aeromedical patient litter systems are currently required to meet minimal static strength performance requirements at the component level. Operationally, these components must function as a system and are subjected to the dynamics of turbulent flight and potentially crash events. The first of two full-scale CH-46 crash tests was conducted at NASA's Langley Research Center and included an experiment to assess patient and litter system response during a severe but survivable crash event. A three-tiered strap and pole litter system was mounted into the airframe and occupied by three anthropomorphic test devices (ATDs). During the crash event, the litter system failed to maintain structural integrity and collapsed. Component structural failures were recorded from the litter support system and the litters. The upper ATD was displaced laterally into the cabin, while the middle ATD was displaced longitudinally into the cabin. Acceleration, force, and bending moment data from the instrumented middle ATD were analyzed using available injury criteria. Results indicated that a patient might sustain a neck injury. The current test illustrates that a litter system, with components designed and tested to static requirements only, experiences multiple component structural failures during a dynamic crash event and does not maintain restraint control of its patients. It is unknown if a modern litter system, with components tested to the same static criteria, would perform differently. A systems level dynamic performance requirement needs to be developed so that patients can be provided with protection levels equivalent to that provided to seated aircraft occupants. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  15. Advanced Health Management of a Brushless Direct Current Motor/Controller

    NASA Technical Reports Server (NTRS)

    Pickett, R. D.

    2003-01-01

    This effort demonstrates that health management can be taken to the component level for electromechanical systems. The same techniques can be applied to take any health management system to the component level, based on the practicality of the implementation for that particular system. This effort allows various logic schemes to be implemented for the identification and management of failures. By taking health management to the component level, integrated vehicle health management systems can be enhanced by protecting box-level avionics from being shut down in order to isolate a failed computer.

  16. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
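    One common PRA convention for expressing such a prior is a lognormal distribution specified by a median and an error factor, where a wider error factor expresses weaker belief in the generic data's applicability. A minimal sketch of that convention follows, with hypothetical numbers; this is not necessarily the presentation's own procedure.

    ```python
    import math

    median = 1e-6          # generic failure rate point estimate (per hour), hypothetical
    error_factor = 10.0    # EF = 95th percentile / median; wider EF = more applicability doubt

    sigma = math.log(error_factor) / 1.645   # 1.645 = z-score of the 95th percentile
    mu = math.log(median)

    p5 = math.exp(mu - 1.645 * sigma)    # 5th percentile = median / EF
    p95 = math.exp(mu + 1.645 * sigma)   # 95th percentile = median * EF
    mean = math.exp(mu + sigma**2 / 2)   # lognormal mean exceeds the median

    print(f"5th pct {p5:.2e}, median {median:.2e}, "
          f"95th pct {p95:.2e}, mean {mean:.2e}")
    ```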

  17. Creep Life Prediction of Ceramic Components Using the Finite Element Based Integrated Design Program (CARES/Creep)

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1997-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
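    A minimal sketch of this time-stepped damage accumulation is shown below; the power-law rupture-life function and all constants are hypothetical stand-ins for the modified Monkman-Grant criterion used by CARES/Creep.

    ```python
    def rupture_time(stress_mpa):
        # Hypothetical power-law creep-rupture life (hours) at constant stress.
        return 1.0e12 * stress_mpa ** (-4.0)

    def creep_life(stress_history, dt_hours):
        """Accumulate damage step by step; fail when accumulated damage >= 1."""
        damage = 0.0
        for step, stress in enumerate(stress_history):
            damage += dt_hours / rupture_time(stress)  # time-fraction damage rule
            if damage >= 1.0:
                return (step + 1) * dt_hours, damage
        return None, damage  # survived the analyzed history

    # Stress relaxes from 120 MPa toward 80 MPa over 20,000 one-hour steps.
    history = [80.0 + 40.0 * (0.9998 ** t) for t in range(20000)]
    life, d = creep_life(history, dt_hours=1.0)
    print("predicted life (h):", life, " accumulated damage:", round(d, 3))
    ```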

  18. Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system

    NASA Technical Reports Server (NTRS)

    Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, for capturing and retrieving free-flying payloads, and for performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics and documents for defining hardware assemblies, components and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.

  19. An intelligent control system for failure detection and controller reconfiguration

    NASA Technical Reports Server (NTRS)

    Biswas, Saroj K.

    1994-01-01

    We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.

  20. Spacecraft dynamics characterization and control system failure detection. Volume 3: Control system failure monitoring

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan M.

    1992-01-01

    We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the Decision Function. It should be clear from the examples that are presented whether it would be possible to detect the failure of a specific component. We derive the equations for Generalized Parity Relations. Two special cases are treated: namely, Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced order model that includes the first five modes of the mast is compared to a set of parity relations that was identified on a set of input-output data. Both time domain and frequency domain comparisons are made. The effect of the sampling period and model order on the performance of the Residual Generators are also discussed. Failure detection experiments where the sensor set consisted of two gyros and an accelerometer are presented. The effects of model order and sampling frequency are again illustrated. The detection of actuator failures is discussed. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.

  1. A Novel Solution-Technique Applied to a Novel WAAS Architecture

    NASA Technical Reports Server (NTRS)

    Bavuso, J.

    1998-01-01

    The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that employs a novel cold sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time to failure distribution. Moreover, the cold spare satellite build-time is Weibull and upon launch is considered to be a good-as-new system with an increasing failure rate and a Weibull time to failure distribution as well. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure for the launched spare satellite. A powerful dynamic fault tree modeling notation and Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10 year mission.
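    A minimal sketch of this cold-sparing scheme, estimated by plain Monte Carlo (the study itself uses a dynamic fault tree solver with importance sampling); all Weibull parameters, the trigger rule, and the mission horizon are hypothetical, and the model simplifies by building at most one spare and checking satellite counts at the horizon only.

    ```python
    import random

    def weibull(shape, scale):
        # random.weibullvariate takes (scale, shape)
        return random.weibullvariate(scale, shape)

    def mission_fails(n_sats=4, needed=3, trigger=1, horizon=10 * 8760.0):
        """One trial; True if fewer than `needed` satellites operate at mission end."""
        fail_times = sorted(weibull(1.5, 15 * 8760.0) for _ in range(n_sats))
        in_mission = [t for t in fail_times if t <= horizon]
        if len(in_mission) <= n_sats - needed:
            return False  # never dropped below the required count
        # Spare fabrication starts at the `trigger`-th on-orbit failure, takes a
        # Weibull build time; the launched spare is good-as-new with its own life.
        launch = in_mission[trigger - 1] + weibull(2.0, 2 * 8760.0)
        spare_dies = launch + weibull(1.5, 15 * 8760.0)
        alive = sum(t > horizon for t in fail_times)
        if launch <= horizon and spare_dies > horizon:
            alive += 1
        return alive < needed

    trials = 20000
    pof = sum(mission_fails() for _ in range(trials)) / trials
    print(f"estimated 10-year mission unreliability: {pof:.4f}")
    ```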

  2. Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei

    2018-01-01

    Propulsion system is one of the important subsystems of a satellite, and its performance directly affects the service life, attitude control and reliability of the satellite. The paper analyzes the sealing principle of the bolted joint ball head component of the satellite propulsion system and discusses the compatibility of anhydrous hydrazine with the bolted joint ball head component, the influence of the ground environment on the sealing performance of bolted joint ball heads, and material failure caused by the environment, showing that the sealing reliability of the bolted joint ball head component is good and that the influence of the above three aspects on the sealing of the bolted joint ball head component can be ignored.

  3. An Evidence Theoretic Approach to Design of Reliable Low-Cost UAVs

    DTIC Science & Technology

    2009-07-28

    given period. For complex systems with various stages of missions, "success" becomes hard to define. For a UAV, for example, is success defined as ... For this reason, the proposed methods in this thesis investigate probability of failure (PoF) rather than probability of success. Further, failure will ... reduction in system PoF. Figure 25 illustrates this; a single component (A) from the original system (Figure 25a) is modified to act in a subsystem with

  4. Real-time diagnostics for a reusable rocket engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Merrill, W.; Duyar, A.

    1992-01-01

    A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing. Here, important features of the sensor data are extracted. These processed data are then used by the higher level fault mode detection layer to do preliminary diagnosis on potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, it is expected that a given engine condition may trigger more than one fault mode detector. Expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer. Here, the heuristic nature of this decision process makes it desirable to use an expert system approach. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. Generally, in the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning. This is usually followed by some detection logic to determine the selected faults on the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first technique employed is the neural network application for real-time sensor validation which includes failure detection, isolation, and accommodation. The second approach demonstrated is the model-based fault diagnosis system using on-line parameter identification. Besides these model based diagnostic schemes, there are still many failure modes which need to be diagnosed by the heuristic expert knowledge. The heuristic expert knowledge is implemented using a real-time expert system tool called G2 by Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by component fault detectors. The decision making at this level can best be done using a rule-based expert system. This level of expert knowledge is also implemented using G2.

  5. Cascading failures in interdependent networks with finite functional components

    NASA Astrophysics Data System (ADS)

    Di Muro, M. A.; Buldyrev, S. V.; Stanley, H. E.; Braunstein, L. A.

    2016-10-01

    We present a cascading failure model of two interdependent networks in which functional nodes belong to components of size greater than or equal to s. We find theoretically and via simulation that in complex networks with random dependency links the transition is first order for s ≥ 3 and continuous for s = 2. We also study interdependent lattices with a distance constraint r in the dependency links and find that increasing r moves the system from a regime without a phase transition to one with a second-order transition. As r continues to increase, the system collapses in a first-order transition. Each regime is associated with a different structure of domain formation of functional nodes.

  6. Oxygen sensor signal validation for the safety of the rebreather diver.

    PubMed

    Sieber, Arne; L'abbate, Antonio; Bedini, Remo

    2009-03-01

    In electronically controlled, closed-circuit rebreather diving systems, the partial pressure of oxygen inside the breathing loop is controlled with three oxygen sensors, a microcontroller and a solenoid valve - critical components that may fail. State-of-the-art detection of sensor failure, based on a voting algorithm, may fail under circumstances where two or more sensors show the same but incorrect values. The present paper details a novel rebreather controller that offers true sensor-signal validation, thus allowing efficient and reliable detection of sensor failure. The core components of this validation system are two additional solenoids, which allow an injection of oxygen or diluent gas directly across the sensor membrane.
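    For illustration, here is a minimal sketch of the conventional median-voting scheme the paper argues can be fooled when two cells drift together; the tolerance value and all readings are hypothetical.

    ```python
    def vote_ppo2(readings, tolerance=0.1):
        """Return the voted ppO2 (bar) and a flag for each sensor that disagrees."""
        assert len(readings) == 3
        median = sorted(readings)[1]
        flags = [abs(r - median) > tolerance for r in readings]
        return median, flags

    # Normal case: the third sensor is clearly off and gets voted out.
    print(vote_ppo2([1.21, 1.19, 0.70]))
    # Failure case the paper targets: two current-limited cells agree at 1.0 bar
    # while the healthy cell reads the true 1.6 bar; the healthy one is outvoted.
    print(vote_ppo2([1.0, 1.0, 1.6]))
    ```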

  7. Epidemic failure detection and consensus for extreme parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.

  8. Development of KSC program for investigating and generating field failure rates. Reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.
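    For field failure rates of this kind, a standard way to attach a confidence factor is the chi-square relation between the observed failure count and the exposure time; the sketch below uses hypothetical counts and is not necessarily the handbook's own procedure.

    ```python
    from scipy.stats import chi2

    failures = 4        # observed failures of one component class (hypothetical)
    hours = 2.0e5       # accumulated field operating hours (hypothetical)
    confidence = 0.90

    point = failures / hours
    # Standard chi-square upper confidence bound on a constant failure rate.
    upper = chi2.ppf(confidence, 2 * failures + 2) / (2 * hours)

    print(f"point estimate: {point:.2e} /h")
    print(f"{confidence:.0%} upper bound: {upper:.2e} /h")
    ```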

  9. 49 CFR 571.105 - Standard No. 105; Hydraulic and electric brake systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... current, and which may include a non-electrical source of power designed to charge batteries and... dissipating electrical energy. Skid number means the frictional resistance of a pavement measured in..., designed so that a single failure in any subsystem (such as a leakage-type failure of a pressure component...

  10. 49 CFR 571.105 - Standard No. 105; Hydraulic and electric brake systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... current, and which may include a non-electrical source of power designed to charge batteries and... dissipating electrical energy. Skid number means the frictional resistance of a pavement measured in..., designed so that a single failure in any subsystem (such as a leakage-type failure of a pressure component...

  11. 49 CFR 571.105 - Standard No. 105; Hydraulic and electric brake systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... current, and which may include a non-electrical source of power designed to charge batteries and... dissipating electrical energy. Skid number means the frictional resistance of a pavement measured in..., designed so that a single failure in any subsystem (such as a leakage-type failure of a pressure component...

  12. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
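    As an illustration of the DTW idea, the sketch below computes the classic dynamic-programming DTW distance and classifies a toy signal by its nearest template; the signals are invented and far simpler than real bearing vibration data.

    ```python
    def dtw(a, b):
        """Classic O(n*m) dynamic time warping distance between two sequences."""
        n, m = len(a), len(b)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                # extend the cheapest of the three admissible warping moves
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[n][m]

    healthy = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1]
    faulty = [0.0, 0.9, 0.0, -0.1, 0.0, 0.9]   # impulsive peaks from a race fault
    query = [0.0, 0.0, 0.8, 0.0, -0.1, 0.1, 0.8]

    # Classify the query by its nearest template under DTW distance.
    label = min([("healthy", healthy), ("faulty", faulty)],
                key=lambda t: dtw(query, t[1]))[0]
    print("query classified as:", label)
    ```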

  13. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  14. Graceful Failure, Engineering, and Planning for Extremes: The Engineering for Climate Extremes Partnership (ECEP)

    NASA Astrophysics Data System (ADS)

    Bruyere, C. L.; Tye, M. R.; Holland, G. J.; Done, J.

    2015-12-01

    Graceful failure acknowledges that all systems will fail at some level and incorporates the potential for failure as a key component of engineering design, community planning, and the associated research and development. This is a fundamental component of the ECEP, an interdisciplinary partnership bringing together scientific, engineering, cultural, business and government expertise to develop robust, well-communicated predictions and advice on the impacts of weather and climate extremes in support of decision-making. A feature of the partnership is the manner in which basic and applied research and development is conducted in direct collaboration with the end user. A major ECEP focus is the Global Risk and Resilience Toolbox (GRRT), which is aimed at developing a public-domain risk-modeling, response-data, and planning system in support of engineering design and community planning and adaptation activities. In this presentation I will outline the overall ECEP and GRRT activities, and expand on the 'graceful failure' concept. Specific examples for direct assessment and prediction of hurricane impacts and damage potential will be included.

  15. Modeling Hydraulic Components for Automated FMEA of a Braking System

    DTIC Science & Technology

    2014-12-23

    Peter Struss, Alessandro Fraracci; Tech. Univ. of Munich, 85748 Garching, Germany; struss@in.tum.de. ABSTRACT: This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the

  16. Microtensile bond strength of etch and rinse versus self-etch adhesive systems.

    PubMed

    Hamouda, Ibrahim M; Samra, Nagia R; Badawi, Manal F

    2011-04-01

    The aim of this study was to compare the microtensile bond strength of the etch and rinse adhesive versus one-component or two-component self-etch adhesives. Twelve intact human molar teeth were cleaned and the occlusal enamel of the teeth was removed. The exposed dentin surfaces were polished and rinsed, and the adhesives were applied. A microhybrid composite resin was applied to form specimens of 4 mm height and 6 mm diameter. The specimens were sectioned perpendicular to the adhesive interface to produce dentin-resin composite sticks, with an adhesive area of approximately 1.4 mm². The sticks were subjected to tensile loading until failure occurred. The debonded areas were examined with a scanning electron microscope to determine the site of failure. The results showed that the microtensile bond strength of the etch and rinse adhesive was higher than that of one-component or two-component self-etch adhesives. The scanning electron microscope examination of the dentin surfaces revealed adhesive and mixed modes of failure. The adhesive mode of failure occurred at the adhesive/dentin interface, while the mixed mode of failure occurred partially in the composite and partially at the adhesive/dentin interface. It was concluded that the etch and rinse adhesive had higher microtensile bond strength when compared to that of the self-etch adhesives. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Updated System-Availability and Resource-Allocation Program

    NASA Technical Reports Server (NTRS)

    Viterna, Larry

    2004-01-01

    A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean-time-between-failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version utilized the MS-DOS operating system and could not be run by use of the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
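    A minimal sketch of the Monte Carlo availability estimate at the core of such a tool: alternate random failure and repair intervals and measure the uptime fraction. The MTBF/MTTR values are hypothetical, and ACARA itself additionally models spares, resource constraints, and life-cycle costs.

    ```python
    import random

    def simulate_availability(mtbf, mttr, horizon, trials=5000):
        total_up = 0.0
        for _ in range(trials):
            t = up = 0.0
            while t < horizon:
                ttf = random.expovariate(1.0 / mtbf)   # time to next failure
                up += min(ttf, horizon - t)
                t += ttf
                if t >= horizon:
                    break
                t += random.expovariate(1.0 / mttr)    # repair outage
            total_up += up
        return total_up / (trials * horizon)

    a = simulate_availability(mtbf=1000.0, mttr=50.0, horizon=20000.0)
    # Steady-state check: availability ~ MTBF / (MTBF + MTTR)
    print(f"simulated availability: {a:.4f}  (steady state: {1000/1050:.4f})")
    ```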

  18. Fatigue failure of metal components as a factor in civil aircraft accidents

    NASA Technical Reports Server (NTRS)

    Holshouser, W. L.; Mayner, R. D.

    1972-01-01

    A review of records maintained by the National Transportation Safety Board showed that 16,054 civil aviation accidents occurred in the United States during the 3-year period ending December 31, 1969. Material failure was an important factor in the cause of 942 of these accidents. Fatigue was identified as the mode of the material failures associated with the cause of 155 accidents and in many other accidents the records indicated that fatigue failures might have been involved. There were 27 fatal accidents and 157 fatalities in accidents in which fatigue failures of metal components were definitely identified. Fatigue failures associated with accidents occurred most frequently in landing-gear components, followed in order by powerplant, propeller, and structural components in fixed-wing aircraft and tail-rotor and main-rotor components in rotorcraft. In a study of 230 laboratory reports on failed components associated with the cause of accidents, fatigue was identified as the mode of failure in more than 60 percent of the failed components. The most frequently identified cause of fatigue, as well as most other types of material failures, was improper maintenance (including inadequate inspection). Fabrication defects, design deficiencies, defective material, and abnormal service damage also caused many fatigue failures. Four case histories of major accidents are included in the paper as illustrations of some of the factors involved in fatigue failures of aircraft components.

  19. NAC Off-Vehicle Brake Testing Project

    DTIC Science & Technology

    2007-05-01

    disc pads/rotors and drum shoe assemblies/drums - Must use vehicle "OEM" brake/hub-end hardware, or ESA ... brake component comparison analysis (primary)* - brake system design analysis - brake system component failure analysis - (*) limited to disc pads ... e.g. disc pads/rotors, drum shoe assemblies/drums. - Not limited to "OEM" brake/hub-end hardware as there is none! - Weight transfer, plumbing,

  20. Apollo CSM Power Generation System Design Considerations, Failure Modes and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Interbartolo, Michael

    2009-01-01

    The objectives of this slide presentation are to: review the basic design criteria for fuel cells (FC's), review design considerations during the developmental phase that affected Block I and Block II vehicles, summarize the conditions that led to the failure of components in the FC's, and state the solution implemented for each failure. It reviews the location of the fuel cells, the fuel cell theory, the design criteria going into and coming out of the development phase, the failures and solutions of Block I and II, and the lessons learned.

  1. Space Vehicle Reliability Modeling in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tornga, Shawn Robert

    When modeling system performance of space-based detection systems it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  2. A simplified fragility analysis of fan type cable stayed bridges

    NASA Astrophysics Data System (ADS)

    Khan, R. A.; Datta, T. K.; Ahmad, S.

    2005-06-01

    A simplified fragility analysis of fan type cable stayed bridges using the Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge support is considered to be a risk consistent response spectrum which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system. The analysis provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising due to the variation in ground motion, material property, modeling, method of analysis, ductility factor and damage concentration effect. Probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three span double plane symmetrical fan type cable stayed bridge of total span 689 m is used as an illustrative example. The fragility curves for the bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations provides a smaller probability of failure as compared to ground motion with very large time lag between support excitations; and (iii) probability of failure may considerably increase under soft soil conditions.
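    A minimal sketch of the FOSM step for a linear limit state (capacity minus demand) with independent normal variables; the moments below are hypothetical, not the bridge study's values.

    ```python
    from math import erf, sqrt

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    mu_R, sigma_R = 250.0, 30.0   # capacity (kNm), mean and std dev (hypothetical)
    mu_S, sigma_S = 150.0, 40.0   # seismic demand (kNm), mean and std dev (hypothetical)

    # Limit state g = R - S; failure when g < 0.
    mu_g = mu_R - mu_S
    sigma_g = sqrt(sigma_R**2 + sigma_S**2)   # R and S assumed independent
    beta = mu_g / sigma_g                     # reliability index
    pf = norm_cdf(-beta)                      # probability of failure

    print(f"reliability index beta = {beta:.2f}, probability of failure = {pf:.2e}")
    ```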

  3. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
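    As a toy illustration of why gossip cycles scale logarithmically with system size, the sketch below simulates simple push gossiping of a failure notice; it is a generic model, not any of the paper's three algorithms.

    ```python
    import math
    import random

    def cycles_to_full_dissemination(n):
        informed = {0}                    # process 0 detects the failure
        cycles = 0
        while len(informed) < n:
            # Each cycle, every informed process pushes the notice to one random peer.
            targets = {random.randrange(n) for _ in informed}
            informed |= targets
            cycles += 1
        return cycles

    for n in (64, 1024, 16384):
        avg = sum(cycles_to_full_dissemination(n) for _ in range(20)) / 20
        print(f"N={n:6d}: ~{avg:.1f} cycles (log2 N = {math.log2(n):.1f})")
    ```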

  4. Spatial correlation analysis of cascading failures: Congestions and Blackouts

    PubMed Central

    Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo

    2014-01-01

    Cascading failures have become major threats to network robustness due to their potential catastrophic consequences, where local perturbations can induce global propagation of failures. Unlike failures spreading via direct contacts due to structural interdependencies, overload failures usually propagate through collective interactions among system components. Despite the critical need in developing protection or mitigation strategies in networks such as power grids and transportation, the propagation behavior of cascading failures is essentially unknown. Here we find, by analyzing our collected data, that jams in city traffic and faults in the power grid are spatially long-range correlated, with correlations decaying slowly with distance. Moreover, we find in daily traffic that the correlation length increases dramatically and reaches a maximum when the morning or evening rush hour is approaching. Our study can impact all efforts towards actively improving system resilience, ranging from evaluation of design schemes and development of protection strategies to implementation of mitigation programs. PMID:24946927

  5. Response analysis of curved bridge with unseating failure control system under near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Zuo, Ye; Sun, Guangjun; Li, Hongjing

    2018-01-01

    Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of the piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The results indicate that under near-fault ground motion the seismic response of the curved bridge is strong. The unseating failure control system performs effectively, reducing the pounding force of adjacent girders and the probability of deck unseating.

  6. 10 CFR 50.73 - Licensee event report system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... plant design; or (2) Normal and expected wear or degradation. (x) Any event that posed an actual threat... discovery of each component or system failure or procedural error. (J) For each human performance related...

  7. Cyber-Physical System Security of Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagle, Jeffery E.

    2012-01-31

    This panel presentation will provide perspectives on cyber-physical system security of smart grids. As smart grid technologies are deployed, the interconnected nature of these systems is becoming more prevalent and more complex, and the cyber component of this cyber-physical system is increasing in importance. Studying system behavior in the face of failures (e.g., cyber attacks) allows a characterization of the systems' response to failure scenarios, loss of communications, and other changes in system environment (such as the need for emergent updates and rapid reconfiguration). The impact of such failures on the availability of the system can be assessed and mitigation strategies considered. Scenarios associated with confidentiality, integrity, and availability are considered. The cyber security implications associated with the American Recovery and Reinvestment Act of 2009 in the United States are discussed.

  8. The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Gunawan, Insan; Taufik, Ahmad

    2007-10-01

    There are many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for the economic point of view given the prolonged economic crisis and the availability of local components for the LWP, and to sustain the economic (agricultural) productivity of the society. In the study, a fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is discussed thoroughly in order to improve the performance of the LWP applied in the dry-land watering system of the Mesuji district of Lampung province, Indonesia. In the early stage, all local components of the LWP were classified in terms of function, yielding four groups of components. All sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes and (3) common failure modes. In the data-processing stage, an available software package, ITEM, was deployed. It was observed that the component achieved a relatively long operational life of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components suffering from failure and an overhaul priority were identified quantitatively. From the year-long pilot project, it can be concluded that the LWP is a reliable product for the societies, enhancing their economic productivity.
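    As an illustration of the quantitative step of an FTA, the sketch below combines basic-event probabilities through OR/AND gates under an independence assumption; the tree structure and all numbers are hypothetical, not the study's.

    ```python
    def gate_or(probs):
        """P(at least one input event occurs), assuming independence."""
        q = 1.0
        for p in probs:
            q *= 1.0 - p
        return 1.0 - q

    def gate_and(probs):
        """P(all input events occur), assuming independence."""
        q = 1.0
        for p in probs:
            q *= p
        return q

    # Hypothetical basic-event probabilities over one season of operation.
    bearing_seizure, blade_fatigue = 0.02, 0.05
    pump_rod_break, seal_leak = 0.03, 0.10

    rotor_fails = gate_or([bearing_seizure, blade_fatigue])
    pump_fails = gate_or([pump_rod_break, seal_leak])
    # Top event: no water delivered if either subsystem fails.
    top = gate_or([rotor_fails, pump_fails])
    print(f"P(top event) = {top:.3f}")
    ```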

  9. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used and represents a supersonic fighter which includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase, a general evaluation of the failure is performed with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape are also analyzed in this thesis. These were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm addresses directly the complexity and multi-dimensionality associated with a damaged aircraft dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.

  10. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, the interactions and relationships among components, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
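
    A minimal way to see the reported threshold behavior is to simulate site percolation on a random graph and watch the giant component collapse as the fraction of failed nodes grows. The sketch below uses a single Erdős-Rényi graph rather than the paper's coupled interdependent networks, so it illustrates the phenomenon, not the paper's exact model; the graph size and mean degree are assumptions.

```python
# Percolation sketch: remove a random fraction of nodes and track the relative
# size of the giant component. Illustrates the collapse threshold on a single
# random graph (a deliberate simplification of the interdependent-network case).

import random
import networkx as nx

def giant_fraction(g: nx.Graph, kill_fraction: float) -> float:
    h = g.copy()
    kill = random.sample(list(h.nodes), int(kill_fraction * h.number_of_nodes()))
    h.remove_nodes_from(kill)
    if h.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(h), key=len)
    return len(largest) / g.number_of_nodes()

random.seed(1)
g = nx.erdos_renyi_graph(n=2000, p=4 / 2000)  # mean degree ~4
for f in [0.0, 0.2, 0.4, 0.6, 0.7, 0.75, 0.8, 0.9]:
    print(f"fail fraction {f:.2f} -> giant component {giant_fraction(g, f):.3f}")
```

    For a random graph with mean degree 4, the giant component should vanish near a removed fraction of 1 - 1/4 = 0.75, which the printed sweep makes visible.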

  11. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community has been seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  12. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community has been seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  13. Failure criterion for materials with spatially correlated mechanical properties

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.; Or, D.

    2015-03-01

    The role of spatially correlated mechanical elements in the failure behavior of heterogeneous materials represented by fiber bundle models (FBMs) was evaluated systematically for different load redistribution rules. Increasing the range of spatial correlation for FBMs with local load sharing is marked by a transition from ductilelike failure characteristics into brittlelike failure. The study identified a global failure criterion based on macroscopic properties (external load and cumulative damage) that is independent of spatial correlation or load redistribution rules. This general metric could be applied to assess the mechanical stability of complex and heterogeneous systems and thus provide an important component for early warning of a class of geophysical ruptures.
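
    A fiber bundle model is simple enough to sketch directly. The version below uses equal (global) load sharing and uniform strength thresholds, a deliberate simplification of the paper's local-load-sharing, spatially correlated variant; it shows the abrupt transition from a partially damaged but stable bundle to total collapse as the load crosses a critical value.

```python
# Minimal fiber-bundle model with equal (global) load sharing.
# Each fiber has a random strength threshold; under a total load F the stress
# per surviving fiber is F / n_alive, and fibers whose threshold is exceeded
# break, redistributing load until the bundle stabilizes or collapses.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
thresholds = rng.uniform(0.0, 1.0, size=n)  # fiber strength thresholds

def surviving_fibers(total_load: float) -> int:
    alive = np.ones(n, dtype=bool)
    while True:
        n_alive = alive.sum()
        if n_alive == 0:
            return 0
        stress = total_load / n_alive
        broken = alive & (thresholds < stress)
        if not broken.any():
            return n_alive
        alive &= ~broken

for load in [0.10, 0.20, 0.24, 0.25, 0.26]:
    frac = surviving_fibers(load * n) / n
    print(f"load per fiber {load:.2f} -> surviving fraction {frac:.3f}")
```

    For thresholds uniform on [0, 1], the equal-load-sharing bundle is known to collapse at a load per fiber of 0.25, which the sweep above straddles.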

  14. Behavioral System Feedback Measurement Failure: Sweeping Quality under the Rug

    ERIC Educational Resources Information Center

    Mihalic, Maria T.; Ludwig, Timothy D.

    2009-01-01

    Behavioral Systems rely on valid measurement systems to manage processes and feedback and to deliver contingencies. An examination of measurement system components designed to track customer service quality of furniture delivery drivers revealed the measurement system failed to capture information it was designed to measure. A reason for this…

  15. PV System Component Fault and Failure Compilation and Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  16. Final Report: Studies in Structural, Stochastic and Statistical Reliability for Communication Networks and Engineered Systems

    DTIC Science & Technology

    to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.

  17. Developing Crash-Resistant Electronic Services.

    ERIC Educational Resources Information Center

    Almquist, Arne J.

    1997-01-01

    Libraries' dependence on computers can lead to frustrations for patrons and staff during downtime caused by computer system failures. Advice for reducing the number of crashes is provided, focusing on improved training for systems staff, better management of library systems, and the development of computer systems using quality components which…

  18. Preliminary design of a solar central receiver for a site-specific repowering application (Saguaro Power Plant). Volume IV. Appendixes. Final report, October 1982-September 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, E.R.

    1983-09-01

    The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.

  19. The development of control and monitoring system on marine current renewable energy Case study: strait of Toyapakeh - Nusa Penida, Bali

    NASA Astrophysics Data System (ADS)

    Arief, I. S.; Suherman, I. H.; Wardani, A. Y.; Baidowi, A.

    2017-05-01

    Control and monitoring is a continuous process for securing the assets of a marine current renewable energy installation. A control and monitoring system exists for each critical component, with the critical components identified through the Failure Mode Effect Analysis (FMEA) method. The process developed in this paper is accordingly based on a sensor matrix: the matrix correlates the critical components with the monitoring system, which is supported by sensors to aid decision-making.

  20. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses identified by the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to confidentiality, integrity, and availability (CIA).

  1. Failure Scenarios and Mitigations for the BABAR Superconducting Solenoid

    NASA Astrophysics Data System (ADS)

    Thompson, EunJoo; Candia, A.; Craddock, W. W.; Racine, M.; Weisend, J. G.

    2006-04-01

    The cryogenic department at the Stanford Linear Accelerator Center is responsible for the operation, troubleshooting, and upgrade of the 1.5 Tesla superconducting solenoid detector for the BABAR B-factory experiment. Events that disable the detector are rare but significantly impact the availability of the detector for physics research. As a result, a number of systems and procedures have been developed over time to minimize the downtime of the detector, for example, improved control systems, improved and automatic backup systems, and spares for all major components. Together they can prevent or mitigate many of the failures experienced by the utilities, mechanical systems, controls, and instrumentation. In this paper we describe various failure scenarios, their effect on the detector, and the modifications made to mitigate the effects of the failure. As a result of these modifications the reliability of the detector has increased significantly, with only three shutdowns of the detector due to the cryogenic systems over the last two years.

  2. Reliability analysis of a phaser measurement unit using a generalized fuzzy lambda-tau(GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Nowadays power consumption is increasing day by day. To fulfill the requirement of failure-free power, the planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems. The reliable performance of the PMU assures a failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for the controllability and observability of power systems, utilizing the available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In the GFLT, the components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma, and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, a ranking of the critical components of the system using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems.
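
    The alpha-cut arithmetic at the heart of lambda-tau style fuzzy reliability is easy to sketch for the simplest case: triangular fuzzy failure rates of components in series, whose interval bounds simply add at each alpha level. The numbers below are hypothetical, not the PMU data from the paper.

```python
# Alpha-cut arithmetic on triangular fuzzy failure rates, in the spirit of the
# lambda-tau approach (hypothetical rates, not the paper's PMU data).
# A triangular fuzzy number (a, m, b) has, at level alpha, the interval
# [a + alpha*(m - a), b - alpha*(b - m)]. For components in series, the system
# failure rate is the sum of component rates, so the interval bounds add.

def alpha_cut(tfn, alpha):
    a, m, b = tfn
    return (a + alpha * (m - a), b - alpha * (b - m))

# Hypothetical fuzzified failure rates (per hour) for three series components:
rates = [(1e-5, 2e-5, 4e-5), (5e-6, 1e-5, 2e-5), (2e-5, 3e-5, 5e-5)]

for alpha in (0.0, 0.5, 1.0):
    lo = sum(alpha_cut(r, alpha)[0] for r in rates)
    hi = sum(alpha_cut(r, alpha)[1] for r in rates)
    print(f"alpha={alpha:.1f}: system failure rate in [{lo:.2e}, {hi:.2e}]")
```

    At alpha = 1 the interval collapses to the crisp (most plausible) system rate, while smaller alpha levels widen it, which is how data uncertainty propagates into the reliability indices.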

  3. Simultaneously Coupled Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, C.; Santhanagopalan, S.; Sprague, M. A.

    2016-07-28

    Understanding the combined electrochemical-thermal and mechanical response of a system has a variety of applications, for example, structural failure from electrochemical fatigue and the potential induced changes of material properties. For lithium-ion batteries, there is an added concern over the safety of the system in the event of mechanical failure of the cell components. In this work, we present a generic multi-scale simultaneously coupled mechanical-electrochemical-thermal model to examine the interaction between mechanical failure and electrochemical-thermal responses. We treat the battery cell as a homogeneous material while locally we explicitly solve for the mechanical response of individual components using a homogenization model and the electrochemical-thermal responses using an electrochemical model for the battery. A benchmark problem is established to demonstrate the proposed modeling framework. The model shows the capability to capture the gradual evolution of cell electrochemical-thermal responses, and predicts the variation of those responses under different short-circuit conditions.

  4. Simultaneously Coupled Mechanical-Electrochemical-Thermal Simulation of Lithium-Ion Cells: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chao; Santhanagopalan, Shriram; Sprague, Michael A.

    2016-08-01

    Understanding the combined electrochemical-thermal and mechanical response of a system has a variety of applications, for example, structural failure from electrochemical fatigue and the potential induced changes of material properties. For lithium-ion batteries, there is an added concern over the safety of the system in the event of mechanical failure of the cell components. In this work, we present a generic multi-scale simultaneously coupled mechanical-electrochemical-thermal model to examine the interaction between mechanical failure and electrochemical-thermal responses. We treat the battery cell as a homogeneous material while locally we explicitly solve for the mechanical response of individual components using a homogenization model and the electrochemical-thermal responses using an electrochemical model for the battery. A benchmark problem is established to demonstrate the proposed modeling framework. The model shows the capability to capture the gradual evolution of cell electrochemical-thermal responses, and predicts the variation of those responses under different short-circuit conditions.

  5. Development of KSC program for investigating and generating field failure rates. Volume 2: Recommended format for reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results presented in this handbook are also included.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive to and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.

  7. Effects of Assuming Independent Component Failure Times, if They Actually Dependent, in a Series System.

    DTIC Science & Technology

    1984-10-26

    [OCR-damaged DTIC record; only fragments are recoverable.] Keywords: test for independence; … of the product life estimator; dependent risks. Abstract fragments: "… the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, then what is the …"; "… dependencies may be present, then what is the magnitude of the estimation error?"; "The third specific aim will attempt to obtain bounds on the …"

  8. Software Health Management: A Short Review of Challenges and Existing Techniques

    NASA Technical Reports Server (NTRS)

    Pipatsrisawat, Knot; Darwiche, Adnan; Mengshoel, Ole J.; Schumann, Johann

    2009-01-01

    Modern spacecraft (as well as most other complex mechanisms like aircraft, automobiles, and chemical plants) rely more and more on software, to a point where software failures have caused severe accidents and loss of missions. Software failures during a manned mission can cause loss of life, so there are severe requirements to make the software as safe and reliable as possible. Typically, verification and validation (V&V) has the task of making sure that all software errors are found before the software is deployed and that it always conforms to the requirements. Experience, however, shows that this gold standard of error-free software cannot be reached in practice. Even if the software alone is free of glitches, its interoperation with the hardware (e.g., with sensors or actuators) can cause problems. Unexpected operational conditions or changes in the environment may ultimately cause a software system to fail. Is there a way to surmount this problem? In most modern aircraft and many automobiles, hardware such as central electrical, mechanical, and hydraulic components are monitored by IVHM (Integrated Vehicle Health Management) systems. These systems can recognize, isolate, and identify faults and failures, both those that already occurred as well as imminent ones. With the help of diagnostics and prognostics, appropriate mitigation strategies can be selected (replacement or repair, switch to redundant systems, etc.). In this short paper, we discuss some challenges and promising techniques for software health management (SWHM). In particular, we identify unique challenges for preventing software failure in systems which involve both software and hardware components. We then present our classifications of techniques related to SWHM. These classifications are performed based on dimensions of interest to both developers and users of the techniques, and hopefully provide a map for dealing with software faults and failures.

  9. Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke

    NASA Technical Reports Server (NTRS)

    Yen, C. L.; Smith, D. B.

    1973-01-01

    A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.

  10. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into four divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight Programs. For space missions, payload is a critical concept; balancing what hardware can be replaced at the component level versus by Orbital Replacement Units (ORU) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS combines several data sources: the International Space Station PART database for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the Excel spreadsheet comprises information on ISS components including operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORU), date of placement, unit weight, frequency of part, etc. The motivation for creating such a database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure data on parts along with their mass, which will be used to make proper decisions for long-duration space flight missions.

  11. Remote maintenance monitoring system

    NASA Technical Reports Server (NTRS)

    Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)

    1992-01-01

    A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.

  12. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to series and parallel models; hence, the models can be specialized to series, parallel, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
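
    A minimal sketch of the "k failures out of n components" model described above, using a constant component failure rate (illustrative values, not the STSA data); setting k = 1 recovers the series model and k = n the parallel model.

```python
# Subsystem of n identical components that fails once k of them have failed.
# With component failure probability q(t) = 1 - exp(-lambda * t), the
# subsystem survives if at most k-1 components have failed by time t.
# k = 1 gives a series system; k = n gives a parallel system.

from math import comb, exp

def subsystem_reliability(n: int, k: int, lam: float, t: float) -> float:
    q = 1.0 - exp(-lam * t)          # component failure probability by time t
    return sum(comb(n, i) * q**i * (1.0 - q)**(n - i) for i in range(k))

lam = 1e-5   # hypothetical constant failure rate per hour
t = 8760.0   # one year of operation
for n, k in [(20, 1), (20, 5), (20, 20)]:
    print(f"n={n}, k={k}: R = {subsystem_reliability(n, k, lam, t):.4f}")
```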

  13. Prognostics for Electronics Components of Avionics Systems

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saha, Bhaskar; Wysocki, Philip F.; Goebel, Kai F.

    2009-01-01

    Electronics components have an increasingly critical role in avionics systems and in the development of future aircraft systems. Prognostics of such components is becoming a very important research field as a result of the need to provide aircraft systems with system-level health management. This paper reports on a prognostics application for electronics components of avionics systems, in particular, its application to the Insulated Gate Bipolar Transistor (IGBT). The remaining useful life prediction for the IGBT is based on the particle filter framework, leveraging data from accelerated aging tests on IGBTs. The accelerated aging test provided thermal-electrical overstress by applying thermal cycling to the device. In-situ state monitoring data, including measurements of the steady-state voltages and currents, electrical transients, and thermal transients, are recorded and used as potential precursors of failure.
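
    A toy particle filter conveys the RUL-prediction mechanics described above: particles carry uncertain degradation states, weights are updated against each new measurement, and the surviving particles are propagated to a failure threshold to build an RUL distribution. Everything below (degradation model, noise levels, threshold) is a synthetic assumption, not the IGBT aging data.

```python
# Minimal particle-filter sketch for degradation tracking and RUL prediction
# (synthetic degradation data, not the IGBT measurements).

import numpy as np

rng = np.random.default_rng(42)
n_particles = 2000
threshold = 1.0                     # degradation level defining failure

# State per particle: degradation level x and growth rate r (both uncertain).
x = rng.normal(0.1, 0.02, n_particles)
r = rng.normal(0.05, 0.02, n_particles)
w = np.full(n_particles, 1.0 / n_particles)

def step(x, r):
    """Propagate degradation one cycle with process noise."""
    return x + r + rng.normal(0.0, 0.005, x.shape), r

# Synthetic measurements from a "true" degradation path.
true_x, true_r, sigma_z = 0.1, 0.06, 0.02
for t in range(8):
    true_x += true_r
    z = true_x + rng.normal(0.0, sigma_z)
    x, r = step(x, r)                                   # predict
    w *= np.exp(-0.5 * ((z - x) / sigma_z) ** 2)        # weight by likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)     # resample
    x, r, w = x[idx], r[idx], np.full(n_particles, 1.0 / n_particles)

# RUL: propagate each particle until it crosses the failure threshold
# (capped at 1000 cycles; non-failing particles keep RUL = 0 in this toy).
rul = np.zeros(n_particles)
xe, re = x.copy(), r.copy()
alive = xe < threshold
steps = 0
while alive.any() and steps < 1000:
    steps += 1
    xe, re = step(xe, re)
    just_failed = alive & (xe >= threshold)
    rul[just_failed] = steps
    alive &= ~just_failed

print(f"RUL estimate: median {np.median(rul):.0f} cycles, "
      f"90% interval [{np.percentile(rul, 5):.0f}, {np.percentile(rul, 95):.0f}]")
```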

  14. Detection of structural deterioration and associated airline maintenance problems

    NASA Technical Reports Server (NTRS)

    Henniker, H. D.; Mitchell, R. G.

    1972-01-01

    Airline operations involving the detection of structural deterioration and associated maintenance problems are discussed. The standard approach to the maintenance and inspection of aircraft components and systems is described. The frequency of inspections and the application of preventive maintenance practices are examined. The types of failure which airline transport aircraft encounter and the steps taken to prevent catastrophic failure are reported.

  15. Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem

    NASA Technical Reports Server (NTRS)

    Lowery, H. J.; Haufler, W. A.; Pietz, K. C.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.

  16. Durability of implanted electrodes and leads in an upper-limb neuroprosthesis.

    PubMed

    Kilgore, Kevin L; Peckham, P Hunter; Keith, Michael W; Montague, Fred W; Hart, Ronald L; Gazdik, Martha M; Bryden, Anne M; Snyder, Scott A; Stage, Thomas G

    2003-01-01

    Implanted neuroprosthetic systems have been successfully used to provide upper-limb function for over 16 years. A critical aspect of these implanted systems is the safety, stability, and reliability of the stimulating electrodes and leads. These components are (1) the stimulating electrode itself, (2) the electrode lead, and (3) the lead-to-device connector. A failure in any of these components causes the direct loss of the capability to activate a muscle consistently, usually resulting in a decrement in the function provided by the neuroprosthesis. Our results indicate that the electrode, lead, and connector system are extremely durable. We analyzed 238 electrodes that have been implanted as part of an upper-limb neuroprosthesis. Each electrode had been implanted at least 3 years, with a maximum implantation time of over 16 years. Only three electrode-lead failures and one electrode infection occurred, for a survival rate of almost 99 percent. Electrode threshold measurements indicate that the electrode response is stable over time, with no evidence of electrode migration or continual encapsulation in any of the electrodes studied. These results have an impact on the design of implantable neuroprosthetic systems. The electrode-lead component of these systems should no longer be considered a weak technological link.

  17. PRA (Probabilistic Risk Assessment) Applications Program for inspection at Oconee Unit 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gore, B.F.; Vo, T.V.; Harris, M.S.

    1987-10-01

    The extensive Oconee-3 PRA performed by EPRI has been analyzed to identify plant systems and components important to minimizing public risk, and to identify the primary failure modes of these components. This information has been tabulated, and correlated with inspection modules from the NRC Inspection and Enforcement Manual. The report presents a series of tables, organized by system and prioritized by public risk (in person-rem per year), which identify components associated with 98% of the inspectable risk due to plant operation. External events (earthquakes, tornadoes, fires and floods) are not addressed because inspections cannot directly minimize the risks from these events; however, flooding caused by the breach of internal systems is addressed. The systems addressed, in descending order of risk importance, are: Reactor Building Spray, R B Cooling, Condenser Circulating Water, Safety Relief Valves, Low Pressure Injection, Standby Shutdown Facility-High Pressure Injection, Low-Pressure Service Water, and Emergency Feedwater. This ranking is based on the Fussel-Vesely measure of risk importance, i.e., the fraction of the total risk which involves failures of the system of interest. 8 refs., 25 tabs.

  18. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
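
    The precision-weighted averaging described above is easiest to see in the conjugate normal-normal case, sketched below with illustrative numbers: the posterior mean weighs the prior mean and the sample mean by their respective precisions, so more data shifts the estimate toward the observations.

```python
# Precision-weighted Bayesian update, conjugate normal-normal case
# (illustrative numbers, not the project's test-bed data).

import numpy as np

def update_normal(prior_mean, prior_var, data, data_var):
    """Posterior of a normal mean with known data variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / data_var)
    return post_mean, post_var

# Prior belief about a component's mean time to failure (hours), e.g. from
# reliability analysis of similar components (Type I prognostics).
prior_mean, prior_var = 5000.0, 800.0**2

# Observed failure times as operating experience accumulates.
rng = np.random.default_rng(7)
data = rng.normal(4200.0, 300.0, size=5)

post_mean, post_var = update_normal(prior_mean, prior_var, data, 300.0**2)
print(f"prior:     {prior_mean:.0f} +/- {prior_var**0.5:.0f} h")
print(f"posterior: {post_mean:.0f} +/- {post_var**0.5:.0f} h")
```

    With five observations the posterior already sits close to the sample mean, illustrating how accumulating data swamps the prior.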

  19. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  20. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
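
    A toy push-gossip simulation conveys why the cycle count scales logarithmically with system size: each cycle, every process forwards its current set of suspected failures to one random peer, so information spreads multiplicatively. This is an illustration of the general gossip mechanism, not the two algorithms proposed in the paper.

```python
# Toy push-gossip spread of a failed-process list. Each alive process holds a
# local set of suspected failed ranks; every cycle it pushes its set to one
# random peer. Cycles to global agreement grow roughly like log2(n).

import math
import random

def gossip_cycles_to_consensus(n_alive: int, failed: set) -> int:
    views = [set() for _ in range(n_alive)]
    views[0] = set(failed)           # one process initially detects the failures
    cycles = 0
    while any(v != failed for v in views):
        cycles += 1
        for i in range(n_alive):
            peer = random.randrange(n_alive)
            views[peer] |= views[i]  # push: merge my view into the peer's
    return cycles

random.seed(3)
for n in (64, 256, 1024, 4096):
    c = gossip_cycles_to_consensus(n, failed={n, n + 1})
    print(f"{n:5d} processes: {c:2d} cycles (log2 n = {math.log2(n):.0f})")
```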

  2. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
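
    The mechanics of back-to-back testing are simple to sketch: drive two independently implemented, functionally equivalent components with the same random inputs and count disagreements. The two square-root versions below are stand-ins for independently developed implementations of one specification.

```python
# Back-to-back testing sketch: compare two functionally equivalent components
# on the same random inputs and flag any output disagreement.

import math
import random

def sqrt_version_a(x: float) -> float:
    # "Version A": the library routine.
    return math.sqrt(x)

def sqrt_version_b(x: float) -> float:
    # "Version B": an independently implemented Newton iteration.
    guess = x if x > 1.0 else 1.0
    for _ in range(50):
        guess = 0.5 * (guess + x / guess)
    return guess

random.seed(0)
disagreements = 0
for _ in range(100_000):
    x = random.uniform(0.0, 1e6)
    if abs(sqrt_version_a(x) - sqrt_version_b(x)) > 1e-6 * max(1.0, x):
        disagreements += 1
print(f"disagreements: {disagreements} / 100000")
```

    Any disagreement points to a fault in at least one version; the models above quantify how much reliability such cross-checking buys when the versions' failures are not perfectly correlated.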

  3. From Diagnosis to Action: An Automated Failure Advisor for Human Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Lilly; Baskaran, Vijayakumar; Morris, Paul; Mcdermott, William; Ossenfort, John; Bajwa, Anupa

    2015-01-01

    The major goal of current space system development at NASA is to enable human travel to deep space locations such as Mars and asteroids. At that distance, round-trip communication with ground operators may take close to an hour, so it becomes unfeasible to seek ground operator advice for problems that require immediate attention, either for crew safety or for activities that need to be performed at specific times for the attainment of scientific results. To achieve this goal, major reliance will need to be placed on automation systems capable of aiding the crew in detecting and diagnosing failures, assessing consequences of these failures, and providing guidance in repair activities that may be required. We report here on the most current step in the continuing development of such a system: the addition of a Failure Response Advisor. In simple terms, we have a system in place, the Advanced Caution and Warning System (ACAWS), to tell us what happened (failure diagnosis) and what happened because that happened (failure effects). The Failure Response Advisor will tell us what to do about it, how long until something must be done, and why it is important that something be done, and it will begin to approach the complex reasoning that is generally required for an optimal approach to automated system health management. This advice is based on criticality and various timing elements, such as durations of activities and of component repairs, failure effects delay, and other factors. The failure advice is provided to operators (crew and mission controllers) together with the diagnostic and effects information. The operators also have the option to drill down for more information about the failure and the reasons for any suggested priorities.

  4. Model 0A wind turbine generator FMEA

    NASA Technical Reports Server (NTRS)

    Klein, William E.; Lalli, Vincent R.

    1989-01-01

    The results of Failure Modes and Effects Analysis (FMEA) conducted for the Wind Turbine Generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. The single-point failures were eliminated for most of the systems. The blade system was the only exception. The qualitative probability of a blade separating was estimated at level D-remote. Many changes were made to the hardware as a result of this analysis. The most significant change was the addition of the safety system. Operational experience and need to improve machine availability have resulted in subsequent changes to the various systems which are also reflected in this FMEA.

  5. Accounting for Epistemic Uncertainty in Mission Supportability Assessment: A Necessary Step in Understanding Risk and Logistics Requirements

    NASA Technical Reports Server (NTRS)

    Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William

    2017-01-01

    Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.

  6. Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings; and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.

  7. A Critical Analysis of the Conventionally Employed Creep Lifing Methods

    PubMed Central

    Abdallah, Zakaria; Gray, Veronica; Whittaker, Mark; Perkins, Karen

    2014-01-01

    The deformation of structural alloys presents problems for power plants and aerospace applications due to the demand for elevated temperatures for higher efficiencies and reductions in greenhouse gas emissions. The materials used in such applications experience harsh environments which may lead to deformation and failure of critical components. To avoid such catastrophic failures and also increase efficiency, future designs must utilise novel/improved alloy systems with enhanced temperature capability. In recognising this issue, a detailed understanding of creep is essential for the success of these designs by ensuring components do not experience excessive deformation which may ultimately lead to failure. To achieve this, a variety of parametric methods have been developed to quantify creep and creep fracture in high temperature applications. This study reviews a number of well-known traditionally employed creep lifing methods with some more recent approaches also included. The first section of this paper focuses on predicting the long-term creep rupture properties which is an area of interest for the power generation sector. The second section looks at pre-defined strains and the re-production of full creep curves based on available data which is pertinent to the aerospace industry where components are replaced before failure. PMID:28788623

  8. HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.

    1976-12-01

    The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.

  9. Towards Prognostics for Electronics Components

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Celaya, Jose R.; Wysocki, Philip F.; Goebel, Kai F.

    2013-01-01

    Electronics components have an increasingly critical role in avionics systems and in the development of future aircraft systems. Prognostics of such components is becoming a very important research field as a result of the need to provide aircraft systems with system-level health management information. This paper focuses on a prognostics application for electronics components within avionics systems, and in particular its application to an Insulated Gate Bipolar Transistor (IGBT). This application utilizes remaining useful life prediction, accomplished by employing the particle filter framework and leveraging data from accelerated aging tests on IGBTs. These tests induced thermal-electrical overstresses by applying thermal cycling to the IGBT devices. In-situ state monitoring data, including measurements of steady-state voltages and currents, electrical transients, and thermal transients, are recorded and used as potential precursors of failure.

  10. Interfaces - Weak Links, Yet Great Opportunities

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Dimofte, Florin; Chupp, Raymond E.; Steinetz, Bruce M.

    2011-01-01

    Inadequate turbomachine interface design can rapidly degrade system performance, yet it also provides great opportunity for improvements. Engineered coatings of seals and bearing interfaces are major issues in the operational life of power systems. Coatings, films, and the combined use of both metals and ceramics play a major role in maintaining component life. Interface coatings, like lubricants, are sacrificial for the benefit of the component. Bearing and sealing surfaces are routinely protected by tribologically paired coatings, such as silicon diamond-like coatings (SiDLC) in combination with an oil-lubricated wave bearing, which prolongs bearing operational life. Likewise, among the several methods used or researched for detecting interface failures, dopants within coatings can reveal failures in functionally graded ceramic coatings. The Bozzolo-Ferrante-Smith (BFS) materials models and quantum mechanical tools, employed in interface design, are discussed.

  11. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 3

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    Structural failure is rarely a "sudden death" type of event; such sudden failures may occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequences of cumulative damage will affect the reliability of surviving components and finally cause collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and therefore will be investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.

  12. Detailed Post-Soft Impact Progressive Damage Assessment for Hybrid Structure Jet Engines

    NASA Technical Reports Server (NTRS)

    Siddens, Aaron; Bayandor, Javid; Celestina, Mark L.

    2014-01-01

    Currently, certification of engine designs for resistance to bird strike relies on physical tests. Predictive modeling of engine structural damage has mostly been limited to evaluation of the direct impact of a bird on individual forward-section components, such as fan blades within a fixed frame of reference. Such models must be extended to include interactions among engine components under operating conditions to evaluate the full extent of engine damage. This paper presents the results of a study aimed at developing a methodology for evaluating bird strike damage in advanced propulsion systems incorporating hybrid composite/metal structures. The initial degradation and failure of individual fan blades struck by a bird were investigated. Subsequent damage to other fan blades and engine components due to the resulting violent fan assembly vibrations and fragmentation was further evaluated. Various modeling parameters for the bird and engine components were investigated to determine guidelines for accurately capturing initial damage and progressive failure of engine components. A novel hybrid structure modeling approach was then investigated and incorporated into the crashworthiness methodology. Such a tool is invaluable to the process of design, development, and certification of future advanced propulsion systems.

  13. Space shuttle solid rocket booster recovery system definition, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
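
    As an illustration of the Monte Carlo technique described above, the sketch below (Python) samples hypothetical normal distributions for water-impact velocity and component strength capability and counts a failure whenever demand exceeds capacity. The distributions and numbers are assumptions for illustration, not the study's.

        import random

        def failure_probability(n_trials=100_000, seed=1):
            """Estimate P(failure) by random sampling of impact velocity
            and component strength; both distributions are illustrative."""
            rng = random.Random(seed)
            failures = 0
            for _ in range(n_trials):
                velocity = rng.gauss(25.0, 3.0)   # water-impact velocity, m/s (assumed)
                strength = rng.gauss(30.0, 4.0)   # strength capability, m/s-equivalent (assumed)
                if velocity > strength:           # limit state: demand exceeds capacity
                    failures += 1
            return failures / n_trials

        print(f"estimated failure probability: {failure_probability():.4f}")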

  14. Recent advances in Ni-H2 technology at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Gonzalezsanabria, O. D.; Britton, D. L.; Smithrick, J. J.; Reid, M. A.

    1986-01-01

    The NASA Lewis Research Center has concentrated its efforts on advancing the Ni-H2 system technology for low Earth orbit applications. Component technology as well as the design principles were studied in an effort to understand the system behavior and failure mechanisms in order to increase performance and extend cycle life. The design principles were previously addressed. The component development is discussed, in particular the separator and nickel electrode and how these efforts will advance the Ni-H2 system technology.

  15. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
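
    The N+1 parity encoding highlighted here amounts to an exclusive-or across the N data disks, so any single failed disk can be rebuilt from the survivors. A minimal sketch, with toy data blocks standing in for disk contents:

        from functools import reduce

        def parity(blocks):
            """XOR the corresponding bytes of N equal-length data blocks."""
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        data = [b"disk0data", b"disk1data", b"disk2data"]   # N = 3 data disks
        p = parity(data)                                    # the +1 parity disk

        # Recover disk 1 from the parity disk and the surviving data disks.
        rebuilt = parity([data[0], data[2], p])
        assert rebuilt == data[1]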

  16. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    NASA Astrophysics Data System (ADS)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah; Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment, so risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit, where the failures considered are Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the storage tank component. The failure probability is determined using Fault Tree Analysis (FTA), and the resulting heat radiation is also calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by revising the P&ID scheme of the LNG regasification unit in pipeline number 1312 and unit 1; after this revision, the failure probability obtained is 4.22 × 10^-6.
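
    The fault-tree arithmetic behind such a result combines independent basic events through OR and AND gates. A minimal sketch follows; the gate structure and probabilities are placeholders, not the paper's actual LNG tree.

        from math import prod

        def or_gate(*p):   # at least one input event occurs
            return 1.0 - prod(1.0 - x for x in p)

        def and_gate(*p):  # all input events occur
            return prod(p)

        # Placeholder basic events for a jet-fire branch (illustrative values only).
        leak     = or_gate(1e-4, 5e-5)   # flange leak OR pipe crack
        ignition = 1e-2                  # immediate ignition given a release
        jet_fire = and_gate(leak, ignition)
        print(f"P(jet fire) = {jet_fire:.3e}")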

  17. Reliability Effects of Surge Current Testing of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2007-01-01

    Solid tantalum capacitors are widely used in space applications to filter low-frequency ripple currents in power supply circuits and to stabilize DC voltages in the system. Tantalum capacitors manufactured per military specifications (MIL-PRF-55365) are established reliability components with fewer than 0.001% failures per 1000 hours (a failure rate below 10 FIT) for grades D or S, placing these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they can have catastrophic consequences for the system. This is due to the short-circuit failure mode, which might be damaging to a power supply, and also to the capability of tantalum capacitors with manganese cathodes to self-ignite when a failure occurs in low-impedance applications. During such a failure, a substantial amount of energy is released by the exothermic reaction of the tantalum pellet with oxygen generated by the overheated manganese oxide cathode, resulting not only in destruction of the part but also in damage to the board and surrounding components. A specific feature of tantalum capacitors, compared to ceramic parts, is a relatively large value of capacitance, which in contemporary low-size chip capacitors reaches tens to hundreds of microfarads. This can result in so-called surge current or turn-on failures when the board is first powered up. Such a failure, which is considered the most prevalent type of failure in tantalum capacitors [1], is due to fast changes of the voltage in the circuit, dV/dt, producing high surge current spikes, I(sub sp) = C x (dV/dt), when current in the circuit is unrestricted. These spikes can reach hundreds of amperes and cause catastrophic failures in the system. The mechanism of surge current failures has not been completely understood yet, and different hypotheses have been discussed in the relevant literature. These include a sustained scintillation breakdown model [1-3]; electrical oscillations in circuits with a relatively high inductance [4-6]; local overheating of the cathode [5, 7, 8]; mechanical damage to the tantalum pentoxide dielectric caused by the impact of MnO2 crystals [2, 9, 10]; and stress-induced generation of electron traps caused by electromagnetic forces developed during current spikes [11]. A commonly accepted explanation of surge current failures is that with an unlimited current supply during surge current conditions, the self-healing mechanism in tantalum capacitors does not work, and what would be a minor scintillation spike if the current were limited becomes a catastrophic failure of the part [1, 12]. However, our data show that the scintillation breakdown voltages are significantly greater than the surge current breakdown voltages, so it is still not clear why a part that has no scintillations would fail at the same voltage during surge current testing (SCT).
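
    The quoted surge-current relation, I(sub sp) = C x (dV/dt), is easy to evaluate. With assumed but plausible values, the sketch below shows why unrestricted turn-on transients can reach hundreds of amperes:

        def surge_current(capacitance_f, dv, dt):
            """Peak surge current I_sp = C * (dV/dt) for an unrestricted turn-on."""
            return capacitance_f * dv / dt

        # 100 uF tantalum capacitor, 28 V bus reached in 10 us (assumed values).
        i_sp = surge_current(100e-6, 28.0, 10e-6)
        print(f"I_sp = {i_sp:.0f} A")   # 280 A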

  18. Overview of the Systems Special Investigation Group investigation

    NASA Technical Reports Server (NTRS)

    Mason, James B.; Dursch, Harry; Edelman, Joel

    1993-01-01

    The Long Duration Exposure Facility (LDEF) carried a remarkable variety of electrical, mechanical, thermal, and optical systems, subsystems, and components. Nineteen of the fifty-seven experiments flown on LDEF contained functional systems that were active on-orbit. Almost all of the other experiments possessed at least a few specific components of interest to the Systems Special Investigation Group (Systems SIG), such as adhesives, seals, fasteners, optical components, and thermal blankets. Almost all top-level functional testing of the active LDEF and experiment systems has been completed. Failure analysis of both LDEF hardware and individual experiments that failed to perform as designed has also been completed. Testing of system components and experimenter hardware of interest to the Systems SIG is ongoing. All available testing and analysis results were collected and integrated by the Systems SIG, and an overview of our findings is provided. An LDEF Optical Experiment Database containing information for all 29 optics-related experiments is also discussed.

  19. Algebraic geometric methods for the stabilizability and reliability of multivariable and of multimode systems

    NASA Technical Reports Server (NTRS)

    Anderson, B. D. O.; Brockett, R. W.; Byrnes, C. I.; Ghosh, B. K.; Stevens, P. K.

    1983-01-01

    The extent to which feedback can alter the dynamic characteristics (e.g., instability, oscillations) of a control system, possibly operating in one or more modes (e.g., failure versus nonfailure of one or more components) is examined.

  20. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability utilize only the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate reliability even when few or no failures have been recorded; point estimates for the latter case were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating reliability that consider the entire failure record for all design stages has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted, and an example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
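
    One simple way to pool failure data across design stages, offered here only as a hedged illustration and not as the author's bivariate Poisson/Consael model, is a conjugate gamma-Poisson update in which discounted earlier-stage data form the prior for the current stage:

        def gamma_poisson_rate(failures_prev, hours_prev, failures_cur, hours_cur,
                               discount=0.5, a0=0.5, b0=1.0):
            """Posterior mean failure rate for the current design stage.

            Earlier-stage data enter the prior with a discount in (0, 1] to
            reflect that redesign changed, but did not replace, the component.
            """
            a = a0 + discount * failures_prev + failures_cur
            b = b0 + discount * hours_prev + hours_cur
            return a / b   # posterior mean of a Gamma(a, b) rate

        # Stage 1: 3 failures in 2000 h; stage 2 (current): 0 failures in 500 h.
        lam = gamma_poisson_rate(3, 2000.0, 0, 500.0)
        print(f"posterior mean rate: {lam:.2e} per hour")

    Note that the estimate remains a usable point value even with zero failures recorded on the current stage, which is the situation the abstract highlights.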

  1. Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem

    NASA Technical Reports Server (NTRS)

    Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, D. I.; Han, S. H.

    A PSA analyst has been manually determining fire-induced component failure modes and modeling them into the PSA logic. These can be difficult and time-consuming tasks, as they require much information and many events must be modeled. KAERI has been developing the IPRO-ZONE (interface program for constructing a zone effect table) to facilitate fire PSA work in identifying and modeling fire-induced component failure modes and to construct a one-top fire event PSA model. With the output of the IPRO-ZONE, the AIMS-PSA, and an internal event one-top PSA model, a one-top fire events PSA model is automatically constructed. The outputs of the IPRO-ZONE include information on fire zones/fire scenarios, fire propagation areas, equipment failure modes affected by a fire, internal PSA basic events corresponding to fire-induced equipment failure modes, and fire events to be modeled. This paper introduces the IPRO-ZONE and its application results to the fire PSA of Ulchin Unit 3 and SMART (System-integrated Modular Advanced Reactor). (authors)

  3. The digestive tract as the origin of systemic inflammation.

    PubMed

    de Jong, Petrus R; González-Navajas, José M; Jansen, Nicolaas J G

    2016-10-18

    Failure of gut homeostasis is an important factor in the pathogenesis and progression of systemic inflammation, which can culminate in multiple organ failure and fatality. Pathogenic events in critically ill patients include mesenteric hypoperfusion, dysregulation of gut motility, and failure of the gut barrier with resultant translocation of luminal substrates. This is followed by the exacerbation of local and systemic immune responses. All these events can contribute to pathogenic crosstalk between the gut, circulating cells, and other organs like the liver, pancreas, and lungs. Here we review recent insights into the identity of the cellular and biochemical players from the gut that have key roles in the pathogenic turn of events in these organ systems that derange the systemic inflammatory homeostasis. In particular, we discuss the dangers from within the gastrointestinal tract, including metabolic products from the liver (bile acids), digestive enzymes produced by the pancreas, and inflammatory components of the mesenteric lymph.

  4. Accelerated Aging System for Prognostics of Power Semiconductor Devices

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Vashchenko, Vladislav; Wysocki, Philip; Saha, Sankalita

    2010-01-01

    Prognostics is an engineering discipline that focuses on estimation of the health state of a component and the prediction of its remaining useful life (RUL) before failure. Health state estimation is based on actual conditions and it is fundamental for the prediction of RUL under anticipated future usage. Failure of electronic devices is of great concern as future aircraft will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. Therefore, development of prognostics solutions for electronics is of key importance. This paper presents an accelerated aging system for gate-controlled power transistors. This system allows for the understanding of the effects of failure mechanisms, and the identification of leading indicators of failure which are essential in the development of physics-based degradation models and RUL prediction. In particular, this system isolates electrical overstress from thermal overstress. Also, this system allows for a precise control of internal temperatures, enabling the exploration of intrinsic failure mechanisms not related to the device packaging. By controlling the temperature within safe operation levels of the device, accelerated aging is induced by electrical overstress only, avoiding the generation of thermal cycles. The temperature is controlled by active thermal-electric units. Several electrical and thermal signals are measured in-situ and recorded for further analysis in the identification of leading indicators of failures. This system, therefore, provides a unique capability in the exploration of different failure mechanisms and the identification of precursors of failure that can be used to provide a health management solution for electronic devices.

  5. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  6. Applicability of a Crack-Detection System for Use in Rotor Disk Spin Test Experiments Being Evaluated

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Roth, Don J.

    2004-01-01

    Engine makers and aviation safety government institutions continue to have a strong interest in monitoring the health of rotating components in aircraft engines to improve safety and to lower maintenance costs. To prevent catastrophic failure (burst) of the engine, they use nondestructive evaluation (NDE) and major overhauls for periodic inspections to discover any cracks that might have formed. The lowest-cost NDE technique, fluorescent penetrant inspection, can fail to disclose cracks that are tightly closed at rest or that lie below the surface. The eddy-current NDE system is more effective at detecting both crack types, but it requires careful setup and operation, and only a small portion of the disk can practically be inspected. A health-monitoring system must therefore use sensors that can sustain normal function in a severe environment, transmit a signal when a detected crack exceeds a predetermined length (but one below the length that would lead to failure), and act neutrally upon the overall performance of the engine system without interfering with engine maintenance operations. More reliable diagnostic tools and high-level techniques for detecting damage and monitoring the health of rotating components are therefore essential to maintaining engine safety and reliability and to assessing life.

  7. Integrating FMEA in a Model-Driven Methodology

    NASA Astrophysics Data System (ADS)

    Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno

    2016-08-01

    Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system by integrating it into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation, and effects from SysML design models by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.

  8. Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1980-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism, nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.

  9. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Technical Reports Server (NTRS)

    Flores, Melissa; Malin, Jane T.

    2013-01-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model-based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library of common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.
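
    A toy sketch of the pick-list idea: the library keys and component entries below are hypothetical, but they show how selections about a component's functions and inputs/outputs can index a library of common failure modes.

        # Hypothetical failure-mode library keyed by (function, interface) pairs.
        LIBRARY = {
            ("regulate_pressure", "fluid"):      ["fails open", "fails closed", "external leakage"],
            ("switch_power",      "electrical"): ["fails to open", "fails to close", "short to ground"],
            ("transfer_heat",     "thermal"):    ["fouling / degraded transfer", "blockage"],
        }

        def recommend(functions, interfaces):
            """Return library failure modes matching the user's pick-list selections."""
            modes = []
            for f in functions:
                for i in interfaces:
                    modes.extend(LIBRARY.get((f, i), []))
            return sorted(set(modes))

        print(recommend(["regulate_pressure"], ["fluid"]))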

  10. Weld failure detection

    DOEpatents

    Pennell, William E.; Sutton, Jr., Harry G.

    1981-01-01

    Method and apparatus for detecting failure in a welded connection, particularly applicable to welds that are not readily accessible, such as those joining components within the reactor vessel of a nuclear reactor system. A preselected tag gas is sealed within a chamber which extends through selected portions of the base metal and weld deposit. In the event of a failure, such as development of a crack extending from the chamber to an outer surface, the tag gas is released. The environment about the welded area is directed to an analyzer which, in the event of presence of the tag gas, evidences the failure. A trigger gas can be included with the tag gas to actuate the analyzer.

  11. Tribology symposium 1995. PD-Volume 72

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    After the keynote presentation by Professor Aaron Cohen of Texas A and M University, entitled Processes Used in Design, the program is divided into five major sessions: Research and Development -- Recent research and development of tribological components; Tribology in Manufacturing -- The impact of tribology on modern manufacturing; Design/Design Representation -- Aspects of design related to tribological systems; Tribo-Chemistry/Tribo-Physics -- Discussion of chemical and physical behavior of substances as related to tribology; and Failure Analysis -- An analysis of failure, failure detection, and failure monitoring as related to manufacturing processes. Papers have been processed separately for inclusion on the data base.

  12. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Astrophysics Data System (ADS)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model-based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library of common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  13. Hybrid Decompositional Verification for Discovering Failures in Adaptive Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah; Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    Adaptive flight control systems hold tremendous promise for maintaining the safety of a damaged aircraft and its passengers. However, most currently proposed adaptive control methodologies rely on online learning neural networks (OLNNs), which necessarily have the property that the controller is changing during the flight. These changes tend to be highly nonlinear and difficult or impossible to analyze using standard techniques. In this paper, we approach the problem with a variant of compositional verification. The overall system is broken into components, and undesirable behavior is fed backwards through the system. Components that can be solved explicitly with formal methods techniques for the ranges of safe and unsafe input bounds are treated as white-box components. The remaining black-box components are analyzed with heuristic techniques that try to predict a range of component inputs that may lead to unsafe behavior. The composition of these component inputs throughout the system leads to overall system test vectors that may elucidate the undesirable behavior.

  14. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
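
    A deliberately crude model of this tradeoff, with assumed functional forms that are not the authors', captures the shape of the problem: dynamic power falls roughly with the square of voltage, while the failure rate, and hence recovery overhead, grows as the supply voltage drops.

        import math

        def energy(v, t0=3600.0, p0=200.0, v_nom=1.0, lam0=1e-5, k=20.0, c=600.0):
            """Assumed toy model: total energy = power(v) * runtime(v).

            power   ~ p0 * (v / v_nom)**2           (CMOS dynamic power)
            lam(v)  ~ lam0 * exp(k * (v_nom - v))   (failure rate rises when undervolted)
            runtime ~ t0 * (1 + lam(v) * c)         (each failure costs c seconds of recovery)
            """
            lam = lam0 * math.exp(k * (v_nom - v))
            return p0 * (v / v_nom) ** 2 * t0 * (1.0 + lam * c)

        # Sweep supply voltage to find the model's minimum-energy point.
        e_min, v_min = min((energy(v / 100.0), v / 100.0) for v in range(60, 101))
        print(f"minimum-energy voltage (toy model): {v_min:.2f} x nominal")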

  15. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  16. Socket position determines hip resurfacing 10-year survivorship.

    PubMed

    Amstutz, Harlan C; Le Duff, Michel J; Johnson, Alicia J

    2012-11-01

    Modern metal-on-metal hip resurfacing arthroplasty designs have been used for over a decade. Risk factors for short-term failure include small component size, large femoral head defects, low body mass index, older age, high level of sporting activity, and component design, and it is established there is a surgeon learning curve. Owing to failures with early surgical techniques, we developed a second-generation technique to address those failures. However, it is unclear whether the techniques affected the long-term risk factors. We (1) determined survivorship for hips implanted with the second-generation cementing technique; (2) identified the risk factors for failure in these patients; and (3) determined the effect of the dominant risk factors on the observed modes of failure. We retrospectively reviewed the first 200 hips (178 patients) implanted using our second-generation surgical technique, which consisted of improvements in cleaning and drying the femoral head before and during cement application. There were 129 men and 49 women. Component orientation and contact patch to rim distance were measured. We recorded the following modes of failure: femoral neck fracture, femoral component loosening, acetabular component loosening, wear, dislocation, and sepsis. The minimum followup was 25 months (mean, 106.5 months; range, 25-138 months). Twelve hips were revised. Kaplan-Meier survivorship was 98.0% at 5 years and 94.3% at 10 years. The only variable associated with revision was acetabular component position. Contact patch to rim distance was lower in hips that dislocated, were revised for wear, or were revised for acetabular loosening. The dominant modes of failure were related to component wear or acetabular component loosening. Acetabular component orientation, a factor within the surgeon's control, determines the long-term success of our current hip resurfacing techniques. Current techniques have changed the modes of failure from aseptic femoral failure to wear or loosening of the acetabular component. Level III, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.
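
    For readers unfamiliar with the survivorship figures quoted (98.0% at 5 years, 94.3% at 10 years), a minimal Kaplan-Meier estimator is sketched below on made-up follow-up data; it is not the study's dataset.

        def kaplan_meier(times, events):
            """Return (time, survival) steps; events[i] is 1 for revision, 0 for censoring."""
            at_risk = len(times)
            survival, curve = 1.0, []
            for t, e in sorted(zip(times, events)):
                if e:  # a revision (failure) at time t
                    survival *= (at_risk - 1) / at_risk
                    curve.append((t, survival))
                at_risk -= 1
            return curve

        # Illustrative follow-up in months: three revisions, the rest censored.
        times  = [25, 40, 60, 72, 90, 106, 120, 130, 138]
        events = [0,   1,  0,  1,  0,   0,   1,   0,   0]
        for t, s in kaplan_meier(times, events):
            print(f"{t:>4} mo: S = {s:.3f}")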

  17. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.
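
    A minimal physics-of-failure prognosis in this spirit is a Paris-law crack-growth integration to a critical size; all material constants below are assumed for illustration, and the paper's stochastic initiation and three-dimensional fracture models are far richer.

        import math

        def cycles_to_failure(a0, a_crit, d_sigma, C=1e-12, m=3.0, Y=1.0, step=1000):
            """Integrate Paris' law da/dN = C * (dK)^m until the crack reaches a_crit.

            a0, a_crit in metres; d_sigma in MPa; C, m, Y are assumed constants.
            Returns remaining useful life in cycles (or None past a cycle cap).
            """
            a, n = a0, 0
            while a < a_crit:
                dK = Y * d_sigma * math.sqrt(math.pi * a)   # stress-intensity range
                a += step * C * dK ** m                     # grow crack by `step` cycles
                n += step
                if n > 1e9:
                    return None
            return n

        print(f"predicted RUL: {cycles_to_failure(1e-3, 10e-3, 200.0):,} cycles")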

  18. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated, so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function, and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  19. Survivorship analysis of failure pattern after revision total hip arthroplasty.

    PubMed

    Retpen, J B; Varmarken, J E; Jensen, J S

    1989-12-01

    Failure, defined as established indication for or performed re-revision of one or both components, was analyzed using survivorship methods in 306 revision total hip arthroplasties. The longevity of revision total hip arthroplasties was inferior to that of previously reported primary total hip arthroplasties. The overall survival curve was two-phased, with a late failure period associated with aseptic loosening of one or both components and an early failure period associated with causes of failure other than loosening. Separate survival curves for aseptic loosening of femoral and acetabular components showed late and almost simultaneous decline, but with a tendency toward a higher rate of failure for the femoral component. No differences in survival could be found between the Stanmore, Lubinus standard, and Lubinus long-stemmed femoral components. A short interval between the index operation and the revision and intraoperative and postoperative complications were risk factors for early failure. Young age was a risk factor for aseptic loosening of the femoral component. Intraoperative fracture of the femoral shaft was not a risk factor for secondary loosening. No difference in survival was found between primary cemented total arthroplasty and primary noncemented hemiarthroplasty.

  20. Optimization of replacement and inspection decisions for multiple components on a power system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauney, D.A.

    1994-12-31

    The use of optimization in rescheduling replacement dates provides a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of the time value of money, taxes, and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement-date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced-outage-critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a by-component, by-unit basis across a complete power production system. The model includes not only the forced outage performance of these components caused by tube failures but also the forecasted need of the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessment of "what if" scenarios of major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
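
    The core spreadsheet computation is standard discounted cash flow. A sketch with illustrative numbers, discrete annual periods, and no tax treatment compares two candidate replacement years for a single component:

        def npv(cashflows, rate):
            """Net present value of {year: cashflow} at the given annual discount rate."""
            return sum(cf / (1.0 + rate) ** t for t, cf in cashflows.items())

        def replacement_npv(replace_year, replace_cost, annual_failure_cost, rate=0.08):
            """Forced-outage costs accrue each year until the component is replaced."""
            flows = {t: -annual_failure_cost for t in range(1, replace_year)}
            flows[replace_year] = flows.get(replace_year, 0.0) - replace_cost
            return npv(flows, rate)

        for year in (2, 6):
            print(f"replace in year {year}: NPV = {replacement_npv(year, 1.5e6, 4.0e5):,.0f}")

    Under these invented numbers, earlier replacement dominates; in practice the optimizer trades the discounted replacement cost against forced-outage and replacement-power costs for every component and unit simultaneously.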

  1. Reliability analysis for the smart grid : from cyber control and communication to physical manifestations of failure.

    DOT National Transportation Integrated Search

    2010-01-01

    The Smart Grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a : network of embedded systems deployed for their cyber control. Our objective is to qualitatively and quantitatively analyze ...

  2. Probabilistic risk analysis of building contamination.

    PubMed

    Bolster, D T; Tartakovsky, D M

    2008-10-01

    We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.

  3. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate and a common cause beta factor on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of different CCFs on the reliability of a one-out-of-two system.
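
    The one-out-of-two calculation behind such a response surface can be sketched directly. Under a textbook beta-factor model (not necessarily MSFC's exact formulation), a fraction beta of each component's failure probability is common cause and defeats the redundancy:

        def one_of_two_failure(q, beta):
            """Failure probability of a 1-out-of-2 system under the beta-factor model.

            q    : total failure probability of one component
            beta : fraction of q attributed to common cause
            """
            q_ind, q_ccf = (1.0 - beta) * q, beta * q
            return q_ind ** 2 + q_ccf   # both fail independently, or a CCF takes both

        # A coarse response surface over component failure probability and beta.
        for q in (1e-4, 1e-3, 1e-2):
            row = [f"{one_of_two_failure(q, b):.2e}" for b in (0.0, 0.05, 0.1)]
            print(f"q={q:.0e}:", "  ".join(row))

    Even a small beta dominates the system result, since the common-cause term is linear in q while the independent term is quadratic.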

  4. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate and a common cause beta factor on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of different CCFs on the reliability of a one-out-of-two system.

  5. Machine for applying a two component resin to a roadway surface

    DOEpatents

    Huszagh, Donald W.

    1985-01-01

    A portable machine for spraying two component resins onto a roadway, the machine having a pneumatic control system, including apparatus for purging the machine of mixed resin with air and then removing remaining resin with solvent. Interlocks prevent contamination of solvent and resin, and mixed resin can be purged in the event of a power failure.

  6. Machine for applying a two component resin to a roadway surface

    DOEpatents

    Huszagh, D.W.

    1984-01-01

    A portable machine for spraying two component resins onto a roadway, the machine having a pneumatic control system, including means for purging the machine of mixed resin with air and then removing remaining resin with solvent. Interlocks prevent contamination of solvent and resin, and mixed resin can be purged in the event of a power failure.

  7. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Pinero, Luis; Schneidegger, Robert; Dunning, John; Birchenough, Art

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies, including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU had been operated for more than 200 hours and experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hours of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  8. NASA's Evolutionary Xenon Thruster (NEXT) Power Processing Unit (PPU) Capacitor Failure Root Cause Analysis

    NASA Technical Reports Server (NTRS)

    Soeder, James F.; Scheidegger, Robert J.; Pinero, Luis R.; Birchenough, Arthur J.; Dunning, John W.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA solar system exploration missions. A critical element of the propulsion system is the Power Processing Unit (PPU), which supplies regulated power to the key components of the thruster. The PPU contains six different power supplies, including the beam, discharge, discharge heater, neutralizer, neutralizer heater, and accelerator supplies. The beam supply is the largest and processes more than 93% of the power. The NEXT PPU had been operated for more than 200 hr and experienced a series of three capacitor failures in the beam supply. The capacitors are in the same, nominally non-critical location: the input filter capacitor to a full-wave switching inverter. The three failures occurred after about 20, 30, and 135 hr of operation. This paper provides background on the NEXT PPU and the capacitor failures. It discusses the failure investigation approach, the beam supply power switching topology and its operating modes, capacitor characteristics, and circuit testing. Finally, it identifies the root cause of the failures to be the unusual confluence of the circuit switching frequency, the physical layout of the power circuits, and the characteristics of the capacitor.

  9. Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System

    NASA Technical Reports Server (NTRS)

    Dunbar, Tyler

    2016-01-01

    I have performed a reliability and maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has accumulated 2,400 hours of on-orbit use on the International Space Station between 2013 and 2016. While the Amine Swingbed is currently an experimental payload, it may be converted to system hardware. If it becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I investigated how individual components in the Amine Swingbed may fail and what the worst-case scenario is should a failure occur. The significant failure effects are the loss of the ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere, because the Amine Swingbed uses the vacuum of space for regeneration. In the quantitative component of the analysis, I assumed a constant failure rate for both electronic and nonelectronic parts. Using these data, I created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole and determined a mean time to failure of approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours; this range includes initial testing of the Amine Swingbed as well as software faults that are understood to be non-critical. If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure use conservative failure-rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project, and for NASA in general, has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have much knowledge about the work being performed at NASA, and as a chemical engineer I had not really considered working for NASA as a career path. By engaging with civil servants, contractors, and other interns, I have learned a great deal about the modern challenges NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering; now I have gained a skillset in performing reliability analyses and in understanding the inner workings of a large mechanical system. I have also gained experience in how electrical systems work while analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA, both within NASA and in the Houston area. NASA employs individuals with a broad range of backgrounds, and it has been great to learn from people with highly diverse experiences and outlooks on the world. In the Houston area, I have come across individuals from different parts of the world, and interacting with so many people from significantly different backgrounds has helped me grow in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics. After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program; I intend to apply as positions open up, and I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
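
    The parts-count style of prediction described above reduces to summing constant part failure rates and inverting. A sketch with hypothetical part data, not the actual Amine Swingbed parts list, reproduces the calculation:

        import math

        # Hypothetical (part, failure rate per hour) entries in the parts-count style.
        PARTS = [("blower motor", 120e-6), ("valve actuator", 80e-6),
                 ("controller board", 300e-6), ("pressure sensor", 150e-6)]

        lam_system = sum(rate for _, rate in PARTS)   # series system: rates add
        mttf = 1.0 / lam_system
        print(f"system MTTF = {mttf:,.0f} h")

        # With a constant rate, failures are Poisson: P(no failure over mission t).
        t = 500.0
        print(f"P(zero failures in {t:.0f} h) = {math.exp(-lam_system * t):.3f}")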

  10. Vibroacoustic test plan evaluation: Parameter variation study

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloef, H. R.

    1976-01-01

    Statistical decision models are shown to provide a viable method of evaluating the cost effectiveness of alternate vibroacoustic test plans and the associated test levels. The methodology developed provides a major step toward a realistic tool for quantitatively tailoring test programs to specific payloads. Testing is considered at the no-test, component, subassembly, or system level of assembly. Component redundancy and partial loss of flight data are considered, as are probabilistic costs, and incipient failures resulting from ground tests are treated. Optima defining both component and assembly test levels are indicated for the modified test plans considered; modeling simplifications must be considered in interpreting the results relative to a particular payload. New parameters introduced were a no-test option, flight-by-flight failure probabilities, and a cost to design components for higher vibration requirements. Parameters varied were the shuttle payload bay internal acoustic environment, the STS launch cost, the component retest/repair cost, and the amount of redundancy in the housekeeping section of the payload reliability model.

  11. The Development of a Highly Reliable Power Management and Distribution System for Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Coleman, Anthony S.; Hansen, Irving G.

    1994-01-01

    NASA is pursuing a program in Advanced Subsonic Transport (AST) to develop the technology for a highly reliable Fly-By-Light/Power-By-Wire aircraft. One of the primary objectives of the program is to develop the technology base for confident application of integrated PBW components and systems to transport aircraft to improve operating reliability and efficiency. Technology will be developed so that the present hydraulic and pneumatic systems of the aircraft can be systematically eliminated and replaced by electrical systems. Motor-driven actuators would move the aircraft wing surfaces as well as the rudder to provide steering control for the pilot. Existing aircraft electrical systems are not flight critical and are prone to failure due to Electromagnetic Interference (EMI) (1), ground faults, and component failures. In order to successfully implement electromechanical flight control actuation, a Power Management and Distribution (PMAD) system must be designed having a reliability of one failure in 10^9 hours, EMI hardening, and a fault-tolerant architecture to ensure uninterrupted power to all aircraft flight-critical systems. The focus of this paper is to analyze, define, and describe the technically challenging areas associated with the development of a Power-By-Wire aircraft and the typical requirements to be established at the box level. The authors propose areas of investigation, citing specific military standards and requirements that need to be revised to accommodate the 'More Electric Aircraft' systems.
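
    To see what a budget of one failure in 10^9 hours implies, a back-of-envelope sketch for a triplex-voting channel set with independent failures follows; the channel failure rate is illustrative, and real PMAD designs must also address common cause failures and imperfect coverage, which this ignores.

        import math
        from math import comb

        def k_of_n_failure(p, n, k):
            """P(at least k of n independent channels fail), each with probability p."""
            return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

        lam = 1.8e-5                      # per-channel failure rate per hour (illustrative)
        p = 1.0 - math.exp(-lam * 1.0)    # channel failure probability over a 1 h exposure
        loss = k_of_n_failure(p, 3, 2)    # a triplex set fails when any 2 of 3 channels fail
        print(f"P(loss of function per hour) = {loss:.2e}")   # ~1e-9 for this lambda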

  12. Integration and Assessment of Component Health Prognostics in Supervisory Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Bonebrake, Christopher A.; Dib, Gerges

    Enhanced risk monitors (ERMs) for active components in advanced reactor concepts use predictive estimates of component failure to update, in real time, predictive safety and economic risk metrics. These metrics have been shown to be capable of use in optimizing maintenance scheduling and managing plant maintenance costs. Integrating this information with plant supervisory control systems increases the potential for making control decisions that utilize real-time information on component conditions. Such decision making would limit the possibility of plant operations that increase the likelihood of degrading the functionality of one or more components, while maintaining the overall functionality of the plant. ERM uses sensor data to provide real-time information about equipment condition for deriving risk monitors. This information is used to estimate the remaining useful life and probability of failure of these components. By combining this information with plant probabilistic risk assessment models, predictive estimates of the risk posed by continued plant operation in the presence of detected degradation may be obtained. In this paper, we describe this methodology in greater detail and discuss its integration with a prototypic software-based plant supervisory control platform. In order to integrate these two technologies and evaluate the integrated system, software to simulate the sensor data was developed, prognostic models for feedwater valves were developed, and several use cases were defined. The full paper describes these use cases and the results of the initial evaluation.

  13. FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2017-02-01

    This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example failure rate, interruption duration and interruption duration per year at load points. A component improvement potential measure is used for FOR allocation: the component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method in which one component is selected for FOR allocation per iteration, and the next component is selected in the following iteration based on the magnitude of its CIP. The developed algorithm is implemented on a sample radial distribution system.
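
    The monovariable loop lends itself to a compact statement. The sketch below is a stand-in, not the authors' algorithm: the constraint check, improvement step, and CIP measure are placeholder lambdas, and real use would evaluate load-point failure rates and interruption durations.

        # Illustrative greedy loop in the spirit of the CIP-based allocation above.
        def allocate_for(components, meets_constraints, improve, cip, max_iters=100):
            """components: dict of section name -> forced outage rate (FOR)."""
            for _ in range(max_iters):
                if meets_constraints(components):
                    return components
                # Pick the component whose improvement helps the load points most.
                target = max(components, key=lambda name: cip(name, components))
                components[target] = improve(components[target])
            raise RuntimeError("constraints not met within iteration budget")

        # Toy usage: shrink the chosen component's FOR by 10% per iteration until
        # an (assumed) aggregate reliability target is met.
        comps = {"sec1": 0.08, "sec2": 0.05, "sec3": 0.12}
        result = allocate_for(
            comps,
            meets_constraints=lambda c: sum(c.values()) <= 0.18,
            improve=lambda r: 0.9 * r,
            cip=lambda name, c: c[name],   # stand-in: largest FOR = largest potential
        )
        print(result)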

  14. Integrated Vehicle Health Management (IVHM) for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Baroth, Edmund C.; Pallix, Joan

    2006-01-01

    To achieve NASA's ambitious Integrated Space Transportation Program objectives, aerospace systems will implement a variety of new concepts in health management. System-level integration of IVHM technologies for real-time control and system maintenance will have a significant impact on system safety and lifecycle costs. IVHM technologies will enhance the safety and success of complex missions despite component failures, degraded performance, operator errors, and environmental uncertainty. IVHM also has the potential to reduce, or even eliminate, many of the costly inspections and operations activities required by current and future aerospace systems. This presentation will describe the array of NASA programs participating in the development of IVHM technologies for NASA missions. Future vehicle systems will use models of the system, its environment, and other intelligent agents with which they may interact. IVHM will be incorporated into future mission planners, reasoning engines, and adaptive control systems that can recommend or execute commands enabling the system to respond intelligently in real time. In the past, software errors and/or faulty sensors have been identified as significant contributors to mission failures. This presentation will also address the development and utilization of highly dependable software and sensor technologies, which are key to ensuring the reliability of IVHM systems.

  15. 49 CFR 236.1029 - PTC system use and en route failures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... system component fails to perform its intended function, the cause must be determined and the faulty... advance of the train in accordance with the following: (i) Where no block signal system is in use, the... location where an absolute block has been established in advance of the train, as referenced in paragraph...

  16. Market reform and universal coverage: avoid market failure.

    PubMed

    Enthoven, A

    1993-02-01

    Determining the marketing mix for hospitals, especially those in transition, will require critical analysis to guard against market failure. Managed competition requires careful planning and awareness of pricing components in a free-market situation. Alain Enthoven, writing for the Jackson Hole Group, proposes establishment of a new national system of sponsor organizations--Health Insurance Purchasing Cooperatives--to function as a collective purchasing agent on behalf of small employers and individuals.

  17. Implementing a Microcontroller Watchdog with a Field-Programmable Gate Array (FPGA)

    NASA Technical Reports Server (NTRS)

    Straka, Bartholomew

    2013-01-01

    Reliability is crucial to safety. Redundancy of important system components greatly enhances reliability and hence safety. Field-Programmable Gate Arrays (FPGAs) are useful for monitoring systems and handling the logic necessary to keep them running with minimal interruption when individual components fail. A complete microcontroller watchdog with logic for failure handling can be implemented in a hardware description language (HDL). HDL-based designs are vendor-independent and can be used on many FPGAs with low overhead.
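
    The watchdog logic itself is simple enough to state behaviorally. The Python sketch below models the counter/kick/reset behavior that an HDL implementation would clock in hardware; the timeout and kick pattern are invented, and a real design would be written in VHDL or Verilog as the abstract describes.

        # Behavioral sketch (Python, not HDL) of a microcontroller watchdog.
        class Watchdog:
            def __init__(self, timeout_cycles):
                self.timeout = timeout_cycles
                self.count = 0
                self.reset_asserted = False

            def kick(self):
                # The monitored microcontroller strobes this while healthy.
                self.count = 0

            def clock(self):
                # Called once per clock cycle, as an HDL process would be.
                self.count += 1
                if self.count >= self.timeout:
                    self.reset_asserted = True   # fire the failover/reset logic

        wd = Watchdog(timeout_cycles=1000)
        for cycle in range(3000):
            if cycle < 1500 and cycle % 400 == 0:
                wd.kick()                        # microcontroller alive, then hangs
            wd.clock()
            if wd.reset_asserted:
                print(f"watchdog fired at cycle {cycle}")
                break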

  18. Validation of Commercial Fiber Optic Components for Aerospace Environments

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.

    2005-01-01

    Full qualification of commercial photonic parts, as defined by the Military specification system in the past, is not feasible. Due to changes in the photonic components industry and in the Military specification system that NASA relied upon so heavily in the past, an approach to technology validation of commercial off-the-shelf parts had to be devised. This approach involves knowledge of the system requirements, environmental requirements and failure modes of the particular components under consideration. Synthesizing these criteria together with the major known failure modes to formulate a test plan is an effective way of establishing knowledge-based "qualification". Although this does not provide the type of reliability assurance that the Military specification system did in the past, it is an approach that allows for increased risk mitigation. The information presented will introduce the audience to the technology validation approach that is currently applied at NASA for the usage of commercial-off-the-shelf (COTS) fiber optic components in space flight environments. The focus will be on how to establish technology validation criteria for commercial fiber products such that continued reliable performance is assured under the harsh environmental conditions of typical missions. The goal of this presentation is to provide the audience with an approach to formulating a COTS qualification test plan for these devices. Examples from past NASA missions will be discussed.

  19. Structural evaluation of concepts for a solar energy concentrator for Space Station advanced development program

    NASA Technical Reports Server (NTRS)

    Kenner, Winfred S.; Rhodes, Marvin D.

    1994-01-01

    Solar dynamic power systems have a higher thermodynamic efficiency than conventional photovoltaic systems; therefore they are attractive for long-term space missions with high electrical power demands. In an investigation conducted in support of a preliminary concept for Space Station Freedom, an approach for a solar dynamic power system was developed and a number of the components for the solar concentrator were fabricated for experimental evaluation. The concentrator consists of hexagonal panels composed of triangular reflective facets which are supported by a truss. Structural analyses of the solar concentrator and the support truss were conducted using finite-element models. A number of potential component failure scenarios were postulated and the resulting structural performance was assessed. The solar concentrator and support truss were found to be adequate to meet a 1.0-Hz structural dynamics design requirement in pristine condition. However, for some of the simulated component failure conditions, the fundamental frequency dropped below the 1.0-Hz design requirement. As a result, two alternative concepts were developed and assessed. One concept incorporated a tetrahedral ring truss support for the hexagonal panels; the second incorporated a full tetrahedral truss support for the panels. The results indicate that significant improvements in stiffness can be obtained by attaching the panels to a tetrahedral truss, and that this concentrator and support truss will meet the 1.0-Hz design requirement with any of the simulated failure conditions.
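
    The pass/fail criterion in the study reduces to comparing the structure's fundamental frequency against 1.0 Hz as members are removed or softened. A toy two-degree-of-freedom version of that check follows; the masses and stiffnesses are invented, whereas the actual work used full finite-element models.

        import numpy as np

        def stiffness(k_ground, k_link):
            # 2-DOF chain: mass 1 tied to ground by k_ground, masses linked by k_link.
            return np.array([[k_ground + k_link, -k_link],
                             [-k_link,            k_link]])

        def fundamental_hz(K, M):
            # Generalized eigenproblem K x = w^2 M x; the lowest w is the fundamental.
            w2 = np.linalg.eigvals(np.linalg.solve(M, K))
            return np.sqrt(w2.real.min()) / (2.0 * np.pi)

        M = np.diag([150.0, 90.0])                       # lumped masses, kg
        print(f"pristine:      {fundamental_hz(stiffness(2.5e4, 1.5e4), M):.2f} Hz")

        f = fundamental_hz(stiffness(2.5e4, 0.25 * 1.5e4), M)   # one member softened
        print(f"failed member: {f:.2f} Hz "
              f"({'meets' if f >= 1.0 else 'violates'} the 1.0-Hz requirement)")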

  20. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some existing AI-based fault diagnostic techniques, such as expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge-acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.
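
    Weighting digraph edges with propagation-time intervals supports back-tracing from an observed alarm to time-consistent failure sources. A minimal sketch follows; the graph, intervals, and alarm time are invented, it assumes an acyclic digraph, and the paper's algorithms also handle hierarchy and restarts.

        from collections import deque

        # edges: cause -> list of (effect, t_min, t_max) propagation intervals (s)
        GRAPH = {
            "pump":         [("pressure_low", 1, 3)],
            "valve":        [("pressure_low", 2, 6)],
            "pressure_low": [("alarm", 0, 1)],
        }

        def candidate_sources(graph, alarm, t_alarm):
            """Return components that could explain 'alarm' at t_alarm, with the
            failure-time window each one implies."""
            # Reverse the edges so we can walk from the alarm back toward causes.
            reverse = {}
            for u, outs in graph.items():
                for v, lo, hi in outs:
                    reverse.setdefault(v, []).append((u, lo, hi))
            out, queue = {}, deque([(alarm, t_alarm, t_alarm)])
            while queue:
                node, lo, hi = queue.popleft()
                for cause, dlo, dhi in reverse.get(node, []):
                    window = (lo - dhi, hi - dlo)   # when 'cause' must have failed
                    out[cause] = window
                    queue.append((cause, *window))
            return out

        print(candidate_sources(GRAPH, "alarm", t_alarm=10.0))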

  1. Failure Modes Effects and Criticality Analysis, an Underutilized Safety, Reliability, Project Management and Systems Engineering Tool

    NASA Astrophysics Data System (ADS)

    Mullin, Daniel Richard

    2013-09-01

    The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single-point failures, system hazards, and critical components and functions. However, in the author's ten years' experience as a space systems safety and reliability engineer, the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components that could provide useful data to all project stakeholders are also often missing: probability of occurrence, probability of detection, time to effect, time to detect and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible, in conjunction with writing the top-level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and can provide several benefits, including efficient project management with respect to cost and schedule, system engineering and requirements management, and assembly, integration and test (AI&T) and operations, if applied early, performed to completion, and updated along with the system design.
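
    The quantitative fields the author lists combine into the Risk Priority Number in the conventional way: RPN = severity x occurrence x detection, each typically ranked 1-10. A small worked example with invented scores:

        # Failure modes and rankings below are invented for illustration.
        failure_modes = [
            # (failure mode, severity, occurrence, detection)
            ("thruster valve stuck closed", 9, 3, 4),
            ("telemetry frame corruption",  5, 6, 2),
            ("solar array hinge jam",       8, 2, 7),
        ]

        ranked = sorted(
            ((name, s * o * d) for name, s, o, d in failure_modes),
            key=lambda item: -item[1],
        )
        for name, rpn in ranked:
            print(f"RPN {rpn:4d}  {name}")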

  2. A Computer Model of the Cardiovascular System for Effective Learning.

    ERIC Educational Resources Information Center

    Rothe, Carl F.

    1979-01-01

    Described is a physiological model which solves a set of interacting, possibly nonlinear, differential equations through numerical integration on a digital computer. Sample printouts are supplied and explained for effects on the components of a cardiovascular system when exercise, hemorrhage, and cardiac failure occur. (CS)

  3. Probabilistic structural analysis methods for space transportation propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.

    1991-01-01

    Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd-stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.

  4. Instrumentation Cables Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muna, Alice Baca; LaFleur, Chris Bensdotter

    A fire at a nuclear power plant (NPP) has the potential to damage structures, systems, and components important to safety, if not promptly detected and suppressed. At Browns Ferry Nuclear Power Plant on March 22, 1975, a fire in the reactor building damaged electrical power and control systems. Damage to instrumentation cables impeded the function of both normal and standby reactor coolant systems, and degraded the operators' plant monitoring capability. This event resulted in additional NRC involvement with utilities to ensure that NPPs are properly protected from fire as intended by the NRC principal design criteria (i.e., General Design Criterion 3, Fire Protection). Current guidance and methods for both deterministic and performance-based approaches typically make conservative (bounding) assumptions regarding the fire-induced failure modes of instrumentation cables and those failure modes' effects on component and system response. Numerous fire testing programs have been conducted in the past to evaluate the failure modes and effects of electrical cables exposed to severe thermal conditions. However, that testing has primarily focused on control circuits, with only a limited number of tests performed on instrumentation circuits. In 2001, the Nuclear Energy Institute (NEI) and the Electric Power Research Institute (EPRI) conducted a series of cable fire tests designed to address specific aspects of the cable failure and circuit fault issues of concern. The NRC was invited to observe and participate in that program. The NRC sponsored Sandia National Laboratories to support this participation, who, among other things, added a 4-20 mA instrumentation circuit and instrumentation cabling to six of the tests. Although limited, one insight drawn from those instrumentation circuit tests was that the failure characteristics appeared to depend on the cable insulation material. The results showed that for thermoset insulated cables the instrument reading tended to drift and fluctuate, while for thermoplastic insulated cables the instrument reading fell off-scale rapidly. From an operational point of view, the latter failure characteristic would likely be identified as a failure from the effects of fire, while the former may result in inaccurate readings.

  5. Rate-based structural health monitoring using permanently installed sensors

    PubMed Central

    2017-01-01

    Permanently installed sensors are becoming increasingly ubiquitous, facilitating very frequent in situ measurements and consequently improved monitoring of ‘trends’ in the observed system behaviour. It is proposed that this newly available data may be used to provide prior warning and forecasting of critical events, particularly system failure. Numerous damage mechanisms are examples of positive feedback; they are ‘self-accelerating’ with an increasing rate of damage towards failure. The positive feedback leads to a common time-response behaviour which may be described by an empirical relation allowing prediction of the time to criticality. This study focuses on Structural Health Monitoring of engineering components; failure times are projected well in advance of failure for fatigue, creep crack growth and volumetric creep damage experiments. The proposed methodology provides a widely applicable framework for using newly available near-continuous data from permanently installed sensors to predict time until failure in a range of application areas including engineering, geophysics and medicine. PMID:28989308
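
    One common empirical relation of the kind described treats the inverse of the damage rate as roughly linear in time for self-accelerating mechanisms, so the zero crossing of 1/rate projects the failure time. A synthetic-data sketch of that projection follows; it is not the paper's data or its exact relation.

        import numpy as np

        # Synthetic measurements: damage rate diverging toward failure at t = 100 h.
        t = np.linspace(0.0, 80.0, 9)            # measurement times (h)
        rate = 0.5 / (100.0 - t)                 # measured damage rate
        inv = 1.0 / rate                         # inverse rate, ~linear in time

        slope, intercept = np.polyfit(t, inv, 1) # fit 1/rate = a*t + b
        t_fail = -intercept / slope              # where 1/rate reaches zero
        print(f"projected failure time: {t_fail:.1f} h")   # ~100 h here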

  6. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming to solve the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
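
    To make the allocation idea concrete, the sketch below pushes invented FMEA scores through an illustrative cubic transform and splits an assumed system failure-rate budget so that riskier subsystems receive smaller allocations. The transform, scores, and budget are all stand-ins; the paper derives its own cubic transformed functions and criteria.

        import numpy as np

        subsystems = {"spindle": 7.5, "turret": 5.0, "coolant": 3.0}  # FMEA scores
        LAMBDA_SYS = 2.0e-4           # assumed system budget, failures per hour

        scores = np.array(list(subsystems.values()))
        weights = scores ** -3.0      # cubic transform: high risk -> small share
        alloc = LAMBDA_SYS * weights / weights.sum()

        for name, lam in zip(subsystems, alloc):
            print(f"{name:8s} allocated failure rate: {lam:.2e} /h")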

  7. Intelligent on-line fault tolerant control for unanticipated catastrophic failures.

    PubMed

    Yen, Gary G; Ho, Liang-Wei

    2004-10-01

    As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectories tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.

  8. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
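
    The primary/backup question reduces to simple gate arithmetic once per-event probabilities are in hand. A minimal sketch assuming independent basic events follows (the numbers are invented; in practice the hard part is the common-cause seismic dependence between the two sites, which the independence assumption ignores).

        def AND(*p):    # all inputs must fail
            out = 1.0
            for x in p:
                out *= x
            return out

        def OR(*p):     # any input failing fails the gate
            out = 1.0
            for x in p:
                out *= (1.0 - x)
            return 1.0 - out

        # Invented per-event failure probabilities given a design-level earthquake.
        primary = OR(0.05, 0.02)     # structural damage OR power loss
        backup  = OR(0.03, 0.02)     # same tree for the backup facility
        print(f"P(primary down)           = {primary:.4f}")
        print(f"P(both down, independent) = {AND(primary, backup):.4f}")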

  9. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or risk of failure probability. While the consequence of failure is often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.
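
    The gap the authors warn about is easy to see in the exponential model that turns a handbook failure rate into a probability of occurrence, P = 1 - exp(-lambda*t): if process-driven failures make the true rate several times the predicted one, the accepted risk is understated by roughly the same factor. The rate and mission time below are illustrative only.

        import math

        lam_predicted = 2.0e-6     # per-hour rate from a handbook source (illustrative)
        t_mission = 500.0          # mission hours (illustrative)

        for factor in (1, 3, 10):  # how far reality might exceed the prediction
            p = 1.0 - math.exp(-factor * lam_predicted * t_mission)
            print(f"rate x{factor:2d}: P(occurrence) = {p:.2e}")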

  10. The Failure Envelope Concept Applied To The Bone-Dental Implant System.

    PubMed

    Korabi, R; Shemtov-Yona, K; Dorogoy, A; Rittel, D

    2017-05-17

    Dental implants interact with the jawbone through their common interface. While the implant is an inert structure, the jawbone is a living one that reacts to mechanical stimuli. Setting aside mechanical failure considerations of the implant, the bone is the main component to be addressed. With most failure criteria being expressed in terms of stress or strain values, their fulfillment can mean structural flow or fracture. However, in addition to those effects, the bony structure is likely to react biologically to the applied loads by dissolution or remodeling, so that additional (strain-based) criteria must be taken into account. While the literature abounds in studies of particular loading configurations, e.g. angle and value of the applied load to the implant, a general study of the admissible implant loads is still missing. This paper introduces the concept of failure envelopes for the dental implant-jawbone system, thereby defining admissible combinations of vertical and lateral loads for various failure criteria of the jawbone. Those envelopes are compared in terms of conservatism, thereby providing a systematic comparison of the various failure criteria and their determination of the admissible loads.

  11. Managing a Multisite Academic-Private Radiology Practice Reading Environment: Impact of IT Downtimes on Enterprise Efficiency.

    PubMed

    Becker, Murray; Goldszal, Alberto; Detal, Julie; Gronlund-Jacob, Judith; Epstein, Robert

    2015-06-01

    The aim of this study was to assess whether the complex radiology IT infrastructures needed for large, geographically diversified, radiology practices are inherently stable with respect to system downtimes, and to characterize the nature of the downtimes to better understand their impact on radiology department workflow. All radiology IT unplanned downtimes over a 12-month period in a hybrid academic-private practice that performs all interpretations in-house (no commercial "nighthawk" services) for approximately 900,000 studies per year, originating at 6 hospitals, 10 outpatient imaging centers, and multiple low-volume off-hours sites, were logged and characterized using 5 downtime metrics: duration, etiology, failure type, extent, and severity. In 12 consecutive months, 117 unplanned downtimes occurred with the following characteristics: duration: median time = 3.5 hours with 34% <1.5 hours and 30% >12 hours; etiology: 87% were due to software malfunctions, and 13% to hardware malfunctions; failure type: 88% were transient component failures, 12% were complete component failures; extent: all sites experienced downtimes, but downtimes were always localized to a subset of sites, and no system-wide downtimes occurred; severity (impact on radiologist workflow): 47% had minimal impact, 50% moderate impact, and 3% severe impact. In the complex radiology IT system that was studied, downtimes were common; they were usually a result of transient software malfunctions; the geographic extent was always localized rather than system wide; and most often, the impacts on radiologist workflow were modest. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  12. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT XV, I--MAINTAINING THE COOLING SYSTEM, CUMMINS DIESEL ENGINE, I--UNIT INSTALLATION--TRANSMISSION.

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 30-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF THE FUNCTION AND MAINTENANCE OF THE DIESEL ENGINE COOLING SYSTEM AND THE PROCEDURES FOR TRANSMISSION INSTALLATION. TOPICS ARE (1) IMPORTANCE OF THE COOLING SYSTEM, (2) COOLING SYSTEM COMPONENTS, (3) EVALUATING COOLING SYSTEM FAILURES, (4) CARING FOR THE COOLING SYSTEM,…

  13. Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Song, Shuaiwen; Wu, Panruo

    2015-05-29

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  14. Flight test of a full authority Digital Electronic Engine Control system in an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Barrett, W. J.; Rembold, J. P.; Burcham, F. W.; Myers, L.

    1981-01-01

    The Digital Electronic Engine Control (DEEC) system considered is a relatively low-cost, digital, full-authority control system containing selectively redundant components and fault detection logic, with the capability of accommodating faults down to various levels of operational capability. The DEEC digital control system is built around a 16-bit CMOS microprocessor with a 1.2-microsecond cycle time and approximately 14K of available memory. Attention is given to the control mode, component bench testing, closed-loop bench testing, a failure mode and effects analysis, sea-level engine testing, simulated altitude engine testing, flight testing, the data system, cockpit, and real-time display.

  15. Command module/service module reaction control subsystem assessment

    NASA Technical Reports Server (NTRS)

    Weary, D. P.

    1971-01-01

    Detailed review of component failure histories, qualification adequacy, manufacturing flow, checkout requirements and flow, ground support equipment interfaces, subsystem interface verification, protective devices, and component design did not reveal major weaknesses in the command service module (CSM) reaction control system (RCS). No changes to the CSM RCS were recommended. The assessment reaffirmed the adequacy of the CSM RCS for future Apollo missions.

  16. 40 CFR 264.1101 - Design and operating standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hazardous waste (e.g., upon detection of leakage from the primary barrier) the owner or operator must: (A... constituents into the barrier, and a leak detection system that is capable of detecting failure of the primary... requirements of the leak detection component of the secondary containment system are satisfied by installation...

  17. 49 CFR 571.122 - Standard No. 122; Motorcycle brake systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... mile before any brake application. Skid number means the frictional resistance of a pavement measured... control designed so that a leakage-type failure of a pressure component in a single subsystem (except... pounds). S5.8 Service brake system design durability. Each motorcycle shall be capable of completing all...

  18. Active parallel redundancy for electronic integrator-type control circuits

    NASA Technical Reports Server (NTRS)

    Peterson, R. A.

    1971-01-01

    Circuit extends the concept of redundant feedback control from type-0 to type-1 control systems. Inactive channels are slaved to the active channel; if the latter fails, it is rejected and a slave channel is activated. High reliability and elimination of single-component catastrophic failure are important in closed-loop control systems.

  19. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that the true state of a specific plant is not reflected in a realistic treatment of aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, while also considering the effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment, to provide a framework for evaluating their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the dependence of the transition rates on operational and maintenance history. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both parameters and models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: (i) defining a process for selecting critical passive components and related aging mechanisms, (ii) aging model selection, (iii) calculating the probability that aging would cause the component to fail, (iv) uncertainty/sensitivity analyses, (v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and (vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
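
    The two-loop structure (epistemic parameters sampled by Latin hypercube in an outer loop, aleatory variability in an inner loop) can be sketched compactly. Everything below is a stand-in: the toy "aging model" is a linear flaw-growth threshold check, not the dissertation's multi-state physics model, and all distributions are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def latin_hypercube(n, dims, rng):
            # One sample per stratum per column, with strata shuffled independently.
            strata = np.stack([rng.permutation(n) for _ in range(dims)], axis=1)
            return (strata + rng.random((n, dims))) / n

        N_OUTER, N_INNER = 200, 1000
        u = latin_hypercube(N_OUTER, 2, rng)
        growth = 10.0 ** (-3.0 + u[:, 0])     # epistemic growth rate, mm/yr (log-uniform)
        a_crit = 5.0 + 5.0 * u[:, 1]          # epistemic critical flaw size, mm

        pf = np.empty(N_OUTER)
        for i in range(N_OUTER):
            a0 = rng.lognormal(mean=1.0, sigma=0.5, size=N_INNER)  # aleatory initial flaw, mm
            pf[i] = np.mean(a0 + growth[i] * 60.0 >= a_crit[i])    # failed by 60 years?

        print(f"P(failure by 60 y): median {np.median(pf):.3g}, "
              f"95th pct {np.percentile(pf, 95):.3g}")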

  20. MaRS Project

    NASA Technical Reports Server (NTRS)

    Aruljothi, Arunvenkatesh

    2016-01-01

    The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, a tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low-level space-grade components. MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.

  1. Quantifying Pilot Contribution to Flight Safety During an In-Flight Airspeed Failure

    NASA Technical Reports Server (NTRS)

    Etherington, Timothy J.; Kramer, Lynda J.; Bailey, Randall E.; Kennedey, Kellie D.

    2017-01-01

    Accident statistics cite the flight crew as a causal factor in over 60% of large transport fatal accidents. Yet a well-trained and well-qualified crew is acknowledged as the critical center point of aircraft systems safety and an integral component of the entire commercial aviation system. A human-in-the-loop test was conducted using a Level D certified Boeing 737-800 simulator to evaluate the pilot's contribution to safety-of-flight during routine air carrier flight operations and in response to system failures. To quantify the human's contribution, crew complement was used as an independent variable in a between-subjects design. This paper details the crew's actions and responses while dealing with an in-flight airspeed failure. Accident statistics often cite flight crew error (Baker, 2001) as the primary contributor in accidents and incidents in transport category aircraft. However, the Air Line Pilots Association (2011) suggests "a well-trained and well-qualified pilot is acknowledged as the critical center point of the aircraft systems safety and an integral safety component of the entire commercial aviation system." This is generally acknowledged but cannot be verified, because little or no quantitative data exists on how, or how many, accidents and incidents are averted by crew actions. Anecdotal evidence suggests crews handle failures on a daily basis, and Aviation Safety Action Program data generally supports this assertion, even if the data is not released to the public. Without hard evidence, however, the contribution and means by which pilots achieve safety of flight is difficult to define, and ways to improve the human ability to contribute or overcome deficiencies are ill-defined.

  2. Diagnostic tolerance for missing sensor data

    NASA Technical Reports Server (NTRS)

    Scarl, Ethan A.

    1989-01-01

    For practical automated diagnostic systems to continue functioning after failure, they must not only be able to diagnose sensor failures but also be able to tolerate the absence of data from the faulty sensors. It is shown that conventional (associational) diagnostic methods will have combinatoric problems when trying to isolate faulty sensors, even if they adequately diagnose other components. Moreover, attempts to extend the operation of diagnostic capability past sensor failure will necessarily compound those difficulties. Model-based reasoning offers a structured alternative that has no special problems diagnosing faulty sensors and can operate gracefully when sensor data is missing.

  3. Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, J. Lynn

    2017-05-31

    The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.

  4. Cold startup and low temperature performance of the Brayton cycle electrical subsystem

    NASA Technical Reports Server (NTRS)

    Vrancik, J. E.; Bainbridge, R. C.

    1971-01-01

    Cold performance tests and startup tests were conducted on the Brayton-cycle inverter, motor-driven pump, dc supply, speed control with parasitic load resistor, and the Brayton control system. These tests were performed with the components in a vacuum and mounted on coldplates. A temperature range of +25 to -50 C was used for the tests. No failures occurred, and component performance gave no indication that there would be any problem with the safe operation of the Brayton power generating system.

  5. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
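
    A back-of-envelope version of the balance check implied above: a kernel is memory-bound when its arithmetic intensity (operations per byte moved) falls below the machine balance (peak compute divided by memory bandwidth). All the numbers below are invented for illustration.

        PEAK_GFLOPS = 50.0     # assumed aggregate compute of the processor cluster
        MEM_BW_GBS  = 10.0     # assumed shared-bus memory bandwidth, GB/s
        machine_balance = PEAK_GFLOPS / MEM_BW_GBS   # flops per byte

        # 3x3 convolution on 1-byte pixels: ~18 ops per output pixel, ~10 bytes moved.
        kernel_intensity = 18.0 / 10.0

        bound = "memory" if kernel_intensity < machine_balance else "compute"
        attainable = min(PEAK_GFLOPS, MEM_BW_GBS * kernel_intensity)
        print(f"{bound}-bound; attainable ~{attainable:.1f} GFLOP/s of {PEAK_GFLOPS} peak")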

  6. Analysis of failed nuclear plant components

    NASA Astrophysics Data System (ADS)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  7. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e., PPoF-based) modeling. This concept originated in artificial intelligence (AI) as a leading intelligent computational inference approach for modeling multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. [1] Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  8. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. Implementing a network with this many components is difficult enough; maintaining such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures and potential failures, and to minimize or eliminate user downtime in a large network.

  9. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of model parameters, and un-modeled dynamics of the dynamic system, which may be a flight vehicle, a financial market, or a modeled financial system.
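
    A toy sketch of residual-based detection in the spirit of this abstract: run a linear model of the plant alongside the sensed output and declare a fault when the error residual grows. The plant parameters, observer gain, threshold, and failure scenario are all invented; the patent's method computes its filter gains optimally rather than fixing them by hand.

        import numpy as np

        rng = np.random.default_rng(2)
        a, b = 0.95, 0.5             # assumed first-order plant: x[k+1] = a*x[k] + b*u[k]
        gain, threshold = 0.2, 0.8   # observer gain and residual alarm threshold

        x_true = x_est = 0.0
        for k in range(60):
            u = 1.0                                  # constant input signal
            eff = 1.0 if k < 30 else 0.3             # simulated actuator failure at k=30
            x_true = a * x_true + b * eff * u        # true plant state
            y = x_true + rng.normal(0.0, 0.02)       # noisy sensor measurement
            x_est = a * x_est + b * u + gain * (y - x_est)   # model-based estimate
            residual = y - x_est                     # output error residual
            if abs(residual) > threshold:
                print(f"failure declared at step {k}, residual {residual:+.2f}")
                break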

  10. An Examination of Commercial Aviation Accidents and Incidents Related to Integrated Vehicle Health Management

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.; Briggs, Jeffrey L.; Thomas, Megan A.; Evans, Joni K.; Jones, Sharon M.

    2011-01-01

    The Integrated Vehicle Health Management (IVHM) Project is one of the four projects within the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSafe). The IVHM Project conducts research to develop validated tools and technologies for automated detection, diagnosis, and prognosis that enable mitigation of adverse events during flight. Adverse events include those that arise from system, subsystem, or component failure, faults, and malfunctions due to damage, degradation, or environmental hazards that occur during flight. Determining the causal factors and adverse events related to IVHM technologies will help in the formulation of research requirements and establish a list of example adverse conditions against which IVHM technologies can be evaluated. This paper documents the results of an examination of the most recent statistical/prognostic accident and incident data that is available from the Aviation Safety Information Analysis and Sharing (ASIAS) System to determine the causal factors of system/component failures and/or malfunctions in U.S. commercial aviation accidents and incidents.

  11. Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems

    NASA Astrophysics Data System (ADS)

    Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn

    The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.

  12. A theoretical basis for the analysis of redundant software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (fault-tolerant software) is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and the question is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
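
    Under the simplifying assumption of independent version failures (the very assumption the coincident-error model above relaxes), majority voting over N versions fails when more than half the versions err on an input. A small sketch of that baseline arithmetic, with an invented per-version failure probability:

        from math import comb

        def nversion_failure(p, n):
            k = n // 2 + 1            # errors needed to out-vote the correct majority
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        p = 0.01                      # assumed per-version failure probability
        for n in (1, 3, 5, 7):
            print(f"N={n}: system failure probability {nversion_failure(p, n):.2e}")
        # Coincident errors raise these numbers sharply, which is the paper's point.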

  13. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft error rate of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.

  14. Integrated Design Software Predicts the Creep Life of Monolithic Ceramic Components

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Significant improvements in propulsion and power generation for the next century will require revolutionary advances in high-temperature materials and structural design. Advanced ceramics are candidate materials for these elevated-temperature applications. As design protocols emerge for these material systems, designers must be aware of several innate features, including the degradation of ceramics' ability to carry sustained load. Usually, time-dependent failure in ceramics occurs because of two different, delayed-failure mechanisms: slow crack growth and creep rupture. Slow crack growth initiates at a preexisting flaw and continues until a critical crack length is reached, causing catastrophic failure. Creep rupture, on the other hand, occurs because of bulk damage in the material: void nucleation and coalescence that eventually leads to macrocracks which then propagate to failure. Successful application of advanced ceramics depends on proper characterization of material behavior and the use of an appropriate design methodology. The life of a ceramic component can be predicted with the NASA Lewis Research Center's Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design programs. CARES/CREEP determines the expected life of a component under creep conditions, and CARES/LIFE predicts the component life due to fast fracture and subcritical crack growth. The previously developed CARES/LIFE program has been used in numerous industrial and government applications.

  15. FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Pack, G.

    1994-01-01

    The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraphs models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. 
Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math coprocessor or coprocessor emulator and a color monitor (or one with 256 gray scale capability). A minimum of 4Mb of free RAM is highly recommended. The UNIX version of FEAT includes both FEAT v3.6 for the Macintosh and XFEAT. XFEAT is written in C-language for Sun series workstations running SunOS, SGI workstations running IRIX, DECstations running ULTRIX, and Intergraph workstations running CLIX version 6. It requires the MIT X Window System, Version 11 Revision 4, with OSF/Motif 1.1.3, and 16Mb of RAM. The standard distribution medium for FEAT 3.6 (Macintosh version) is a set of three 3.5 inch Macintosh format diskettes. The standard distribution package for the UNIX version includes the three FEAT 3.6 Macintosh diskettes plus a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format which contains XFEAT. Alternate distribution media and formats for XFEAT are available upon request. FEAT has been under development since 1990. Both FEAT v3.6 for the Macintosh and XFEAT v3.5 were released in 1993.
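
    The transitive-closure preprocessing described above can be pictured with a few lines of Python. The sketch below is illustrative only: the component names and edges are invented, and reachability is computed with a Warshall-style pass so that effect queries ("what does this failure reach?") and cause queries ("what can reach this failure?") become set lookups.

      import itertools

      # Toy failure digraph: an edge u -> v means "failure of u can propagate to v".
      # Node names are hypothetical; a real FEAT model attaches formatted
      # descriptions to each node.
      digraph = {
          "pump_A":       ["coolant_loop"],
          "pump_B":       ["coolant_loop"],
          "coolant_loop": ["reactor_trip"],
          "power_bus":    ["pump_A", "pump_B"],
          "reactor_trip": [],
      }

      def transitive_closure(graph):
          """Warshall-style closure: reach[u] = every node reachable from u."""
          reach = {u: set(vs) for u, vs in graph.items()}
          for k, i in itertools.product(graph, repeat=2):
              if k in reach[i]:
                  reach[i] |= reach[k]
          return reach

      reach = transitive_closure(digraph)

      # Effect query: what fails if power_bus fails?
      print(sorted(reach["power_bus"]))

      # Cause query (bi-directional processing): what can cause reactor_trip?
      print(sorted(u for u in digraph if "reactor_trip" in reach[u]))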

  16. Safety Guided Design of Crew Return Vehicle in Concept Design Phase Using STAMP/STPA

    NASA Astrophysics Data System (ADS)

    Nakao, H.; Katahira, M.; Miyamoto, Y.; Leveson, N.

    2012-01-01

    In the concept development and design phase of a new space system, such as a crew vehicle, designers tend to focus on how to implement new technology. Designers also consider the difficulty of using the new technology and trade off several system design candidates, then choose an optimal design from among them. Safety should be a key aspect driving optimal concept design. However, in past concept design activities, safety analysis such as FTA has not been used to drive the design, because such techniques focus on component failure, which cannot be considered in the concept design phase. The solution to these problems is to apply a new hazard analysis technique called STAMP/STPA. STAMP/STPA defines safety as a control problem rather than a failure problem and identifies hazardous scenarios and their causes. Defining control flow is essential in the concept design phase; STAMP/STPA can therefore be a useful tool for assessing the safety of system candidates and for providing part of the rationale for choosing a design as the baseline of the system. In this paper, we explain our case study of safety-guided concept design using STPA, the new hazard analysis technique, and a model-based specification technique on a Crew Return Vehicle design, and we evaluate the benefits of using STAMP/STPA in the concept development phase.
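
    The control-oriented bookkeeping that STPA prescribes can be sketched in code. The fragment below is a hypothetical illustration, not the paper's model: it enumerates the four standard unsafe-control-action categories for each control action of an invented crew-return-vehicle controller.

      # Hypothetical STPA bookkeeping: control actions of a crew-return-vehicle
      # controller, checked against the four standard unsafe-control-action types.
      UCA_TYPES = [
          "not providing causes hazard",
          "providing causes hazard",
          "wrong timing or order",
          "stopped too soon or applied too long",
      ]

      control_actions = ["fire deorbit burn", "deploy parachute", "jettison heat shield"]

      # Analysts fill this table in; the single entry here is an invented placeholder.
      unsafe = {
          ("deploy parachute", "wrong timing or order"):
              "deployment above rated dynamic pressure tears the canopy",
      }

      for ca in control_actions:
          for uca in UCA_TYPES:
              note = unsafe.get((ca, uca), "not hazardous / to be analyzed")
              print(f"{ca:25s} | {uca:38s} | {note}")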

  17. Cascading Failures and Recovery in Networks of Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    Network science has focused on the properties of a single isolated network that does not interact with or depend on other networks. In reality, many real networks, such as power grids, transportation and communication infrastructures, interact with and depend on other networks. I will present a framework for studying the vulnerability and the recovery of networks of interdependent networks. In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This is also the case when some nodes, such as certain locations, play a role in two networks (multiplex networks). This may happen recursively and can lead to a cascade of failures and to a sudden fragmentation of the system. I will present analytical solutions for the critical threshold and the giant component of a network of n interdependent networks. I will show that the general theory has many novel features that are not present in classical network theory. When recovery of components is possible, global spontaneous recovery of the networks and hysteresis phenomena occur, and the theory suggests an optimal repairing strategy for a system of systems. I will also show that interdependent networks embedded in space are significantly more vulnerable than non-embedded networks. In particular, small localized attacks may lead to cascading failures and catastrophic consequences. Thus, analyzing data from real networks of networks is essential to understanding system vulnerability. DTRA, ONR, Israel Science Foundation.
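
    A minimal simulation of the mutual-percolation cascade described in this abstract, assuming two randomly coupled Erdős-Rényi networks with one-to-one interdependence (all sizes and densities are invented):

      import random
      import networkx as nx

      random.seed(1)
      N = 1000
      A = nx.erdos_renyi_graph(N, 4 / N, seed=1)   # network A
      B = nx.erdos_renyi_graph(N, 4 / N, seed=2)   # network B
      # One-to-one interdependence: node i in A depends on node i in B and vice versa.

      def giant_component(g, alive):
          sub = g.subgraph(alive)
          if sub.number_of_nodes() == 0:
              return set()
          return max(nx.connected_components(sub), key=len)

      # Initial attack: remove a fraction 1-p of the nodes.
      p = 0.6
      alive = set(random.sample(range(N), int(p * N)))

      # Cascade: a node survives only if it is in the giant component of its own
      # network AND its interdependent partner also survives.
      while True:
          new_alive = giant_component(A, alive) & giant_component(B, alive)
          if new_alive == alive:
              break
          alive = new_alive

      print(f"mutual giant component: {len(alive) / N:.3f} of nodes survive")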

  18. Thermal Cycling Fatigue in DIPs Mounted with Eutectic Tin-Lead Solder Joints in Stub and Gullwing Geometries

    NASA Technical Reports Server (NTRS)

    Winslow, J. W.; Silveira, C. de

    1993-01-01

    It has long been known that solder joints under mechanical stress are subject to failure. In early electronic systems, such failures were prevented primarily by not using solder as a mechanical structural component. The rule was first to make sound wire connections that did not depend mechanically on solder, and only then to solder them. Careful design and miniaturization in modern electronic systems limit the mechanical stresses exerted on solder joints to values less than their yield points, and these joints have become integral parts of the mechanical structures. Unfortunately, while these joints are strong enough when new, they have proven vulnerable to fatigue failures as they age. Details of the fatigue process are poorly understood, making predictions of expected lifetimes difficult.

  19. Clinical outcome of the metal-on-metal hybrid Corin Cormet 2000 hip resurfacing system: an up to 11-year follow-up study.

    PubMed

    Gross, Thomas P; Liu, Fei; Webb, Lee A

    2012-04-01

    This report extends the follow-up for the largest center of the first multicenter US Food and Drug Administration investigational device exemption study on metal-on-metal hip resurfacing arthroplasty up to 11 years. A single surgeon performed 373 hip resurfacing arthroplasties using the hybrid Corin Cormet 2000 system. The Kaplan-Meier survivorship at 11 years was 93% when revision for any reason was used as an end point and 91% if radiographic failures were included. The clinical results demonstrate an acceptable failure rate with use of this system. Loosening of the cemented femoral components was the most common source of failure and occurred at all follow-up intervals. A learning curve that persisted for at least 200 cases was confirmed. All femoral neck fractures occurred before 6 months postoperatively. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for predictions, which was shown to perform equal to or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
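
    A highly simplified sketch of the regression-to-fuzziness idea, assuming a single degradation parameter, a triangular membership function standing in for the expert's fuzzy model, and invented numbers throughout:

      import numpy as np

      # Degradation measurements (hypothetical turbine wear indicator vs. hours).
      t = np.array([0., 500., 1000., 1500., 2000.])
      x = np.array([0.10, 0.18, 0.24, 0.33, 0.41])
      FAILURE_LEVEL = 0.80           # expert-defined failure point of the parameter

      # Expert knowledge as a triangular fuzzy set over the parameter range:
      # membership is highest where the expert trusts the readings most.
      def membership(v, lo=0.0, peak=0.3, hi=0.8):
          if v <= lo or v >= hi:
              return 0.0
          return (v - lo) / (peak - lo) if v < peak else (hi - v) / (hi - peak)

      w = np.array([membership(v) for v in x]) + 1e-6   # avoid all-zero weights

      # Membership-weighted least squares fit of x(t) = a*t + b.
      a, b = np.polyfit(t, x, 1, w=w)

      t_fail = (FAILURE_LEVEL - b) / a       # time at which the trend hits failure
      rul = t_fail - t[-1]                   # remaining useful life from last reading
      print(f"estimated failure at {t_fail:.0f} h, RUL ~ {rul:.0f} h")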

  1. Parametric Testing of Launch Vehicle FDDR Models

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system that detects and identifies failures, provides real-time diagnostics, and initiates fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in the data. Our tools can generate detailed HTML reports that facilitate the analysis.
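
    The parameter-generation step can be illustrated as follows. This is a hedged sketch, not the authors' tool: it greedily covers all 2-factor combinations of a few invented discrete failure settings, then adds Monte Carlo jitter to a continuous parameter.

      import itertools
      import random

      random.seed(0)

      # Discrete failure-injection settings (hypothetical).
      factors = {
          "engine_fail_time": [None, 20.0, 60.0],     # seconds, None = no failure
          "sensor_bias":      [0.0, 0.05],
          "valve_stuck":      [False, True],
      }

      # Greedy pairwise (2-factor) covering: keep adding random test cases until
      # every pair of factor values appears in at least one case.
      names = list(factors)
      all_pairs = {((n1, v1), (n2, v2))
                   for n1, n2 in itertools.combinations(names, 2)
                   for v1 in factors[n1] for v2 in factors[n2]}
      cases, covered = [], set()
      while covered != all_pairs:
          case = {n: random.choice(vs) for n, vs in factors.items()}
          pairs = {((n1, case[n1]), (n2, case[n2]))
                   for n1, n2 in itertools.combinations(names, 2)}
          if pairs - covered:                  # keep only cases that add coverage
              covered |= pairs
              cases.append(case)

      # Monte Carlo jitter of a continuous parameter for each combinatorial case.
      for case in cases:
          case["thrust_dispersion"] = random.gauss(1.0, 0.02)

      print(f"{len(cases)} test cases cover all {len(all_pairs)} value pairs")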

  2. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method, the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
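
    For flavor, a toy version of a fuzzy weighted geometric mean RPN using triangular fuzzy numbers for severity, occurrence, and detection; the ratings, weights, and centroid defuzzification below are illustrative assumptions, not the paper's exact formulation:

      import numpy as np

      # Triangular fuzzy ratings (low, mode, high) on a 1-10 scale, invented.
      failure_modes = {
          "blade fatigue crack": {"S": (8, 9, 10), "O": (4, 5, 6), "D": (5, 6, 7)},
          "disk fretting":       {"S": (6, 7, 8),  "O": (5, 6, 7), "D": (3, 4, 5)},
      }
      weights = {"S": 0.5, "O": 0.3, "D": 0.2}     # relative importance, sums to 1

      def fuzzy_wgm(ratings):
          """Weighted geometric mean applied component-wise to (low, mode, high)."""
          tri = np.ones(3)
          for factor, (lo, m, hi) in ratings.items():
              tri *= np.array([lo, m, hi], dtype=float) ** weights[factor]
          return tri

      def defuzzify(tri):
          """Centroid of a triangular fuzzy number."""
          return float(np.mean(tri))

      for name, ratings in failure_modes.items():
          frpn = fuzzy_wgm(ratings)
          print(f"{name:22s} FRPN=({frpn[0]:.2f},{frpn[1]:.2f},{frpn[2]:.2f})"
                f" crisp={defuzzify(frpn):.2f}")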

  3. Composite Interlaminar Shear Fracture Toughness, G(sub 2c): Shear Measurement or Sheer Myth?

    NASA Technical Reports Server (NTRS)

    OBrien, T. Kevin

    1997-01-01

    The concept of G2c as a measure of the interlaminar shear fracture toughness of a composite material is critically examined. In particular, it is argued that the apparent G2c as typically measured is inconsistent with the original definition of shear fracture. It is shown that interlaminar shear failure actually consists of tension failures in the resin-rich layers between plies, followed by the coalescence of ligaments created by these failures, and not the sliding of two planes relative to one another that is assumed in fracture mechanics theory. Several strain energy release rate solutions are reviewed for delamination in composite laminates and structural components where failures have been experimentally documented. Failures typically occur at a location where the mode I component accounts for at least one half of the total G at failure. Hence, it is the mode I and mixed-mode interlaminar fracture toughness data that will be most useful in predicting delamination failure in composite components in service. Although apparent G2c measurements may prove useful for completeness of generating mixed-mode criteria, the accuracy of these measurements may have very little influence on the prediction of mixed-mode failures in most structural components.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    This work incorporates real-time component information using equipment condition assessment (ECA) through the development of enhanced risk monitors (ERM) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. We incorporate time-dependent failure probabilities from prognostic health management (PHM) systems to dynamically update the risk metric of interest. This information is used to augment data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and demands placed on components that support mission requirements.
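
    The update step can be made concrete with a toy risk model; the Weibull PHM estimate, component list, and risk metric below are all invented for illustration:

      import math

      # Static design-basis failure probabilities of components in a minimal
      # risk model (all values hypothetical).
      static_p = {"pump": 1e-3, "valve": 5e-4}

      def phm_failure_prob(component, hours):
          """Stand-in for a PHM estimate: Weibull with aging shape > 1."""
          eta, beta = {"pump": (40_000, 2.2), "valve": (80_000, 1.8)}[component]
          return 1.0 - math.exp(-((hours / eta) ** beta))

      def risk_metric(p):
          # Toy risk model: the system fails only if pump AND valve both fail.
          return p["pump"] * p["valve"]

      baseline = risk_metric(static_p)
      for hours in (10_000, 30_000, 60_000):
          live = {c: phm_failure_prob(c, hours) for c in static_p}
          incr = risk_metric(live) - baseline
          print(f"{hours:>6} h: incremental risk = {incr:+.2e}")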

  5. An electromechanical material testing system for in situ electron microscopy and applications.

    PubMed

    Zhu, Yong; Espinosa, Horacio D

    2005-10-11

    We report the development of a material testing system for in situ electron microscopy (EM) mechanical testing of nanostructures. The testing system consists of an actuator and a load sensor fabricated by means of surface micromachining. This previously undescribed nanoscale material testing system makes possible continuous observation of the specimen deformation and failure with subnanometer resolution, while simultaneously measuring the applied load electronically with nanonewton resolution. This achievement was made possible by the integration of electromechanical and thermomechanical components based on microelectromechanical system technology. The system capabilities are demonstrated by the in situ EM testing of free-standing polysilicon films, metallic nanowires, and carbon nanotubes. In particular, a previously undescribed real-time instrumented in situ transmission EM observation of carbon nanotube failure under tensile load is presented here.

  6. Data Quality for Situational Awareness during Mass-Casualty Events

    PubMed Central

    Demchak, Barry; Griswold, William G.; Lenert, Leslie A.

    2007-01-01

    Incident Command systems often achieve situational awareness through manual paper-tracking systems. Such systems often produce high latencies and incomplete data, resulting in inefficient and ineffective resource deployment. WIISARD (Wireless Internet Information System for Medical Response in Disasters) collects much more data than a paper-based system, dramatically reducing latency while increasing the kinds and quality of information available to incident commanders. Yet, the introduction of IT into a disaster setting is not problem-free. Notably, system component failures can delay the delivery of data. The type and extent of a failure can have varying effects on the usefulness of information displays. We describe a small, coherent set of customizable information overlays to address this problem, and we discuss reactions to these displays by medical commanders. PMID:18693821

  7. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measurement based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Previously vague issues, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy in manufacturing-system reliability analysis, are elaborated. This framework can support reliability optimisation and rational allocation of maintenance resources in job shop manufacturing systems.
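
    Component importance can be made concrete with a standard measure. The sketch below computes Birnbaum importance (one common choice, not necessarily the paper's FEA-based measure) for an invented series-parallel job shop:

      # Toy job shop: machines M1 and M2 in parallel, feeding inspection station
      # M3 in series. Reliabilities are invented.
      def system_reliability(r):
          return (1 - (1 - r["M1"]) * (1 - r["M2"])) * r["M3"]

      r = {"M1": 0.90, "M2": 0.85, "M3": 0.95}

      # Birnbaum importance: dR_sys/dr_i, computed by setting component i
      # perfectly working vs. failed.
      for name in r:
          up, down = dict(r), dict(r)
          up[name], down[name] = 1.0, 0.0
          birnbaum = system_reliability(up) - system_reliability(down)
          print(f"{name}: Birnbaum importance = {birnbaum:.3f}")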

  8. Insulation Coordination and Failure Mitigation Concerns for Robust DC Electrical Power Systems (Preprint)

    DTIC Science & Technology

    2014-05-01

    vulnerable to failure is air. This could be a discharge through an air medium or along an air/surface interface. Achieving robustness in dc power...sputtering” arcs) are discharges that are most commonly located in series with the intended load; the electrical impedance of the load limits the...particularly those used at voltages > 1000 V, is detection and measurement of partial-discharge (PD) activity. The presence of PD in a component typically

  9. 49 CFR 571.122 - Standard No. 122; Motorcycle brake systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... mile before any brake application. Skid number means the frictional resistance of a pavement measured... subsystems actuated by a single control designed so that a leakage-type failure of a pressure component in a...), but not less than 0 Newtons (0 pounds). S5.8Service brake system design durability. Each motorcycle...

  10. 49 CFR 571.122 - Standard No. 122; Motorcycle brake systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... mile before any brake application. Skid number means the frictional resistance of a pavement measured... subsystems actuated by a single control designed so that a leakage-type failure of a pressure component in a...), but not less than 0 Newtons (0 pounds). S5.8Service brake system design durability. Each motorcycle...

  11. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

    Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition and state, and with directives on the control goals in terms of an objective function and constraints, the control then solves an optimization problem so that the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).
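
    The "solve an optimization each step" idea reduces to a toy as follows: minimize deviation from a thrust target subject to a temperature constraint that tightens when a fault degrades the model. The engine model, limits, and numbers are all invented:

      from scipy.optimize import minimize

      # Toy engine model: fuel flow u -> (thrust, turbine temperature).
      def engine(u, health=1.0):
          thrust = 120.0 * health * u          # kN, linear in fuel flow
          temp = 600.0 + 450.0 * u             # K, rises with fuel flow
          return thrust, temp

      def control_step(thrust_target, temp_limit, health):
          def objective(u):
              thrust, _ = engine(u[0], health)
              return (thrust - thrust_target) ** 2
          cons = [{"type": "ineq",
                   "fun": lambda u: temp_limit - engine(u[0], health)[1]}]
          res = minimize(objective, x0=[0.5], bounds=[(0.0, 1.0)], constraints=cons)
          return res.x[0]

      # Nominal operation, then a detected fault degrades the model (health=0.8)
      # and the temperature limit is tightened to protect the damaged part.
      for health, limit in [(1.0, 1000.0), (0.8, 950.0)]:
          u = control_step(thrust_target=90.0, temp_limit=limit, health=health)
          thrust, temp = engine(u, health)
          print(f"health={health:.1f}: u={u:.3f} thrust={thrust:.1f} kN temp={temp:.0f} K")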

  12. Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R

    2013-01-01

    In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security. This allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and worked with an industry technical working group to select three representative threats from a collection of 29 threats. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector, with realistic numbers. Each stakeholder's Mean Failure Cost is then computed.
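
    In spirit, the computation chains the matrices end to end. A sketch with invented dimensions and values (the actual CSES matrices are far richer):

      import numpy as np

      # Rows/columns are illustrative only.
      stakeholders = ["utility", "consumer"]
      requirements = ["confidentiality", "integrity", "availability"]
      components   = ["meter", "head_end", "key_server"]
      threats      = ["key theft", "spoofed meter", "DoS"]

      ST = np.array([[9.0, 8.0, 7.0],   # stakes: loss to each stakeholder if a
                     [3.0, 5.0, 6.0]])  # requirement is violated (invented units)
      DP = np.array([[0.2, 0.1, 0.7],   # P(requirement violated | component fails)
                     [0.3, 0.4, 0.3],
                     [0.1, 0.6, 0.3]])
      IM = np.array([[0.4, 0.5, 0.0],   # P(component compromised | threat occurs)
                     [0.1, 0.2, 0.3],
                     [0.6, 0.0, 0.2]])
      PT = np.array([1e-3, 5e-4, 2e-3])  # P(threat materializes per unit time)

      # Mean failure cost per stakeholder: chain the dependencies end to end.
      MFC = ST @ DP @ IM @ PT
      for s, cost in zip(stakeholders, MFC):
          print(f"{s}: mean failure cost ~ {cost:.4f} per unit time")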

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mouton, S.; Ledoux, Y.; Teissandier, D.

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge means optimizing the design of the aircraft to decrease its overall mass, which in turn requires optimizing every constituent part of the plane. This task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  14. The cell wall components peptidoglycan and lipoteichoic acid from Staphylococcus aureus act in synergy to cause shock and multiple organ failure.

    PubMed Central

    De Kimpe, S J; Kengatharan, M; Thiemermann, C; Vane, J R

    1995-01-01

    Although the incidence of Gram-positive sepsis has risen strongly, it is unclear how Gram-positive organisms (without endotoxin) initiate septic shock. We investigated whether two cell wall components from Staphylococcus aureus, peptidoglycan (PepG) and lipoteichoic acid (LTA), can induce the inflammatory response and multiple organ dysfunction syndrome (MODS) associated with septic shock caused by Gram-positive organisms. In cultured macrophages, LTA (10 micrograms/ml), but not PepG (100 micrograms/ml), induces the release of nitric oxide measured as nitrite. PepG, however, caused a 4-fold increase in the production of nitrite elicited by LTA. Furthermore, PepG antibodies inhibited the release of nitrite elicited by killed S. aureus. Administration of both PepG (10 mg/kg; i.v.) and LTA (3 mg/kg; i.v.) in anesthetized rats resulted in the release of tumor necrosis factor alpha and interferon gamma and MODS, as indicated by a decrease in arterial oxygen pressure (lung) and an increase in plasma concentrations of bilirubin and alanine aminotransferase (liver), creatinine and urea (kidney), lipase (pancreas), and creatine kinase (heart or skeletal muscle). There was also the expression of inducible nitric oxide synthase in these organs, circulatory failure, and 50% mortality. These effects were not observed after administration of PepG or LTA alone. Even a high dose of LTA (10 mg/kg) causes only circulatory failure but no MODS. Thus, our results demonstrate that the two bacterial wall components, PepG and LTA, work together to cause systemic inflammation and multiple systems failure associated with Gram-positive organisms. PMID:7479784

  15. 77 FR 3514 - Protection Against Turbine Missiles

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-24

    ... NUCLEAR REGULATORY COMMISSION [NRC-2009-0481] Protection Against Turbine Missiles AGENCY: Nuclear... (NRC or Commission) is issuing a revision to Regulatory Guide 1.115, ``Protection Against Turbine... structures, systems, and components against missiles resulting from turbine failure by the appropriate...

  16. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been treated as a system parameter like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
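
    The Bayesian flavor of the approach can be shown in a few lines: a Beta prior representing engineering analysis and similarity to heritage hardware, updated with test results. All numbers are invented:

      from scipy import stats

      # Prior from engineering analysis / similarity to heritage engines:
      # roughly "38 successes in 40 equivalent trials" worth of belief.
      a0, b0 = 38.0, 2.0

      # New hot-fire test campaign: 17 successes in 18 tests (hypothetical).
      successes, failures = 17, 1

      # Beta-Binomial conjugate update.
      a, b = a0 + successes, b0 + failures
      posterior = stats.beta(a, b)

      print(f"posterior mean reliability: {posterior.mean():.4f}")
      lo, hi = posterior.interval(0.90)
      print(f"90% credible interval: ({lo:.4f}, {hi:.4f})")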

  17. Mass and Reliability Source (MaRS) Database

    NASA Technical Reports Server (NTRS)

    Valdenegro, Wladimir

    2017-01-01

    The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually without the assistance of an automated program to ensure positive identification.

  18. Computational methodology to predict satellite system-level effects from impacts of untrackable space debris

    NASA Astrophysics Data System (ADS)

    Welty, N.; Rudolph, M.; Schäfer, F.; Apeldoorn, J.; Janovsky, R.

    2013-07-01

    This paper presents a computational methodology to predict the satellite system-level effects resulting from impacts of untrackable space debris particles. This approach seeks to improve on traditional risk assessment practices by looking beyond the structural penetration of the satellite and predicting the physical damage to internal components and the associated functional impairment caused by untrackable debris impacts. The proposed method combines a debris flux model with the Schäfer-Ryan-Lambert ballistic limit equation (BLE), which accounts for the inherent shielding of components positioned behind the spacecraft structure wall. Individual debris particle impact trajectories and component shadowing effects are considered and the failure probabilities of individual satellite components as a function of mission time are calculated. These results are correlated to expected functional impairment using a Boolean logic model of the system functional architecture considering the functional dependencies and redundancies within the system.
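
    A compressed sketch of the chain from flux to functional impairment, assuming a constant penetrating-debris flux per component and a simple Boolean system function; every value and the architecture itself are invented:

      import itertools
      import math

      # Effective debris flux above each component's ballistic limit
      # (impacts per m^2 per year, after shadowing); values are invented.
      flux = {"battery": 2e-5, "obc": 5e-6, "tank": 1e-5}
      area = {"battery": 0.30, "obc": 0.10, "tank": 0.50}   # exposed area, m^2

      def p_fail(component, years):
          # Poisson probability of at least one penetrating impact.
          lam = flux[component] * area[component] * years
          return 1.0 - math.exp(-lam)

      def system_impaired(failed):
          # Boolean functional model: losing the OBC, or losing both the
          # battery and the tank, impairs the mission (invented architecture).
          return failed["obc"] or (failed["battery"] and failed["tank"])

      years = 7.0
      p = {c: p_fail(c, years) for c in flux}

      # Exact enumeration over the 2^3 component states.
      p_sys = 0.0
      for states in itertools.product([False, True], repeat=3):
          failed = dict(zip(flux, states))
          weight = math.prod(p[c] if failed[c] else 1.0 - p[c] for c in flux)
          if system_impaired(failed):
              p_sys += weight
      print(f"P(functional impairment over {years:.0f} y) = {p_sys:.3e}")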

  19. Development of STS/Centaur failure probabilities liftoff to Centaur separation

    NASA Technical Reports Server (NTRS)

    Hudson, J. M.

    1982-01-01

    The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.

  20. Commonalities and Differences in Functional Safety Systems Between ISS Payloads and Industrial Applications

    NASA Astrophysics Data System (ADS)

    Malyshev, Mikhail; Kreimer, Johannes

    2013-09-01

    Safety analyses for electrical, electronic and/or programmable electronic (E/E/EP) safety-related systems used in payload applications on-board the International Space Station (ISS) are often based on failure modes, effects and criticality analysis (FMECA). For industrial applications of E/E/EP safety-related systems, comparable strategies exist and are defined in the IEC-61508 standard. This standard defines some quantitative criteria based on potential failure modes (for example, the Safe Failure Fraction). These criteria can be calculated for an E/E/EP system or its components to assess their compliance with the requirements of a particular Safety Integrity Level (SIL). The standard defines several SILs depending on how much risk has to be mitigated by a safety-critical system. When a FMECA is available for an ISS payload or its subsystem, it may be possible to calculate the same or similar parameters as defined in the 61508 standard. One example of a payload that has a dedicated functional safety subsystem is the Electromagnetic Levitator (EML). This payload for the ISS is planned to be operated on-board starting in 2014. The EML is a high-temperature materials processing facility. The dedicated subsystem "Hazard Control Electronics" (HCE) is implemented to ensure failure tolerance in limiting sample processing parameters, keeping the generation of potentially toxic by-products within safe limits in line with the requirements that the ISS Program applies to payloads. The objective of this paper is to assess the implementation of the HCE in the EML against the criteria for functional safety systems in the IEC-61508 standard and to evaluate commonalities and differences with respect to safety requirements levied on ISS payloads. An attempt is made to assess the possibility of using commercially available components and systems certified for compliance with industrial functional safety standards in ISS payloads.
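
    The Safe Failure Fraction mentioned above reduces to simple arithmetic over FMECA failure-rate buckets. A sketch with invented failure rates (in FIT, failures per 10^9 hours):

      # Failure rates in FIT, split by FMECA category; values are invented.
      lambda_safe                 = 420.0  # failures that leave the system safe
      lambda_dangerous_detected   = 310.0  # dangerous but caught by diagnostics
      lambda_dangerous_undetected = 45.0   # dangerous and unrevealed

      total = (lambda_safe + lambda_dangerous_detected
               + lambda_dangerous_undetected)
      sff = (lambda_safe + lambda_dangerous_detected) / total
      print(f"Safe Failure Fraction = {sff:.1%}")

      # IEC 61508 then bounds the achievable SIL from the SFF together with the
      # hardware fault tolerance (HFT); that table lookup is omitted here.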

  1. A Robust Damage-Reporting Strategy for Polymeric Materials Enabled by Aggregation-Induced Emission.

    PubMed

    Robb, Maxwell J; Li, Wenle; Gergely, Ryan C R; Matthews, Christopher C; White, Scott R; Sottos, Nancy R; Moore, Jeffrey S

    2016-09-28

    Microscopic damage inevitably leads to failure in polymers and composite materials, but it is difficult to detect without the aid of specialized equipment. The ability to enhance the detection of small-scale damage prior to catastrophic material failure is important for improving the safety and reliability of critical engineering components, while simultaneously reducing life cycle costs associated with regular maintenance and inspection. Here, we demonstrate a simple, robust, and sensitive fluorescence-based approach for autonomous detection of damage in polymeric materials and composites enabled by aggregation-induced emission (AIE). This simple, yet powerful system relies on a single active component, and the general mechanism delivers outstanding performance in a wide variety of materials with diverse chemical and mechanical properties.

  2. Failure and life cycle evaluation of watering valves.

    PubMed

    Gonzalez, David M; Graciano, Sandy J; Karlstad, John; Leblanc, Mathias; Clark, Tom; Holmes, Scott; Reuter, Jon D

    2011-09-01

    Automated watering systems provide a reliable source of ad libitum water to animal cages. Our facility uses an automated water delivery system to support approximately 95% of the housed population (approximately 14,000 mouse cages). Drinking valve failure rates from 2002 through 2006 never exceeded the manufacturer standard of 0.1% total failure, based on monthly cage census and the number of floods. In 2007, we noted an increase in both flooding and cases of clinical dehydration in our mouse population. Using manufacturer's specifications for a water flow rate of 25 to 50 mL/min, we initiated a wide-scale screening of all valves used. During a 4-mo period, approximately 17,000 valves were assessed, of which 2200 failed according to scoring criteria (12.9% overall; 7.2% low flow; 1.6% no flow; 4.1% leaky). Factors leading to valve failures included residual metal shavings, silicone flash, introduced debris or bedding, and (most common) distortion of the autoclave-rated internal diaphragm and O-ring. Further evaluation revealed that despite normal autoclave conditions of heat, pressure, and steam, an extreme negative vacuum pull caused the valves' internal silicone components (diaphragm and O-ring) to become distorted and water-permeable. Normal flow rate often returned after a 'drying out' period, but components then reabsorbed water while on the animal rack or during subsequent autoclave cycles to revert to a variable flow condition. On the basis of our findings, we recalibrated autoclaves and initiated a preventative maintenance program to mitigate the risk of future valve failure.

  3. Failure and Life Cycle Evaluation of Watering Valves

    PubMed Central

    Gonzalez, David M; Graciano, Sandy J; Karlstad, John; Leblanc, Mathias; Clark, Tom; Holmes, Scott; Reuter, Jon D

    2011-01-01

    Automated watering systems provide a reliable source of ad libitum water to animal cages. Our facility uses an automated water delivery system to support approximately 95% of the housed population (approximately 14,000 mouse cages). Drinking valve failure rates from 2002 through 2006 never exceeded the manufacturer standard of 0.1% total failure, based on monthly cage census and the number of floods. In 2007, we noted an increase in both flooding and cases of clinical dehydration in our mouse population. Using manufacturer's specifications for a water flow rate of 25 to 50 mL/min, we initiated a wide-scale screening of all valves used. During a 4-mo period, approximately 17,000 valves were assessed, of which 2200 failed according to scoring criteria (12.9% overall; 7.2% low flow; 1.6% no flow; 4.1% leaky). Factors leading to valve failures included residual metal shavings, silicone flash, introduced debris or bedding, and (most common) distortion of the autoclave-rated internal diaphragm and O-ring. Further evaluation revealed that despite normal autoclave conditions of heat, pressure, and steam, an extreme negative vacuum pull caused the valves’ internal silicone components (diaphragm and O-ring) to become distorted and water-permeable. Normal flow rate often returned after a ‘drying out’ period, but components then reabsorbed water while on the animal rack or during subsequent autoclave cycles to revert to a variable flow condition. On the basis of our findings, we recalibrated autoclaves and initiated a preventative maintenance program to mitigate the risk of future valve failure. PMID:22330720

  4. Transtibial prosthesis suspension failure during skydiving freefall: a case report.

    PubMed

    Gordon, Assaf T; Land, Rebekah M

    2009-01-01

    This report describes the unusual case of an everyday-use prosthesis suspension system failure during the freefall phase of a skydiving jump. The case individual was a 53-year-old male with a left transtibial amputation secondary to trauma. He used his everyday prosthesis, a transtibial endoskeleton with push-button, plunger-releasing, pin-locking silicone liner suction suspension and a neoprene knee suspension sleeve, for a standard recreational tandem skydive. Within seconds of exiting the plane, the suspension systems failed, resulting in the complete prosthesis floating away. Several factors may have led to suspension system failure, including an inadequate seal and material design of the knee suspension sleeve and liner, lack of auxiliary suspension mechanisms, and lack of a safety cover overlying the push-button release mechanism. This is the first report, to our knowledge, to discuss prosthetic issues specifically related to skydiving. While amputees are to be encouraged to participate in this extreme sport, special modifications to everyday components may be necessary to reduce the possibility of prosthesis failure during freefall, parachute deployment, and landing.

  5. Lyapunov-Based Sensor Failure Detection And Recovery For The Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2001-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
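
    A bare-bones version of the idea for a single first-order component with invented dynamics: a Luenberger observer reconstructs the sensed variable, and a scalar residual inequality flags a sensor failure when violated. This is an illustrative stand-in for the report's formulation, not a reproduction of it:

      import numpy as np

      # True plant: x_dot = -a*x + b*u, sensor measures y = x (hypothetical).
      a, b, dt = 0.5, 1.0, 0.01

      def simulate(T=20.0, fail_at=10.0):
          x, xhat = 1.0, 0.8          # plant state and observer estimate
          L = 2.0                     # observer gain
          for k in range(int(T / dt)):
              t = k * dt
              u = np.sin(0.3 * t)
              y = x if t < fail_at else 0.0     # sensor sticks at zero on failure
              # Plant and Luenberger observer integration (Euler).
              x    += dt * (-a * x + b * u)
              xhat += dt * (-a * xhat + b * u + L * (y - xhat))
              # Scalar residual: with a healthy sensor, |y - xhat| decays toward 0
              # (Lyapunov argument on the estimation error); a sustained violation
              # of the threshold indicates sensor failure.
              if abs(y - xhat) > 0.3 and t > 1.0:
                  return t
          return None

      print(f"sensor failure flagged at t ~ {simulate():.2f} s")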

  6. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  7. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  8. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.
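
    A cartoon of the classification logic: compute mass/energy/momentum imbalances for a control volume, then implicate connected components functionally classified as sources of the most disturbed quantity. All plant data and the matching rule are invented simplifications of the patented method:

      # Control-volume imbalances (measured minus expected), invented units.
      imbalance = {"mass": -0.8, "energy": -0.1, "momentum": -2.5}

      # Knowledge base 2: functional classification of components by the balance
      # equation their failure perturbs most strongly.
      classification = {
          "feed_pump":   "momentum",   # source of momentum (flow)
          "heater":      "energy",     # source of energy
          "makeup_tank": "mass",       # source of mass
      }

      # Knowledge base 3: connectivity (which components feed this control volume).
      connected = ["feed_pump", "heater"]

      # A large imbalance in one equation implicates connected components
      # classified as sources/sinks for that quantity.
      dominant = max(imbalance, key=lambda q: abs(imbalance[q]))
      candidates = [c for c in connected if classification[c] == dominant]
      print(f"dominant imbalance: {dominant}; faulty candidates: {candidates}")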

  9. Solder Reflow Failures in Electronic Components During Manual Soldering

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander; Greenwell, Chris; Felt, Frederick

    2008-01-01

    This viewgraph presentation reviews the solder reflow failures in electronic components that occur during manual soldering. It discusses the specifics of manual-soldering-induced failures in plastic devices with internal solder joints. The failure analysis revealed that molten solder had squeezed up to the die surface along the die/molding-compound interface; because the dice were not protected with glassivation, the solder shorted the gate and source to the drain contact. The failure analysis concluded that the parts failed due to overheating during manual soldering.

  10. Advanced Signal Conditioners for Data-Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Lucena, Angel; Perotti, Jose; Eckhoff, Anthony; Medelius, Pedro

    2004-01-01

    Signal conditioners embodying advanced concepts in analog and digital electronic circuitry and software have been developed for use in data-acquisition systems that are required to be compact and lightweight, to utilize electric energy efficiently, and to operate with high reliability, high accuracy, and high power efficiency, without intervention by human technicians. These signal conditioners were originally intended for use aboard spacecraft. There are also numerous potential terrestrial uses - especially in the fields of aeronautics and medicine, wherein it is necessary to monitor critical functions. Going beyond the usual analog and digital signal-processing functions of prior signal conditioners, the new signal conditioner performs the following additional functions: It continuously diagnoses its own electronic circuitry, so that it can detect failures and repair itself (as described below) within seconds. It continuously calibrates itself on the basis of a highly accurate and stable voltage reference, so that it can continue to generate accurate measurement data, even under extreme environmental conditions. It repairs itself in the sense that it contains a micro-controller that reroutes signals among redundant components as needed to maintain the ability to perform accurate and stable measurements. It detects deterioration of components, predicts future failures, and/or detects imminent failures by means of a real-time analysis in which, among other things, data on its present state are continuously compared with locally stored historical data. It minimizes unnecessary consumption of electric energy. The design architecture divides the signal conditioner into three main sections: an analog signal section, a digital module, and a power-management section. The design of the analog signal section does not follow the traditional approach of ensuring reliability through total redundancy of hardware: Instead, following an approach called spare parts tool box, the reliability of each component is assessed in terms of such considerations as risks of damage, mean times between failures, and the effects of certain failures on the performance of the signal conditioner as a whole system. Then, fewer or more spares are assigned for each affected component, pursuant to the results of this analysis, in order to obtain the required degree of reliability of the signal conditioner as a whole system. The digital module comprises one or more processors and field-programmable gate arrays, the number of each depending on the results of the aforementioned analysis. The digital module provides redundant control, monitoring, and processing of several analog signals. It is designed to minimize unnecessary consumption of electric energy, including, when possible, going into a low-power "sleep" mode that is implemented in firmware. The digital module communicates with external equipment via a personal-computer serial port. The digital module monitors the "health" of the rest of the signal conditioner by processing defined measurements and/or trends. It automatically makes adjustments to respond to channel failures, compensate for effects of temperature, and maintain calibration.

  11. Functional Safety of Hybrid Laser Safety Systems - How can a Combination between Passive and Active Components Prevent Accidents?

    NASA Astrophysics Data System (ADS)

    Lugauer, F. P.; Stiehl, T. H.; Zaeh, M. F.

    Modern laser systems are widely used in industry due to their excellent flexibility and high beam intensities. This leads to an increased hazard potential, because conventional laser safety barriers offer only a short protection time when illuminated with high laser powers. For that reason, active systems are increasingly used to prevent accidents with laser machines. These systems must fulfil the requirements of functional safety, e.g. according to IEC 61508, which entails high costs. The safety provided by common passive barriers is usually not considered in this context. In the presented approach, active and passive systems are evaluated from a holistic perspective. To assess the functional safety of hybrid safety systems, the failure probability of passive barriers is analysed and combined with the failure probability of the active system.
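
    As a toy calculation under the (strong) assumption of independence, a passive barrier that still stops the beam while the active system has failed multiplies down the overall probability of a dangerous failure; both probabilities are invented:

      # Probabilities of failure on demand, invented for illustration.
      pfd_active = 1e-3   # active interlock fails to shut the laser off
      p_passive  = 5e-2   # passive barrier is burned through before shutdown

      # The hazard requires the active system to fail AND the passive barrier
      # to be defeated; assuming independence of the two failure mechanisms:
      pfd_hybrid = pfd_active * p_passive
      print(f"hybrid PFD = {pfd_hybrid:.1e}  (active alone: {pfd_active:.1e})")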

  12. ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
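
    The core loop of such a Monte Carlo simulation is compact. Below, a hedged single-component sketch that draws Weibull failure times and exponential repair times and estimates availability; the distribution parameters are invented, and repair is assumed to restore the component to as-good-as-new:

      import random

      random.seed(42)

      def simulate_availability(mission_h=10_000.0, runs=2_000,
                                eta=1_500.0, beta=1.8, mttr=50.0):
          """Monte Carlo availability of one repairable component.

          Failure times ~ Weibull(eta, beta); repair times ~ Exponential(mttr).
          """
          up_total = 0.0
          for _ in range(runs):
              t, up = 0.0, 0.0
              while t < mission_h:
                  ttf = random.weibullvariate(eta, beta)     # time to failure
                  up += min(ttf, mission_h - t)
                  t += ttf
                  if t >= mission_h:
                      break
                  t += random.expovariate(1.0 / mttr)        # repair duration
              up_total += up
          return up_total / (runs * mission_h)

      print(f"estimated availability: {simulate_availability():.4f}")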

  13. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  14. Aging assessment of large electric motors in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villaran, M.; Subudhi, M.

    1996-03-01

    Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.

  15. PRA and Risk Informed Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernsen, Sidney A.; Simonen, Fredric A.; Balkey, Kenneth R.

    2006-01-01

    The Boiler and Pressure Vessel Code (BPVC) of the American Society of Mechanical Engineers (ASME) has introduced a risk based approach into Section XI that covers Rules for Inservice Inspection of Nuclear Power Plant Components. The risk based approach requires application of probabilistic risk assessments (PRA). Because no industry consensus standard existed for PRAs, ASME has developed a standard to evaluate the quality level of an available PRA needed to support a given risk based application. The paper describes the PRA standard, Section XI application of PRAs, and plans for broader applications of PRAs to other ASME nuclear codes and standards. The paper addresses several specific topics of interest to Section XI. An important consideration is the special methods (surrogate components) used to overcome the lack of treatment of passive components in PRAs. The approach allows calculation of conditional core damage probabilities both for component failures that cause initiating events and for failures in standby systems that decrease the availability of these systems. The paper relates the explicit risk based methods of the new Section XI code cases to the implicit consideration of risk used in the development of Section XI. Other topics include the needed interactions of ISI engineers, plant operating staff, PRA specialists, and members of expert panels that review the risk based programs.

  16. Techniques to evaluate the importance of common cause degradation on reliability and safety of nuclear weapons.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    2011-05-01

    As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
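
    The screening thresholds quoted above translate directly into code. A minimal encoding (the function names are mine) might look like this:

    ```python
    # Direct encoding of the screening thresholds described above
    # (population of 100-5000 items assumed, per the abstract).
    def reliability_concern(n_susceptible, n_population, required_reliability):
        """Concern if more than 100*(1-x)% of the population is susceptible."""
        return n_susceptible / n_population > (1.0 - required_reliability)

    def subsystem_safety_concern(n_susceptible, n_population):
        """Concern if more than 0.1% of the population is susceptible."""
        return n_susceptible / n_population > 0.001

    def component_or_system_safety_concern(n_susceptible):
        """Concern if two or more items are susceptible."""
        return n_susceptible >= 2

    # Example: 6% susceptible exceeds the 5% allowed by a 0.95 requirement.
    print(reliability_concern(60, 1000, required_reliability=0.95))  # True
    ```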

  17. Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers

    NASA Astrophysics Data System (ADS)

    Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu

    2018-02-01

    Fault characteristics of power electronic components are of great importance for any power electronic device, and especially for devices applied in power systems. The topology structures and control method of the Distribution Electronic Power Transformer (D-EPT) are introduced, and the fault types and fault characteristics of IGBT and diode failures are explored. Analysis and simulation of the fault characteristics of the different fault types lead to a D-EPT fault-location scheme.

  18. Local-global analysis of crack growth in continuously reinforced ceramic matrix composites

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representations of the matrix, fibers, and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and sequence of the failure processes in the unidirectional composite system.

  19. Structural health monitoring of wind turbine blades : SE 265 Final Project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barkley, W. C.; Jacobs, Laura D.; Rutherford, A. C.

    2006-03-23

    ACME Wind Turbine Corporation has contacted our dynamic analysis firm regarding structural health monitoring of their wind turbine blades. ACME has had several failures in previous years. Examples are shown in Figure 1. These failures have resulted in economic loss for the company due to down time of the turbines (lost revenue) and repair costs. Blade failures can occur in several modes, which may depend on the type of construction and load history. Cracking and delamination are some typical modes of blade failure. ACME warranties its turbines and wishes to decrease the number of blade failures they have to repair and replace. The company wishes to implement a real time structural health monitoring system in order to better understand when blade replacement is necessary. Because of warranty costs incurred to date, ACME is interested in either changing the warranty period for the blades in question or predicting imminent failure before it occurs. ACME's current practice is to increase the number of physical inspections when blades are approaching the end of their fatigue lives. Implementation of an in situ monitoring system would eliminate or greatly reduce the need for such physical inspections. Another benefit of such a monitoring system is that the life of any given component could be extended since real conditions would be monitored. The SHM system designed for ACME must be able to operate while the wind turbine is in service. This means that wireless communication options will likely be implemented. Because blade failures occur due to cyclic stresses in the blade material, the sensing system will focus on monitoring strain at various points.

  20. NASA ground terminal communication equipment automated fault isolation expert systems

    NASA Technical Reports Server (NTRS)

    Tang, Y. K.; Wetzel, C. R.

    1990-01-01

    Prototype expert systems are described that diagnose the Distribution and Switching System I and II (DSS1 and DSS2), Statistical Multiplexers (SM), and Multiplexer and Demultiplexer systems (MDM) at the NASA Ground Terminal (NGT). A system level fault isolation expert system monitors the activities of a selected data stream, verifies that the fault exists in the NGT and identifies the faulty equipment. Equipment level fault isolation expert systems are invoked to isolate the fault to a Line Replaceable Unit (LRU) level. Input and sometimes output data stream activities for the equipment are available. The system level fault isolation expert system compares the equipment input and output status for a data stream and performs loopback tests (if necessary) to isolate the faulty equipment. The equipment level fault isolation system utilizes the process of elimination and/or the maintenance personnel's fault isolation experience stored in its knowledge base. The DSS1, DSS2 and SM fault isolation systems, using knowledge of the current equipment configuration and the equipment circuitry, issue a set of test connections according to predefined rules. The faulty component or board can be identified by the expert system by analyzing the test results. The MDM fault isolation system correlates the failure symptoms with the faulty component based on maintenance personnel experience. The faulty component can be determined by knowing the failure symptoms. The DSS1, DSS2, SM, and MDM equipment simulators are implemented in PASCAL. The DSS1 fault isolation expert system was converted to C language from VP-Expert and integrated into the NGT automation software for offline switch diagnoses. Potentially, the NGT fault isolation algorithms can be used for the DSS1, SM, and MDM located at Goddard Space Flight Center (GSFC).

  1. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology — previously developed by the authors — for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain without compromising the minimum safety requirements) or to an increase in safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
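
    The recursive updating idea can be illustrated with a toy grid-based Bayes filter: each NDE measurement of damage size sharpens the posterior over a growth-rate parameter, and the remaining-life estimate is refreshed from the updated posterior. The linear growth model, noise level, and critical size below are assumptions for illustration, not the authors' fatigue model.

    ```python
    # Toy recursive Bayesian update: posterior over a damage growth rate,
    # refreshed after each NDE measurement, drives the RFL estimate.
    import numpy as np

    rates = np.linspace(0.01, 0.5, 200)            # candidate growth rates (mm per block)
    posterior = np.ones_like(rates) / rates.size   # flat prior

    a0, sigma = 1.0, 0.05                          # initial crack size, NDE noise (mm)
    measurements = [(10, 1.9), (20, 3.1), (30, 4.2)]   # (cycle block, measured size)

    for n, a_meas in measurements:
        predicted = a0 + rates * n                 # assumed linear growth model
        like = np.exp(-0.5 * ((a_meas - predicted) / sigma) ** 2)
        posterior *= like                          # recursive Bayes update
        posterior /= posterior.sum()

    a_crit = 10.0                                  # assumed critical crack size (mm)
    rfl = (a_crit - a0) / rates                    # blocks to failure per candidate rate
    print("Posterior-mean RFL:", (posterior * rfl).sum(), "blocks")
    ```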

  2. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates are better than those of maximum likelihood. The sensitivity analyses show some sensitivity to shifts in the prior location, and also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
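
    A small simulation makes the competing-risk structure concrete: two independent Weibull causes compete, the system fails at the minimum of the cause lifetimes, and the product rule for independent causes can be checked empirically. The shapes and scales below are illustrative, not the paper's settings.

    ```python
    # Two-cause competing-risk simulation with independent Weibull causes:
    # observed data are (time, cause) pairs; system reliability factors as
    # the product of the marginal cause reliabilities.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    t1 = 2000 * rng.weibull(1.8, n)    # cause 1: wear-out (shape > 1)
    t2 = 5000 * rng.weibull(0.9, n)    # cause 2: early/random failures
    t_sys = np.minimum(t1, t2)         # system fails at the first cause
    cause = np.where(t1 < t2, 1, 2)

    print("fraction failed by cause 1:", (cause == 1).mean())
    t = 1500.0                         # independence check: R_sys = R1 * R2
    r_emp = (t_sys > t).mean()
    r_theory = np.exp(-(t / 2000) ** 1.8) * np.exp(-(t / 5000) ** 0.9)
    print(r_emp, r_theory)
    ```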

  3. Response of power systems to the San Fernando Valley earthquake of 9 February 1971. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiff, A.J.; Yao, J.T.P.

    1972-01-01

    The impact of the San Fernando Valley earthquake on electric power systems is discussed. Particular attention was focused on the following three areas: (1) the effects of an earthquake on the power network in the Western States, (2) the failure of subsystems and components of the power system, and (3) the loss of power to hospitals. The report includes sections on the description and functions of major components of a power network, existing procedures to protect the network, safety devices within the system which influence the network, a summary of the effects of the San Fernando Valley earthquake on the Western States Power Network, and present efforts to reduce the network vulnerability to faults. Also included in the report are a review of design procedures and practices prior to the San Fernando Valley earthquake and descriptions of types of damage to electrical equipment, dynamic analysis of equipment failures, equipment surviving the San Fernando Valley earthquake and new seismic design specifications. In addition, some observations and insights gained during the study, which are not directly related to power systems, are discussed.

  4. Sensitivity Analysis of Digital I&C Modules in Protection and Safety Systems

    NASA Astrophysics Data System (ADS)

    Khalil Ur, Rahman; Zubair, M.; Heo, G.

    2013-12-01

    This research examines the sensitivity of digital Instrumentation and Control (I&C) components and modules used in the regulating and protection system architectures of the nuclear industry. Fault Tree Analysis (FTA) was performed for four configurations of the RPS channel architecture. The channel unavailability, calculated using AIMS-PSA, comes out to 4.517E-03, 2.551E-03, 2.246E-03 and 2.7613E-04 for architecture configurations I, II, III and IV respectively. Unavailability decreases by 43.5% and 50.4% when partial redundancy is inserted, whereas a maximum reduction of 93.9% occurs when double redundancy is inserted in the architecture. Coincidence module output failure and bi-stable output failure are identified as sensitive failures by the Risk Reduction Worth (RRW) and Fussell-Vesely (FV) importance measures. RRW indicates that the risk from coincidence processor output failure can be reduced by a factor of 48.83, and FV indicates that the bi-stable processor (BP) output has an importance of 0.9796 (on a scale of 1).
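
    The quoted reductions follow directly from the unavailability figures, taking configuration I as the base case (small rounding differences aside):

    ```python
    # Reproducing the percentage reductions from the channel unavailability
    # values, with configuration I as the base case.
    q = {"I": 4.517e-3, "II": 2.551e-3, "III": 2.246e-3, "IV": 2.7613e-4}
    for cfg in ("II", "III", "IV"):
        print(cfg, f"{100 * (1 - q[cfg] / q['I']):.1f}% reduction")
    # Prints approximately 43.5%, 50.3%, and 93.9%.
    ```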

  5. X-framework: Space system failure analysis framework

    NASA Astrophysics Data System (ADS)

    Newman, John Steven

    Space program and space systems failures result in financial losses in the multi-hundred million dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity, loss of critical scientific, commercial and/or national defense capabilities, as well as loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what, but also the why. In addition there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures, that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event beginning with the proximate cause, extending to the directly related work or operational processes and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, capturing direct and indirect causation of failures, and generating better and more consistent reports. Through this approach, failures can be more fully understood, existing programs can be evaluated and future failures avoided. The x-fw development involved a review of the historical failure analysis and prevention literature, coupled with examination of numerous failure case studies. Analytical approaches included use of a relational failure "knowledge base" for classification and sorting of x-fw elements and attributes for each case. In addition a novel "management mapping" technique was developed as a means of displaying an integrated snapshot of indirect causes within the management chain. Further research opportunities will extend the depth of knowledge available for many of the component level cases. In addition, the x-fw has the potential to expand the scope of space sector lessons learned, and contribute to knowledge management and organizational learning.

  6. SILHIL Replication of Electric Aircraft Powertrain Dynamics and Inner-Loop Control for V&V of System Health Management Routines

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Teubert, Christopher Allen; Cuong Chi, Quach; Hogge, Edward; Vazquez, Sixto; Goebel, Kai; George, Vachtsevanos

    2013-01-01

    Software-in-the-loop and Hardware-in-the-loop testing of failure prognostics and decision making tools for aircraft systems will facilitate more comprehensive and cost-effective testing than what is practical to conduct with flight tests. A framework is described for the offline recreation of dynamic loads on simulated or physical aircraft powertrain components based on a real-time simulation of airframe dynamics running on a flight simulator, an inner-loop flight control policy executed by either an autopilot routine or a human pilot, and a supervisory fault management control policy. The creation of an offline framework for verifying and validating supervisory failure prognostics and decision making routines is described for the example of battery charge depletion failure scenarios onboard a prototype electric unmanned aerial vehicle.

  7. Designing and Implementation of a Heart Failure Telemonitoring System

    PubMed Central

    Safdari, Reza; Jafarpour, Maryam; Mokhtaran, Mehrshad; Naderi, Nasim

    2017-01-01

    Introduction: The aim of this study was to identify at-risk patients, enhance self-care management of HF patients at home, and reduce disease exacerbations and readmissions. Method: In this research, according to standard heart failure guidelines and semi-structured interviews with 10 heart failure specialists, a draft heart failure rule set for alerts and patient instructions was developed. Eventually, the clinical champion of the project vetted the rule set. We also designed a transactional system to enhance monitoring and follow-up of CHF patients. With this system, CHF patients are required to measure their physiological parameters (vital signs and body weight) every day and to submit their symptoms using the app. Additionally, based on their data, they receive customized notifications and motivational messages that classify the risk of disease exacerbation. The architecture of the system comprises six major components: 1) a patient data collection suite including a mobile app and website; 2) Data Receiver; 3) Database; 4) a Specialists expert Panel; 5) Rule engine classifier; 6) Notifier engine. Results: This system has been implemented in Iran for the first time, and we are currently in the testing phase with 10 patients to evaluate its technical performance. The developed expert system generates alerts and instructions based on the patient's data, and the notifier engine notifies responsible nurses, physicians, and sometimes patients. Detailed analysis of those results will be reported in a future report. Conclusion: This study describes the design of a telemonitoring system for heart failure self-care that intends to close the gap that occurs when patients are discharged from the hospital and to identify accurately the need for readmission. A rule set for classifying risk and producing automated alerts and patient instructions for heart failure telemonitoring was developed. The system also facilitates daily communication between patients and heart failure clinicians so that any deterioration in health can be identified immediately. PMID:29114106
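
    A minimal sketch of a rule-engine classifier of the kind described, with illustrative thresholds (not the project's vetted rule set):

    ```python
    # Toy rule-engine classifier: daily vitals are checked against
    # guideline-style thresholds to produce an alert level. All thresholds
    # below are assumptions for illustration.
    def classify(weight_gain_kg_3d, systolic_bp, heart_rate, dyspnea):
        if weight_gain_kg_3d >= 2.0 or dyspnea == "at rest":
            return "red: notify physician"
        if weight_gain_kg_3d >= 1.0 or systolic_bp < 90 or heart_rate > 110:
            return "yellow: notify nurse, send patient instructions"
        return "green: routine self-care reminder"

    print(classify(weight_gain_kg_3d=2.3, systolic_bp=115,
                   heart_rate=88, dyspnea="none"))   # -> red alert
    ```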

  8. Designing and Implementation of a Heart Failure Telemonitoring System.

    PubMed

    Safdari, Reza; Jafarpour, Maryam; Mokhtaran, Mehrshad; Naderi, Nasim

    2017-09-01

    The aim of this study was to identify at-risk patients, enhance self-care management of HF patients at home, and reduce disease exacerbations and readmissions. In this research, according to standard heart failure guidelines and semi-structured interviews with 10 heart failure specialists, a draft heart failure rule set for alerts and patient instructions was developed. Eventually, the clinical champion of the project vetted the rule set. We also designed a transactional system to enhance monitoring and follow-up of CHF patients. With this system, CHF patients are required to measure their physiological parameters (vital signs and body weight) every day and to submit their symptoms using the app. Additionally, based on their data, they receive customized notifications and motivational messages that classify the risk of disease exacerbation. The architecture of the system comprises six major components: 1) a patient data collection suite including a mobile app and website; 2) Data Receiver; 3) Database; 4) a Specialists expert Panel; 5) Rule engine classifier; 6) Notifier engine. This system has been implemented in Iran for the first time, and we are currently in the testing phase with 10 patients to evaluate its technical performance. The developed expert system generates alerts and instructions based on the patient's data, and the notifier engine notifies responsible nurses, physicians, and sometimes patients. Detailed analysis of those results will be reported in a future report. This study describes the design of a telemonitoring system for heart failure self-care that intends to close the gap that occurs when patients are discharged from the hospital and to identify accurately the need for readmission. A rule set for classifying risk and producing automated alerts and patient instructions for heart failure telemonitoring was developed. The system also facilitates daily communication between patients and heart failure clinicians so that any deterioration in health can be identified immediately.

  9. Using WNTR to Model Water Distribution System Resilience ...

    EPA Pesticide Factsheets

    The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate the resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (e.g., pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics capabilities. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open source project.
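
    A minimal usage sketch, following the package's documented API; the EPANET input file name is assumed to be available locally:

    ```python
    # Load a network, run the pressure-dependent WNTR solver, and pull node
    # pressures as a starting point for resilience metrics.
    import wntr

    wn = wntr.network.WaterNetworkModel('Net3.inp')   # EPANET input file (assumed)
    sim = wntr.sim.WNTRSimulator(wn)                  # pressure-driven hydraulics
    results = sim.run_sim()

    pressure = results.node['pressure']               # pandas DataFrame: time x node
    print(pressure.min().min())                       # worst-case pressure in the run
    ```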

  10. NASA aviation safety reporting system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    An analytical study of reports relating to cockpit altitude alert systems was performed. A recent change in the Federal Air Regulation permits the system to be modified so that the alerting signal for approaching an assigned altitude has only a visual component; the auditory signal would continue to be heard if a deviation from an assigned altitude occurred. Failure to observe altitude alert signals and failure to reset the system were the commonest causes of altitude deviations related to this system. Cockpit crew distraction was the most frequent reason for these failures. Numerous reporters noted that the presence of the altitude alert system made them less aware of altitude; this lack of altitude awareness is discussed. Failures of crew coordination were also noted. It is suggested that although modification of the altitude alert system may be highly desirable in short-haul aircraft, it may not be desirable for long-haul aircraft, in which cockpit workloads are much lower for long periods of time. In these cockpits, the aural alert on approaching an altitude is perceived as useful and helpful. If the systems are to be modified, it appears that additional emphasis on altitude awareness during recurrent training will be necessary; it is also possible that flight crew operating procedures during climb and descent may need examination with respect to monitoring responsibilities. A selection of alert bulletins and responses to them is presented.

  11. Application of Function-Failure Similarity Method to Rotorcraft Component Design

    NASA Technical Reports Server (NTRS)

    Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis, to determine potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried within the components of the aircraft. Using this technique, designers can examine the basic functions, and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, this technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
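
    One common way to formalize this mapping is as a matrix product: a function-component matrix times a component-failure matrix yields a function-failure matrix linking basic functions to reported failure modes. The tiny matrices below are illustrative, not data from the accident reports.

    ```python
    # Function-failure mapping as a matrix product (illustrative data).
    import numpy as np

    functions = ["transmit torque", "convert energy"]
    components = ["shaft", "gearbox", "turbine"]
    failure_modes = ["fatigue", "wear", "corrosion"]

    # EC[i, j] = 1 if component j carries function i
    EC = np.array([[1, 1, 0],
                   [0, 0, 1]])
    # CF[j, k] = reported occurrences of failure mode k in component j
    CF = np.array([[3, 0, 1],
                   [1, 2, 0],
                   [0, 1, 2]])

    EF = EC @ CF    # EF[i, k]: failure mode k occurrences linked to function i
    print(EF)
    ```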

  12. Triplexer Monitor Design for Failure Detection in FTTH System

    NASA Astrophysics Data System (ADS)

    Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia

    2012-09-01

    The triplexer is one of the key components in FTTH systems, which employ an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor is composed of integrated circuits, and its four input ports are connected to beam splitters whose power division ratio is 95:5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracks the status of the four ports of the triplexer (i.e., the 1310 nm, 1490 nm, 1550 nm and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices is addressed, and the integrated circuit structure of the triplexer monitor is given. Furthermore, a failure localization algorithm based on a state transition diagram is proposed. In order to measure the failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results showed that the detection time for a failure at the 1310 nm port was less than 8.20 ms; for a failure at the 1490 nm or 1550 nm port it was less than 8.20 ms, and for a failure at the com port it was less than 7.20 ms.
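
    An illustrative fragment of the localization logic, in the spirit of the state-transition approach; the threshold and decision order are assumptions, and the real device implements this in integrated hardware rather than software:

    ```python
    # Toy failure-localization logic: infer the failed port from which
    # sampled signals disappear. Threshold and decision table are assumed.
    THRESH = -30.0  # dBm, assumed detection threshold

    def localize(p1310, p1490, p1550, pcom):
        present = {name: p > THRESH
                   for name, p in [("1310", p1310), ("1490", p1490),
                                   ("1550", p1550), ("com", pcom)]}
        if not present["com"]:
            return "com port failure"      # all traffic shares the com port
        for name in ("1310", "1490", "1550"):
            if not present[name]:
                return f"{name} nm port failure"
        return "no failure detected"

    print(localize(-10, -12, -45, -8))     # -> "1550 nm port failure"
    ```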

  13. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; B. Pham; M. Tawfik

    There is an extensive body of knowledge, and some commercial products are available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, three key aspects of such systems are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and maintenance support. Each product is briefly described in Appendix A. Selection of the most appropriate software package for a particular application will depend on the chosen component, system, or structure. Ongoing research will determine the most appropriate choices for a successful demonstration of PHM systems in aging NPPs.

  14. Failure Diagnosis and Prognosis of Rolling - Element Bearings using Artificial Neural Networks: A Critical Overview

    NASA Astrophysics Data System (ADS)

    Rao, B. K. N.; Srinivasa Pai, P.; Nagabhushana, T. N.

    2012-05-01

    Rolling - Element Bearings are extensively used in almost all global industries. Any critical failures in these vitally important components would not only affect the overall systems performance but also its reliability, safety, availability and cost-effectiveness. Proactive strategies do exist to minimise impending failures in real time and at a minimum cost. Continuous innovative developments are taking place in the field of Artificial Neural Networks (ANNs) technology. Significant research and development are taking place in many universities, private and public organizations and a wealth of published literature is available highlighting the potential benefits of employing ANNs in intelligently monitoring, diagnosing, prognosing and managing rolling-element bearing failures. This paper attempts to critically review the recent trends in this topical area of interest.

  15. 16 CFR § 1207.5 - Design.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... installed swimming pool slide shall be such that no structural failures of any component part shall cause failures of any other component part of the slide as described in the performance tests in paragraphs (d)(4... number and placement of such fasteners shall not cause a failure of the tread under the ladder loading...

  16. Centralized Cryptographic Key Management and Critical Risk Assessment - CRADA Final Report For CRADA Number NFE-11-03562

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, R. K.; Peters, Scott

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC as a result of that award entered into a CRADA (NFE-11-03562) between ORNL and Sypris Electronics, LLC. ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability that a threat causes a component failure and the probability that a component failure causes a security requirement violation. We applied this model to estimate the security of the AMI, by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9 to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. We characterized these five scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  17. Cryptographic Key Management and Critical Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) CyberSecurity for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing CyberSecurity for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC as a result of that award entered into a CRADA (NFE-11-03562) between ORNL and Sypris Electronics, LLC. ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability that a threat causes a component failure and the probability that a component failure causes a security requirement violation. We applied this model to estimate the security of the AMI, by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9 to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. We characterized these five scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  18. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control/electrical power generation subsystem

    NASA Technical Reports Server (NTRS)

    Patton, Jeff A.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  19. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft, as with missiles.
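
    The three models named above are compact enough to state directly; the snippet also checks the claim that the zero-fail binomial case approximates the constant-failure-rate (exponential) result when the per-trial failure probability is small. Parameter values are illustrative.

    ```python
    # The binomial (>= case), series-system, and Poisson (<= case) models.
    import math

    def binomial_at_least(n, k, p):
        """P(at least k failures in n trials)."""
        return sum(math.comb(n, i) * p**i * (1-p)**(n-i) for i in range(k, n+1))

    def series_reliability(rs):
        """Series system: product of element reliabilities."""
        out = 1.0
        for r in rs:
            out *= r
        return out

    def poisson_at_most(k, lam):
        """P(at most k events) for a Poisson process with mean lam."""
        return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k+1))

    n, p = 100, 0.001
    # Zero-fail binomial, (1-p)^n, vs. the exponential approximation exp(-n*p):
    print(1 - binomial_at_least(n, 1, p), math.exp(-n * p))
    ```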

  20. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost-competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (a series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
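
    A simplified, binary-state version of the 5 × 5 example shows the series/parallel arithmetic involved. The paper's three-state stack model with distinct failure rates is richer; the k-of-n rule and the reliabilities below are illustrative assumptions.

    ```python
    # Simplified 5 x 5 configuration: each parallel set of 5 stacks is
    # assumed functional if at least k stacks work; the 5 sets are in series.
    from math import comb

    def k_of_n(k, n, r):
        """P(at least k of n independent stacks working), stack reliability r."""
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    def system_reliability(r_stack, k=4, n=5, sets=5):
        return k_of_n(k, n, r_stack) ** sets

    for r in (0.90, 0.95, 0.99):
        print(r, round(system_reliability(r), 4))
    ```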

  1. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among components placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, to compute the reliability of linear and circular systems with a great number of components.
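
    For the linear case with i.i.d. components there is a classical recursion that serves as a baseline for methods like the one proposed (the paper's own method and R code handle both the linear and circular arrangements). The sketch below is that textbook baseline, not the paper's algorithm.

    ```python
    # Classical recursion for a linear consecutive k-out-of-n:F system with
    # i.i.d. components of reliability p: the system fails iff at least k
    # consecutive components fail.
    def consecutive_k_of_n_F(n, k, p):
        q = 1.0 - p
        R = [1.0] * (n + 1)          # R[j] = 1 for j < k (system cannot fail)
        if n >= k:
            R[k] = 1.0 - q**k
        for j in range(k + 1, n + 1):
            R[j] = R[j - 1] - p * q**k * R[j - k - 1]
        return R[n]

    print(consecutive_k_of_n_F(n=10, k=3, p=0.9))
    ```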

  2. Approach to developing reliable space reactor power systems

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Shinbrot, Charles H.

    1991-01-01

    During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.

  3. High Pressure Coolant Injection (HPCI) System Risk-Based Inspection Guide for Browns Ferry Nuclear Power Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.; DiBiasio, A.; Gunther, W.

    1993-09-01

    The High Pressure Coolant Injection (HPCI) system has been examined from a risk perspective. A System Risk-Based Inspection Guide (S-RIG) has been developed as an aid to HPCI system inspections at the Browns Ferry Nuclear Power Plant, Units 1, 2 and 3. The role of the HPCI system in mitigating accidents is discussed in this S-RIG, along with insights on identified risk-based failure modes which could prevent proper operation of the system. The S-RIG provides a review of industry-wide operating experience, including plant-specific illustrative examples to augment the PRA and operational considerations in identifying a catalogue of basic PRA failure modes for the HPCI system. It is designed to be used as a reference for routine inspections, self-initiated safety system functional inspections (SSFIs), and the evaluation of risk significance of component failures at the nuclear power plant.

  4. Failure analysis of aluminum alloy components

    NASA Technical Reports Server (NTRS)

    Johari, O.; Corvin, I.; Staschke, J.

    1973-01-01

    Analysis of six service failures in aluminum alloy components which failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in a clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then indirectly inferred from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparisons of the received specimens with laboratory-generated failures.

  5. Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.

  6. Deriving Function-failure Similarity Information for Failure-free Rotorcraft Component Design

    NASA Technical Reports Server (NTRS)

    Roberts, Rory A.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of potential failure modes in the design that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis, to determine potential failure modes of aircraft. The aircraft design needs to be passed through a general technique to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to certain components, which are described by their functionality. In turn, the failure modes are then linked to the basic functions that are carried within the components of the aircraft. Using the technique proposed in this paper, designers can examine the basic functions, and select appropriate analyses to eliminate or design out the potential failure modes. This method was previously applied to a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, this technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.

  7. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  8. Flexible materials technology

    NASA Technical Reports Server (NTRS)

    Steurer, W. H.

    1980-01-01

    A survey of all presently defined or proposed large space systems indicated an ever increasing demand for flexible components and materials, primarily as a result of the widening disparity between the stowage space of launch vehicles and the size of advanced systems. Typical flexible components and material requirements were identified on the basis of recurrence and/or functional commonality. This was followed by the evaluation of candidate materials and the search for material capabilities which promise to satisfy the postulated requirements. Particular attention was placed on thin films, and on the requirements of deployable antennas. The assessment of the performance of specific materials was based primarily on the failure mode, derived from a detailed failure analysis. In view of extensive on going work on thermal and environmental degradation effects, prime emphasis was placed on the assessment of the performance loss by meteoroid damage. Quantitative data were generated for tension members and antenna reflector materials. A methodology was developed for the representation of the overall materials performance as related to systems service life. A number of promising new concepts for flexible materials were identified.

  9. Failure Analysis Techniques for the Evaluation of Electrical and Electronic Components in Aircraft Accident Investigations

    DTIC Science & Technology

    1990-08-01

    of the review are presented in Tables 1 and 2 by aircraft and type of component. The totals for each component are combined in Table 3. Adjusted ... of Table 3 have been grouped according to basic system functions and combined percentages for each of the basic functions have been computed as shown ... and the free oxygen combines with the tungsten to form ... [Fig. 2.5: notching of a lamp filament aged 77 hours at 28 Volts DC, 2000X (Reference 2.1)]

  10. Environmental testing to prevent on-orbit TDRS failures

    NASA Technical Reports Server (NTRS)

    Cutler, Robert M.

    1994-01-01

    Can improved environmental testing prevent on-orbit component failures such as those experienced in the Tracking and Data Relay Satellite (TDRS) constellation? TDRS communications have been available to user spacecraft continuously for over 11 years, during which the five TDRS's placed in orbit have demonstrated their redundancies and robustness by surviving 26 component failures. Nevertheless, additional environmental testing prior to launch could prevent the occurrence of some types of failures, and could help to maintain communication services. Specific testing challenges involve traveling wave tube assemblies (TWTA's), whose lives may decrease with on-off cycling, and heaters that are subject to thermal cycles. The development of test conditions and procedures should account for known thermal variations. Testing may also have the potential to prevent failures in which components such as diplexers have had their lives dramatically shortened because of particle migration in a weightless environment. Reliability modeling could be used to select additional components that could benefit from special testing, but experience shows that this approach has serious limitations. Through knowledge of on-orbit experience, and with advances in testing, communication satellite programs might avoid the occurrence of some types of failures, and extend future spacecraft longevity beyond the current TDRS design life of ten years. However, determining which components to test, and how much testing to do, remain problematic.

  11. Performance and reliability of the NASA biomass production chamber

    NASA Technical Reports Server (NTRS)

    Fortson, R. E.; Sager, J. C.; Chetirkin, P. V.

    1994-01-01

    The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of CELSS, are discussed.

  12. Vulnerability and cosusceptibility determine the size of network cascades

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-01-27

    In a network, a local disturbance can propagate and eventually cause a substantial part of the system to fail in cascade events that are easy to conceptualize but extraordinarily difficult to predict. Here, we develop a statistical framework that can predict cascade size distributions by incorporating two ingredients only: the vulnerability of individual components and the cosusceptibility of groups of components (i.e., their tendency to fail together). Using cascades in power grids as a representative example, we show that correlations between component failures define structured and often surprisingly large groups of cosusceptible components. Aside from their implications for blackout studies, these results provide insights and a new modeling framework for understanding cascades in financial systems, food webs, and complex networks in general.
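
    The following toy Monte Carlo mimics the two ingredients with a simple common-shock model: components fail independently with a small vulnerability, and cosusceptible groups occasionally fail together. It illustrates the concepts only, not the paper's statistical framework; all parameters are assumed.

```python
import random

# Toy Monte Carlo of cascade sizes: individual vulnerability plus a
# common-shock mechanism standing in for "cosusceptibility" (groups of
# components that tend to fail together). All parameters are invented.

random.seed(0)
N = 100                      # number of components
p_vuln = 0.02                # individual vulnerability
groups = [list(range(i, i + 10)) for i in range(0, N, 10)]  # cosusceptible groups
p_group_shock = 0.01         # probability a whole group is stressed together
p_fail_given_shock = 0.5     # failure probability for a stressed member

def one_cascade():
    failed = {i for i in range(N) if random.random() < p_vuln}
    for g in groups:
        if random.random() < p_group_shock:
            failed.update(i for i in g if random.random() < p_fail_given_shock)
    return len(failed)

sizes = [one_cascade() for _ in range(10000)]
print("mean cascade size:", sum(sizes) / len(sizes))
print("P(size >= 10):", sum(s >= 10 for s in sizes) / len(sizes))
```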

  13. A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution

    NASA Astrophysics Data System (ADS)

    Musani, Aatif

    The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled, therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, the H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
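
    A minimal sketch of the running mean and standard deviation bad-data check, one of the three detectors mentioned; the window length, 3-sigma threshold, and price series are assumed values, and the voting policy across detectors is omitted.

```python
from collections import deque
import statistics

# Sketch of a running mean / standard deviation bad-data detector for a
# pricing signal. Window length and the 3-sigma threshold are assumed.

def detect_bad_prices(prices, window=24, k=3.0):
    history = deque(maxlen=window)
    flags = []
    for p in prices:
        if len(history) >= 2:
            mu = statistics.mean(history)
            sigma = statistics.stdev(history)
            flags.append(sigma > 0 and abs(p - mu) > k * sigma)
        else:
            flags.append(False)   # not enough history yet to judge
        history.append(p)
    return flags

prices = [30, 31, 29, 30, 32, 31, 300, 30, 29]   # one injected bad value
print(detect_bad_prices(prices, window=5))        # flags the 300 sample
```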

  14. Fabry-Perot interferometer development for rocket engine plume spectroscopy

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.; Madzsar, G.

    1990-07-01

    This paper describes a new rugged high-resolution Fabry-Perot interferometer (FPI) designed for rocket engine plume spectroscopy, capable of detecting the spectral signatures of eroding engine components during rocket engine tests and/or flight operations. The FPI system will make it possible to predict and respond to incipient rocket engine failures and to indicate the presence of rocket component degradation. The design diagram of the FPI spectrometer is presented.

  15. Fabry-Perot interferometer development for rocket engine plume spectroscopy

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.; Madzsar, G.

    1990-01-01

    This paper describes a new rugged high-resolution Fabry-Perot interferometer (FPI) designed for rocket engine plume spectroscopy, capable of detecting the spectral signatures of eroding engine components during rocket engine tests and/or flight operations. The FPI system will make it possible to predict and respond to incipient rocket engine failures and to indicate the presence of rocket component degradation. The design diagram of the FPI spectrometer is presented.

  16. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
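
    As a hedged illustration of the underlying probabilistic idea (not the CARES/Life algorithm itself), the sketch below evaluates a two-parameter Weibull failure probability under an uncertain applied load sampled Monte Carlo style; the Weibull modulus, characteristic strength, and load distribution are all assumed values.

```python
import math
import random

# Two-parameter Weibull failure probability for a ceramic part, averaged over
# an uncertain applied stress (echoing, loosely, what a probabilistic design
# system does with material and load scatter). All numbers are assumed.

m, sigma0 = 10.0, 450.0          # Weibull modulus, characteristic strength (MPa)

def p_failure(stress_mpa):
    return 1.0 - math.exp(-((stress_mpa / sigma0) ** m))

random.seed(1)
samples = [random.gauss(300.0, 30.0) for _ in range(100000)]  # uncertain load
pf = sum(p_failure(s) for s in samples) / len(samples)
print(f"mean failure probability under uncertain load: {pf:.4f}")
```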

  17. Space Environment Testing of Photovoltaic Array Systems at NASA's Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Schneider, Todd A.; Vaughn, Jason A.; Wright, Kenneth H., Jr.; Phillips, Brandon S.

    2015-01-01

    CubeSats, Communication Satellites, and Outer Planet Science Satellites all share one thing in common: mission success depends on maintaining power in the harsh space environment. For the vast majority of satellites, spacecraft power is sourced by a photovoltaic (PV) array system. Built around PV cells, the array systems also include wiring, substrates, connectors, and protection diodes. Each of these components must function properly throughout the mission in order for power production to remain at nominal levels. Failure of even one component can lead to a crippling loss of power. To help ensure PV array systems do not suffer failures on-orbit due to the space environment, NASA's Marshall Space Flight Center (MSFC) has developed a wide-ranging test and evaluation capability. Key elements of this capability include testing (ultraviolet (UV) exposure; charged particle radiation, both electron and proton; thermal cycling; plasma and beam environments) and evaluation (electrostatic discharge (ESD) screening; optical inspection and measurement; PV power output, including Large Area Pulsed Solar Simulator (LAPSS) measurements). This paper will describe the elements of the space environment which particularly impact PV array systems. MSFC test capabilities will be described to show how the relevant space environments can be applied to PV array systems in the laboratory. A discussion of MSFC evaluation capabilities will also be provided. The sample evaluation capabilities offer test engineers a means to quantify the effects of the space environment on their PV array system or component. Finally, examples will be shown of the effects of the space environment on actual PV array materials tested at MSFC.

  18. Selective monitoring

    NASA Astrophysics Data System (ADS)

    Homem-de-Mello, Luiz S.

    1992-04-01

    While in NASA's earlier space missions such as Voyager the number of sensors was in the hundreds, future platforms such as Space Station Freedom will have tens of thousands of sensors. For these planned missions it will be impossible to use the comprehensive monitoring strategy of the past, in which human operators monitored all sensors all the time. A selective monitoring strategy must be substituted for the current comprehensive strategy. This selective monitoring strategy uses computer tools to preprocess the incoming data and direct the operators' attention to the most critical parts of the physical system at any given time. There are several techniques that can be used to preprocess the incoming information. This paper presents an approach that uses diagnostic reasoning techniques to preprocess the sensor data and detect which parts of the physical system require more attention because components have failed or are most likely to have failed. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed; these indicate which components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. The system must also be one in which assertions can be made from instantaneous measurements, and in which changes are slow enough to allow time for the computation.
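
    A minimal numeric sketch of the Bayes'-rule step described above: revising a component's failure probability after an assertion derived from sensor readings evaluates false. The prior and the assertion error rates are assumed values.

```python
# Bayes'-rule revision of a component failure probability given that an
# assertion (a Boolean check on sensor readings) evaluated false.
# All probabilities below are assumed for illustration.

prior_fail = 0.01                 # a priori probability the component failed
p_assert_false_given_fail = 0.95  # assertion fails when the component failed
p_assert_false_given_ok = 0.02    # false-alarm rate when the component is fine

evidence = (p_assert_false_given_fail * prior_fail
            + p_assert_false_given_ok * (1 - prior_fail))
posterior_fail = p_assert_false_given_fail * prior_fail / evidence
print(f"revised failure probability: {posterior_fail:.3f}")  # about 0.324
```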

  19. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and several artificial neural network models (feed-forward back-propagation, radial basis, and multilayer perceptron) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model, used to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the Artificial Neural Network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule; the MATLAB package is used to build a code to simulate the given data, where the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part we predict the same quantities using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rates predicted by the feed-forward back-propagation and radial basis neural network models agree more closely with the actual field data than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. The results also give insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
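
    A hedged sketch of the Weibull part of such an analysis: fitting a Weibull distribution to times-to-failure and evaluating the hazard (failure) rate. The data are simulated stand-ins for the field data, and the thesis's MATLAB/DTREG workflow is not reproduced here.

```python
import numpy as np
from scipy import stats

# Fit a two-parameter Weibull distribution to synthetic times-to-failure and
# evaluate the hazard (failure) rate. The data are simulated stand-ins for
# operational field data.

rng = np.random.default_rng(42)
ttf = stats.weibull_min.rvs(1.5, scale=1000.0, size=200, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)  # fix location at 0

def hazard(t):
    """Weibull failure rate h(t) = (k / lam) * (t / lam)**(k - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

print(f"fitted shape k = {shape:.2f}, scale = {scale:.0f} h")
print(f"failure rate at 500 h: {hazard(500.0):.2e} per hour")
```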

  20. 49 CFR 234.207 - Adjustment, repair, or replacement of component.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... warning system fails to perform its intended function, the cause shall be determined and the faulty... completed, a railroad shall take appropriate action under § 234.105, Activation failure, § 234.106, Partial activation, or § 234.107, False activation, of this part. ...

  1. 49 CFR 234.207 - Adjustment, repair, or replacement of component.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... warning system fails to perform its intended function, the cause shall be determined and the faulty... completed, a railroad shall take appropriate action under § 234.105, Activation failure, § 234.106, Partial activation, or § 234.107, False activation, of this part. ...

  2. Independent Orbiter Assessment (IOA): Analysis of the Orbiter Experiment (OEX) subsystem

    NASA Technical Reports Server (NTRS)

    Compton, J. M.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Experiments hardware. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The Orbiter Experiments (OEX) Program consists of a multiple set of experiments for the purpose of gathering environmental and aerodynamic data to develop more accurate ground models for Shuttle performance and to facilitate the design of future spacecraft. This assessment only addresses currently manifested experiments and their support systems. Specifically this list consists of: Shuttle Entry Air Data System (SEADS); Shuttle Upper Atmosphere Mass Spectrometer (SUMS); Forward Fuselage Support System for OEX (FFSSO); Shuttle Infrared Leeside Temperature Sensing (SILTS); Aerodynamic Coefficient Identification Package (ACIP); and Support System for OEX (SSO). There are only two potential critical items for the OEX, since the experiments only gather data for analysis post mission and are totally independent systems except for power. Failure of any experiment component usually only causes a loss of experiment data and in no way jeopardizes the crew or mission.

  3. Implantable Cardiac Defibrillator Lead Failure and Management.

    PubMed

    Swerdlow, Charles D; Kalahasty, Gautham; Ellenbogen, Kenneth A

    2016-03-22

    The implantable cardioverter-defibrillator (ICD) lead is the most vulnerable component of the ICD system. Despite advanced engineering design, sophisticated manufacturing techniques, and extensive bench, pre-clinical, and clinical testing, lead failure (LF) remains the Achilles' heel of the ICD system. ICD LF has a broad range of adverse outcomes, ranging from intermittent inappropriate pacing to proarrhythmia leading to patient mortality. ICD LF is often considered in the context of design or construction defects, but is more appropriately considered in the context of the finite service life of a mechanical component placed in a chemically stressful environment and subjected to continuous mechanical stresses. This clinical review summarizes LF mechanisms, assessment, and differential diagnosis of LF, including lead diagnostics and recent prominent lead recalls, as well as the management both of LF and of functioning but recalled leads. Despite recent advances in lead technology, physicians will likely continue to need to understand how to manage patients with transvenous ICD leads.

  4. A System for Integrated Reliability and Safety Analyses

    NASA Technical Reports Server (NTRS)

    Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles

    1999-01-01

    We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.

  5. System Certification Procedures and Criteria Manual for Deep Submergence Systems

    DTIC Science & Technology

    1973-07-01

    Certification Milestone Events. The applicant and SCA interplay and negotiations between milestones is stressed. Effective and frequent communication ... a series of events beginning with a single failure, often relatively minor, which may place the DSS personnel or equipment under additional stresses ... for the particular DSS. p. Support ship handling system components such as cranes, brakes, and cables when the DSS is handled with personnel aboard.

  6. HLH Drive System

    DTIC Science & Technology

    1977-09-01

    [Appendix table-of-contents fragment: D-16 Comparison Chart - Rotor Brake Designs, Boeing Vertol, HLH; D-17 Conventional Steel Disk Dynamic ...] In the event of a rotor brake caliper or disc failure, the system shall preclude damage to critical dynamic components. The rotor brake ... The Dynamic System Test Rig (DSTR) shown in Figure 8 provided a means for integrating and testing the aft and combiner transmissions, the aft rotor ...

  7. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.

  8. Combined expert system/neural networks method for process fault diagnosis

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1995-01-01

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach.

  9. Combined expert system/neural networks method for process fault diagnosis

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1995-08-15

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach. 9 figs.

  10. Electromigration failures under bidirectional current stress

    NASA Astrophysics Data System (ADS)

    Tao, Jiang; Cheung, Nathan W.; Hu, Chenming

    1998-01-01

    Electromigration failure under DC stress has been studied for more than 30 years, and the methodologies for accelerated DC testing and design rules have been well established in the IC industry. However, electromigration behavior and design rules under time-varying current stress are still unclear. In CMOS circuits, where many interconnects carry pulsed-DC (local VCC and VSS lines) and bidirectional AC current (clock and signal lines), it is essential to assess the reliability of metallization systems under these conditions. Failure mechanisms of different metallization systems (Al-Si, Al-Cu, Cu, TiN/Al-alloy/TiN, etc.) and different metallization structures (via, plug, and interconnect) under AC current stress over a wide frequency range (from mHz to 500 MHz) are studied in this paper. Based on these experimental results, a damage healing model is developed and electromigration design rules are proposed. The results show that in the circuit operating frequency range, the "design-rule current" is the time-average current. The pure AC component of the current only contributes to self-heating, while the average (DC) component contributes to electromigration. To ensure a longer thermal-migration lifetime under high frequency AC stress, an additional design rule is proposed to limit the temperature rise due to self-Joule heating.
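
    The proposed rules can be illustrated numerically: for a bidirectional waveform, electromigration is driven by the time-average (DC) component, while self-heating scales with the RMS value. The waveform below is invented for illustration.

```python
import math

# For a bidirectional current waveform: the time-average current drives
# electromigration (the "design-rule current"), while the RMS value drives
# self-Joule heating. The sample waveform is illustrative only.

def avg_and_rms(samples):
    avg = sum(samples) / len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return avg, rms

# One period of a bidirectional waveform (mA): mostly +2 mA with -1 mA pulses.
waveform = [2.0] * 8 + [-1.0] * 2
j_avg, j_rms = avg_and_rms(waveform)
print(f"EM design-rule current (average): {j_avg:.2f} mA")
print(f"self-heating current (RMS):      {j_rms:.2f} mA")
```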

  11. Predictive modeling for corrective maintenance of imaging devices from machine logs.

    PubMed

    Patil, Ravindra B; Patil, Meru A; Ravi, Vidya; Naik, Sarif

    2017-07-01

    In the cost-sensitive healthcare industry, unplanned downtime of diagnostic and therapy imaging devices can be a burden on the financials of both hospitals and original equipment manufacturers (OEMs). In the current era of connectivity, it is easier to get these devices connected to a standard monitoring station. Once a system is connected, OEMs can monitor its health remotely and take corrective actions by providing preventive maintenance, thereby avoiding major unplanned downtime. In this article, we present an overall methodology for predicting failure of these devices well before the customer experiences it. We use a data-driven approach based on machine learning to predict failures, in turn resulting in reduced machine downtime, improved customer satisfaction, and cost savings for the OEMs. One use-case, predicting component failure of the PHILIPS iXR system, is explained in this article.
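
    A sketch of what such a data-driven predictor might look like, under the assumption of log-derived features feeding a standard classifier; the features, data, and model choice here are synthetic illustrations, not the article's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative log-based failure predictor: features extracted from machine
# logs (error counts, temperature, usage) feed a classifier that predicts
# imminent component failure. Data, features, and labels are synthetic.

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(3, n),          # daily error-log count
    rng.normal(40, 5, n),       # component temperature (deg C)
    rng.integers(0, 20, n),     # exams per day (usage)
])
# Synthetic label: failure more likely with many errors and high temperature.
y = (X[:, 0] + 0.5 * (X[:, 1] - 40) + rng.normal(0, 1, n) > 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```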

  12. Impaired Calcium Entry into Cells Is Associated with Pathological Signs of Zinc Deficiency

    PubMed Central

    O’Dell, Boyd L.; Browning, Jimmy D.

    2013-01-01

    Zinc is an essential trace element whose deficiency gives rise to specific pathological signs. These signs occur because an essential metabolic function is impaired as the result of failure to form or maintain a specific metal-ion protein complex. Although zinc is a component of many essential metalloenzymes and transcription factors, few of these have been identified with a specific sign of incipient zinc deficiency. Zinc also functions as a structural component of other essential proteins. Recent research with Swiss murine fibroblasts, 3T3 cells, has shown that zinc deficiency impairs calcium entry into cells, a process essential for many cell functions, including proliferation, maturation, contraction, and immunity. Impairment of calcium entry and the subsequent failure of cell proliferation could explain the growth failure associated with zinc deficiency. Defective calcium uptake is associated with impaired nerve transmission and pathology of the peripheral nervous system, as well as the failure of platelet aggregation and the bleeding tendency of zinc deficiency. There is a strong analogy between the pathology of genetic diseases that result in impaired calcium entry and other signs of zinc deficiency, such as decreased and cyclic food intake, taste abnormalities, abnormal water balance, skin lesions, impaired reproduction, depressed immunity, and teratogenesis. This analogy suggests that failure of calcium entry is involved in these signs of zinc deficiency as well. PMID:23674794

  13. Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)

    2001-01-01

    Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years, so alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. It shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
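
    For concreteness, a linear cumulative damage check in the Miner's-rule style can be sketched as below; the power-law time-to-failure curve and its constants are assumed, not the paper's fitted model for the epoxy adhesive.

```python
# Linear cumulative damage (Miner's-rule style): damage accrued at each stress
# level is the time held divided by the time-to-failure at that level; failure
# is predicted when the sum reaches 1. Constants below are assumed.

C, n = 2.5e15, 8.0                       # assumed creep-rupture constants

def time_to_failure(stress_mpa):
    """Hours to failure at constant stress, power-law model (assumed)."""
    return C * stress_mpa ** (-n)

# (stress in MPa, hours held) segments of a hypothetical service history
history = [(20.0, 5000.0), (25.0, 2000.0), (30.0, 500.0)]

damage = sum(hours / time_to_failure(s) for s, hours in history)
print(f"accumulated damage fraction: {damage:.3f} (failure predicted at 1.0)")
```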

  14. Projecting LED product life based on application

    NASA Astrophysics Data System (ADS)

    Narendran, Nadarajah; Liu, Yi-wei; Mou, Xi; Thotagamuwa, Dinusha R.; Eshwarage, Oshadhi V. Madihe

    2016-09-01

    LED products have started to displace traditional light sources in many lighting applications. One of the commonly claimed benefits of LED lighting products is their long useful lifetime in applications. Today there are many replacement lamp products using LEDs in the marketplace. Typically, lifetime claims for these replacement lamps are in the 25,000-hour range. According to current industry practice, the time for the LED light output to reach the 70% value is estimated according to IESNA LM-80 and TM-21 procedures, and the resulting value is reported as the whole system life. LED products generally experience different thermal environments and switching (on-off cycling) patterns when used in applications. Current industry test methods often do not produce accurate lifetime estimates for LED systems because only one component of the system, namely the LED, is tested under a continuous-on burning condition without switching on and off, and because they estimate for only one failure type, lumen depreciation. The objective of the study presented in this manuscript was to develop a test method that could help predict LED system life in any application by testing the whole LED system, including on-off power cycling with sufficient dwell time, and considering both failure types, catastrophic and parametric. The study results showed that, for the LED A-lamps tested, both failure types, catastrophic and parametric, exist. On-off cycling encourages catastrophic failure, and maximum operating temperature influences the lumen depreciation rate and the parametric failure time. It was also clear that LED system life is negatively affected by on-off switching, contrary to commonly held belief. In addition, the study results showed that most of the LED systems failed catastrophically well before their light output reached the 70% value. This emphasizes that life testing of LED systems must consider catastrophic failure in addition to lumen depreciation, and the shorter of the two failure times must be selected as the system life. The results of this study show that a shorter-time test procedure can be developed to accurately predict LED system life in any application by knowing the LED temperature and the switching cycle.

  15. Preliminary tests of vulnerability of typical aircraft electronics to lightning-induced voltages

    NASA Technical Reports Server (NTRS)

    Plumer, J. A.; Walko, L. C.

    1974-01-01

    Tests were conducted on two pieces of typical aircraft electronic equipment to ascertain their vulnerability to simulated lightning-induced transient voltages representative of those that might occur in flight when an aircraft is struck by lightning. The test results demonstrated that such equipment can be interfered with or damaged by transient voltages as low as 21 volts peak. Greater voltages can cause failure of semiconductor components within the equipment. The results emphasize a need for establishment of coordinated system susceptibility and component vulnerability criteria to achieve lightning protection of aerospace electrical and electronic systems.

  16. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  17. Strain gage system evaluation program

    NASA Technical Reports Server (NTRS)

    Dolleris, G. W.; Mazur, H. J.; Kokoszka, E., Jr.

    1978-01-01

    A program was conducted to determine the reliability of various strain gage systems when applied to rotating compressor blades in an aircraft gas turbine engine. A survey of current technology strain gage systems was conducted to provide a basis for selecting candidate systems for evaluation. Testing and evaluation was conducted in an F100 engine. Sixty strain gage systems of seven different designs were installed on the first and third stages of an F100 engine fan. Nineteen strain gage failures occurred during 62 hours of engine operation, for a survival rate of 68 percent. Of the failures, 16 occurred at blade-to-disk leadwire jumps (84 percent), two at a leadwire splice (11 percent), and one at a gage splice (5 percent). Effects of erosion, temperature, G-loading, and stress levels are discussed. Results of a post-test analysis of the individual components of each strain gage system are presented.

  18. Independent Orbiter Assessment (IOA): Analysis of the nose wheel steering subsystem

    NASA Technical Reports Server (NTRS)

    Mediavilla, Anthony Scott

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Nose Wheel Steering (NWS) hardware are documented. The NWS hardware provides primary directional control for the Orbiter vehicle during landing rollout. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The original NWS design was envisioned as a backup system to differential braking for directional control of the Orbiter during landing rollout. No real effort was made to design the NWS system as fail operational. The brakes have much redundancy built into their design but the poor brake/tire performance has forced the NSTS to upgrade NWS to the primary mode of directional control during rollout. As a result, a large percentage of the NWS system components have become Potential Critical Items (PCI).

  19. Biofeedback in the treatment of heart failure.

    PubMed

    McKee, Michael G; Moravec, Christine S

    2010-07-01

    Biofeedback training can be used to reduce activation of the sympathetic nervous system (SNS) and increase activation of the parasympathetic nervous system (PNS). It is well established that hyperactivation of the SNS contributes to disease progression in chronic heart failure. It has been postulated that underactivation of the PNS may also play a role in heart failure pathophysiology. In addition to autonomic imbalance, a chronic inflammatory process is now recognized as being involved in heart failure progression, and recent work has established that activation of the inflammatory process may be attenuated by vagal nerve stimulation. By interfering with both autonomic imbalance and the inflammatory process, biofeedback-assisted stress management may be an effective treatment for patients with heart failure by improving clinical status and quality of life. Recent studies have suggested that biofeedback and stress management have a positive impact in patients with chronic heart failure, and patients with higher perceived control over their disease have been shown to have better quality of life. Our ongoing study of biofeedback-assisted stress management in the treatment of end-stage heart failure will also examine biologic end points in treated patients at the time of heart transplant, in order to assess the effects of biofeedback training on the cellular and molecular components of the failing heart. We hypothesize that the effects of biofeedback training will extend to remodeling the failing human heart, in addition to improving quality of life.

  20. Fabrication of MEMS components using ultrafine-grained aluminium alloys

    NASA Astrophysics Data System (ADS)

    Qiao, Xiao Guang; Gao, Nong; Moktadir, Zakaria; Kraft, Michael; Starink, Marco J.

    2010-04-01

    A novel process for the fabrication of a microelectromechanical systems (MEMS) metallic component with features smaller than 10 µm and high thermal conductivity was investigated. This may be applied to new or improved microscale components, such as (micro-) heat exchangers. In the first stage of processing, equal channel angular pressing (ECAP) was employed to refine the grain size of commercial purity aluminium (Al-1050) to the ultrafine-grained (UFG) material. Embossing was conducted using a micro silicon mould fabricated by deep reactive ion etching (DRIE). Both cold embossing and hot embossing were performed on the coarse-grained and UFG Al-1050. Cold embossing on UFG Al-1050 led to a partially transferred pattern from the micro silicon mould and high failure rate of the mould. Hot embossing on UFG Al-1050 provided a smooth embossed surface with a fully transferred pattern and a low failure rate of the mould, while hot embossing on the coarse-grained Al-1050 resulted in a rougher surface with shear bands.

  1. Independent Orbiter Assessment (IOA): Analysis of the hydraulics/water spray boiler subsystem

    NASA Technical Reports Server (NTRS)

    Duval, J. D.; Davidson, W. R.; Parkman, William E.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results for the Orbiter Hydraulics/Water Spray Boiler Subsystem. The hydraulic system provides hydraulic power to gimbal the main engines, actuate the main engine propellant control valves, move the aerodynamic flight control surfaces, lower the landing gear, apply wheel brakes, steer the nosewheel, and dampen the external tank (ET) separation. Each hydraulic system has an associated water spray boiler which is used to cool the hydraulic fluid and APU lubricating oil. The IOA analysis process utilized available HYD/WSB hardware drawings, schematics and documents for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 430 failure modes analyzed, 166 were determined to be PCIs.

  2. Impact of disease management programs on healthcare expenditures for patients with diabetes, depression, heart failure or chronic obstructive pulmonary disease: a systematic review of the literature.

    PubMed

    de Bruin, Simone R; Heijink, Richard; Lemmens, Lidwien C; Struijs, Jeroen N; Baan, Caroline A

    2011-07-01

    Evaluating the impact of disease management programs on healthcare expenditures for patients with diabetes, depression, heart failure or COPD. A systematic PubMed search was performed for studies reporting the impact of disease management programs on healthcare expenditures. Included were studies that contained two or more components of Wagner's chronic care model and were published between January 2007 and December 2009. Thirty-one papers were selected, describing disease management programs for patients with diabetes (n=14), depression (n=4), heart failure (n=8), and COPD (n=5). Twenty-one studies reported incremental healthcare costs per patient per year, of which 13 showed cost savings. Incremental costs ranged between -$16,996 and $3305 per patient per year. Substantial variation was found between studies in terms of study design, number and combination of components of disease management programs, interventions within components, and characteristics of economic evaluations. Although it is widely believed that disease management programs reduce healthcare expenditures, the present study shows that the evidence for this claim is still inconclusive. Nevertheless, disease management programs are increasingly implemented in healthcare systems worldwide. To support well-considered decision-making in this field, well-designed economic evaluations should be stimulated.

  3. The Livingstone Model of a Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)

    2003-01-01

    Livingstone is a discrete, propositional-logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) to implement a diagnostic system for integrated vehicle health management (IVHM) for the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model. We describe graphical tools that facilitated the generation of the model. The model is composed of components (which map onto physical components), connections between components, and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and associated Livingstone variables and data structures. Given this model, and observed external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally, we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.
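
    A toy sketch of the consistency-based mode-tracking idea behind Livingstone, assuming a hypothetical valve component: each mode predicts qualitative observations, and modes inconsistent with the observed behavior are pruned, preferring nominal explanations. This is an illustration, not the PITEX model or the L2 engine.

```python
# Consistency-based mode identification for a single hypothetical component:
# each mode predicts a qualitative observation; keep the modes consistent
# with the command and observation, most-nominal explanations first.

MODES = {
    # mode: expected flow given the commanded valve position
    "nominal":      lambda cmd_open: "high" if cmd_open else "zero",
    "stuck_closed": lambda cmd_open: "zero",
    "stuck_open":   lambda cmd_open: "high",
}
FAULT_RANK = {"nominal": 0, "stuck_closed": 1, "stuck_open": 1}

def consistent_modes(cmd_open, observed_flow):
    ok = [m for m, predict in MODES.items() if predict(cmd_open) == observed_flow]
    return sorted(ok, key=FAULT_RANK.__getitem__)   # most-nominal first

# Commanded open but zero flow observed: nominal mode is inconsistent.
print(consistent_modes(cmd_open=True, observed_flow="zero"))  # ['stuck_closed']
```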

  4. Effects of Assuming Independent Component Failure Times, If They Are Actually Dependent, In a Series System

    DTIC Science & Technology

    1988-05-31

    non-negative random variables with system life Y = tau(T_1, ..., T_p) and failure pattern delta_i = 1 if Y = T_i and Y < T_j for j != i (2.2), delta_i = 0 otherwise ... Moeschberger ...

  5. Investigation of pump and pump switch failures in rainwater harvesting systems

    NASA Astrophysics Data System (ADS)

    Moglia, Magnus; Gan, Kein; Delbridge, Nathan; Sharma, Ashok K.; Tjandraatmadja, Grace

    2016-07-01

    Rainwater harvesting is an important technology in cities that can contribute to a number of functions, such as sustainable water management in the face of demand growth and drought, as well as the detention of rainwater to increase flood protection and reduce damage to waterways. The objective of this article is to investigate the integrity of residential rainwater harvesting systems, drawing on the results of field inspections of 417 rainwater systems across Melbourne combined with a survey of householders' situation, maintenance behaviour and attitudes. Specifically, the study moves beyond the assumption that rainwater systems are always operational and functional and draws on the collected data to explore the various reasons for and rates of failure associated with pumps and pump switches, leaving the exploration of failures in other components (the collection area, gutters, tank, and overflows) for later work. To the best of the authors' knowledge, there is no data like this in the academic literature or in the water sector. Straightforward Bayesian Network models were constructed in order to analyse the factors contributing to various types of failures, including system age, type of use, the reason for installation, installer, and maintenance behaviour. Results show that a number of issues commonly exist, such as failure of pumps (5% of systems), failure of the automatic pump switches that mediate between the tank and reticulated water (9% of systems), and systems with inadequate setups (i.e. no pump) limiting their use. In conclusion, there appears to be a lack of enforcement or quality controls, both in installation practices by sometimes unskilled contractors and in ongoing maintenance checks. Mechanisms for quality control and asset management are required, but difficult to promote or enforce. Further work is needed into how privately owned assets that have public benefits could be better managed.
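
    As a back-of-envelope illustration of the dependencies such a Bayesian Network encodes, the sketch below conditions pump failure probability on system age and maintenance behaviour; all conditional probabilities are invented, chosen only so the marginal lands near the reported ~5% pump failure rate.

```python
# Toy conditional-probability model of pump failure given age and maintenance.
# All probabilities are invented for illustration; only the ~5% overall pump
# failure rate is taken from the paper.

p_fail = {  # (age_over_5yr, maintained): P(pump failure)
    (False, True): 0.01,
    (False, False): 0.04,
    (True, True): 0.05,
    (True, False): 0.15,
}
p_age_over_5yr = 0.4
p_maintained = 0.5   # assumed independent of age in this toy example

marginal = sum(
    p_fail[(a, m)]
    * (p_age_over_5yr if a else 1 - p_age_over_5yr)
    * (p_maintained if m else 1 - p_maintained)
    for a in (False, True) for m in (False, True)
)
print(f"marginal pump failure probability: {marginal:.3f}")  # about 0.055
```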

  6. J-2X Abort System Development

    NASA Technical Reports Server (NTRS)

    Santi, Louis M.; Butas, John P.; Aguilar, Robert B.; Sowers, Thomas S.

    2008-01-01

    The J-2X is an expendable liquid hydrogen (LH2)/liquid oxygen (LOX) gas generator cycle rocket engine that is currently being designed as the primary upper stage propulsion element for the new NASA Ares vehicle family. The J-2X engine will contain abort logic that functions as an integral component of the Ares vehicle abort system. This system is responsible for detecting and responding to conditions indicative of impending Loss of Mission (LOM), Loss of Vehicle (LOV), and/or catastrophic Loss of Crew (LOC) failure events. As an earth orbit ascent phase engine, the J-2X is a high power density propulsion element with non-negligible risk of fast propagation rate failures that can quickly lead to LOM, LOV, and/or LOC events. Aggressive reliability requirements for manned Ares missions and the risk of fast propagating J-2X failures dictate the need for on-engine abort condition monitoring and autonomous response capability as well as traditional abort agents such as the vehicle computer, flight crew, and ground control not located on the engine. This paper describes the baseline J-2X abort subsystem concept of operations, as well as the development process for this subsystem. A strategy that leverages heritage system experience and responds to an evolving engine design as well as J-2X specific test data to support abort system development is described. The utilization of performance and failure simulation models to support abort system sensor selection, failure detectability and discrimination studies, decision threshold definition, and abort system performance verification and validation is outlined. The basis for abort false positive and false negative performance constraints is described. Development challenges associated with information shortfalls in the design cycle, abort condition coverage and response assessment, engine-vehicle interface definition, and abort system performance verification and validation are also discussed.

  7. Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults

    NASA Technical Reports Server (NTRS)

    Hamill, Maggie; Goseva-Popstojanova, Katerina

    2016-01-01

    Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
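
    A hedged sketch of the oversampling remedy evaluated in the paper, using SMOTE from the imbalanced-learn library on synthetic data; the features and model below are illustrative stand-ins, not the paper's actual change-request attributes or classifiers.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Oversample the rare "high fix effort" class with SMOTE before training.
# Features are synthetic stand-ins for change-request attributes.

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + X[:, 1] > 2.2).astype(int)   # rare class, roughly 6%

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
high = y_te == 1
print(f"recall on the high-effort class: {clf.score(X_te[high], y_te[high]):.2f}")
```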

  8. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions; the decision-maker therefore cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique in which fuzzy set theory quantifies the uncertainties, a fault tree models the system, the lambda-tau method formulates mathematical expressions for the failure/repair rates of the system, and a genetic algorithm solves the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with those of the existing technique. The components of the robotic system follow an exponential distribution, i.e., constant failure rates. Sensitivity analysis is also performed, and the impact on the system mean time between failures (MTBF) of varying other reliability parameters is addressed. Based on the analysis, some influential suggestions are given to improve the system performance.
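
    The interval arithmetic underlying the fuzzy treatment can be sketched with alpha-cuts of a triangular fuzzy failure rate; the numbers are assumed, not the paper's data.

```python
# Alpha-cut interval arithmetic for a triangular fuzzy failure rate: at each
# alpha level, MTBF = 1/lambda is evaluated on the cut interval. Since 1/x is
# decreasing, the MTBF interval flips the endpoints. Numbers are assumed.

lam = (4.0e-4, 5.0e-4, 6.0e-4)   # triangular fuzzy failure rate (per hour)

def alpha_cut(tri, alpha):
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(lam, alpha)
    print(f"alpha={alpha:.1f}: MTBF in [{1/hi:8.0f}, {1/lo:8.0f}] hours")
```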

  9. HFE (Human Factors Engineering) Technology for Navy Weapon System Acquisition.

    DTIC Science & Technology

    1979-07-01

    requirements to electrical components using: Failure Modes and Effects Analysis (FMEA) and LOR data, component design requirements and a selected ... The use of SAINT can specify various outputs of the simulation: histograms, plots, summary ... [table fragment of weighting scores: Electro Safety .60 .98 .95 .65 .92 .70 .42 .62; Personnel Relationships .74 .70 .79 .63 .40 .77 .85 .80; Electro Circuit Analysis .63 .90 .95 .58 .40]

  10. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
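
    As a hedged illustration, a commonly used power-law SCG life estimate gives the time to failure at constant applied stress as t_f = B * sigma_i**(N-2) / sigma_a**N, with B and N the SCG parameters and sigma_i the inert strength; the values below are assumed, not the paper's.

```python
# Power-law slow-crack-growth life estimate at constant applied stress.
# B, N, and the inert strength are assumed values for illustration only.

B = 2.0e3            # MPa^2 * h (assumed SCG parameter)
N = 20.0             # SCG exponent (assumed)
sigma_inert = 400.0  # inert strength from coupon tests (MPa)

def time_to_failure(sigma_applied):
    return B * sigma_inert ** (N - 2.0) / sigma_applied ** N

for s in (150.0, 175.0, 200.0):
    # Note the steep stress dependence typical of SCG-controlled life.
    print(f"applied stress {s:.0f} MPa -> t_f = {time_to_failure(s):.2e} h")
```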

  11. ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.

    In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations; however, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of conducting evaluations with dynamic degradation scenarios evolving over time. There is a need to develop enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods incorporating aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., a condition indication index using vibration analysis, current signatures, and operational history [1]. In this work the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within the various dynamic PRA methodologies [3] to provide better estimates of probability of failures. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps; the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.
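
    A minimal sketch of the aging idea, not the paper's methodology: replacing a constant failure rate with a Weibull wear-out hazard makes the probability of failing during the next mission window grow with the component's accumulated age. All parameters below are assumed.

```python
# Minimal sketch: an aging Weibull hazard gives a component failure
# probability that depends on accumulated run time, unlike a constant rate.
import math

def weibull_unreliability(t, beta, eta):
    """P(fail by t) for Weibull(beta, eta); beta > 1 models wear-out."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def window_failure_prob(age_h, window_h, beta=2.5, eta=80000.0):
    """Conditional probability of failing during the next operating window,
    given survival to the component's current age."""
    f_end = weibull_unreliability(age_h + window_h, beta, eta)
    f_now = weibull_unreliability(age_h, beta, eta)
    return (f_end - f_now) / (1.0 - f_now)

# The same 720 h window is far riskier for an aged pump than a fresh one.
for age in (0.0, 20000.0, 60000.0):
    print(age, window_failure_prob(age, 720.0))
```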

  12. VIDANA: Data Management System for Nano Satellites

    NASA Astrophysics Data System (ADS)

    Montenegro, Sergio; Walter, Thomas; Dilger, Erik

    2013-08-01

    A VIDANA data management system is a network of software and hardware components: a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware, a reliable interconnection network (software and hardware) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware does not connect only software; it connects devices and software together, so that software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified hardware/software communication protocols. For many of these aspects we learn from nature, where astonishing reference implementations can be found.
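
    A heartbeat-style failover loop of the kind described can be sketched as follows; the class names, timeout, and device labels are illustrative and are not the VIDANA middleware API.

```python
# Minimal sketch of detect-disable-promote failover, assuming a heartbeat
# scheme; names and the 2 s timeout are invented for illustration.
import time

class Component:
    def __init__(self, name):
        self.name, self.last_beat, self.enabled = name, time.time(), True

    def heartbeat(self):
        self.last_beat = time.time()

class Middleware:
    TIMEOUT = 2.0  # seconds without a heartbeat before declaring failure

    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def monitor(self):
        """Disable a silent primary and promote the redundant standby."""
        if self.primary.enabled and time.time() - self.primary.last_beat > self.TIMEOUT:
            self.primary.enabled = False
            self.primary, self.standby = self.standby, self.primary
            print(f"failover: {self.standby.name} -> {self.primary.name}")

mw = Middleware(Component("gyro-A"), Component("gyro-B"))
mw.primary.last_beat -= 3.0  # simulate gyro-A falling silent past the timeout
mw.monitor()                 # detects the failure and promotes gyro-B
```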

  13. TSTA Piping and Flame Arrestor Operating Experience Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, Lee C.; Willms, R. Scott

    The Tritium Systems Test Assembly (TSTA) was a facility dedicated to tritium handling technology and experimental research at the Los Alamos National Laboratory. The facility operated from 1984 to 2001, running a prototype fusion fuel processing loop with ~100 grams of tritium as well as small experiments. Several operating experience reports have been written on this facility's operation and maintenance experience. This paper describes analysis of two additional components from TSTA: small-diameter gas piping that handled small amounts of tritium in a nitrogen carrier gas, and the flame arrestor used in this piping system. The operating experiences and the component failure rates for these components are discussed in this paper. Comparison data from other applications are also presented.

  14. Preservation of renal function in atypical hemolytic uremic syndrome by eculizumab: a case report.

    PubMed

    Giordano, Mario; Castellano, Giuseppe; Messina, Giovanni; Divella, Claretta; Bellantuono, Rosa; Puteo, Flora; Colella, Vincenzo; Depalo, Tommaso; Gesualdo, Loreto

    2012-11-01

    Genetic mutations in complement components are associated with the development of atypical hemolytic uremic syndrome (aHUS), a rare disease with a high morbidity rate triggered by infections or unidentified factors. The uncontrolled activation of the alternative pathway of complement results in systemic endothelial damage leading to progressive development of renal failure. A previously healthy 8-month-old boy was referred to our hospital because of onset of fever, vomiting, and a single episode of nonbloody diarrhea. Acute kidney injury with preserved diuresis, hemolytic anemia, and thrombocytopenia were detected, and common protocols for management of HUS were followed without considerable improvement. The persistently low levels of complement component C3 led us to hypothesize the occurrence of aHUS. In fact, the child carried a specific mutation in complement factor H (Cfh; nonsense mutation 3514G>T; serum level of Cfh 138 mg/L, normal range 350-750). Given the lack of response to therapy and the occurrence of kidney failure requiring dialysis, we used eculizumab, a humanized monoclonal antibody against the complement component C5, as rescue therapy. One week after the first administration, we observed a significant improvement of all clinical and laboratory parameters, with complete weaning from hemodialysis, even in the presence of systemic infections. Our case report shows that complement-inhibiting treatment allows the preservation of renal function and avoids disease relapses during systemic infections.

  15. What Reliability Engineers Should Know about Space Radiation Effects

    NASA Technical Reports Server (NTRS)

    DiBari, Rebecca

    2013-01-01

    Space radiation presents unique failure modes and considerations for reliability engineers of space systems. Radiation effects are not a one-size-fits-all field: the threat conditions that must be addressed for a given mission depend on the mission orbital profile, on the technologies of parts used in critical functions, and on application considerations such as supply voltages, temperature, duty cycle, and redundancy. In general, the threats are of two types: the cumulative degradation mechanisms of total ionizing dose (TID) and displacement damage (DD), and the prompt responses of components to ionizing particles (protons and heavy ions), which fall under the heading of single-event effects. Cumulative degradation mechanisms generally behave like wear-out mechanisms on any active component in a system. Total Ionizing Dose and Displacement Damage: (1) TID affects all active devices over time. Devices can fail either because of parametric shifts that prevent the device from fulfilling its application or because of device failures where the device stops functioning altogether. Since this failure mode varies from part to part and lot to lot, lot qualification testing with sufficient statistics is vital. Displacement damage failures are caused by the displacement of semiconductor atoms from their lattice positions. As with TID, failures can be either parametric or catastrophic, although parametric degradation is more common for displacement damage. Lot testing is critical not just to assure proper device functionality throughout the mission; it can also suggest remediation strategies when a device fails. This paper will look at these effects on a variety of devices in a variety of applications. (2) On the NEAR mission, a functional failure was traced to a PIN diode failure caused by TID-induced high leakage currents. NEAR was able to recover from the failure by reversing the current of a nearby thermoelectric cooler (turning the TEC into a heater). The elevated temperature caused the PIN diode to anneal and the device to recover; it was through lot qualification testing that NEAR knew the diode would recover when annealed. Single-Event Effects (SEE): (1) In contrast to TID and displacement damage, single-event effects resemble random failures. SEE modes range from changes in device logic (single-event upset, or SEU) and temporary disturbances (single-event transients, or SETs) to destructive modes such as single-event latchup (SEL), single-event gate rupture (SEGR), and single-event burnout (SEB). (2) The consequences of nondestructive SEE modes such as SEU and SET depend critically on their application and may range from trivial nuisance errors to catastrophic loss of mission. It is critical not just to ensure that potentially susceptible devices are well characterized for their susceptibility, but also to work with design engineers to understand the implications of each error mode. For destructive SEE, the predominant risk mitigation strategy is to avoid susceptible parts or, if that is not possible, to avoid conditions under which the part may be susceptible. Destructive SEE mechanisms are often not well understood, and testing is slow and expensive, making rate prediction very challenging. (3) Because the consequences of radiation failure and degradation modes depend so critically on the application as well as on the component technology, it is essential that radiation, component, design, and system engineers work together, preferably starting early in the program, to ensure critical applications are addressed in time to optimize the probability of mission success.
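
    For a rough sense of scale, a first-order SEU rate estimate is simply particle flux times per-bit cross-section times bit count. The numbers below are placeholders; real predictions use environment models and measured cross-section curves rather than a single saturated value.

```python
# Back-of-envelope SEU rate estimate; all inputs are illustrative placeholders.
flux = 1.0e5             # assumed on-orbit proton flux, particles/cm^2/day
sigma_bit = 1.0e-14      # assumed saturated SEU cross-section, cm^2/bit
n_bits = 4 * 2**30 * 8   # a hypothetical 4 GiB memory, in bits

upsets_per_day = flux * sigma_bit * n_bits
print(f"~{upsets_per_day:.1f} upsets/day")  # ~34 for these placeholder numbers
```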

  16. Studies on Automobile Clutch Release Bearing Characteristics with Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Chen, Guoliang; Chen, Xiaoyang

    Automobile clutch release bearings are important automotive driveline components. For the clutch release bearing, early fatigue failure diagnosis is significant, but the early fatigue failure response signal is not obvious, because failure signals are susceptible to noise on the transmission path and to working-environment factors such as interference. With improvements in vehicle design, clutch release bearing fatigue life indicators have increasingly become an important requirement. Contact fatigue is the main failure mode of release rolling bearing components. Acoustic emission techniques have unique advantages in contact fatigue failure detection, being highly sensitive, nondestructive testing methods. When the acoustic emission technique is used to monitor a bearing, signals are collected from multiple sensors; each signal contains partial fault information, and the fault information in the signals overlaps. Fusing the source information received simultaneously by the sensors into a complete rolling bearing fault acoustic emission signal is therefore the key issue for accurate fault diagnosis. A release bearing comprises the following components: outer ring, inner ring, rolling balls, and cage. When a failure (such as cracking or pitting) occurs, the other components impact the damaged point and produce an acoustic emission signal. The elastic waves emitted from the source propagate mainly as Rayleigh waves and are scattered at the bearing part surfaces. Dynamic simulation of rolling bearing failure will contribute to a more in-depth understanding of rolling bearing failure characteristics, because it provides a theoretical basis and foundation for the monitoring and fault diagnosis of rolling bearings.

  17. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

    During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and the risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the failure mode occurrence to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).
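
    The ESD quantification step reduces to multiplying branch probabilities along each path from an initiator to an end state and summing the paths that share an end state. The sketch below uses invented branch probabilities, not the study's data.

```python
# Minimal sketch of event sequence diagram quantification with assumed
# branch probabilities (not the SSME study's numbers).
from collections import defaultdict
from math import prod

paths = [
    # (end state, [branch probabilities along the path])
    ("catastrophic failure", [1.0e-4, 0.2, 0.5]),  # initiator, undetected, no safe shutdown
    ("engine shutdown",      [1.0e-4, 0.2, 0.5]),  # initiator, undetected, safe shutdown
    ("engine shutdown",      [1.0e-4, 0.8]),       # initiator, detected -> shutdown
]

end_states = defaultdict(float)
for state, branches in paths:
    end_states[state] += prod(branches)  # path probability = product of branches

for state, p in end_states.items():
    print(f"{state}: {p:.2e} per flight")
```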

  18. Dynamic Response and Failure Mechanism of Brittle Rocks Under Combined Compression-Shear Loading Experiments

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Dai, Feng

    2018-03-01

    A novel method is developed for characterizing the mechanical response and failure mechanism of brittle rocks under dynamic compression-shear loading: an inclined cylinder specimen tested in a modified split Hopkinson pressure bar (SHPB) system. With the specimen axis inclined to the loading direction of the SHPB, a shear component can be introduced into the specimen. Both static and dynamic experiments are conducted on sandstone specimens. With careful pulse shaping, the dynamic equilibrium of the inclined specimens can be satisfied, and thus the quasi-static data reduction is employed. The normal and shear stress-strain relationships of the specimens are subsequently established. The progressive failure process of the specimen, illustrated via high-speed photographs, manifests a mixed failure mode accommodating both shear-dominated failure and localized tensile damage. The elastic and shear moduli exhibit certain loading-path dependence under quasi-static loading but loading-path insensitivity under high loading rates. Loading-rate dependence is evident in the failure characteristics, involving fragmentation, compressive and shear strength, and failure surfaces based on the Drucker-Prager criterion. Our proposed method is convenient and reliable for studying the dynamic response and failure mechanism of rocks under combined compression-shear loading.

  19. Object Relations and the Development of Values.

    ERIC Educational Resources Information Center

    Gazda, George M.; Sedgwick, Charlalee

    1990-01-01

    Claims acquisition of values is related to successes and failures of early relationships. Describes steps person goes through in making identifications, explaining steps that move person toward construction of value system. Refers to works of Heinz Kohut to explain how child's idealizing has within it necessary components for child's growth in…

  20. Uncemented glenoid component in total shoulder arthroplasty. Survivorship and outcomes.

    PubMed

    Martin, Scott David; Zurakowski, David; Thornhill, Thomas S

    2005-06-01

    Glenoid component loosening continues to be a major factor affecting the long-term survivorship of total shoulder replacements. Radiolucent lines, cement fracture, migration, and loosening requiring revision are common problems with cemented glenoid components. The purpose of this study was to evaluate the results of total shoulder arthroplasty with an uncemented glenoid component and to identify predictors of glenoid component failure. One hundred and forty-seven consecutive total shoulder arthroplasties were performed in 132 patients (mean age, 63.3 years) with use of an uncemented glenoid component fixed with screws between 1988 and 1996. One hundred and forty shoulders in 124 patients were available for follow-up at an average of 7.5 years. One shoulder in which the arthroplasty had failed at 2.4 years and for which the duration of follow-up was four years was also included for completeness. The preoperative diagnoses included osteoarthritis in seventy-two shoulders and rheumatoid arthritis in fifty-five. Radiolucency was noted around the glenoid component and/or screws in fifty-three of the 140 shoulders. The mean modified ASES (American Shoulder and Elbow Surgeons) score (and standard deviation) improved from 15.6 +/- 11.8 points preoperatively to 75.8 +/- 17.5 points at the time of follow-up. Eighty-five shoulders were not painful, forty-two were slightly or mildly painful, ten were moderately painful, and three were severely painful. Fifteen (11%) of the glenoid components failed clinically, and ten of them also had radiographic signs of failure. Eleven other shoulders had radiographic signs of failure but no symptoms at the time of writing. Three factors had a significant independent association with clinical failure: male gender (p = 0.02), pain (p < 0.01), and radiolucency adjacent to the flat tray (p < 0.001). In addition, the annual risk of implant revision was nearly seven times higher for patients with radiographic signs of failure. Clinical survivorship was 95% at five years and 85% at ten years. The failure rates of the total shoulder arthroplasties in this study were higher than those in previously reported studies of cemented polyethylene components with similar durations of follow-up. Screw breakage and excessive polyethylene wear were common problems that may lead to additional failures of these uncemented glenoid components in the future.

  1. Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system

    NASA Technical Reports Server (NTRS)

    Prust, C. D.; Paul, D. J.; Burkemper, V. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods, one located on each side of the tail, and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  2. Performance-based seismic design of nonstructural building components: The next frontier of earthquake engineering

    NASA Astrophysics Data System (ADS)

    Filiatrault, Andre; Sullivan, Timothy

    2014-08-01

    With the development and implementation of performance-based earthquake engineering, harmonization of performance levels between structural and nonstructural components becomes vital. Even if the structural components of a building achieve a continuous or immediate occupancy performance level after a seismic event, failure of architectural, mechanical or electrical components can lower the performance level of the entire building system. This reduction in performance caused by the vulnerability of nonstructural components has been observed during recent earthquakes worldwide. Moreover, nonstructural damage has limited the functionality of critical facilities, such as hospitals, following major seismic events. The investment in nonstructural components and building contents is far greater than that of structural components and framing. Therefore, it is not surprising that in many past earthquakes, losses from damage to nonstructural components have exceeded losses from structural damage. Furthermore, the failure of nonstructural components can become a safety hazard or can hamper the safe movement of occupants evacuating buildings, or of rescue workers entering buildings. In comparison to structural components and systems, there is relatively limited information on the seismic design of nonstructural components. Basic research work in this area has been sparse, and the available codes and guidelines are usually, for the most part, based on past experiences, engineering judgment and intuition, rather than on objective experimental and analytical results. Often, design engineers are forced to start almost from square one after each earthquake event: to observe what went wrong and to try to prevent repetitions. This is a consequence of the empirical nature of current seismic regulations and guidelines for nonstructural components. This review paper summarizes current knowledge on the seismic design and analysis of nonstructural building components, identifying major knowledge gaps that will need to be filled by future research. Furthermore, considering recent trends in earthquake engineering, the paper explores how performance-based seismic design might be conceived for nonstructural components, drawing on recent developments made in the field of seismic design and hinting at the specific considerations required for nonstructural components.

  3. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
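
    The core idea can be illustrated in a few lines: identical processors iterating the same chaotic map from the same seed should produce identical trajectories, so even a perturbation near machine precision is amplified exponentially and becomes visible in a direct comparison. This is an illustrative sketch of the principle, not the patented implementation.

```python
# Illustrative sketch: a tiny arithmetic fault injected into a chaotic
# logistic-map trajectory grows to order 1 within a few dozen iterations.
def logistic_trajectory(x0, steps, fault_at=None, fault_eps=1e-12):
    xs, x = [], x0
    for n in range(steps):
        x = 4.0 * x * (1.0 - x)   # chaotic logistic map, r = 4
        if n == fault_at:
            x += fault_eps        # injected error at bit-flip scale
        xs.append(x)
    return xs

healthy = logistic_trajectory(0.123456789, 60)
faulty  = logistic_trajectory(0.123456789, 60, fault_at=10)

# Divergence between the reference and the faulty node's trajectory:
for n in (10, 20, 30, 50):
    print(n, abs(healthy[n] - faulty[n]))
```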

  4. Performance and reliability of the NASA Biomass Production Chamber

    NASA Technical Reports Server (NTRS)

    Sager, J. C.; Chetirkin, P. V.

    1994-01-01

    The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of a CELSS, are discussed.

  5. Cascading failures in interdependent systems under a flow redistribution model

    NASA Astrophysics Data System (ADS)

    Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i},C_{A,i}}_{i=1}^{n} and {L_{B,i},C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a p_1 fraction of lines in A and a p_2 fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.

  6. Cascading failures in interdependent systems under a flow redistribution model.

    PubMed

    Zhang, Yingrui; Arenas, Alex; Yağan, Osman

    2018-02-01

    Robustness and cascading failures in interdependent systems has been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i},C_{A,i}}_{i=1}^{n} and {L_{B,i},C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction of its load is redistributed to alive lines in B, while remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting p_{1} fraction of lines in A and p_{2} fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a,b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
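
    The flow-redistribution cascade described in these two records is straightforward to simulate. The sketch below uses uniform loads and capacities and assumed coupling coefficients; it processes a queue of overloaded lines until the cascade settles and reports the surviving fraction in each system.

```python
# Minimal sketch of the two-system flow-redistribution cascade; all loads,
# capacities, and coefficients are assumed, not the paper's parameters.
import random

def cascade(n=2000, load=1.0, cap=1.05, a=0.3, b=0.3, p1=0.1, p2=0.0, seed=1):
    random.seed(seed)
    loads = {"A": [load] * n, "B": [load] * n}
    alive = {"A": [True] * n, "B": [True] * n}
    coupling = {"A": ("B", a), "B": ("A", b)}
    # initial random attack on p1 fraction of A and p2 fraction of B
    queue = [("A", i) for i in random.sample(range(n), int(p1 * n))]
    queue += [("B", i) for i in random.sample(range(n), int(p2 * n))]
    while queue:
        sys_, i = queue.pop()
        if not alive[sys_][i]:
            continue                        # already failed earlier on
        alive[sys_][i] = False
        shed = loads[sys_][i]
        other, frac = coupling[sys_]
        for tgt, share in ((other, frac), (sys_, 1.0 - frac)):
            living = [j for j, ok in enumerate(alive[tgt]) if ok]
            if not living:
                continue                    # that system has fully collapsed
            extra = share * shed / len(living)
            for j in living:
                loads[tgt][j] += extra
                if loads[tgt][j] > cap:
                    queue.append((tgt, j))  # overloaded line fails next
    return sum(alive["A"]) / n, sum(alive["B"]) / n

# Surviving fractions after a 10% attack on A; raising cap a little (e.g. to
# 1.45) lets the attack be absorbed, illustrating the abrupt transition.
print(cascade())
```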

  7. Application of reliability-centered-maintenance to BWR ECCS motor operator valve performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Choi, Y.A.

    1993-01-01

    This paper describes the application of reliability-centered maintenance (RCM) methods to plant probabilistic risk assessment (PRA) and safety analyses for four boiling water reactor emergency core cooling systems (ECCSs): (1) high-pressure coolant injection (HPCI); (2) reactor core isolation cooling (RCIC); (3) residual heat removal (RHR); and (4) core spray systems. Reliability-centered maintenance is a system function-based technique for improving a preventive maintenance program that is applied on a component basis. Those components that truly affect plant function are identified, and maintenance tasks are focused on preventing their failures. The RCM evaluation establishes the relevant criteria that preserve system function so that an RCM-focused approach can be flexible and dynamic.

  8. Nanocrystalline cerium oxide materials for solid fuel cell systems

    DOEpatents

    Brinkman, Kyle S

    2015-05-05

    Disclosed are solid fuel cells, including solid oxide fuel cells and PEM fuel cells that include nanocrystalline cerium oxide materials as a component of the fuel cells. A solid oxide fuel cell can include nanocrystalline cerium oxide as a cathode component and microcrystalline cerium oxide as an electrolyte component, which can prevent mechanical failure and interdiffusion common in other fuel cells. A solid oxide fuel cell can also include nanocrystalline cerium oxide in the anode. A PEM fuel cell can include cerium oxide as a catalyst support in the cathode and optionally also in the anode.

  9. Independent Orbiter Assessment (IOA): Analysis of the orbiter main propulsion system

    NASA Technical Reports Server (NTRS)

    Mcnicoll, W. J.; Mcneely, M.; Holden, K. A.; Emmons, T. E.; Lowery, H. J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Main Propulsion System (MPS) hardware are documented. The Orbiter MPS consists of two subsystems: the Propellant Management Subsystem (PMS) and the Helium Subsystem. The PMS is a system of manifolds, distribution lines and valves by which the liquid propellants pass from the External Tank (ET) to the Space Shuttle Main Engines (SSMEs) and gaseous propellants pass from the SSMEs to the ET. The Helium Subsystem consists of a series of helium supply tanks and their associated regulators, check valves, distribution lines, and control valves. The Helium Subsystem supplies helium that is used within the SSMEs for inflight purges and provides pressure for actuation of SSME valves during emergency pneumatic shutdowns. The balance of the helium is used to provide pressure to operate the pneumatically actuated valves within the PMS. Each component was evaluated and analyzed for possible failure modes and effects. Criticalities were assigned based on the worst possible effect of each failure mode. Of the 690 failure modes analyzed, 349 were determined to be PCIs.

  10. Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost-effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before finding the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system, which performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base. It then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices: if a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
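
    The ranking step can be sketched with the classical sequencing rule: when each hypothesis has a probability p and an isolate-and-repair cost c, checking candidates in decreasing p/c order minimizes the expected cost spent before the true failure is found. The component names and numbers below are invented; this is not the FTDOTS code.

```python
# Minimal sketch of optimal test sequencing by p/c ratio, with made-up data.
hypotheses = [
    # (component, failure probability given symptoms, test cost in minutes)
    ("power supply",   0.40, 30.0),
    ("sensor harness", 0.35, 10.0),
    ("control board",  0.20, 60.0),
    ("connector",      0.05,  5.0),
]

# Decreasing p/c order minimizes expected time to isolate the true failure
# (assuming each check conclusively confirms or rules out its hypothesis).
ordered = sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True)

expected, spent = 0.0, 0.0
for name, p, cost in ordered:
    spent += cost              # time spent if the search gets this far
    expected += p * spent      # weighted by chance this one is the culprit
    print(f"check {name:15s} (p/c = {p / cost:.3f})")
print(f"expected time to isolate: {expected:.1f} min")
```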

  11. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.

  12. Space Station Freedom electric power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.

    1990-01-01

    This report details the results of follow-on availability analyses performed on the Space Station Freedom electric power system (EPS). The scope includes analyses of several EPS design variations: the 4-photovoltaic (PV) module baseline EPS design, a 6-PV module EPS design, and a 3-solar dynamic module EPS design which included a 10 kW PV module. The analyses performed included: determining the discrete power levels that the EPS will operate at upon various component failures and the availability of each of these operating states; ranking EPS components by the relative contribution each component type makes to the power availability of the EPS; determining the availability impacts of including structural and long-life EPS components in the availability models used in the analyses; determining optimum sparing strategies, for storing spare EPS components on-orbit, to maintain high average power capability with low lift-mass requirements; and analyses to determine the sensitivity of EPS availability to uncertainties in the component reliability and maintainability data used.
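
    The operating-state analysis can be illustrated with a binomial availability model: if the modules fail independently, the probability of each discrete power level and the average power capability follow directly. The per-module availability and power rating below are placeholders, not Space Station Freedom values.

```python
# Minimal sketch: discrete power states and average power capability for
# n independent modules, with assumed availability and module rating.
from math import comb

n, p_up = 4, 0.97        # assumed steady-state availability per PV module
kw_per_module = 18.75    # hypothetical module rating (4 modules = 75 kW)

avg_power = 0.0
for k in range(n + 1):   # probability that exactly k of n modules are up
    p_k = comb(n, k) * p_up**k * (1 - p_up)**(n - k)
    avg_power += p_k * k * kw_per_module
    print(f"{k} modules up: P = {p_k:.5f}, power = {k * kw_per_module:.2f} kW")
print(f"average power capability: {avg_power:.2f} kW")
```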

  13. Advanced Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Perotti, J.

    2003-01-01

    Current and future requirements of the aerospace sensors and transducers field make it necessary to design and develop new data acquisition devices and instrumentation systems. New designs are sought to incorporate self-health, self-calibrating, and self-repair capabilities, allowing greater measurement reliability and extended calibration cycles. With the addition of power management schemes, state-of-the-art data acquisition systems allow data to be processed and presented to the users with increased efficiency and accuracy. The design architecture presented in this paper displays an innovative approach to data acquisition systems. The design incorporates: electronic health self-check; device/system self-calibration; electronics and function self-repair; failure detection and prediction; and power management (reduced power consumption). These requirements are driven by the aerospace industry's need to reduce operations and maintenance costs, to accelerate processing time, and to provide reliable hardware with minimum costs. The design architecture incorporates some commercially available components identified during the market research investigation, such as Field Programmable Gate Arrays (FPGA), Programmable Analog Integrated Circuits (PAC IC), and Field Programmable Analog Arrays (FPAA); Digital Signal Processing (DSP) electronic/system control; and investigation of specific characteristics found in technologies, such as Electronic Component Mean Time Between Failure (MTBF) and Radiation Hardened Component Availability. There are three main sections discussed in the design architecture presented in this document: (a) the Analog Signal Module Section, (b) the Digital Signal/Control Module Section, and (c) the Power Management Module Section. These sections are discussed in detail in the following pages. This approach to data acquisition systems has resulted in the assignment of patent rights to Kennedy Space Center under U.S. patent #6,462,684. Furthermore, the NASA KSC commercialization office has issued licensing rights to Circuit Avenue Netrepreneurs, LLC, a minority-owned business founded in 1999 and located in Camden, NJ.

  14. Catastrophic optical bulk degradation (COBD) in high-power single- and multi-mode InGaAs-AlGaAs strained quantum well lasers

    NASA Astrophysics Data System (ADS)

    Sin, Yongkun; Lingley, Zachary; Brodie, Miles; Presser, Nathan; Moss, Steven C.

    2017-02-01

    High-power single-mode (SM) and multi-mode (MM) InGaAs-AlGaAs strained quantum well (QW) lasers are critical components for both telecommunications and space satellite communications systems. However, little has been reported on the failure modes and degradation mechanisms of high-power SM and MM InGaAs-AlGaAs strained QW lasers, although understanding failure modes and the underlying degradation mechanisms is crucial to developing lasers that meet the lifetime requirements of space satellite systems, where extremely high reliability is required. Our present study addresses these issues by performing long-term life-tests followed by failure mode analysis (FMA) and physics-of-failure investigation. We performed long-term accelerated life-tests on state-of-the-art SM and MM InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode. Our life-tests have accumulated over 25,000 test hours for SM lasers and over 35,000 test hours for MM lasers. FMA was performed on failed SM lasers using electron beam induced current (EBIC); this technique allowed us to identify failure types by observing dark line defects. All the SM failures we studied showed catastrophic and sudden degradation, and all of these failures were bulk failures. Our group previously reported that bulk failure, or COBD (catastrophic optical bulk damage), is the dominant failure mode of MM InGaAs-AlGaAs strained QW lasers. Since the degradation mechanisms responsible for COBD are still not well understood, we also employed other techniques, including focused ion beam (FIB) processing and high-resolution TEM, to further study dark line defects and dislocations in post-aged lasers. Our long-term life-test results and FMA results are reported.

  15. Effect of Crystal Orientation on Fatigue Failure of Single Crystal Nickel Base Turbine Blade Superalloys

    NASA Technical Reports Server (NTRS)

    Arakere, Nagaraj K.; Swanson, Gregory R.

    2000-01-01

    High Cycle Fatigue (HCF) induced failures in aircraft gas-turbine engines are a pervasive problem affecting a wide range of components and materials. HCF is currently the primary cause of component failures in gas turbine aircraft engines. Turbine blades in high performance aircraft and rocket engines are increasingly being made of single crystal nickel superalloys. Single-crystal nickel-base superalloys were developed to provide superior creep, stress rupture, melt resistance and thermomechanical fatigue capabilities over the polycrystalline alloys previously used in the production of turbine blades and vanes. Currently the most widely used single crystal turbine blade superalloys are PWA 1480/1493 and PWA 1484. These alloys play an important role in commercial, military and space propulsion systems. PWA1493, identical to PWA1480 but with tighter chemical constituent control, is used in the NASA SSME (Space Shuttle Main Engine) alternate turbopump, a liquid hydrogen fueled rocket engine. The objectives of this paper are motivated by the need for developing failure criteria and fatigue life evaluation procedures for high temperature single crystal components, using available fatigue data and finite element modeling of turbine blades. Using the FE (finite element) stress analysis results and the fatigue life relations developed, the effect of variation of primary and secondary crystal orientations on life is determined at critical blade locations, and the most advantageous crystal orientation for a given blade design is identified. The results presented demonstrate that control of secondary and primary crystallographic orientation has the potential to optimize blade design by increasing its resistance to fatigue crack growth without adding additional weight or cost.

  16. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert to a film-based system if components fail. The nature of system failures ranges from slow deterioration of function, as seen in the loss of monitor luminance, through sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and between-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, to be used as a cross-check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment functions correctly. The results of selected quality control reports will be presented; the intranet documentation server will be described along with the automatic pager system; and monitor quality control reports will be described and the cost of quality control quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected of other equipment used in the diagnostic process.
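
    The check-then-page pattern described can be sketched as below. The diagnostic codes, file paths, process names, and the paging stub are all hypothetical; the actual site used its own codes and pager gateway.

```python
# Minimal sketch of automated health checks that page on failure; every
# check and code here is a made-up example of the pattern, not the real system.
import shutil
import subprocess

CHECKS = {
    "D01": lambda: shutil.disk_usage("/archive").free > 10 * 2**30,  # >10 GiB free
    "D02": lambda: subprocess.run(["pgrep", "-f", "dicom_store"],
                                  capture_output=True).returncode == 0,
}

def send_page(code):
    # Placeholder: a real deployment would dial a pager gateway or send SMS.
    print(f"PAGE -> on-call PACS engineer: diagnostic code {code}")

for code, check in CHECKS.items():
    try:
        ok = check()
    except OSError:   # a check that cannot run is itself a failure condition
        ok = False
    if not ok:
        send_page(code)
```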

  17. J-2X Turbopump Cavitation Diagnostics

    NASA Technical Reports Server (NTRS)

    Santi, I. Michael; Butas, John P.; Tyler, Thomas R., Jr.; Aguilar, Robert; Sowers, T. Shane

    2010-01-01

    The J-2X is the upper stage engine currently being designed by Pratt & Whitney Rocketdyne (PWR) for the Ares I Crew Launch Vehicle (CLV). Propellant supply requirements for the J-2X are defined by the Ares Upper Stage to J-2X Interface Control Document (ICD). Supply conditions outside ICD defined start or run boxes can induce turbopump cavitation leading to interruption of J-2X propellant flow during hot fire operation. In severe cases, cavitation can lead to uncontained engine failure with the potential to cause a vehicle catastrophic event. Turbopump and engine system performance models supported by system design information and test data are required to predict existence, severity, and consequences of a cavitation event. A cavitation model for each of the J-2X fuel and oxidizer turbopumps was developed using data from pump water flow test facilities at Pratt & Whitney Rocketdyne (PWR) and Marshall Space Flight Center (MSFC) together with data from Powerpack 1A testing at Stennis Space Center (SSC) and from heritage systems. These component models were implemented within the PWR J-2X Real Time Model (RTM) to provide a foundation for predicting system level effects following turbopump cavitation. The RTM serves as a general failure simulation platform supporting estimation of J-2X redline system effectiveness. A study to compare cavitation induced conditions with component level structural limit thresholds throughout the engine was performed using the RTM. Results provided insight into system level turbopump cavitation effects and redline system effectiveness in preventing structural limit violations. A need to better understand structural limits and redline system failure mitigation potential in the event of fuel side cavitation was indicated. This paper examines study results, efforts to mature J-2X turbopump cavitation models and structural limits, and issues with engine redline detection of cavitation and the use of vehicle-side abort triggers to augment the engine redline system.
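
    Redline systems of the sort discussed typically require a limit violation to persist for several consecutive samples before commanding an abort, to filter momentary transients. The sketch below is a generic illustration of that pattern; the parameter, limit, and persistence count are invented and are not J-2X values.

```python
# Generic sketch of a redline check with a persistence counter; all numbers
# are illustrative and unrelated to the actual J-2X redline system.
def redline_trip(samples, limit, persistence=3):
    """Trip when `persistence` consecutive samples exceed the limit,
    filtering momentary spikes such as a brief cavitation transient."""
    run = 0
    for k, x in enumerate(samples):
        run = run + 1 if x > limit else 0
        if run >= persistence:
            return k   # sample index at which the abort would fire
    return None

pump_speed = [100, 101, 118, 99, 119, 121, 122, 123]  # % of rated, made up
print(redline_trip(pump_speed, limit=115))            # trips at index 6
```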

  18. Design, development and deployment of public service photovoltaic power/load systems for the Gabonese Republic

    NASA Technical Reports Server (NTRS)

    Kaszeta, William J.

    1987-01-01

    Five different types of public service photovoltaic power/load systems installed in the Gabonese Republic are discussed. The village settings, the systems, performance results and some problems encountered are described. Most of the systems performed well, but some of the systems had problems due to failure of components or installation errors. The project was reasonably successful in collecting and reporting data for system performance evaluation that will be useful for guiding officials and system designers involved in village power applications in developing countries.

  19. Reliability and Maintainability Analysis of Fluidic Back-Up Flight Control System and Components.

    DTIC Science & Technology

    1981-09-01

    Maintainability: review of FMEA worksheets indicates that the standard hydraulic components of the servoactuator will... achieved. Procedures for conducting the FMEA and evaluating the severity of each failure mode are included as Appendix A... (Report NADC-80227-60, contract N62269-81-M-3047; the remaining scanned-excerpt text is page-header and test-chart residue.)

  20. Proof that green tea tannin suppresses the increase in the blood methylguanidine level associated with renal failure.

    PubMed

    Yokozawa, T; Dong, E; Oura, H

    1997-02-01

    The effects of a green tea tannin mixture and its individual tannin components on methylguanidine were examined in rats with renal failure. The green tea tannin mixture caused a dose-dependent decrease in methylguanidine, a substance which accumulates in the blood with the progression of renal failure. Among individual tannin components, the effect was most conspicuous with (-)-epigallocatechin 3-O-gallate and (-)-epicatechin 3-O-gallate, while other components not linked to gallic acid showed only weak effects. Thus, the effect on methylguanidine was found to vary among different types of tannin.

  1. System Study: Emergency Power System 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-02-01

    This report presents an unreliability evaluation of the emergency power system (EPS) at 104 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2013 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. No statistically significant trends were identified in the EPS results.
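
    Studies of this kind commonly estimate failure-on-demand probabilities with a Bayesian update of a Jeffreys prior from the observed failure and demand counts. A minimal sketch with made-up counts:

```python
# Minimal sketch: Jeffreys-prior Beta(0.5, 0.5) update with f failures in
# d demands; the counts below are invented, not the report's data.
f, d = 3, 2400   # assumed failures and demands over the study period

alpha, beta = 0.5 + f, 0.5 + (d - f)      # posterior Beta parameters
mean = alpha / (alpha + beta)             # posterior mean failure probability
print(f"p(fail on demand) ~ {mean:.2e}")  # ~1.5e-3 for these counts
```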

  2. The Importance of Engine External's Health

    NASA Technical Reports Server (NTRS)

    Stoner, Barry L.

    2006-01-01

    Engine external components include all the fluid-carrying, electron-carrying, and support devices that are needed to operate the propulsion system. These components are varied and include: pumps, valves, actuators, solenoids, sensors, switches, heat exchangers, electrical generators, electrical harnesses, tubes, ducts, clamps, and brackets. The failure of any component to perform its intended function will result in a maintenance action, a dispatch delay, or an engine in-flight shutdown. The life of each component, in addition to its basic functional design, is closely tied to its thermal and dynamic environment. Therefore, to reach a mature design life, the component's thermal and dynamic environment must be understood and controlled, which can only be accomplished by attention to design analysis and testing. The purpose of this paper is to review analysis and test techniques toward achieving good component health.

  3. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security, or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  4. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    PubMed

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.

  5. Float level switch for a nuclear power plant containment vessel

    DOEpatents

    Powell, J.G.

    1993-11-16

    This invention is a float level switch used to sense rise or drop in water level in a containment vessel of a nuclear power plant during a loss of coolant accident. The essential components of the device are a guide tube, a reed switch inside the guide tube, a float containing a magnetic portion that activates the reed switch, and metal-sheathed, ceramic-insulated conductors connecting the reed switch to a monitoring system outside the containment vessel. Special materials and special sealing techniques prevent failure of components and allow the float level switch to be connected to a monitoring system outside the containment vessel. 1 figure.

  6. Mission Management Computer Software for RLV-TD

    NASA Astrophysics Data System (ADS)

    Manju, C. R.; Joy, Josna Susan; Vidya, L.; Sheenarani, I.; Sruthy, C. N.; Viswanathan, P. C.; Dinesh, Sudin; Jayalekshmy, L.; Karuturi, Kesavabrahmaji; Sheema, E.; Syamala, S.; Unnikrishnan, S. Manju; Ali, S. Akbar; Paramasivam, R.; Sheela, D. S.; Shukkoor, A. Abdul; Lalithambika, V. R.; Mookiah, T.

    2017-12-01

    The Mission Management Computer (MMC) software is responsible for the autonomous navigation, sequencing, guidance, and control of the Re-usable Launch Vehicle (RLV) through lift-off, ascent, coasting, re-entry, controlled descent, and splashdown. A hard real-time system has been designed for handling the mission requirements in an integrated manner and for meeting the stringent timing constraints. Redundancy management and fault-tolerance techniques are also built into the system, in order to achieve a successful mission even in the presence of component failures. This paper describes the functions and features of the components of the MMC software, which accomplished the successful RLV-Technology Demonstrator mission.
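
    The paper does not publish its redundancy scheme; purely as an illustration of the general fault-tolerance technique, a majority voter across redundant computer channels could look like the following sketch (the channel outputs are hypothetical).

        from collections import Counter

        def vote(channel_outputs):
            """Return the majority value across redundant channels, masking a
            minority of failed channels; raise if no majority exists."""
            value, count = Counter(channel_outputs).most_common(1)[0]
            if count > len(channel_outputs) // 2:
                return value
            raise RuntimeError("no majority: redundant channels disagree")

        # a single faulty channel is out-voted by the two healthy ones
        assert vote(["ascent", "ascent", "re-entry"]) == "ascent"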

  7. A risk-based approach to sanitary sewer pipe asset management.

    PubMed

    Baah, Kelly; Dubey, Brajesh; Harvey, Richard; McBean, Edward

    2015-02-01

    Wastewater collection systems are an important component of proper wastewater management, preventing the environmental and human health consequences of mismanaged anthropogenic waste. Due to aging and inadequate asset management practices, the wastewater collection assets of many cities around the globe are in a state of rapid decline and in need of urgent attention. Risk management is a tool which can help prioritize resources to better manage and rehabilitate wastewater collection systems. In this study, a risk matrix and a weighted-sum multi-criteria decision matrix are used to assess the consequence and risk of sewer pipe failure for a mid-sized city, using ArcGIS. The methodology shows that six percent of the uninspected sewer pipe assets of the case study have a high consequence of failure, while four percent have a high risk of failure and hence are priorities for inspection. A map incorporating risk of sewer pipe failure and consequence is developed to facilitate future planning, rehabilitation, and maintenance programs. The consequence-of-failure assessment also includes a novel failure impact factor which captures the effect of structurally defective stormwater pipes on the failure assessment. The methodology recommended in this study can serve as a basis for future planning and decision making and has the potential to be applied by municipal sewer pipe asset managers globally to effectively manage the sanitary sewer pipe infrastructure within their jurisdictions.
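
    A minimal sketch of the risk-matrix idea follows; the factor names, weights, and thresholds are illustrative assumptions, not the paper's calibrated values.

        # Consequence as a weighted sum of factors pre-scaled to 1-5; the
        # stormwater-defect term stands in for the paper's failure impact factor.
        CONSEQUENCE_WEIGHTS = {
            "diameter": 0.3,
            "burial_depth": 0.2,
            "land_use": 0.3,
            "stormwater_defect": 0.2,
        }

        def consequence_score(pipe):
            return sum(w * pipe[f] for f, w in CONSEQUENCE_WEIGHTS.items())

        def risk_category(likelihood, consequence):
            score = likelihood * consequence  # both axes on 1-5 scales
            if score >= 15:
                return "high"
            if score >= 8:
                return "medium"
            return "low"

        pipe = {"diameter": 4, "burial_depth": 3, "land_use": 5, "stormwater_defect": 2}
        print(risk_category(likelihood=4, consequence=consequence_score(pipe)))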

  8. Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-600 series system

    NASA Technical Reports Server (NTRS)

    Currier, S. F.; Powell, W. R.

    1986-01-01

    The hardware and software characteristics of a time-division multiplex system are described. The system is used to sample analog and digital data. The data are merged with synchronization information to produce a serial pulse code modulation (PCM) bit stream. The information presented herein is required by users to design compatible interfaces and assure effective utilization of this encoder system. GSFC/Wallops Flight Facility has flown approximately 50 of these systems through 1984 on sounding rockets with no in-flight failures. Aydin Vector manufactures all of the components for these systems.
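
    The encoder's actual frame format is defined in the handbook; purely to illustrate time-division multiplexing, the sketch below builds one PCM frame by prefixing a fixed synchronization pattern to one word per sampled channel. The sync word and word size are a common IRIG-style choice, not necessarily the MMP-600's.

        SYNC_WORD = 0xFAF320  # widely used 24-bit telemetry frame-sync pattern
        WORD_BITS = 8

        def build_frame(samples):
            """samples: per-channel 8-bit values -> serial bit string for one frame."""
            bits = format(SYNC_WORD, "024b")
            for s in samples:
                bits += format(s & 0xFF, "08b")
            return bits

        # 24 sync bits followed by four 8-bit data words = 56 bits per frame
        frame = build_frame([12, 255, 0, 77])
        assert len(frame) == 24 + 4 * WORD_BITS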

  9. Comparative Normal/Failing Rat Myocardium Cell Membrane Chromatographic Analysis System for Screening Specific Components That Counteract Doxorubicin-Induced Heart Failure from Aconitum carmichaelii

    PubMed Central

    2015-01-01

    Cell membrane chromatography (CMC) derived from pathological tissues is ideal for screening the specific components of complex medicines that act on specific diseases, owing to its close simulation of in vivo drug-receptor interactions. However, no pathological-tissue-derived CMC model had previously been developed, nor had any visualized comparison of the affinities of potential active components between normal and pathological CMC columns been reported. In this study, a novel comparative normal/failing rat myocardium CMC analysis system based on online column selection and comprehensive two-dimensional (2D) chromatography/monolithic column/time-of-flight mass spectrometry was developed for parallel comparison of chromatographic behaviors on normal and pathological CMC columns, as well as rapid screening of specific therapeutic agents that counteract doxorubicin (DOX)-induced heart failure from Aconitum carmichaelii (Fuzi). In total, 16 potential active alkaloid components with similar structures in Fuzi were retained on both the normal and failing myocardium CMC models. Most of them showed obvious decreases in affinity on the failing myocardium CMC model compared with the normal CMC model, except for four components: talatizamine (TALA), 14-acetyl-TALA, hetisine, and 14-benzoylneoline. TALA, the compound with the highest affinity, was isolated for further in vitro pharmacodynamic validation and target identification to confirm the screening results. The voltage-dependent K+ channel was confirmed as a binding target of TALA and 14-acetyl-TALA, both with high affinities. The online high-throughput comparative CMC analysis method is suitable for screening specific active components from herbal medicines, increasing the specificity of the screening results, and can also be applied to other biological chromatography models. PMID:24731167

  10. Evaluation of an In-Situ, Liquid Lubrication System for Space Mechanisms Using a Vacuum Spiral Orbit Tribometer

    NASA Technical Reports Server (NTRS)

    Jansen, Mark J.; Jones, William R., Jr.; Pepper, Stephen V.

    2002-01-01

    Many moving mechanical assemblies (MMAs) for space applications rely on a small, initial charge of lubricant for the entire mission lifetime, often in excess of five years. In many cases, the premature failure of a lubricated component can result in mission failure. If lubricant could be resupplied to the contact in-situ, the life of the MMA could be extended. A vacuum spiral orbit tribometer (SOT) was modified to accept a device to supply re-lubrication during testing. It was successfully demonstrated that a liquid lubricant (Pennzane (Registered Trademark)/Nye 2001A) could be evaporated into a contact during operation, lowering the friction coefficient and therefore extending the life of the system.

  11. Physics-of-Failure Approach to Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

    As electric vehicles progressively enter daily operation, a critical challenge lies in accurately predicting the state of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. To solve this prediction problem, it is essential to have awareness of the current state and health of the system, especially since predictions must be condition-based. Predicting the future state of the system also requires knowledge of the current and future operations of the vehicle. In this presentation, our approach to developing a system-level health-monitoring safety indicator for different electronic components is presented; it runs estimation and prediction algorithms to determine state of charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
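
    A minimal sketch of that prediction loop, assuming Coulomb counting for state of charge and forward simulation to an end-of-discharge threshold for remaining useful life; the battery parameters and the constant future load are illustrative, not a published model.

        def soc_step(soc, current_a, dt_s, capacity_ah):
            """Coulomb counting: deplete state of charge by the charge drawn in dt_s."""
            return soc - (current_a * dt_s / 3600.0) / capacity_ah

        def predict_rul_s(soc, future_current_a, capacity_ah, eod=0.2, dt_s=1.0):
            """Simulate the assumed future load to end-of-discharge; return seconds."""
            t = 0.0
            while soc > eod:
                soc = soc_step(soc, future_current_a, dt_s, capacity_ah)
                t += dt_s
            return t

        # e.g., a 2.2 Ah pack at 90% charge under an assumed constant 2 A load
        print(predict_rul_s(soc=0.9, future_current_a=2.0, capacity_ah=2.2))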

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S; Guerrero, M; Zhang, B

    Purpose: To implement a comprehensive, non-measurement-based verification program for patient-specific IMRT QA. Methods: Based on published guidelines, a robust IMRT QA program should assess the following components: 1) accuracy of dose calculation, 2) accuracy of data transfer from the treatment planning system (TPS) to the record-and-verify (RV) system, 3) treatment plan deliverability, and 4) accuracy of plan delivery. Results: We have implemented an IMRT QA program that consists of four components: 1) an independent re-calculation of the dose distribution in the patient anatomy with a commercial secondary dose calculation program, Mobius3D (Mobius Medical Systems, Houston, TX), with dose accuracy evaluated using gamma analysis, PTV mean dose, PTV coverage to 95%, and organ-at-risk mean dose; 2) an automated, in-house-developed plan comparison system that compares all relevant plan parameters, such as MU, MLC position, beam isocenter position, collimator, gantry, couch, field size settings, and bolus placement, between the plan and the RV system; 3) use of the RV system to check plan deliverability, further confirmed using the "mode-up" function on the treatment console for plans receiving a warning; and 4) implementation of a comprehensive weekly MLC QA, in addition to routine accelerator monthly and daily QA. Among 1200 verifications, there were 9 cases of suspicious calculations, 5 cases of delivery failure, no data transfer errors, and no failures of the weekly MLC QA. The 9 suspicious cases were due to the PTV extending to the skin or to heterogeneity correction effects, which would not have been caught using phantom measurement-based QA. The delivery failures were due to rounding variation of MLC positions between the planning system and the RV system. Conclusion: A very efficient, yet comprehensive, non-measurement-based patient-specific QA program has been implemented and used clinically for about 18 months with excellent results.
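
    As an illustration of component 2, the automated transfer check can compare each exported plan parameter against the RV system with a per-parameter tolerance; the field names and tolerances below are hypothetical, and a tight MLC tolerance is what would flag the rounding variation noted in the results.

        TOLERANCES = {"mu": 0.1, "gantry_deg": 0.1, "collimator_deg": 0.1, "mlc_mm": 0.05}

        def compare_beam(tps_beam, rv_beam):
            """Return the parameters differing beyond tolerance between TPS and RV."""
            mismatches = []
            for key, tol in TOLERANCES.items():
                a, b = tps_beam[key], rv_beam[key]
                pairs = zip(a, b) if isinstance(a, list) else [(a, b)]
                if any(abs(x - y) > tol for x, y in pairs):
                    mismatches.append(key)
            return mismatches

        tps = {"mu": 187.3, "gantry_deg": 180.0, "collimator_deg": 10.0,
               "mlc_mm": [12.34, -7.89]}
        rv = {"mu": 187.3, "gantry_deg": 180.0, "collimator_deg": 10.0,
              "mlc_mm": [12.20, -7.90]}
        print(compare_beam(tps, rv))  # ['mlc_mm']: position drifted past 0.05 mm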

  13. High-reliability computing for the smarter planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

  14. Brittle-to-ductile transition in a fiber bundle with strong heterogeneity.

    PubMed

    Kovács, Kornél; Hidalgo, Raul Cruz; Pagonabarraga, Ignacio; Kun, Ferenc

    2013-04-01

    We analyze the failure process of a two-component system with widely different fracture strength in the framework of a fiber bundle model with localized load sharing. A fraction 0≤α≤1 of the bundle is strong and is represented by unbreakable fibers, while fibers of the weak component have randomly distributed failure strength. Computer simulations revealed that there exists a critical composition α(c) which separates two qualitatively different behaviors: Below the critical point, the failure of the bundle is brittle, characterized by an abrupt damage growth within the breakable part of the system. Above α(c), however, the macroscopic response becomes ductile, providing stability during the entire breaking process. The transition occurs at an astonishingly low fraction of strong fibers, which can be important for applications. We show that in the ductile phase, the size distribution of breaking bursts has a power-law functional form with an exponent μ=2 followed by an exponential cutoff. In the brittle phase, the power law also prevails but with a higher exponent μ=9/2. The transition between the two phases shows analogies to continuous phase transitions. Analyzing the microstructure of the damage, we found that at the beginning of the fracture process cracks nucleate randomly, while later on growth and coalescence of cracks dominate, giving rise to power-law distributed crack sizes.
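
    The brittle/ductile distinction can be sketched with a simplified simulation: global (equal) load sharing instead of the paper's localized rule, and weak-fiber strengths drawn uniformly on (0, 1). Under this simplification the critical fraction differs from the paper's, but the qualitative transition survives.

        import random

        def force_curve(n=10_000, alpha=0.1, seed=1):
            """External force sustained at each weak-fiber breaking event; a
            fraction alpha of the fibers is unbreakable."""
            rng = random.Random(seed)
            n_weak = n - int(alpha * n)
            weak = sorted(rng.random() for _ in range(n_weak))
            # when the per-fiber load reaches weak[k], k fibers have already
            # failed and the bundle carries weak[k] * (n - k)
            return [x * (n - k) for k, x in enumerate(weak)]

        for alpha in (0.1, 0.6):
            f = force_curve(alpha=alpha)
            # an interior peak well above the final value means an abrupt drop
            print(alpha, "brittle" if max(f) > 1.02 * f[-1] else "ductile")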

  15. Fracture and Failure at and Near Interfaces Under Pressure

    DTIC Science & Technology

    1998-06-18

    realistic data for comparison with improved analytical results, and to 2) initiate a new computational approach for stress analysis of cracks in solid propellants at and near interfaces, which analysis can draw on the ever expanding... tactical and strategic missile systems. The most important and most difficult component of the system analysis has been the predictability or...

  16. Analytical Method to Evaluate Failure Potential During High-Risk Component Development

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failure a product might encounter during its lifetime. When new products are being considered and designed, this knowledge is expanded upon to help designers extrapolate based on similarity with existing products and the potential design tradeoffs. This paper makes use of the similarities and tradeoffs that exist between different failure modes based on the functionality of each component or product. In this light, a function-failure method is developed to help design new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating-machinery example in this paper and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
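
    A toy sketch of the function-failure idea: a function-by-component matrix and a component-by-failure-mode matrix multiply into a function-by-failure-mode map, so candidate solutions for a function inherit the documented failure modes of the components that typically implement it. The matrices below are illustrative, not the paper's data.

        import numpy as np

        functions = ["transmit torque", "support load"]
        components = ["shaft", "bearing"]
        failure_modes = ["fatigue", "wear", "corrosion"]

        # function-by-component: which components commonly solve which function
        FC = np.array([[1, 0],
                       [0, 1]])
        # component-by-failure-mode: counts of documented failures (illustrative)
        CF = np.array([[3, 1, 0],
                       [1, 4, 1]])

        FF = FC @ CF  # function-by-failure-mode incidence
        for f, row in zip(functions, FF):
            print(f, dict(zip(failure_modes, row.tolist())))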

  17. Investigation of Spiral Bevel Gear Condition Indicator Validation Via AC-29-2C Using Damage Progression Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2014-01-01

    This report documents the results of spiral bevel gear rig tests performed under a NASA Space Act Agreement with the Federal Aviation Administration (FAA) to support validation and demonstration of rotorcraft Health and Usage Monitoring Systems (HUMS) for maintenance credits via FAA Advisory Circular (AC) 29-2C, Section MG-15, Airworthiness Approval of Rotorcraft (HUMS) (Ref. 1). The overarching goal of this work was to determine a method to validate condition indicators in the lab that better represent their response to faults in the field. Using existing in-service helicopter HUMS flight data from faulted spiral bevel gears as a case study to better understand the differences between the two systems, and taking advantage of the availability of the NASA Glenn Spiral Bevel Gear Fatigue Rig, a plan was put in place to design, fabricate, and test comparable gear sets with comparable failure modes within the constraints of the test rig. The research objectives of the rig tests were to evaluate the capability of detecting gear surface pitting fatigue and other generated failure modes on spiral bevel gear teeth using gear condition indicators currently used in fielded HUMS. Nineteen final-design gear sets were tested. Tables were generated for each test, summarizing the failure modes observed on the gear teeth during each inspection interval, color-coded by damage mode based on inspection photos. Gear condition indicators (CIs) Figure of Merit 4 (FM4), Root Mean Square (RMS), ±1 Sideband Index (SI1), and ±3 Sideband Index (SI3) were plotted along with rig operational parameters. Statistical tables of the means and standard deviations were calculated within inspection intervals for each CI. As testing progressed, it became clear that certain condition indicators were more sensitive to a specific component and failure mode; these tests were clustered together for further analysis. Maintenance actions during testing were also documented. Correlation coefficients were calculated between each CI, component, damage state, and torque. Results showed that test rig and gear design, type of fault, and data acquisition can affect CI performance. Results also showed that FM4, SI1, and SI3 can be used to detect macro-pitting on two or more gear or pinion teeth, as long as it is detected prior to progressing to other components or transitioning to another failure mode. The sensitivity of RMS to system and operational conditions limits its reliability for systems that are not maintained at steady state. Failure modes that occurred due to scuffing or fretting were challenging to detect with current gear diagnostic tools, since the damage is distributed across all the gear and pinion teeth, smearing the impacting signatures typically used to differentiate between a healthy and damaged tooth contact. This is one of three final reports published on the results of this project. In the second report, damage modes experienced in the field will be mapped to the failure modes created in the test rig; the helicopter CI data will then be re-processed with the same analysis techniques applied to the spiral bevel rig test data. In the third report, results from the rig and helicopter data analyses will be correlated. Observations, findings, and lessons learned from using sub-scale rig failure-progression tests to validate helicopter gear condition indicators will be presented.
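
    For reference, two of the condition indicators above have simple, standard definitions: RMS of the (typically time-synchronously averaged) vibration signal, and FM4 as the kurtosis of the difference signal, i.e., the signal with the regular gear-mesh components removed. A sketch, with the difference-signal construction omitted:

        import numpy as np

        def rms(x):
            x = np.asarray(x, dtype=float)
            return np.sqrt(np.mean(x ** 2))

        def fm4(d):
            """Normalized fourth moment (kurtosis) of the difference signal d."""
            d = np.asarray(d, dtype=float) - np.mean(d)
            return len(d) * np.sum(d ** 4) / np.sum(d ** 2) ** 2

        # a healthy, Gaussian-like difference signal gives FM4 near 3; a
        # localized tooth fault adds impulsiveness and drives FM4 upward
        healthy = np.random.default_rng(0).normal(size=4096)
        print(round(rms(healthy), 2), round(fm4(healthy), 2))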

  18. Study of reactor Brayton power systems for nuclear electric spacecraft

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The feasibility of using Brayton power systems for nuclear electric spacecraft was investigated. The primary performance parameters of system mass and radiator area were determined for systems from 100 to 1000 kWe. Mathematical models of all system components were used to determine masses and volumes. Two completely independent systems provide propulsion power so that no single-point failure can jeopardize a mission. The waste heat radiators utilize armored heat pipes to limit meteorite puncture. The armor thickness was statistically determined to achieve the required probability of survival. A 400 kWe reference system received primary attention as required by the contract. The components of this system were defined and a conceptual layout was developed with encouraging results. An arrangement with redundant Brayton power systems having a 1500 K (2240 °F) turbine inlet temperature was shown to be compatible with the dimensions of the space shuttle orbiter payload bay.
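
    The statistical armor sizing can be illustrated with a Poisson puncture model: survival probability is the exponential of minus the expected number of punctures, where thicker armor lowers the penetrating flux. The flux value below is purely illustrative, not the study's environment model.

        import math

        def survival_probability(puncture_flux_per_m2_yr, area_m2, mission_yr):
            """Poisson model: probability of zero penetrating meteoroid hits."""
            expected_punctures = puncture_flux_per_m2_yr * area_m2 * mission_yr
            return math.exp(-expected_punctures)

        # e.g., a 250 m^2 radiator over a 5-year mission
        print(survival_probability(1e-4, area_m2=250.0, mission_yr=5.0))  # ~0.88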

  19. Methods and Systems for Analyzing the Degradation and Failure of Mechanical Systems

    DOEpatents

    Jarrell, Donald B.; Sisk, Daniel R.; Hatley, Darrel D.; Kirihara, Leslie J.; Peters, Timothy J.

    2005-02-08

    Methods and systems for identifying, understanding, and predicting the degradation and failure of mechanical systems are disclosed. The methods include measuring and quantifying stressors that are responsible for the activation of degradation mechanisms in the machine component of interest. The intensity of the stressor may be correlated with the rate of physical degradation according to some determinable function such that a derivative relationship exists between the machine performance, degradation, and the underlying stressor. The derivative relationship may be used to make diagnostic and prognostic calculations concerning the performance and projected life of the machine. These calculations may be performed in real time to allow the machine operator to quickly adjust the operational parameters of the machinery in order to help minimize or eliminate the effects of the degradation mechanism, thereby prolonging the life of the machine. Various systems implementing the methods are also disclosed.
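
    A minimal sketch of that diagnostic/prognostic calculation, assuming an illustrative power-law rate function in place of the patent's "determinable function" relating stressor intensity to degradation rate:

        def degradation_rate(stressor, k=1e-4, exponent=2.0):
            """Illustrative rate law: degradation per hour grows with stressor intensity."""
            return k * stressor ** exponent

        def projected_life_hours(stressor, wear_limit=1.0, dt_h=1.0):
            wear, t = 0.0, 0.0
            while wear < wear_limit:
                wear += degradation_rate(stressor) * dt_h
                t += dt_h
            return t

        # lowering the stressor (e.g., a vibration amplitude) extends projected life
        print(projected_life_hours(stressor=3.0))  # ~1112 h
        print(projected_life_hours(stressor=2.0))  # ~2500 h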
