Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
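As a minimal numerical sketch of the idea (not the paper's formulation), the probabilistic confidence can be computed as the probability that the uncertain "true" failure probability lies below an acceptable target; the lognormal estimation-error model and all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

pf_estimate = 1e-5     # nominal (estimated) failure probability
pf_target = 1e-4       # maximum acceptable "true" failure probability
sigma_log10 = 0.5      # assumed s.d. of the estimation error in log10(Pf)

# P[Pf_true <= pf_target] when log10(Pf_true) ~ N(log10(pf_estimate), sigma_log10^2)
confidence = norm.cdf((np.log10(pf_target) - np.log10(pf_estimate)) / sigma_log10)
print(f"probabilistic confidence that Pf_true <= {pf_target:g}: {confidence:.3f}")
```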
Probabilistic safety analysis of earth retaining structures during earthquakes
NASA Astrophysics Data System (ADS)
Grivas, D. A.; Souflis, C.
1982-07-01
A procedure is presented for determining the probability of failure of earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. Because earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate of the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure of the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths and with the centers of the initial cracks spaced uniformly apart. The data used were chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
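The extreme-value lower bound described above can be sketched in a few lines: it is the probability that the largest of N random initial cracks exceeds a critical size. The lognormal crack-size distribution and all parameter values are assumptions for illustration, not data from the dissertation.

```python
import numpy as np
from scipy.stats import lognorm

n_cracks = 100                  # number of collinear crack sites
a_median, a_cov = 0.5, 0.5      # assumed median (mm) and CoV of initial crack sizes
a_crit = 5.0                    # assumed critical initial crack size (mm)

sigma = np.sqrt(np.log(1.0 + a_cov**2))
crack_dist = lognorm(s=sigma, scale=a_median)

# P[max(a_1..a_N) > a_crit] = 1 - F(a_crit)^N  (extreme-value / largest-crack bound)
p_lower = 1.0 - crack_dist.cdf(a_crit) ** n_cracks
print(f"lower bound on the probability of failure: {p_lower:.2e}")
```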
Probability of failure prediction for step-stress fatigue under sine or random stress
NASA Technical Reports Server (NTRS)
Lambert, R. G.
1979-01-01
A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
Estimation of probability of failure for damage-tolerant aerospace structures
NASA Astrophysics Data System (ADS)
Halbert, Keith
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two approximations arise in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of the probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
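The following sketch illustrates the FORM portion of this workflow: a Hasofer-Lind/Rackwitz-Fiessler iteration locates the most probable point for an assumed limit state in standard normal space, the safety index beta follows as its distance from the origin, and the failure probability estimate Phi(-beta) is checked against crude Monte Carlo. The limit state g(u) is invented for illustration; the small gap between the two estimates is the curvature effect a SORM correction would capture.

```python
import numpy as np
from scipy.stats import norm

def g(u):
    # illustrative limit state in standard normal space; g(u) <= 0 means failure
    return 3.0 - u[..., 0] + 0.1 * u[..., 1] ** 2

def grad_g(u, h=1e-6):
    # forward-difference gradient of g at a single point u
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(u.size)])

# Hasofer-Lind / Rackwitz-Fiessler iteration for the most probable point (MPP)
u = np.zeros(2)
for _ in range(50):
    grad = grad_g(u)
    u_new = (grad @ u - g(u)) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)          # safety (reliability) index
pf_form = norm.cdf(-beta)         # FORM estimate of the failure probability

# crude Monte Carlo check (the small remaining gap is what SORM targets)
rng = np.random.default_rng(0)
pf_mc = np.mean(g(rng.standard_normal((200_000, 2))) <= 0.0)
print(f"beta = {beta:.3f}, FORM Pf = {pf_form:.2e}, MC Pf = {pf_mc:.2e}")
```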
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
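The sketch below is not the random set machinery of the paper; it is a generic subset simulation routine of the kind referred to above, written for independent standard normal inputs and a single small failure probability. The conditional-sampling (pCN) proposal, the parameters p0 and rho, and the example limit state are all assumptions.

```python
import numpy as np

def subset_simulation(g, dim, n_per_level=2000, p0=0.1, rho=0.8, seed=0):
    """Estimate P[g(X) <= 0] for independent standard-normal X via subset
    simulation with a conditional-sampling (pCN) Markov kernel."""
    rng = np.random.default_rng(seed)
    n_seeds = int(p0 * n_per_level)
    n_steps = n_per_level // n_seeds          # chain length per seed

    x = rng.standard_normal((n_per_level, dim))
    gx = g(x)
    p_f = 1.0
    for _ in range(20):                       # cap on the number of levels
        order = np.argsort(gx)
        level = gx[order[n_seeds - 1]]        # p0-quantile of g at this level
        if level <= 0.0:                      # failure domain reached
            return p_f * np.mean(gx <= 0.0)
        p_f *= p0
        seeds, seeds_g = x[order[:n_seeds]], gx[order[:n_seeds]]
        xs, gs = [], []
        for xc, gc in zip(seeds, seeds_g):
            for _ in range(n_steps):
                cand = rho * xc + np.sqrt(1.0 - rho**2) * rng.standard_normal(dim)
                gcand = g(cand[None, :])[0]
                if gcand <= level:            # stay inside the current subset
                    xc, gc = cand, gcand
                xs.append(xc)
                gs.append(gc)
        x, gx = np.array(xs), np.array(gs)
    return p_f * np.mean(gx <= 0.0)

# illustrative limit state: failure when the normalized sum of 100 standard
# normals exceeds 4.5 (exact answer: Phi(-4.5), about 3.4e-6)
g = lambda x: 4.5 - x.mean(axis=1) * np.sqrt(x.shape[1])
print(f"subset simulation Pf ~ {subset_simulation(g, dim=100):.2e}")
```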
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
ERIC Educational Resources Information Center
Brookhart, Susan M.; And Others
1997-01-01
Process Analysis is described as a method for identifying and measuring the probability of events that could cause the failure of a program, resulting in a cause-and-effect tree structure of events. The method is illustrated through the evaluation of a pilot instructional program at an elementary school. (SLD)
Differential reliability : probabilistic engineering applied to wood members in bending-tension
Stanley K. Suddarth; Frank E. Woeste; William L. Galligan
1978-01-01
Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated the failure probabilities of jack-up units within the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method, using the eigenfunctions of prolate spheroidal wave functions, in order to obtain the wave load. The stochastic wave load was applied to a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability in excessive deck displacement, within the framework of time-dependent reliability analysis, was performed with Matlab codes developed on a personal computer. Results from the study indicated that the failure probability increases with increasing severity of the sea state, i.e., with a longer return period. Although the results agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they differ at lower values of the criterion, where that study reported that the failure probability decreases as the severity of the sea state increases.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the resulting error in the reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
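A drastically simplified sketch of a fuzzy weighted geometric mean RPN for a single failure mode is given below. The triangular fuzzy ratings, the weights, and the centroid defuzzification are illustrative assumptions, not the paper's calibrated values; because the geometric mean is monotone in each positive argument, evaluating it at the (low, mode, high) points gives the correct support and modal value even though the resulting shape is only approximately triangular.

```python
import numpy as np

# triangular fuzzy ratings (low, mode, high) on a 1-10 scale for one failure mode
severity   = np.array([6.0, 7.0, 8.0])
occurrence = np.array([3.0, 4.0, 5.0])
detection  = np.array([5.0, 6.0, 7.0])
weights    = np.array([0.4, 0.35, 0.25])      # relative importance, sums to 1

ratings = np.vstack([severity, occurrence, detection])

# fuzzy weighted geometric mean evaluated point-wise at (low, mode, high)
fwgm_rpn = np.prod(ratings ** weights[:, None], axis=0)

# centroid defuzzification of the (approximately triangular) fuzzy RPN
crisp_rpn = fwgm_rpn.mean()
print(f"fuzzy RPN (low, mode, high) = {np.round(fwgm_rpn, 2)}, crisp RPN = {crisp_rpn:.2f}")
```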
Probabilistic inspection strategies for minimizing service failures
NASA Technical Reports Server (NTRS)
Brot, Abraham
1994-01-01
The INSIM computer program, which simulates the 'limited fatigue life' environment in which aircraft structures generally operate, is described. The use of INSIM to develop inspection strategies that aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds and customized inspections are simulated using the probability of failure as the driving parameter.
Application of a Probabilistic Sizing Methodology for Ceramic Structures
NASA Astrophysics Data System (ADS)
Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit
2012-07-01
Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness properties. Their brittle behaviour often leads to sizing them by increasing the safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with the major driver in space architecture, the mass criterion. This paper presents a methodology to size ceramic structures based on their failure probability. Thanks to failure tests on samples, the Weibull law which characterizes the strength distribution of the material is obtained. The A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- & B-values is also obtained. Thanks to these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
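A minimal sketch of the two-parameter Weibull failure-probability evaluation underlying such a sizing approach is shown below, with a most-likely and a knocked-down characteristic strength; the modulus, strengths and applied stress are invented values, not the qualification data discussed in the paper.

```python
import numpy as np

m = 10.0                     # assumed Weibull modulus (strength scatter)
sigma0_most_likely = 300.0   # most-likely characteristic strength, MPa
sigma0_knocked_down = 260.0  # knocked-down characteristic strength, MPa
applied_stress = 120.0       # assumed peak design stress, MPa

def weibull_pf(stress, m, sigma0):
    """Failure probability of a uniformly stressed element: P_f = 1 - exp[-(s/s0)^m]."""
    return 1.0 - np.exp(-((stress / sigma0) ** m))

print(f"most-likely  P_f = {weibull_pf(applied_stress, m, sigma0_most_likely):.2e}")
print(f"knocked-down P_f = {weibull_pf(applied_stress, m, sigma0_knocked_down):.2e}")
```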
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that the DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.
Probabilistic structural analysis of aerospace components using NESSUS
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
NASA Technical Reports Server (NTRS)
Yunis, Isam S.; Carney, Kelly S.
1993-01-01
A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach, using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis, for this class of problem. The results show that the probability of failure of the current array mast design, during its 15 year life, is minute.
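A minimal load-versus-strength Monte Carlo sketch of this kind of assessment is shown below (it is not the Freedom array analysis itself); the lognormal load and normal strength distributions and their parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# assumed lognormal plume-impingement bending moment and normal mast capacity (kN-m)
load = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n)
strength = rng.normal(loc=40.0, scale=4.0, size=n)

pf = np.mean(load > strength)   # fraction of realizations where the load exceeds the capacity
print(f"estimated probability of mast failure per berthing: {pf:.2e}")
```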
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
14 CFR 23.571 - Metallic pressurized cabin structures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure... structure is shown by tests, or by analysis supported by test evidence, to be able to withstand the repeated... is shown by analysis, tests, or both that catastrophic failure of the structure is not probable after...
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
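One plausible reading of this bookkeeping is sketched below; the combination rule (risk removed equals the probability of reaching the limit times the mitigation success probability) and all numbers are assumptions, not the patented method.

```python
def risk_reduction(p_reach_limit, p_mitigation):
    """Risk removed by a mitigation that succeeds with probability p_mitigation,
    for a failure mode that reaches its failure limit with probability
    p_reach_limit over the time horizon of interest (assumed combination rule)."""
    unmitigated = p_reach_limit
    mitigated = p_reach_limit * (1.0 - p_mitigation)
    return unmitigated - mitigated, mitigated

p_reach = 2.0e-3     # P[failure mode reaches its failure limit within the time-to-limit]
p_mitig = 0.95       # P[mitigation succeeds before the limit is reached]

reduction, residual = risk_reduction(p_reach, p_mitig)
print(f"risk reduction = {reduction:.2e}, residual failure probability = {residual:.2e}")
```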
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
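A toy illustration of the decomposition idea is sketched below (it is not SHARPE itself): two independent subsystems are each solved as a small continuous-time Markov chain, and the combined failure probability, including a failure state that arises from non-failed (degraded) subsystem states, is assembled afterwards. The rates and the system failure rule are invented.

```python
import numpy as np
from scipy.linalg import expm

def state_probs(Q, t, p0):
    """State probabilities of a continuous-time Markov chain with generator Q at time t."""
    return p0 @ expm(Q * t)

lam, mu = 1e-3, 1e-2     # assumed component failure and repair rates (per hour)

# subsystem A: two redundant components; states = (2 up, 1 up, 0 up), total loss absorbing
QA = np.array([[-2 * lam, 2 * lam, 0.0],
               [mu, -(mu + lam), lam],
               [0.0, 0.0, 0.0]])
# subsystem B: a single unrepairable component; states = (up, down)
QB = np.array([[-lam, lam],
               [0.0, 0.0]])

t = 1000.0
pA = state_probs(QA, t, np.array([1.0, 0.0, 0.0]))
pB = state_probs(QB, t, np.array([1.0, 0.0]))

# combined-system failure: A totally lost, or B lost while A is only degraded --
# the second term is a failure state arising from non-failure subsystem states
p_fail = pA[2] + pA[1] * pB[1]
print(f"combined failure probability at t = {t:.0f} h: {p_fail:.3e}")
```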
A Numerical Round Robin for the Reliability Prediction of Structural Ceramics
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Janosik, Lesley A.
1993-01-01
A round robin has been conducted on integrated fast fracture design programs for brittle materials. An informal working group (WELFEP: WEakest Link failure probability prediction by Finite Element Postprocessors) was formed to discuss and evaluate the implementation of the programs examined in the study. Results from the study have provided insight into the differences between the various programs examined. Conclusions from the study show that when brittle materials are used in design, the analyst must understand how to apply the concepts presented herein to failure probability analysis.
Assessing changes in failure probability of dams in a changing climate
NASA Astrophysics Data System (ADS)
Mallakpour, I.; AghaKouchak, A.; Moftakhari, H.; Ragno, E.
2017-12-01
Dams are crucial infrastructure and provide resilience against hydrometeorological extremes (e.g., droughts and floods). In 2017, California experienced a series of flooding events terminating a 5-year drought and leading to incidents such as the structural failure of Oroville Dam's spillway. Because of the large socioeconomic repercussions of such incidents, it is of paramount importance to evaluate dam failure risks associated with projected shifts in the streamflow regime. This becomes even more important as the current procedures for the design of hydraulic structures (e.g., dams, bridges, spillways) are based on the so-called stationarity assumption. Yet, changes in climate are anticipated to result in changes in the statistics of river flow (e.g., more extreme floods) and possibly to increase the failure probability of already aging dams. Here, we examine changes in discharge under two representative concentration pathways (RCPs): RCP4.5 and RCP8.5. In this study, we used routed daily streamflow data from ten global climate models (GCMs) in order to investigate possible climate-induced changes in streamflow in northern California. Our results show that while the average flow does not show a significant change, extreme floods are projected to increase in the future. Using extreme value theory, we estimate changes in the return periods of 50-year and 100-year floods in the current and future climates. Finally, we use the historical and future return periods to quantify changes in the failure probability of dams in a warming climate.
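The return-period calculation can be sketched as follows: fit a GEV distribution to annual peak flows for a current and a future climate and translate a given flood magnitude into its return period under each. The synthetic peak flows below are invented stand-ins for the routed GCM streamflow used in the study.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# synthetic annual peak flows (m^3/s) standing in for routed GCM streamflow
current_peaks = genextreme.rvs(c=-0.1, loc=800.0, scale=250.0, size=60, random_state=rng)
future_peaks = genextreme.rvs(c=-0.1, loc=900.0, scale=320.0, size=60, random_state=rng)

fit_cur = genextreme.fit(current_peaks)
fit_fut = genextreme.fit(future_peaks)

# magnitude of the 100-year flood in the current climate ...
q100_current = genextreme.isf(1.0 / 100.0, *fit_cur)
# ... and its return period (1 / annual exceedance probability) in the future climate
return_period_future = 1.0 / genextreme.sf(q100_current, *fit_fut)

print(f"current-climate 100-year flood: {q100_current:.0f} m^3/s")
print(f"return period of that flood in the future climate: {return_period_future:.0f} years")
```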
Design with brittle materials - An interdisciplinary educational program
NASA Technical Reports Server (NTRS)
Mueller, J. I.; Bollard, R. J. H.; Hartz, B. J.; Kobayashi, A. S.; Love, W. J.; Scott, W. D.; Taggart, R.; Whittemore, O. J.
1980-01-01
A series of interdisciplinary design courses being offered to senior and graduate engineering students at the University of Washington is described. Attention is given to the concepts and some of the details of group design projects that have been undertaken during the past two years. It is noted that ceramic materials normally demonstrate a large scatter in strength properties. As a consequence, when designing with these materials, the conventional 'mil standards' design stresses with acceptable margins of safety cannot be employed and the designer is forced to accept a probable number of failures in structures of a given brittle material. It is this prediction of the probability of failure for structures of given, well-characterized materials that forms the basis for this series of courses.
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
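A minimal sketch of the four claimed steps is given below; the logistic probability model, the cost-based threshold rule and every parameter are invented placeholders rather than the patented method.

```python
import math

def memory_failure_probability(correctable_error_rate, temperature_c):
    """Toy logistic model mapping monitored conditions to a failure probability."""
    score = 0.8 * math.log1p(correctable_error_rate) + 0.05 * (temperature_c - 40.0)
    return 1.0 / (1.0 + math.exp(-(score - 4.0)))

def failure_probability_threshold(checkpoint_cost, job_loss_cost):
    """Notify once the expected loss from staying silent exceeds the cost of acting."""
    return checkpoint_cost / job_loss_cost

def maybe_notify(correctable_error_rate, temperature_c,
                 checkpoint_cost=1.0, job_loss_cost=50.0):
    p = memory_failure_probability(correctable_error_rate, temperature_c)
    threshold = failure_probability_threshold(checkpoint_cost, job_loss_cost)
    return p > threshold, p, threshold      # signal for the runtime to act on

signal, p, thr = maybe_notify(correctable_error_rate=120.0, temperature_c=70.0)
print(f"p_fail = {p:.3f}, threshold = {thr:.3f}, notify = {signal}")
```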
Probabilistic structural analysis methods for space transportation propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12-15, 2015. Harajli, Marwan M. (Graduate Student, Dept. of Civil and Environ...). ...criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure probability.
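For reference, the buffered failure probability (buffered probability of exceedance) named in the title can be estimated from samples via the minimization formula bPOE_z(X) = min_{a>=0} E[(a(X-z)+1)^+]; the sketch below does only that, with an invented response distribution and threshold, and does not implement the paper's importance sampling.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bpoe(samples, z):
    """Sample-based buffered probability of exceedance at threshold z."""
    x = np.asarray(samples, dtype=float)
    # bPOE_z(X) = min over a >= 0 of E[(a*(X - z) + 1)^+]
    obj = lambda a: np.mean(np.maximum(a * (x - z) + 1.0, 0.0))
    res = minimize_scalar(obj, bounds=(0.0, 1e6), method="bounded")
    return min(res.fun, 1.0)

rng = np.random.default_rng(3)
response = rng.lognormal(mean=0.0, sigma=0.4, size=200_000)   # assumed load effect
threshold = 2.0                                               # assumed capacity

print(f"failure probability          = {np.mean(response > threshold):.4f}")
print(f"buffered failure probability = {bpoe(response, threshold):.4f}")  # always >= the above
```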
Analysis of asteroid (216) Kleopatra using dynamical and structural constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirabayashi, Masatoshi; Scheeres, Daniel J., E-mail: masatoshi.hirabayashi@colorado.edu
This paper evaluates a dynamically and structurally stable size for Asteroid (216) Kleopatra. In particular, we investigate two different failure modes: material shedding from the surface and structural failure of the internal body. We construct zero-velocity curves in the vicinity of this asteroid to determine surface shedding, while we utilize a limit analysis to calculate the lower and upper bounds of structural failure under the zero-cohesion assumption. Surface shedding does not occur at the current spin period (5.385 hr) and cannot directly initiate the formation of the satellites. On the other hand, this body may be close to structural failure; in particular, the neck may be situated near a plastic state. In addition, the neck's sensitivity to structural failure changes as the body size varies. We conclude that plastic deformation has probably occurred around the neck part in the past. If the true size of this body is established through additional measurements, this method will provide strong constraints on the current friction angle for the body.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
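The contrast between crude Monte Carlo and importance sampling can be sketched for a linear strength-minus-stress limit state, for which the exact answer is available; the distributions and parameters are illustrative assumptions, and the importance density is simply the standard normal margin variable re-centred at the most probable failure point.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu_R, sd_R = 100.0, 8.0       # lamina strength, MPa
mu_S, sd_S = 60.0, 6.0        # applied stress, MPa

beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_exact = norm.cdf(-beta)                       # exact for this linear normal case

n = 100_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
pf_crude = np.mean(R - S <= 0.0)                 # crude Monte Carlo (few or no hits)

# importance sampling on the standardized margin U = (R - S - mu_M)/sd_M ~ N(0, 1):
# sample U from a normal centred at the most probable failure point u* = -beta
u = rng.normal(-beta, 1.0, n)
weights = norm.pdf(u) / norm.pdf(u, loc=-beta)   # likelihood ratio
pf_is = np.mean((u <= -beta) * weights)

print(f"exact Pf = {pf_exact:.2e}, crude MC = {pf_crude:.2e}, IS = {pf_is:.2e}")
```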
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbo pump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade were calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed and the uncertainties associated with damage initiation and damage propagation due to different load cycle were quantified. Evaluations on the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
Three-Dimensional Geometric Nonlinear Contact Stress Analysis of Riveted Joints
NASA Technical Reports Server (NTRS)
Shivakumar, Kunigal N.; Ramanujapuram, Vivek
1998-01-01
The problems associated with fatigue were brought into the forefront of research by the explosive decompression and structural failure of Aloha Airlines Flight 243 in 1988. The structural failure of this airplane has been attributed to debonding and multiple cracking along the longitudinal lap splice riveted joint in the fuselage. This crash created what may be termed a minor "Structural Integrity Revolution" in the commercial transport industry. Major steps have been taken by the manufacturers, operators and authorities to improve the structural airworthiness of the aging fleet of airplanes. Notwithstanding this considerable effort, there are still outstanding issues and concerns related to the formation of Widespread Fatigue Damage, which is believed to have been a contributing factor in the probable cause of the Aloha accident. The lesson from this accident was that Multiple-Site Damage (MSD) in "aging" aircraft can lead to extensive aircraft damage. A strong candidate location in which MSD is highly likely to occur is the riveted lap joint.
Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10^-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10^-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25 times higher than it is for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber reinforced materials. There is also a strong size effect, partly similar to that of Type 1, while the curves of log-strength versus log-size for different sizes could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link or fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failures. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a differently important role under failure scenarios. When calculating the failure probability of an LT, the importance of every one of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
Reliability analysis of structures under periodic proof tests in service
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.
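A crude Monte Carlo sketch of this setting is given below: strength degrades in service, a proof test is applied periodically and structures that fail it are replaced, and the probability of at least one in-service failure over the design life is estimated. The degradation law, load model and all numbers are invented; the paper's treatment of crack initiation, growth and strength degradation is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim, life_years = 200_000, 30
proof_interval, proof_load = 5, 110.0        # proof test every 5 years, at 110 units

def new_strength(size):
    # as-built ultimate strength of a (re)built structure
    return rng.normal(150.0, 12.0, size)

strength = new_strength(n_sim)
failed_in_service = np.zeros(n_sim, dtype=bool)

for year in range(1, life_years + 1):
    strength = strength - rng.uniform(0.5, 1.5, n_sim)   # crude annual degradation
    if year % proof_interval == 0:
        # structures failing the proof test are replaced, not counted as service failures
        strength = np.where(strength <= proof_load, new_strength(n_sim), strength)
    annual_peak_load = rng.gumbel(loc=60.0, scale=7.0, size=n_sim)
    failed_in_service |= annual_peak_load >= strength

print(f"P(at least one in-service failure in {life_years} years) ~ "
      f"{failed_in_service.mean():.2e}")
```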
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
The influence of microstructure on the probability of early failure in aluminum-based interconnects
NASA Astrophysics Data System (ADS)
Dwyer, V. M.
2004-09-01
For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
14 CFR 25.571 - Damage-tolerance and fatigue evaluation of structure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... contribute to a catastrophic failure (such as wing, empennage, control surfaces and their systems, the... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Structure Fatigue Evaluation § 25... and sonic excitation environment, that— (1) Sonic fatigue cracks are not probable in any part of the...
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
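As a rough numerical companion to the abstract's claim that parameter uncertainty increases the expected frequency of failures, the sketch below simulates the log-normal case: the threshold is set at the fitted 99th percentile from a finite sample, and the realized exceedance probability under the true distribution is averaged over repetitions. The sample size, parameters and nominal level are assumptions chosen for illustration, not values from the article.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_true, sigma_true = 0.0, 1.0          # true log-scale parameters (assumed)
n, nominal_p, reps = 30, 0.01, 20_000

z = norm.ppf(1.0 - nominal_p)
exceed = np.empty(reps)
for r in range(reps):
    logs = rng.normal(mu_true, sigma_true, n)      # log of the observed risk factor
    mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
    threshold = mu_hat + z * sigma_hat             # fitted 99% quantile (log scale)
    exceed[r] = norm.sf(threshold, loc=mu_true, scale=sigma_true)

print(exceed.mean())    # expected failure frequency, roughly 0.015 for n = 30,
                        # noticeably above the nominal 0.01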
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
14 CFR 27.571 - Fatigue evaluation of flight structure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... § 27.309, except that maneuvering load factors need not exceed the maximum values expected in operation... paragraph (a)(3) of this section. (b) Fatigue tolerance evaluation. It must be shown that the fatigue tolerance of the structure ensures that the probability of catastrophic fatigue failure is extremely remote...
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
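A generic illustration of the conditional-sampling idea described above, not the authors' specific bounding-set construction: if the failure set is known to lie inside a bounding region whose probability is available analytically, samples are drawn only inside that region and the conditional failure estimate is scaled by the region's probability. The limit state, parameter bounds and sample size below are invented for the example.

import numpy as np

rng = np.random.default_rng(1)

def g(x):                        # hypothetical limit state; failure when g < 0
    return 1.8 - x[:, 0] - x[:, 1]

# Uncertain parameters uniform on [-1, 1]^2; failure requires x0 + x1 > 1.8,
# which can only happen inside the box B = [0.8, 1] x [0.8, 1].
p_box = (0.2 / 2.0) ** 2         # P(B) = 0.01 under the uniform distribution

n = 100_000
x = rng.uniform(0.8, 1.0, size=(n, 2))      # conditional samples inside B
p_fail_given_box = np.mean(g(x) < 0.0)
print(p_box * p_fail_given_box)             # ~0.005; exact value is (0.2**2 / 2) / 4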
NASA Technical Reports Server (NTRS)
Thomas, J. M.; Hanagud, S.
1975-01-01
The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.
1980-03-14
Program inputs include the probability of element failure, the standard deviation of the relative error of the element weights, the standard deviation of the phase error, the number of elements, and the weight structures in the x and y coordinates.
Determination of failure limits for sterilizable solid rocket motor
NASA Technical Reports Server (NTRS)
Lambert, W. L.; Mastrolia, E. J.; Mcconnell, J. D.
1974-01-01
A structural evaluation to establish probable failure limits and a series of environmental tests involving temperature cycling, sustained acceleration, and vibration were conducted on an 18-inch diameter solid rocket motor. Despite the fact that thermal, acceleration and vibration loads representing a severe overtest of conventional environmental requirements were imposed on the sterilizable motor, no structural failure of the grain or flexible support system was detected. The following significant conclusions are considered justified: (1) the flexible grain retention system, which permitted heat sterilization at 275 F on the test motor, can readily be adopted to meet the environmental requirements of an operational motor design; and (2) if further substantiation of structural integrity is desired, the motor used is considered acceptable for static firing.
Bridge reliability assessment based on the PDF of long-term monitored extreme strains
NASA Astrophysics Data System (ADS)
Jiao, Meiju; Sun, Limin
2011-04-01
Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analysis of the randomness in loads and their effects on structures. A novel approach that combines SHM systems with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system is presented in this paper. In this study, the reliability of the steel girder of the cable-stayed bridge is expressed directly as a failure probability rather than as the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the responses of the structure, a formulation of the failure probability was deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations based on the monitoring data was evaluated and verified. The Donghai Bridge was then taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results are discussed. Finally, the sensitivity and accuracy of the novel approach, compared with FORM, are discussed.
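The failure-probability formulation alluded to in the abstract is, in its simplest form, the classical load-resistance convolution Pf = ∫ F_R(s) f_S(s) ds. A hedged sketch follows, with a monitored-strain density f_S and an assumed resistance distribution F_R whose parameters are placeholders rather than values from the Donghai Bridge study.

from scipy.stats import norm
from scipy.integrate import quad

f_S = norm(loc=350e-6, scale=40e-6).pdf     # monitored extreme strain density (assumed)
F_R = norm(loc=600e-6, scale=60e-6).cdf     # assumed resistance (strain capacity) CDF

p_fail, _ = quad(lambda s: F_R(s) * f_S(s), 0.0, 1.5e-3,
                 points=[3e-4, 4e-4, 5e-4, 6e-4])   # break points help the quadrature
print(p_fail)    # equals P(R < S); ~2.6e-4 for these assumed numbers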
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are widely applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analysing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to the application failure probability. In an optical grid, when an application described by a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
User Guide to the Aircraft Cumulative Probability Chart Template
2009-07-01
Defence Science and Technology Organisation, AeroStructures Technologies, DSTO-TR-2332. ABSTRACT: To ensure aircraft structural integrity is maintained to an acceptable level... cracking (or failure) which may be used to assess the life of aircraft structures. Approved for public release. DSTO Defence Science and Technology Organisation, 506 Lorimer St, Fishermans Bend, Victoria 3207, Australia.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using the theory of analytic hierarchy process and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built by hazard identification of third-party damage. The fuzzy evaluation of the basic event probabilities was conducted by the expert judgment method using membership functions of fuzzy sets. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure possibility of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a certain large provincial capital city as an example, the risk assessment structure of the method was shown to conform to the actual situation, which provides a basis for safety risk prevention.
Bazant, Zdenĕk P; Pang, Sze-Dai
2006-06-20
In mechanical design as well as protection from various natural hazards, one must ensure an extremely low failure probability such as 10^-6. How to achieve that goal is adequately understood only for the limiting cases of brittle or ductile structures. Here we present a theory to do that for the transitional class of quasibrittle structures, having brittle constituents and characterized by nonnegligible size of material inhomogeneities. We show that the probability distribution of strength of the representative volume element of material is governed by the Maxwell-Boltzmann distribution of atomic energies and the stress dependence of activation energy barriers; that it is statistically modeled by a hierarchy of series and parallel couplings; and that it consists of a broad Gaussian core having a grafted far-left power-law tail with zero threshold and amplitude depending on temperature and load duration. With increasing structure size, the Gaussian core shrinks and Weibull tail expands according to the weakest-link model for a finite chain of representative volume elements. The model captures experimentally observed deviations of the strength distribution from Weibull distribution and of the mean strength scaling law from a power law. These deviations can be exploited for verification and calibration. The proposed theory will increase the safety of concrete structures, composite parts of aircraft or ships, microelectronic components, microelectromechanical systems, prosthetic devices, etc. It also will improve protection against hazards such as landslides, avalanches, ice breaks, and rock or soil failures.
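A minimal sketch of the finite weakest-link chain invoked in the abstract: the structure survives only if every representative volume element (RVE) survives, so the failure probability scales as 1 - (1 - P1)^N with the number of RVEs N. The single-RVE strength distribution used here is a plain Gaussian stand-in, not the grafted Gaussian/power-law distribution derived in the paper, and the mean strength and coefficient of variation are assumptions.

import numpy as np
from scipy.stats import norm

def chain_failure_prob(sigma, n_rve, mean=10.0, cov=0.15):
    p1 = norm.cdf(sigma, loc=mean, scale=cov * mean)   # single-RVE failure probability
    return 1.0 - (1.0 - p1) ** n_rve                   # weakest-link chain of N RVEs

stresses = np.linspace(0.5, 10.0, 4000)
for n_rve in (1, 10, 1000):
    pf = chain_failure_prob(stresses, n_rve)
    # stress at which the chain first reaches a 1e-6 failure probability
    print(n_rve, stresses[np.searchsorted(pf, 1e-6)])  # allowable stress drops as N grows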
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response of a composite blade to high velocity impact is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluation are in terms of probability cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant for variation of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah; Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) and a jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation that is generated is calculated. The fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, a failure probability of 4.22 × 10^-6 was obtained.
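For independent basic events, the fault-tree arithmetic behind such an analysis reduces to AND/OR gate combinations; the sketch below illustrates only that step with invented basic-event probabilities and does not reproduce the storage-tank fault tree of the paper.

import numpy as np

def or_gate(probs):
    # at least one independent basic event occurs
    return 1.0 - np.prod(1.0 - np.asarray(probs))

def and_gate(probs):
    # all independent basic events occur
    return float(np.prod(probs))

# Hypothetical basic events for a jet-fire branch: a leak OR-ed from two causes,
# then AND-ed with an ignition probability.
p_leak = or_gate([2.0e-3, 5.0e-4])       # corrosion pinhole, flange failure (assumed)
p_jet_fire = and_gate([p_leak, 0.1])     # immediate ignition given a leak (assumed)
print(p_jet_fire)                        # ~2.5e-4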
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
Kiryu, Hisanori; Kin, Taishin; Asai, Kiyoshi
2007-02-15
Recent transcriptomic studies have revealed the existence of a considerable number of non-protein-coding RNA transcripts in higher eukaryotic cells. To investigate the functional roles of these transcripts, it is of great interest to find conserved secondary structures from multiple alignments on a genomic scale. Since multiple alignments are often created using alignment programs that neglect the special conservation patterns of RNA secondary structures for computational efficiency, alignment failures can cause potential risks of overlooking conserved stem structures. We investigated the dependence of the accuracy of secondary structure prediction on the quality of alignments. We compared three algorithms that maximize the expected accuracy of secondary structures as well as other frequently used algorithms. We found that one of our algorithms, called McCaskill-MEA, was more robust against alignment failures than others. The McCaskill-MEA method first computes the base pairing probability matrices for all the sequences in the alignment and then obtains the base pairing probability matrix of the alignment by averaging over these matrices. The consensus secondary structure is predicted from this matrix such that the expected accuracy of the prediction is maximized. We show that the McCaskill-MEA method performs better than other methods, particularly when the alignment quality is low and when the alignment consists of many sequences. Our model has a parameter that controls the sensitivity and specificity of predictions. We discussed the uses of that parameter for multi-step screening procedures to search for conserved secondary structures and for assigning confidence values to the predicted base pairs. The C++ source code that implements the McCaskill-MEA algorithm and the test dataset used in this paper are available at http://www.ncrna.org/papers/McCaskillMEA/. Supplementary data are available at Bioinformatics online.
King, C.-Y.; Luo, G.
1990-01-01
Electric resistance and emissions of hydrogen and radon isotopes of concrete (which is somewhat similar to fault-zone materials) under increasing uniaxial compression were continuously monitored to check whether they show any pre- and post-failure changes that may correspond to similar changes reported for earthquakes. The results show that all these parameters generally begin to increase when the applied stresses reach 20% to 90% of the corresponding failure stresses, probably due to the occurrence and growth of dilatant microcracks in the specimens. The prefailure changes have different patterns for different specimens, probably because of differences in spatial and temporal distributions of the microcracks. The resistance shows large co-failure increases, and the gas emissions show large post-failure increases. The post-failure increase of radon persists longer and stays at a higher level than that of hydrogen, suggesting a difference in the emission mechanisms for these two kinds of gases. The H2 increase may be mainly due to chemical reaction at the crack surfaces while they are fresh, whereas the Rn increases may be mainly the result of the increased emanation area of such surfaces. The results suggest that monitoring of resistivity and gas emissions may be useful for predicting earthquakes and failures of concrete structures. © 1990 Birkhäuser Verlag.
Advances on the Failure Analysis of the Dam-Foundation Interface of Concrete Dams.
Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián
2015-12-02
Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties in models and parameters, and a strong non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices and prudent, conservative design approaches based on the safety factor concept. Yet a sound, deep knowledge of some aspects of this failure mode remains unveiled, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on the spatial variability of strength parameters and data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between the probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
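A hedged sketch of the imputation idea the abstract describes, not the authors' exact procedure: instead of thresholding a model-based probability into a yes/no code, the condition status is drawn from that probability within repeated bootstrap replicates and the quantity of interest is averaged. The probability model and prevalence below are synthetic.

import numpy as np

rng = np.random.default_rng(2)
# hypothetical model-based probabilities of severe renal failure, one per patient
p_model = rng.beta(0.5, 10.0, size=50_000)

def imputed_prevalence(p, n_boot=200):
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(p), len(p))        # bootstrap resample of patients
        status = rng.random(len(p)) < p[idx]         # impute disease status from the model
        estimates.append(status.mean())
    return float(np.mean(estimates))

print(imputed_prevalence(p_model))      # close to the true prevalence p_model.mean()
print((p_model > 0.5).mean())           # thresholding badly understates prevalence here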
Load and resistance factor design of bridge foundations accounting for pile group-soil interaction.
DOT National Transportation Integrated Search
2015-11-01
Pile group foundations are used in most foundation solutions for transportation structures. Rigorous and reliable pile design methods are required to produce designs whose level of safety (probability of failure) is known. By utilizing recently dev...
Probability of in-vessel steam explosion-induced containment failure for a KWU PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esmaili, H.; Khatib-Rahbar, M.; Zuchuat, O.
During postulated core meltdown accidents in light water reactors, there is a likelihood of an in-vessel steam explosion when the melt contacts the coolant in the lower plenum. The objective of the work described in this paper is to determine the conditional probability of in-vessel steam explosion-induced containment failure for a Kraftwerk Union (KWU) pressurized water reactor (PWR). The energetics of the explosion depends on the mass of the molten fuel that mixes with the coolant and participates in the explosion and on the conversion of fuel thermal energy into mechanical work. The work can result in the generation of dynamic pressures that affect the lower head (and possibly lead to its failure), and it can cause acceleration of a slug (fuel and coolant material) upward that can affect the upper internal structures and vessel head and ultimately cause the failure of the upper head. If the upper head missile has sufficient energy, it can reach the containment shell and penetrate it. The analysis must, therefore, take into account all possible dissipation mechanisms.
Development of STS/Centaur failure probabilities liftoff to Centaur separation
NASA Technical Reports Server (NTRS)
Hudson, J. M.
1982-01-01
The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.
Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel
NASA Astrophysics Data System (ADS)
Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung
2017-04-01
The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework that enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, which is located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework that accounts for the uncertainties in return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.
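The final integration step, combining the individual failure modes into an overall dike failure probability, can be sketched as a series-system calculation if the modes are treated as independent; the independence assumption and the mode probabilities below are made only for illustration and are not values from the study.

mode_probs = {
    "overtopping": 0.02,
    "surface_erosion": 0.08,
    "mass_failure": 0.05,
    "toe_sliding_or_overturning": 0.40,
}

# dike fails if any mode occurs: P_overall = 1 - prod(1 - P_mode)
survival = 1.0
for p in mode_probs.values():
    survival *= (1.0 - p)
p_overall = 1.0 - survival
print(round(p_overall, 3))    # ~0.49, dominated by the toe sliding/overturning mode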
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL; (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system; (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system; and (iv) multiple WLs, multiple SLs and multiple sublinks in each SL, with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail, and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and also on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.
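For the simplest configuration (one WL, one SL) under a common monotone temperature ramp, PLOAS reduces to the probability that the strong link's failure temperature is reached before the weak link's; the simple-random-sampling sketch below illustrates that case with assumed normal failure-temperature distributions and invented parameters, not the report's data.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1_000_000
T_wl = rng.normal(420.0, 15.0, n)     # weak-link failure temperatures (assumed)
T_sl = rng.normal(520.0, 25.0, n)     # strong-link failure temperatures (assumed)

# Under a common monotone temperature ramp, the SL fails first exactly when its
# failure temperature is lower than the WL's.
ploas = np.mean(T_sl < T_wl)
print(ploas, norm.cdf(-100.0 / np.hypot(15.0, 25.0)))   # MC estimate vs. exact ~3e-4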
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Development of GENOA Progressive Failure Parallel Processing Software Systems
NASA Technical Reports Server (NTRS)
Abdi, Frank; Minnetyan, Levon
1999-01-01
A capability consisting of software development and experimental techniques has been developed and is described. The capability is integrated into GENOA-PFA to model polymer matrix composite (PMC) structures. The capability considers the physics and mechanics of composite materials and structure by integration of a hierarchical multilevel macro-scale (lamina, laminate, and structure) and micro-scale (fiber, matrix, and interface) simulation analyses. The modeling involves (1) ply layering methodology utilizing FEM elements with through-the-thickness representation, (2) simulation of effects of material defects and conditions (e.g., voids, fiber waviness, and residual stress) on global static and cyclic fatigue strengths, (3) including material nonlinearities (by updating properties periodically) and geometrical nonlinearities (by Lagrangian updating), (4) simulating crack initiation and growth to failure under static, cyclic, creep, and impact loads, (5) progressive fracture analysis to determine durability and damage tolerance, (6) identifying the percent contribution of various possible composite failure modes involved in critical damage events, and (7) determining sensitivities of failure modes to design parameters (e.g., fiber volume fraction, ply thickness, fiber orientation, and adhesive-bond thickness). GENOA-PFA progressive failure analysis is now ready for use to investigate the effects on structural responses of PMC material degradation from damage induced by static, cyclic (fatigue), creep, and impact loading in 2D/3D PMC structures subjected to hygrothermal environments. Its use will significantly facilitate targeting design parameter changes that will be most effective in reducing the probability of a given failure mode occurring.
Bristow, Michael R; Kao, David P; Breathett, Khadijah K; Altman, Natasha L; Gorcsan, John; Gill, Edward A; Lowes, Brian D; Gilbert, Edward M; Quaife, Robert A; Mann, Douglas L
2017-11-01
Diagnosis, prognosis, treatment, and development of new therapies for diseases or syndromes depend on a reliable means of identifying phenotypes associated with distinct predictive probabilities for these various objectives. Left ventricular ejection fraction (LVEF) provides the current basis for combined functional and structural phenotyping in heart failure by classifying patients as those with heart failure with reduced ejection fraction (HFrEF) and those with heart failure with preserved ejection fraction (HFpEF). Recently the utility of LVEF as the major phenotypic determinant of heart failure has been challenged based on its load dependency and measurement variability. We review the history of the development and adoption of LVEF as a critical measurement of LV function and structure and demonstrate that, in chronic heart failure, load dependency is not an important practical issue, and we provide hemodynamic and molecular biomarker evidence that LVEF is superior or equal to more unwieldy methods of identifying phenotypes of ventricular remodeling. We conclude that, because it reliably measures both left ventricular function and structure, LVEF remains the best current method of assessing pathologic remodeling in heart failure in both individual clinical and multicenter group settings. Because of the present and future importance of left ventricular phenotyping in heart failure, LVEF should be measured by using the most accurate technology and methodologic refinements available, and improved characterization methods should continue to be sought. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
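A toy simulation of the finite-time failure probability the article formalizes: the probability that a risk process, here a compound Poisson loss stream net of a constant inflow, crosses a linearly growing critical level within a finite horizon. The process, the parameters and the crossing rule below are illustrative assumptions, not the models of the paper.

import numpy as np

rng = np.random.default_rng(4)

def fails_by_T(T=10.0, lam=0.8, loss_mean=1.2, premium=1.0, c0=5.0, slope=0.2):
    """Does the net risk (losses - premium*t) cross the level c0 + slope*t before T?"""
    t, losses = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # time of the next loss event
        if t > T:
            return False
        losses += rng.exponential(loss_mean)   # loss amount at that event
        # between events the net risk only drifts down while the critical level
        # grows, so it suffices to test immediately after each jump
        if losses - premium * t > c0 + slope * t:
            return True

reps = 100_000
print(np.mean([fails_by_T() for _ in range(reps)]))   # estimated P(failure within [0, T])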
Electromigration resistance in a short three-contact interconnect tree
NASA Astrophysics Data System (ADS)
Chang, C. W.; Choi, Z.-S.; Thompson, C. V.; Gan, C. L.; Pey, K. L.; Choi, W. K.; Hwang, N.
2006-05-01
Electromigration has been characterized in via-terminated interconnect lines with additional vias in the middle, creating two adjacent segments that can be stressed independently. The mortality of a segment was found to depend on the direction and magnitude of the current in the adjacent segment, confirming that there is not a fixed value of the product of the current density and segment length, jL, that defines immortality in individual segments that are part of a multisegment interconnect tree. Instead, it is found that the probability of failure of a multisegment tree increases with the increasing value of an effective jL product defined in earlier work. However, contrary to expectations, the failures were still observed when (jL)eff was less than the critical jL product for which lines were found to be immortal in single-segment test structures. It is argued that this is due to reservoir effects associated with unstressed segments or due to liner failure at the central via. Multisegment test structures are therefore shown to reveal more types of failure mechanisms and mortality conditions that are not found in tests with single-segment structures.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
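The sketch below shows plain (non-adaptive) importance sampling with a fixed shifted density, which is the building block the AIS method refines: sample near the failure region and reweight by the likelihood ratio. The limit state and shift vector are assumptions chosen so the exact answer is known for checking; the adaptive and incremental updating of the sampling domain is not reproduced.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def g(u):                    # limit state in standard normal space; failure when g < 0
    return 4.0 - u[:, 0] - u[:, 1]

n = 20_000
shift = np.array([2.0, 2.0])                   # sampling density centred near the failure region
u = rng.standard_normal((n, 2)) + shift
w = np.exp(-u @ shift + 0.5 * shift @ shift)   # N(0, I) / N(shift, I) likelihood ratio
pf = np.mean((g(u) < 0.0) * w)
print(pf, norm.cdf(-4.0 / np.sqrt(2.0)))       # estimate vs. exact ~2.3e-3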
Chambers, David W
2010-01-01
Every plan contains risk. To proceed without planning some means of managing that risk is to court failure. The basic logic of risk is explained. It consists in identifying a threshold where some corrective action is necessary, the probability of exceeding that threshold, and the attendant cost should the undesired outcome occur. This is the probable cost of failure. Various risk categories in dentistry are identified, including lack of liquidity; poor quality; equipment or procedure failures; employee slips; competitive environments; new regulations; unreliable suppliers, partners, and patients; and threats to one's reputation. It is prudent to make investments in risk management to the extent that the cost of managing the risk is less than the probable loss due to risk failure and when risk management strategies can be matched to type of risk. Four risk management strategies are discussed: insurance, reducing the probability of failure, reducing the costs of failure, and learning. A risk management accounting of the financial meltdown of October 2008 is provided.
Fishnet model for failure probability tail of nacre-like imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick, occupying <5% of volume. These properties inspire manmade biomimetic materials. For engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: a ~10% strength increase at a tail failure probability of 10^-6 and a 1 to 2 orders of magnitude tail probability decrease at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.
Statistically based material properties: A military handbook-17 perspective
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Vangel, Mark G.
1990-01-01
The statistical procedures and their importance in obtaining composite material property values in designing structures for aircraft and military combat systems are described. The property value is such that the strength exceeds this value with a prescribed probability with 95 percent confidence in the assertion. The survival probabilities are the 99th percentile and 90th percentile for the A and B basis values respectively. The basis values for strain to failure measurements are defined in a similar manner. The B value is the primary concern.
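For normally distributed strength data, the B-basis value described above (90th-percentile survival asserted with 95 percent confidence) can be computed with the one-sided tolerance factor from the noncentral t distribution. The sketch below shows that calculation only; the handbook's full procedure (distribution checks, alternative small-sample methods) is not reproduced, and the coupon data are synthetic.

import numpy as np
from scipy.stats import norm, nct

def b_basis_normal(data, p=0.90, conf=0.95):
    """Lower tolerance limit exceeded by proportion p of the population with confidence conf."""
    n = len(data)
    z_p = norm.ppf(p)
    k = nct.ppf(conf, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)   # one-sided tolerance factor
    return np.mean(data) - k * np.std(data, ddof=1)

rng = np.random.default_rng(6)
strengths = rng.normal(500.0, 30.0, 18)     # hypothetical coupon strengths (MPa)
print(b_basis_normal(strengths))            # a conservative lower bound on the 10th-percentile strength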
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
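The fusion step mentioned in the abstract can be sketched as an inverse-variance-weighted combination of unbiased estimators, which is the minimum-variance unbiased combination when the estimators are independent. The individual multi-fidelity importance-sampling estimators are not reproduced here, and the estimates and variances below are invented.

import numpy as np

def fuse(estimates, variances):
    """Inverse-variance-weighted fusion of independent unbiased estimators."""
    var = np.asarray(variances, dtype=float)
    w = (1.0 / var) / np.sum(1.0 / var)
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / var)
    return fused, fused_var

# hypothetical estimators produced with three different low-fidelity biasing densities
est = [9.2e-4, 1.1e-3, 8.7e-4]
var = [4.0e-8, 9.0e-8, 2.5e-8]
print(fuse(est, var))   # fused variance is smaller than that of any single estimator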
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
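A crude stand-in for the Bernstein-expansion-plus-optimization machinery: when the p-box is described by an interval on a distribution parameter, bounds on the failure probability can be bracketed by sweeping that interval. The polynomial requirement, the parameter interval and the grid search below are assumptions for illustration, not the paper's algorithm.

import numpy as np
from scipy.stats import norm

def failure_prob(mu):
    # failure when the polynomial requirement g(x) = 1 - x - 0.2*x**2 < 0, i.e. when
    # x exceeds the positive root (the negative root is far in the lower tail and neglected);
    # x ~ Normal(mu, 0.3) with mu only known to lie in an interval (the p-box).
    root = (-1.0 + np.sqrt(1.8)) / 0.4      # positive root of 0.2*x**2 + x - 1 = 0
    return norm.sf(root, loc=mu, scale=0.3)

mus = np.linspace(-0.2, 0.2, 81)            # sweep of the uncertain mean
pf = np.array([failure_prob(m) for m in mus])
print(pf.min(), pf.max())                   # best-case and worst-case failure probability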
Risk-based decision making to manage water quality failures caused by combined sewer overflows
NASA Astrophysics Data System (ADS)
Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.
2017-12-01
Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of paying penalties or suffering negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models; hence, any uncertainty in these models can have a significant effect on the decision-making process. This study outlines a risk-based decision-making approach to address water quality failures caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify the uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance, with a certain frequency, of a concentration-duration threshold based on Austrian emission standards for ammonia (De Toffol, 2006). For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); unlike the failure probability, however, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability instead of the failure probability is tested by comparing the solutions obtained from the mean-pbf and mean-pf optimizations.
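For readers unfamiliar with the buffered failure probability, here is a minimal sample-based sketch: it is, roughly, the fraction of the upper tail of the output distribution whose conditional mean reaches the failure threshold, so it is always at least as large as the plain exceedance probability. The spill-count samples below are hypothetical; in the study they come from Monte Carlo runs of the urban drainage model.

```python
# Minimal sketch: buffered failure probability (pbf) vs. failure probability (pf)
# estimated from samples; the sample distribution is hypothetical.
import numpy as np

def buffered_failure_probability(samples, threshold):
    x = np.sort(np.asarray(samples, dtype=float))[::-1]           # descending
    if x[0] < threshold:
        return 0.0
    tail_means = np.cumsum(x) / np.arange(1, x.size + 1)          # mean of the top-k samples
    k = np.searchsorted(-tail_means, -threshold, side="right")    # largest k with tail mean >= threshold
    return k / x.size

def failure_probability(samples, threshold):
    return np.mean(np.asarray(samples, dtype=float) > threshold)

rng = np.random.default_rng(2)
annual_exceedances = rng.poisson(3.0, size=10_000)                # hypothetical spill-event counts
print("pf  =", failure_probability(annual_exceedances, 6))
print("pbf =", buffered_failure_probability(annual_exceedances, 6))
```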
Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method
Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan
2018-01-01
Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting life loss. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined to be suitable for Chinese dams from previous studies. The proposed method and model are applied to one reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. The risk analysis of the dam failure has essential significance for reducing dam failure probability and improving dam risk management level. PMID:29710824
14 CFR 23.613 - Material strength properties and design values.
Code of Federal Regulations, 2011 CFR
2011-01-01
...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
14 CFR 23.613 - Material strength properties and design values.
Code of Federal Regulations, 2012 CFR
2012-01-01
...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
14 CFR 23.613 - Material strength properties and design values.
Code of Federal Regulations, 2013 CFR
2013-01-01
...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
14 CFR 25.613 - Material strength properties and material design values.
Code of Federal Regulations, 2011 CFR
2011-01-01
... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...
14 CFR 27.613 - Material strength properties and design values.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 23.613 - Material strength properties and design values.
Code of Federal Regulations, 2014 CFR
2014-01-01
...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
14 CFR 25.613 - Material strength properties and material design values.
Code of Federal Regulations, 2014 CFR
2014-01-01
... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...
14 CFR 29.613 - Material strength properties and design values.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 25.613 - Material strength properties and material design values.
Code of Federal Regulations, 2012 CFR
2012-01-01
... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...
14 CFR 27.613 - Material strength properties and design values.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 25.613 - Material strength properties and material design values.
Code of Federal Regulations, 2010 CFR
2010-01-01
... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...
14 CFR 29.613 - Material strength properties and design values.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 27.613 - Material strength properties and design values.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 27.613 - Material strength properties and design values.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 25.613 - Material strength properties and material design values.
Code of Federal Regulations, 2013 CFR
2013-01-01
... following probability: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent... elements would result in applied loads being safely distributed to other load carrying members, 90 percent...
14 CFR 23.613 - Material strength properties and design values.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: (1) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component; 99 percent probability... would result in applied loads being safely distributed to other load carrying members; 90 percent...
14 CFR 27.613 - Material strength properties and design values.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 29.613 - Material strength properties and design values.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 29.613 - Material strength properties and design values.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
14 CFR 29.613 - Material strength properties and design values.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Where applied loads are eventually distributed through a single member within an assembly, the failure of which would result in loss of structural integrity of the component, 99 percent probability with... elements would result in applied loads being safely distributed to other load-carrying members, 90 percent...
24 CFR 27.20 - Conditions of foreclosure sale.
Code of Federal Regulations, 2011 CFR
2011-04-01
... probable causes of project failure resulting in its default; (2) A financial analysis of the project... analysis of the project, including the condition of the structure and grounds, the need for rehabilitation... financial feasibility of the project after foreclosure and sale subject to the terms to be required by the...
NASA Astrophysics Data System (ADS)
Hanish Nithin, Anu; Omenzetter, Piotr
2017-04-01
Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA) and the Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely building initially an expensive and robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid in the decision-making process.
Ulusoy, Nuran
2017-01-01
The aim of this study was to evaluate the effects of two endocrown designs and computer-aided design/manufacturing (CAD/CAM) materials on the stress distribution and failure probability of restorations applied to a severely damaged, endodontically treated maxillary first premolar tooth (MFP). Two designs, without and with 3 mm intraradicular extensions, endocrown (E) and modified endocrown (ME), were modeled on a 3D finite element (FE) model of the MFP. Vitablocks Mark II (VMII), Vita Enamic (VE), and Lava Ultimate (LU) CAD/CAM materials were used for each design. von Mises and maximum principal stress values were evaluated, and the Weibull function was incorporated with the FE analysis to calculate the long-term failure probability. Regarding the stresses in enamel, for each material, the ME design transmitted less stress than the endocrown. During normal occlusal function, the overall failure probability was lowest for ME with VMII. The ME design with VE was the best restorative option for premolar teeth with extensive loss of coronal structure under high occlusal loads. Therefore, the ME design could be a favorable treatment option for MFPs with a missing palatal cusp. Among the CAD/CAM materials tested, VMII and VE were found to be more tooth-friendly than LU. PMID:29119108
A methodology for estimating risks associated with landslides of contaminated soil into rivers.
Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars
2014-02-15
Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslide in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities for failure are low; however, if a landslide occurs the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Reliability Analysis of a Glacier Lake Warning System Using a Bayesian Net
NASA Astrophysics Data System (ADS)
Sturny, Rouven A.; Bründl, Michael
2013-04-01
Besides structural mitigation measures like avalanche defense structures, dams and galleries, warning and alarm systems have become important measures for dealing with Alpine natural hazards. Integrating them into risk mitigation strategies and comparing their effectiveness with structural measures requires quantification of the reliability of these systems. However, little is known about how the reliability of warning systems can be quantified and which methods are suitable for comparing their contribution to risk reduction with that of structural mitigation measures. We present a reliability analysis of a warning system located in Grindelwald, Switzerland. The warning system was built for warning and protecting residents and tourists from glacier outburst floods as a consequence of a rapid drainage of the glacier lake. We have set up a Bayesian Net (BN) that allowed for a qualitative and quantitative reliability analysis. The Conditional Probability Tables (CPT) of the BN were determined according to manufacturers' reliability data for each component of the system as well as by assigning weights for specific BN nodes accounting for information flows and decision-making processes of the local safety service. The presented results focus on the two alerting units 'visual acoustic signal' (VAS) and 'alerting of the intervention entities' (AIE). For the summer of 2009, the reliability was determined to be 94 % for the VAS and 83 % for the AIE. The probability of occurrence of a major event was calculated as 0.55 % per day, resulting in an overall reliability of 99.967 % for the VAS and 99.906 % for the AIE. We concluded that a failure of the VAS alerting unit would be the consequence of a simultaneous failure of the four probes located in the lake and the gorge. Similarly, we deduced that the AIE would fail either if there were a simultaneous connectivity loss of the mobile and fixed network in Grindelwald, an Internet access loss, or a failure of the regional operations centre. However, the probability of a common failure of these components was assumed to be low. Overall it can be stated that, due to numerous redundancies, the investigated warning system is highly reliable and its influence on risk reduction is very high. Comparable studies in the future are needed to put these results in context and to gain more experience of how the reliability of warning systems can be determined in practice.
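The overall reliability figures quoted above can be reproduced with a one-line calculation, assuming independence between the occurrence of a major event and the alerting-unit failure; this is only a consistency check of the reported numbers, not the Bayesian Net itself.

```python
# Consistency check: overall daily reliability = 1 - P(event) * P(unit fails),
# using the event probability and unit reliabilities quoted in the abstract.
p_event = 0.0055                     # probability of a major lake outburst per day
for name, r_unit in [("visual acoustic signal (VAS)", 0.94),
                     ("alerting of intervention entities (AIE)", 0.83)]:
    overall = 1.0 - p_event * (1.0 - r_unit)
    print(f"{name}: overall daily reliability = {overall:.6f}")
```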
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung
2017-12-01
The reduction and re-oxidation of the anode have significant effects on the integrity of a solid oxide fuel cell (SOFC) sealed by glass-ceramic (GC). The mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM) in this paper, and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability owing to volumetric contraction and increasing porosity. Re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of the anode, cathode, electrolyte and GC all increase to 1, which is mainly due to the large linear strain rather than the decreasing porosity. The cathode and electrolyte fail as soon as the linear strains reach about 0.03% and 0.07%, respectively. Therefore, the re-oxidation should be controlled to ensure the integrity, and a lower re-oxidation temperature can decrease the stress and failure probability.
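The Weibull-type failure probability used with FE output is typically a survival product over the stressed elements. The sketch below shows that computation in its simplest volume-integrated form; the stresses, volumes, Weibull modulus and characteristic strength are hypothetical placeholders, not the SOFC data of the study.

```python
# Minimal sketch: Weibull failure probability from element stresses and volumes,
# P_f = 1 - exp(-sum_i (sigma_i/sigma0)^m * V_i/V0). Inputs are hypothetical.
import numpy as np

def weibull_failure_probability(stress, volume, m, sigma0, v0=1.0):
    s = np.clip(np.asarray(stress, dtype=float), 0.0, None)   # ignore compressive stresses
    risk = np.sum((s / sigma0) ** m * np.asarray(volume, dtype=float) / v0)
    return 1.0 - np.exp(-risk)

stress_mpa = np.array([42.0, 55.0, 61.0, 38.0])   # element maximum principal stresses
volume_mm3 = np.array([2.0, 1.5, 1.0, 2.5])
print(weibull_failure_probability(stress_mpa, volume_mm3, m=7.0, sigma0=120.0))
```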
Reducing fatigue damage for ships in transit through structured decision making
Nichols, J.M.; Fackler, P.L.; Pacifici, K.; Murphy, K.D.; Nichols, J.D.
2014-01-01
Research in structural monitoring has focused primarily on drawing inference about the health of a structure from the structure’s response to ambient or applied excitation. Knowledge of the current state can then be used to predict structural integrity at a future time and, in principle, allows one to take action to improve safety, minimize ownership costs, and/or increase the operating envelope. While much time and effort has been devoted toward data collection and system identification, research to-date has largely avoided the question of how to choose an optimal maintenance plan. This work describes a structured decision making (SDM) process for taking available information (loading data, model output, etc.) and producing a plan of action for maintaining the structure. SDM allows the practitioner to specify his/her objectives and then solves for the decision that is optimal in the sense that it maximizes those objectives. To demonstrate, we consider the problem of a Naval vessel transiting a fixed distance in varying sea-state conditions. The physics of this problem are such that minimizing transit time increases the probability of fatigue failure in the structural supports. It is shown how SDM produces the optimal trip plan in the sense that it minimizes both transit time and probability of failure in the manner of our choosing (i.e., through a user-defined cost function). The example illustrates the benefit of SDM over heuristic approaches to maintaining the vessel.
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Hunt, Ronderio LaDavis
In the early years of operation, emergency diesel generators had a minimal rate of demand failures. Emergency diesel generators are designed to operate as a backup when the main source of electricity has been disrupted. Recently, EDGs (emergency diesel generators) have been failing at NPPs (nuclear power plants) around the United States, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem of concern in the nuclear industry: the fleet averaged about one EDG demand failure per year in 1997, but an excessive event of four EDG demand failures in a single year occurred in 2011. To determine when the next excessive event might occur, and its possible causes, two analyses were conducted: a statistical analysis and a root cause analysis. In the statistical analysis, an extreme event probability approach was applied to determine the year of the next occurrence of an excessive event as well as the probability of that excessive event occurring. In the root cause analysis, the potential causes of the excessive event were examined by evaluating the EDG manufacturers, aging, policy changes and maintenance practices, and failure components, and by investigating the correlation between the demand failure data and historical data. Final results from the statistical analysis showed the expectation of an excessive event occurring within a fixed range of probability, and a wider range of probability from the extreme event probability approach. The root cause analysis of the demand failure data followed historical statistics for the EDG manufacturers, aging, and policy changes and maintenance practices, but indicated a possible cause of the excessive event in the failure components. The conclusions showed that predicting the next excessive demand failure year, its probability, and the next occurrence year of such failures with an acceptable confidence level was difficult, but that this type of failure will likely not be a 100 year event. Notably, the majority of the EDG demand failures since 2005 occurred within the main components. The overall analysis, based on the computed percentages, supports the statement that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will be to better determine the return period of the excessive event, once it has happened a second time, by implementing the extreme event probability approach.
Zhu, Yan Qiu; Sekine, Toshimori; Li, Yan Hui; Fay, Michael W; Zhao, Yi Min; Patrick Poa, C H; Wang, Wen Xin; Roe, Martin J; Brown, Paul D; Fleischer, Niles; Tenne, Reshef
2005-11-23
The excellent shock-absorbing performance of WS2 and MoS2 nanoparticles with inorganic fullerene-like structures (IFs) under very high shock wave pressures of 25 GPa is described. The combined techniques of X-ray diffraction, Raman spectroscopy, X-ray photoelectron spectroscopy, thermal analysis, and transmission electron microscopy have been used to evaluate the diverse, intriguing features of shock-recovered IFs, of interest for their tribological applications, thereby allowing an improved understanding of their antishock behavior and structure-property relationships. Two possible failure mechanisms are proposed and discussed. The super shock-absorbing ability of the IF-WS2 enables them to survive pressures of up to 25 GPa, accompanied by concurrent temperatures of up to 1000 degrees C, without any significant structural degradation or phase change, making them probably the strongest cage molecules now known.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e., a minimum breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method to calculate the design strength. Considerations from fracture mechanics, proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either the lifetime can be calculated from a given stress or the allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and no longer subject to statistical uncertainty.
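To make the three-parameter idea tangible, the following is a minimal sketch of fitting a three-parameter Weibull distribution to strength data and reading off the threshold (minimum-strength) parameter, below which the fitted model assigns zero failure probability. The strengths below are synthetic, and the maximum-likelihood fit of the location parameter can be numerically delicate; the paper's fatigue and stress-corrosion corrections are not reproduced.

```python
# Minimal sketch: three-parameter Weibull fit to synthetic strength data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
# synthetic bending strengths with a true threshold of 50 MPa
strengths = 50.0 + weibull_min.rvs(c=2.5, scale=60.0, size=400, random_state=rng)

shape, threshold, scale = weibull_min.fit(strengths)    # (shape, loc=threshold, scale)
print(f"shape m = {shape:.2f}, threshold = {threshold:.1f} MPa, scale = {scale:.1f} MPa")

# Below the threshold the fitted model assigns zero failure probability,
# which motivates a deterministic design strength.
print("P(strength < threshold) =", weibull_min.cdf(threshold - 1e-9, shape, threshold, scale))
```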
Himalayan Sackung and Associations to Regional Structure
NASA Astrophysics Data System (ADS)
Shroder, J. F.; Bishop, M. P.; Olsenholler, J.
2003-12-01
Recognition of sackung slope failure or deep-seated, rock-slope deformation in the Himalaya has been rather limited, in part because: (1) many geoscientists do not recognize its characteristics; (2) large-scale aerial photographs and topographic maps used to identify the characteristic surficial, topographic manifestations of the failure type are commonly low-level state secrets in that region; and (3) no systematic survey for sackung has ever been made in the Himalaya. In the Pakistani-controlled, western Himalaya, some unconventional access to aerial photographs in the Kaghan and Nanga Parbat areas allowed first recognition of several characteristic ridge-top grabens and anti-slope scarps. Later release of declassified, stereo imagery from the CORONA and KEYHOLE satellite series enabled discovery of other examples in the K2 region. Comparison of mapped sackung failures with geologic base maps has demonstrated some coincidence of sackung with various structural trends, including synformal structures in upper thrust plates or along the traces of high-angle faults. In all probability these structural trends have provided plentiful ancillary planes of weakness along which gravitationally driven sackung is facilitated. Sackung failure in the Himalaya appears to be a spatially scale-dependent manifestation of a gravitational-collapse continuum of the brittle, upper crust, mainly involving mountain ridges. In contrast, gravitational collapse of the whole range may involve some similar failures but also include listric faulting, as well as subsidence movement into zones of ductility at depth. Temporal scale dependence of sackung may also be threshold dominated, wherein initial long-continued, slow failure ultimately leads to the commonly catastrophic rock-slope collapses recently recognized throughout the western Himalaya and now differentiated from their original mismapping as glacial moraines. Such sackung in Himalayan terrain undergoing active deglaciation from global warming may increase catastrophic slope-failure hazard.
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
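The 27-state RSDIMU model and its rates are not reproduced here, but the Markov evaluation idea can be sketched with a small illustrative state model: the state probabilities are propagated over the mission time and the probability of ending in the failed state is read off. The three states, failure rate and coverage probability below are hypothetical.

```python
# Minimal sketch of a Markov reliability evaluation with an illustrative
# three-state model (operational, fail-operational, failed).
import numpy as np

lam = 1e-4         # sensor failure rate per hour (hypothetical)
p_cover = 0.98     # probability a first failure is detected and isolated

# states: 0 = all sensors good, 1 = one failure detected (fail-op), 2 = system failed
P = np.array([
    [1 - lam, lam * p_cover, lam * (1 - p_cover)],
    [0.0,     1 - lam,       lam                ],
    [0.0,     0.0,           1.0                ],
])

state = np.array([1.0, 0.0, 0.0])
for _ in range(10_000):                 # 10,000-hour mission, 1-hour steps
    state = state @ P
print(f"probability of system failure: {state[2]:.3e}")
print(f"system reliability:            {1 - state[2]:.6f}")
```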
NASA Astrophysics Data System (ADS)
Sil, Arjun; Longmailai, Thaihamdau
2017-09-01
The lateral displacement of a Reinforced Concrete (RC) frame building during an earthquake has an important impact on the structural stability and integrity. However, the seismic analysis and design of RC buildings needs particular care because of the complex behavior of the structure: its performance is linked to the features of the system, which has many influencing parameters and other inherent uncertainties. The reliability approach takes into account the factors and uncertainties in design that influence the performance or response of the structure, so that the safety level or the probability of failure can be ascertained. The present study aims to assess the reliability of the seismic performance of a four-storey residential RC building located in seismic Zone V as per the code provisions given in the Indian Standard IS: 1893-2002. The reliability assessment was performed by deriving an explicit expression for the maximum lateral roof displacement as a failure function by the regression method. A total of 319 four-storey RC buildings were analyzed by the linear static method using SAP2000. The change in the lateral roof displacement with the variation of the parameters (column dimension, beam dimension, grade of concrete, floor height and total weight of the structure) was observed, and a generalized relation was established by the regression method which can be used to estimate the expected lateral displacement for those parameters. A comparison was made between the displacements obtained from the analysis and those given by the proposed relation, showing that the relation can be used directly to determine the expected maximum lateral displacement. The data obtained from the statistical computations were then used to obtain the probability of failure and the reliability.
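The step from a regression-based displacement expression to a failure probability can be sketched as follows: sample the influencing parameters, evaluate the fitted expression, and count exceedances of a displacement limit. The regression coefficients, parameter distributions and drift limit below are hypothetical placeholders, not the relation derived in the paper.

```python
# Minimal sketch: Monte Carlo failure probability from a regression-type
# roof-displacement expression (all numbers hypothetical).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 200_000
col = rng.normal(450.0, 20.0, n)       # column dimension, mm
beam = rng.normal(350.0, 15.0, n)      # beam depth, mm
fck = rng.normal(25.0, 3.0, n)         # concrete grade, MPa
h = rng.normal(3.2, 0.1, n)            # storey height, m
w = rng.normal(12_000.0, 800.0, n)     # total seismic weight, kN

# hypothetical regression for maximum roof displacement (mm)
disp = 120.0 - 0.08 * col - 0.05 * beam - 0.9 * fck + 14.0 * h + 0.002 * w

limit = 125.0                          # allowable roof displacement, mm
pf = np.mean(disp > limit)
beta = -norm.ppf(pf)
print(f"probability of failure = {pf:.2e}, reliability index beta = {beta:.2f}")
```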
Statistical modeling of SRAM yield performance and circuit variability
NASA Astrophysics Data System (ADS)
Cheng, Qi; Chen, Yijian
2015-03-01
In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of self-aligned multiple patterning (SAMP) processes. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability of failing in the reading/writing operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch and a more robust SRAM circuit design needs to be researched.
[Comments on the use of the "life-table method" in orthopedics].
Hassenpflug, J; Hahne, H J; Hedderich, J
1992-01-01
In the description of long-term results, e.g., of joint replacements, survivorship analysis is increasingly used in orthopaedic surgery. Survivorship analysis describes the frequency of failure more usefully than global statements in percentages. The relative probability of failure for fixed intervals is derived from the number of patients under observation and the frequency of failure. The complementary probabilities of success are linked in their temporal sequence, thus representing the probability of survival at a fixed endpoint. A necessary condition for the use of this procedure is the exact definition of the moment and manner of failure. How to establish survivorship tables is described.
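The actuarial life-table chaining described above amounts to a few lines of arithmetic: interval failure probabilities are formed from the numbers at risk and the failures observed, and the complementary survival probabilities are multiplied cumulatively. The follow-up counts below are hypothetical data for a joint-replacement series.

```python
# Minimal sketch of the actuarial life-table (survivorship) computation.
import numpy as np

at_risk   = np.array([500, 470, 430, 380, 320])   # patients entering each yearly interval
failures  = np.array([  6,   5,   7,   4,   3])   # revisions (failures) in the interval
withdrawn = np.array([ 24,  35,  43,  56,  60])   # lost to follow-up / censored

effective = at_risk - withdrawn / 2.0             # standard actuarial correction
q = failures / effective                          # interval failure probability
survival = np.cumprod(1.0 - q)                    # cumulative survival at interval ends
for year, s in enumerate(survival, start=1):
    print(f"survival after year {year}: {s:.3f}")
```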
Empty sella syndrome secondary to intrasellar cyst in adolescence.
Raiti, S; Albrink, M J; Maclaren, N K; Chadduck, W M; Gabriele, O F; Chou, S M
1976-09-01
A 15-year-old boy had growth failure and failure of sexual development. The probable onset was at age 10. Endocrine studies showed hypopituitarism with deficiency of growth hormone and follicle-stimulating hormone, an abnormal response to metyrapone, and deficiency of thyroid function. Luteinizing hormone level was in the low-normal range. Posterior pituitary function was normal. Roentgenogram showed a large sella with some destruction of the posterior clinoids. Transsphenoidal exploration was carried out. The sella was empty except for a whitish membrane; no pituitary tissue was seen. The sella was packed with muscle. Recovery was uneventful, and the patient was given replacement therapy. On histologic examination, the cyst wall showed low pseudostratified cuboidal epithelium and occasional squamous metaplasia. Hemosiderin-filled phagocytes and acinar structures were also seen. The diagnosis was probable rupture of an intrasellar epithelial cyst, leading to empty sella syndrome.
Rodrigues, Samantha A; Thambyah, Ashvin; Broom, Neil D
2015-03-01
The annulus-endplate anchorage system performs a critical role in the disc, creating a strong structural link between the compliant annulus and the rigid vertebrae. Endplate failure is thought to be associated with disc herniation, a recent study indicating that this failure mode occurs more frequently than annular rupture. The aim was to investigate the structural principles governing annulus-endplate anchorage and the basis of its strength and mechanisms of failure. Loading experiments were performed on ovine lumbar motion segments designed to induce annulus-endplate failure, followed by macro- to micro- to fibril-level structural analyses. The study was funded by a doctoral scholarship from our institution. Samples were loaded to failure in three modes: torsion using intact motion segments, in-plane tension of the anterior annulus-endplate along one of the oblique fiber angles, and axial tension of the anterior annulus-endplate. The anterior region was chosen for its ease of access. Decalcification was used to investigate the mechanical influence of the mineralized component. Structural analysis was conducted on both the intact and failed samples using differential interference contrast optical microscopy and scanning electron microscopy. Two main modes of anchorage failure were observed--failure at the tidemark or at the cement line. Samples subjected to axial tension contained more tidemark failures compared with those subjected to torsion and in-plane tension. Samples decalcified before testing frequently contained damage at the cement line, this being more extensive than in fresh samples. Analysis of the intact samples at their anchorage sites revealed that annular subbundle fibrils penetrate beyond the cement line to a limited depth and appear to merge with those in the vertebral and cartilaginous endplates. Annulus-endplate anchorage is more vulnerable to failure in axial tension compared with both torsion and in-plane tension and is probably due to acute fiber bending at the soft-hard interface of the tidemark. This finding is consistent with evidence showing that flexion, which induces a similar pattern of axial tension, increases the risk of herniation involving endplate failure. The study also highlights the important strengthening role of calcification at this junction and provides new evidence of a fibril-based form of structural integration across the cement line. Copyright © 2015 Elsevier Inc. All rights reserved.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on the paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output obtained from the classifiers into the final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with the grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717
Probability of Failure of Damaged Ship Structures - Phase 3
2014-04-01
U. Akpan, B. Yuen, T. S. Koko, F. Lin, J. Wallace. Prepared by Martec Limited, Halifax, Nova Scotia. Martec Technical Report TR-13-15. Contract Project Manager: T. S. Koko; CSA: Malcolm J. Smith, Group Leader/NPSS.
Vulnerability of bridges to scour: insights from an international expert elicitation workshop
NASA Astrophysics Data System (ADS)
Lamb, Rob; Aspinall, Willy; Odbert, Henry; Wagener, Thorsten
2017-08-01
Scour (localised erosion) during flood events is one of the most significant threats to bridges over rivers and estuaries, and has been the cause of numerous bridge failures, with damaging consequences. Mitigation of the risk of bridges being damaged by scour is therefore important to many infrastructure owners, and is supported by industry guidance. Even after mitigation, some residual risk remains, though its extent is difficult to quantify because of the uncertainties inherent in the prediction of scour and the assessment of the scour risk. This paper summarises findings from an international expert workshop on bridge scour risk assessment that explores uncertainties about the vulnerability of bridges to scour. Two specialised structured elicitation methods were applied to explore the factors that experts in the field consider important when assessing scour risk and to derive pooled expert judgements of bridge failure probabilities that are conditional on a range of assumed scenarios describing flood event severity, bridge and watercourse types and risk mitigation protocols. The experts' judgements broadly align with industry good practice, but indicate significant uncertainty about quantitative estimates of bridge failure probabilities, reflecting the difficulty in assessing the residual risk of failure. The data and findings presented here could provide a useful context for the development of generic scour fragility models and their associated uncertainties.
Fatigue Reliability of Gas Turbine Engine Structures
NASA Technical Reports Server (NTRS)
Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.
1997-01-01
The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these approaches is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
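The basic use of a fitted response surface as a performance function can be sketched as follows. The quadratic surface and standardized variable statistics below are hypothetical, and a plain Monte Carlo estimate stands in for the FORM analysis that the study actually applies to the fitted surface.

```python
# Minimal sketch: reliability estimate from a hypothetical response-surface
# performance function g (failure when g < 0), plus the equivalent beta.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 1_000_000
temp = rng.normal(0.0, 1.0, n)    # standardized hot-gas temperature
cool = rng.normal(0.0, 1.0, n)    # standardized cooling-air flow
mat  = rng.normal(0.0, 1.0, n)    # standardized material scatter

# hypothetical fatigue-life margin fitted as a quadratic response surface
g = 4.2 - 0.9 * temp + 0.6 * cool - 0.8 * mat - 0.15 * temp**2

pf = np.mean(g < 0.0)
print(f"P_f ~ {pf:.2e}, equivalent reliability index beta ~ {-norm.ppf(pf):.2f}")
```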
Nonstationary envelope process and first excursion probability
NASA Technical Reports Server (NTRS)
Yang, J.
1972-01-01
A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level crossing rate, which are relevent to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
Accident hazard evaluation and control decisions on forested recreation sites
Lee A. Paine
1971-01-01
Accident hazard associated with trees on recreation sites is inherently concerned with probabilities. The major factors include the probabilities of mechanical failure and of target impact if failure occurs, the damage potential of the failure, and the target value. Hazard may be evaluated as the product of these factors; i.e., expected loss during the current...
NASA Astrophysics Data System (ADS)
Le, Jia-Liang; Bažant, Zdeněk P.; Bazant, Martin Z.
2011-07-01
Engineering structures must be designed for an extremely low failure probability such as 10^-6, which is beyond the means of direct verification by histogram testing. This is not a problem for brittle or ductile materials because the type of probability distribution of structural strength is fixed and known, making it possible to predict the tail probabilities from the mean and variance. It is a problem, though, for quasibrittle materials for which the type of strength distribution transitions from Gaussian to Weibullian as the structure size increases. These are heterogeneous materials with brittle constituents, characterized by material inhomogeneities that are not negligible compared to the structure size. Examples include concrete, fiber composites, coarse-grained or toughened ceramics, rocks, sea ice, rigid foams and bone, as well as many materials used in nano- and microscale devices. This study presents a unified theory of strength and lifetime for such materials, based on activation energy controlled random jumps of the nano-crack front, and on the nano-macro multiscale transition of tail probabilities. Part I of this study deals with the case of monotonic and sustained (or creep) loading, and Part II with fatigue (or cyclic) loading. On the scale of the representative volume element of material, the probability distribution of strength has a Gaussian core onto which a remote Weibull tail is grafted at failure probability of the order of 10^-3. With increasing structure size, the Weibull tail penetrates into the Gaussian core. The probability distribution of static (creep) lifetime is related to the strength distribution by the power law for the static crack growth rate, for which a physical justification is given. The present theory yields a simple relation between the exponent of this law and the Weibull moduli for strength and lifetime. The benefit is that the lifetime distribution can be predicted from short-time tests of the mean size effect on strength and tests of the power law for the crack growth rate. The theory is shown to match closely numerous test data on strength and static lifetime of ceramics and concrete, and explains why their histograms deviate systematically from the straight line in Weibull scale. Although the present unified theory is built on several previous advances, new contributions are here made to address: (i) a crack in a disordered nano-structure (such as that of hydrated Portland cement), (ii) tail probability of a fiber bundle (or parallel coupling) model with softening elements, (iii) convergence of this model to the Gaussian distribution, (iv) the stress-life curve under constant load, and (v) a detailed random walk analysis of crack front jumps in an atomic lattice. The nonlocal behavior is captured in the present theory through the finiteness of the number of links in the weakest-link model, which explains why the mean size effect coincides with that of the previously formulated nonlocal Weibull theory. Brittle structures correspond to the large-size limit of the present theory. An important practical conclusion is that the safety factors for strength and tolerable minimum lifetime for large quasibrittle structures (e.g., concrete structures and composite airframes or ship hulls, as well as various micro-devices) should be calculated as a function of structure size and geometry.
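A simplified numerical picture of the finite weakest-link behavior described above can be sketched as follows: a one-element strength distribution with a Gaussian core and a power-law (Weibull-type) left tail grafted at a probability of about 10^-3 is chained over N links, and the stress corresponding to a 10^-6 failure probability is read off for several sizes. The mean, scatter, Weibull modulus and grafting construction below are illustrative assumptions, not the calibrated model of the paper.

```python
# Minimal sketch: grafted Gaussian-Weibull strength distribution of one RVE,
# chained over N links (weakest-link model); all parameters are illustrative.
import numpy as np
from scipy.stats import norm

mu, sd, m, p_graft = 100.0, 10.0, 24.0, 1e-3        # RVE strength statistics (hypothetical)
s_graft = mu + sd * norm.ppf(p_graft)               # grafting stress on the left tail

def p1(sigma):
    """Strength CDF of one RVE: power-law tail below the grafting point,
    Gaussian core above it (continuous at the grafting stress)."""
    sigma = np.asarray(sigma, dtype=float)
    tail = p_graft * (sigma / s_graft) ** m
    core = norm.cdf(sigma, mu, sd)
    return np.where(sigma <= s_graft, tail, core)

def design_stress(n_links, target=1e-6):
    """Stress at which a chain of n_links reaches the target failure probability."""
    sig = np.linspace(1.0, mu, 20_000)
    pf = 1.0 - (1.0 - p1(sig)) ** n_links
    return np.interp(target, pf, sig)

for n in (1, 100, 10_000):
    print(f"N = {n:>6}: stress at P_f = 1e-6 is {design_stress(n):.1f}")
```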
NASA Astrophysics Data System (ADS)
Massmann, Joel; Freeze, R. Allan
1987-02-01
This paper puts in place a risk-cost-benefit analysis for waste management facilities that explicitly recognizes the adversarial relationship that exists in a regulated market economy between the owner/operator of a waste management facility and the government regulatory agency under whose terms the facility must be licensed. The risk-cost-benefit analysis is set up from the perspective of the owner/operator. It can be used directly by the owner/operator to assess alternative design strategies. It can also be used by the regulatory agency to assess alternative regulatory policy, but only in an indirect manner, by examining the response of an owner/operator to the stimuli of various policies. The objective function is couched in terms of a discounted stream of benefits, costs, and risks over an engineering time horizon. Benefits are in the form of revenues for services provided; costs are those of construction and operation of the facility. Risk is defined as the cost associated with the probability of failure, with failure defined as the occurrence of a groundwater contamination event that violates the licensing requirements established for the facility. Failure requires a breach of the containment structure and contaminant migration through the hydrogeological environment to a compliance surface. The probability of failure can be estimated on the basis of reliability theory for the breach of containment and with a Monte-Carlo finite-element simulation for the advective contaminant transport. In the hydrogeological environment the hydraulic conductivity values are defined stochastically. The probability of failure is reduced by the presence of a monitoring network operated by the owner/operator and located between the source and the regulatory compliance surface. The level of reduction in the probability of failure depends on the probability of detection of the monitoring network, which can be calculated from the stochastic contaminant transport simulations. While the framework is quite general, the development in this paper is specifically suited for a landfill in which the primary design feature is one or more synthetic liners in parallel. Contamination is brought about by the release of a single, inorganic nonradioactive species into a saturated, high-permeability, advective, steady state horizontal flow system which can be analyzed with a two-dimensional analysis. It is possible to carry out sensitivity analyses for a wide variety of influences on this system, including landfill size, liner design, hydrogeological parameters, amount of exploration, extent of monitoring network, nature of remedial schemes, economic factors, and regulatory policy.
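The owner/operator objective described above is, in its simplest form, a discounted sum of benefits minus costs minus risk, with risk in each year taken as the failure probability times the cost of failure. The sketch below shows only that arithmetic; the risk-reducing effect of the monitoring network and all cash flows and probabilities are hypothetical placeholders.

```python
# Minimal sketch: discounted risk-cost-benefit objective over the time horizon,
# objective = sum_t [B_t - C_t - P_f(t) * C_f] / (1 + i)^t  (hypothetical numbers).
import numpy as np

years = np.arange(1, 21)                  # engineering time horizon, years
rate = 0.07                               # discount rate
benefit = np.full(years.size, 2.0e6)      # annual revenue, $
cost = np.full(years.size, 0.8e6)         # annual construction/operation cost, $
p_fail = np.full(years.size, 2.0e-3)      # annual probability of a compliance failure
c_fail = 50.0e6                           # cost incurred if failure occurs, $

discount = (1.0 + rate) ** (-years)
objective = np.sum((benefit - cost - p_fail * c_fail) * discount)
print(f"net present value of the facility: ${objective:,.0f}")
```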
Transient Reliability of Ceramic Structures For Heat Engine Applications
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama M.
2002-01-01
The objectives of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability in ceramic components subjected to fluctuation thermomechanical loading was developed, assuming SCG (Slow Crack Growth) as the delayed mode of failure. It takes into account the effect of varying Weibull modulus and materials with time. It was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
Traditional reliability evaluation of machine center components overlooks failure propagation, so the component reliability model deviates and the evaluation result is underestimated. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established from cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed from the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center; the results show that: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; and 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
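As a rough illustration of the failure-influenced-degree idea, the sketch below runs a plain power-iteration PageRank on the transpose of a small, hypothetical cascading-failure adjacency matrix; the matrix, damping factor, and tolerance are assumptions for illustration and are not taken from the machine-center case study.

```python
import numpy as np

# Hypothetical cascading-failure digraph: A[i, j] = 1 means a failure of
# component i can propagate to component j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    """Plain power-iteration PageRank on a directed adjacency matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize; dangling nodes (no outgoing edges) distribute uniformly.
    T = np.where(out_deg > 0, adj / np.where(out_deg > 0, out_deg, 1.0), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1.0 - damping) / n + damping * (T.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r / r.sum()

# Scoring on the transposed graph ranks a component highly when its failure can
# reach (directly or indirectly) many other components.
influence = pagerank(A.T)
print(dict(enumerate(np.round(influence, 3))))
```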
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
Structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. Nonlinear behavior that depends on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to such conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. That approach may be most useful in situations where the structures being designed are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, and sensitivity to loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology, based on a direct Monte Carlo simulation process, for assessing the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy composed of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprising three elements to a parallel system comprising up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system composed of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely, a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
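A minimal sketch of the first step described above, assuming a hypothetical lognormal element-strength distribution and ideal equal load sharing (a Daniels-type bundle); the closed-form CDF checks and the fastener-level lap-joint modeling of the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bundle_capacity_rows(strengths):
    """Capacity of an equal-load-sharing parallel system (Daniels bundle) for each
    row of sampled element strengths: the largest total load the surviving,
    equally loaded elements can still carry."""
    n = strengths.shape[1]
    x = np.sort(strengths, axis=1)                   # ascending order statistics
    return np.max(x * (n - np.arange(n)), axis=1)    # (n - k) elements carry at least x[k]

def failure_probability(n_elements, load, n_samples=200_000):
    # Hypothetical element-strength distribution (lognormal, median 1.0).
    strengths = rng.lognormal(mean=0.0, sigma=0.2, size=(n_samples, n_elements))
    caps = bundle_capacity_rows(strengths)
    return np.mean(caps <= load)

# Hypothetical applied load shared equally by six statistically independent elements.
print(failure_probability(n_elements=6, load=3.6))
```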
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these approaches in a framework based on a simple, generalized rate change formulation and applied it to both to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, for which the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary
NASA Technical Reports Server (NTRS)
Stanford, Bret K.; Massey, Steven J.
2017-01-01
A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-04-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant using the fault tree analysis (FTA) method. The results of the analysis over a 200 h operating interval show that the probabilities of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department are 73, 64, and 95 percent, respectively, and the conveyor belt subsystem was found to be the most failure-prone. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures.
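For illustration only, the sketch below combines exponential basic-event probabilities over a 200 h interval through OR/AND gates in the fault-tree style described above; the tree layout and failure rates are hypothetical and are not the plant's data.

```python
import math

# Minimal fault-tree sketch (hypothetical failure rates, per hour; not the plant data).
def p_basic(rate, hours=200.0):
    """Basic-event probability over the operating interval for an exponential model."""
    return 1.0 - math.exp(-rate * hours)

def gate_or(*p):   # event occurs if ANY independent input occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def gate_and(*p):  # event occurs only if ALL independent inputs occur
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical subsystem trees.
p_crusher = gate_or(p_basic(2e-3), p_basic(1e-3), p_basic(3e-3))              # e.g. jaw, bearing, drive
p_conveyor = gate_or(p_basic(4e-3), gate_and(p_basic(2e-3), p_basic(2e-3)))   # e.g. belt, redundant motors
p_department = gate_or(p_crusher, p_conveyor)                                 # department fails if either fails

print(f"crusher {p_crusher:.2f}, conveyor {p_conveyor:.2f}, department {p_department:.2f}")
```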
Factors controlling the structures of magma chambers in basaltic volcanoes
NASA Technical Reports Server (NTRS)
Wilson, L.; Head, James W.
1991-01-01
The depths, vertical extents, and lateral extents of magma chambers and their formation are discussed. The depth to the center of a magma chamber is most probably determined by the density structure of the lithosphere; this process is explained. It is commonly assumed that magma chambers grow until the stress on the roof, floor, and side-wall boundaries exceed the strength of the wall rocks. Attempts to grow further lead to dike propagation events which reduce the stresses below the critical values of rock failure. The tensile or compressive failure of the walls is discussed with respect to magma migration. The later growth of magma chambers is accomplished by lateral dike injection into the country rocks. The factors controlling the patterns of growth and cooling of such dikes are briefly mentioned.
Reliability assessment of slender concrete columns at the stability failure
NASA Astrophysics Data System (ADS)
Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin
2018-01-01
The European standard for designing concrete columns with non-linear methods shows deficiencies in terms of global reliability in cases where the columns fail by loss of stability. Buckling failure is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering, SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with a slenderness of 90 and higher. The columns were designed according to the methods offered by EN 1992-1-1 [1]. The experiments were used as the basis for deterministic nonlinear modelling of the columns and the subsequent probabilistic evaluation of structural response variability. The final results may be utilized as thresholds for loading of the produced structural elements, and they aim to present probabilistic design as less conservative than the classic partial safety factor based design and the alternative ECOV method.
Reliability Analysis of Systems Subject to First-Passage Failure
NASA Technical Reports Server (NTRS)
Lutes, Loren D.; Sarkani, Shahram
2009-01-01
An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico
Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,
2016-08-09
The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers. The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a dam failure of Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and “sunny day” conditions. The Hydrologic Engineering Center’s Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model. Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS resulted in about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions were considered within the model. The results of the hydrologic simulations indicated that, for all hydrologic scenarios, the Lago El Guineo Dam would not experience overtopping. For the dam breach hydraulic analysis, failure by piping was the selected hypothetical failure mode for the Lago El Guineo Dam. Results from the simulated dam failure of the Lago El Guineo Dam using the HEC–RAS model for the 6- and 24-hour probable maximum precipitation events indicated peak discharges below the dam of 1,342.43 and 1,434.69 cubic meters per second, respectively. Dam failure during the 24-hour, 100-year recurrence rainfall event resulted in a peak discharge directly downstream from Lago El Guineo Dam of 1,183.12 cubic meters per second. Dam failure during sunny-day conditions (no precipitation) produced a peak discharge at Lago El Guineo Dam of 1,015.31 cubic meters per second assuming the initial water-surface elevation was at the morning-glory spillway invert elevation. The results of the hydraulic analysis indicate that the flood would extend to many inhabited areas along the stream banks from the Lago El Guineo Dam to the mouth of the Río Grande as a result of the simulated failure of the Lago El Guineo Dam. Low-lying regions in the vicinity of Ciales, Manatí, and Barceloneta, Puerto Rico, are among the regions that would be most affected by failure of the Lago El Guineo Dam.
Effects of the flood control (levee) structure constructed in 2000 to provide protection to the low-lying populated areas of Barceloneta, Puerto Rico, were considered in the hydraulic analysis of dam failure. The results indicate that overtopping can be expected in the aforementioned levee during 6- and 24-hour probable maximum precipitation events. The levee was not overtopped during dam failure scenarios under the 24-hour, 100-year recurrence rainfall event or sunny-day conditions.
Permanently enhanced dynamic triggering probabilities as evidenced by two M ≥ 7.5 earthquakes
Gomberg, Joan S.
2013-01-01
The 2012 M7.7 Haida Gwaii earthquake radiated waves that likely dynamically triggered the 2013 M7.5 Craig earthquake, setting two precedents. First, the triggered earthquake is the largest dynamically triggered shear failure event documented to date. Second, the events highlight a connection between geologic structure, sedimentary troughs that act as waveguides, and triggering probability. The Haida Gwaii earthquake excited extraordinarily large waves within and beyond the Queen Charlotte Trough, which propagated well into mainland Alaska, likely triggering the Craig earthquake along the way. Previously, focusing and associated dynamic triggering have been attributed to unpredictable source effects. This case suggests that elevated dynamic triggering probabilities may exist along the many structures where sedimentary troughs overlie major faults, such as subduction zones’ accretionary prisms and transform faults’ axial valleys. Although data are sparse, I find no evidence of accelerating seismic activity in the vicinity of the Craig rupture between it and the Haida Gwaii earthquake.
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but, more importantly, to compare different operational and management options, determine the lowest risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistent with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
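A minimal sketch of the kind of Bayesian update described above, assuming a gamma prior on an ORU failure rate and Poisson-distributed on-orbit failure counts; the prior parameters, failure count, and exposure hours are hypothetical.

```python
# Minimal sketch of a Bayesian update of an ORU failure rate (events per hour).
# A gamma prior (e.g., elicited from industry data) combined with a Poisson
# likelihood for on-orbit failure counts yields a gamma posterior.
# All numbers are hypothetical.
from scipy import stats

prior_shape, prior_rate = 2.0, 40_000.0   # prior mean = 2 / 40,000 = 5e-5 failures/hour
observed_failures = 1                     # on-orbit failures of this ORU type
exposure_hours = 26_000.0                 # accumulated on-orbit operating hours

post_shape = prior_shape + observed_failures
post_rate = prior_rate + exposure_hours

posterior = stats.gamma(a=post_shape, scale=1.0 / post_rate)
print("posterior mean failure rate:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```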
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
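As a rough sketch of the Weibull probabilistic approach mentioned above, the snippet below evaluates a two-parameter Weibull probability of failure for a uniformly stressed part and the risk reduction from surviving a proof test; the scale, modulus, and stress values are hypothetical, and real sizing would integrate over the stress field and flaw populations.

```python
import math

# Minimal two-parameter Weibull sketch for a brittle (ceramic) part under a
# uniform stress state. Parameters are hypothetical, not programme values.
def weibull_pof(stress, scale_sigma0, modulus_m):
    """Probability of failure at the given applied stress."""
    return 1.0 - math.exp(-((stress / scale_sigma0) ** modulus_m))

def proof_test_update(stress, proof_stress, scale_sigma0, modulus_m):
    """Failure probability in service conditional on having survived a proof test."""
    if stress <= proof_stress:
        return 0.0
    p_service = weibull_pof(stress, scale_sigma0, modulus_m)
    p_proof = weibull_pof(proof_stress, scale_sigma0, modulus_m)
    return (p_service - p_proof) / (1.0 - p_proof)

sigma0, m = 300.0, 10.0                              # MPa, Weibull modulus
print(weibull_pof(150.0, sigma0, m))                 # as-built part
print(proof_test_update(150.0, 180.0, sigma0, m))    # after surviving a 180 MPa proof test
```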
Reliability considerations in the placement of control system components
NASA Technical Reports Server (NTRS)
Montgomery, R. C.
1983-01-01
This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
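The expected-value criterion described above can be illustrated with a toy calculation: sum, over all actuator failure states, the performance cost of that state weighted by its probability, with exponential component reliabilities over the mission life. The cost model, MTBF values, and mission life below are hypothetical.

```python
from itertools import product
import math

# Minimal sketch of a reliability-weighted performance criterion: expected cost
# = sum over every actuator failure state of (cost in that state) x (probability
# of that state). The cost table and reliability data are hypothetical.
def state_probability(state, p_survive):
    """state is a tuple of 0/1 flags (1 = actuator still working)."""
    p = 1.0
    for working, ps in zip(state, p_survive):
        p *= ps if working else (1.0 - ps)
    return p

def expected_cost(cost_of_state, mtbf, mission_life):
    p_survive = [math.exp(-mission_life / m) for m in mtbf]   # exponential reliability
    n = len(mtbf)
    return sum(state_probability(s, p_survive) * cost_of_state(s)
               for s in product((0, 1), repeat=n))

# Hypothetical cost model: losing actuators degrades vibration suppression.
def cost_of_state(state):
    n_lost = len(state) - sum(state)
    return 1.0 + 4.0 * n_lost

print(expected_cost(cost_of_state, mtbf=[5.0, 5.0, 8.0], mission_life=2.0))
```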
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expenses involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence, a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability based optimization of the ITPS panel. It was shown that, using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
Microstructure and Dynamic Failure Properties of Freeze-Cast Materials for Thermobaric Warhead Cases
2012-12-01
Freeze casting technology combines compounds such as aluminum oxide and poly(methyl methacrylate) (PMMA) to develop a... Subsequently, the porous structure can be infiltrated with a variety of materials, such as a standard polymer like PMMA. This hybrid material is believed...
Hip Implant Modified To Increase Probability Of Retention
NASA Technical Reports Server (NTRS)
Canabal, Francisco, III
1995-01-01
Modification in design of hip implant proposed to increase likelihood of retention of implant in femur after hip-repair surgery. Decreases likelihood of patient distress and expense associated with repetition of surgery after failed implant procedure. Intended to provide more favorable flow of cement used to bind implant in proximal extreme end of femur, reducing structural flaws causing early failure of implant/femur joint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sonjoy; Goswami, Kundan; Datta, Biswa N.
2014-12-10
Failure of structural systems under dynamic loading can be prevented via active vibration control which shifts the damped natural frequencies of the systems away from the dominant range of loading spectrum. The damped natural frequencies and the dynamic load typically show significant variations in practice. A computationally efficient methodology based on quadratic partial eigenvalue assignment technique and optimization under uncertainty has been formulated in the present work that will rigorously account for these variations and result in an economic and resilient design of structures. A novel scheme based on hierarchical clustering and importance sampling is also developed in this work for accurate and efficient estimation of probability of failure to guarantee the desired resilience level of the designed system. Numerical examples are presented to illustrate the proposed methodology.
NASA Technical Reports Server (NTRS)
McCarty, John P.; Lyles, Garry M.
1997-01-01
Propulsion system quality is defined in this paper as having high reliability; that is, quality is a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is what separates high-quality from low-quality systems. Failures can be described at three levels: the system failure (the detectable end of a failure), the failure mode (the failure process), and the failure cause (the start). Failure causes can be evaluated and classified by type. The results of typing flight-history failures show that most failures are in unrecognized modes and result from human error or noise; i.e., failures are when engineers learn how things really work. Although the study is based on US launch vehicles, a sampling of failures from other countries indicates that the finding has broad application. The parameters of the design of a propulsion system are not single valued, but have dispersions associated with the manufacturing of parts. Many tests are needed to find failures if the dispersions are large relative to tolerances, which could contribute to the large number of failures in unrecognized modes.
NASA Technical Reports Server (NTRS)
Scalzo, F.
1983-01-01
Sensor redundancy management (SRM) requires a system which will detect failures and reconstruct avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed that prints out tables of values for the cumulative probability of being in the domain of failure; system reliability; and false alarm probability, given that a signal is in the domain of failure. The microcomputer software was applied to sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure with random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
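For orientation, the sketch below evaluates the simplest first-order bounds on a system failure probability from individual mode probabilities (hypothetical values); the paper's second-moment strategy for tightening these bounds is not reproduced.

```python
import numpy as np

# Minimal sketch of simple first-order bounds on the system probability of plastic
# collapse from individual failure-mode probabilities (hypothetical values).
p_modes = np.array([1.2e-4, 3.5e-4, 8.0e-5, 2.0e-4])   # P(collapse in mode i)

lower = p_modes.max()                       # collapse in the most likely single mode
upper = min(1.0, p_modes.sum())             # union bound, valid for any dependence
upper_indep = 1.0 - np.prod(1.0 - p_modes)  # exact union if the modes were independent

print(f"{lower:.3e} <= Pf <= {upper:.3e} (independent-mode union: {upper_indep:.3e})")
```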
Probabilistic analysis on the failure of reactivity control for the PWR
NASA Astrophysics Data System (ADS)
Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.
2018-02-01
The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each function is fulfilled during the design and is done by deterministic and probabilistic methods. The analysis of reactivity control is important because it affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributions to its failure in a PWR design. The analysis is carried out by determining the intermediate events which cause the failure of reactivity control. Furthermore, the basic events are determined by a deductive method using fault tree analysis. The AP1000 is used as the object of research. The probability data for component failures and human errors used in the analysis are collected from IAEA, Westinghouse, NRC, and other published documents. The results show that there are six intermediate events which can cause the failure of reactivity control: uncontrolled rod bank withdrawal at low power or full power, malfunction of boron dilution, misalignment of control rod withdrawal, malfunction (improper position) of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The failure causes affected by human factors are boron dilution, misalignment of control rod withdrawal, and improper positioning of a fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control in the PWR is still within the IAEA criteria.
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
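A minimal sketch of the frequentist idea described above: estimate the failure probability from N load-versus-capacity simulations and attach an exact (Clopper-Pearson) confidence interval; the load and capacity distributions and the sample size are hypothetical and are not the methods proposed in the report.

```python
import numpy as np
from scipy import stats

# Minimal sketch: a classical (frequentist) confidence interval on a failure
# probability estimated from N load-vs-capacity simulations (hypothetical data).
rng = np.random.default_rng(1)
N = 2_000
load = rng.normal(100.0, 15.0, N)        # hypothetical system load samples
capacity = rng.normal(150.0, 10.0, N)    # hypothetical system capacity samples
failures = int(np.sum(load >= capacity))
p_hat = failures / N

# Exact (Clopper-Pearson) two-sided 95% interval from beta quantiles.
alpha = 0.05
lo = stats.beta.ppf(alpha / 2, failures, N - failures + 1) if failures > 0 else 0.0
hi = stats.beta.ppf(1 - alpha / 2, failures + 1, N - failures) if failures < N else 1.0

print(f"P_f ~= {p_hat:.4f}, 95% CI [{lo:.4f}, {hi:.4f}] from N = {N} simulations")
```

Rerunning with a larger N narrows the interval, which is the quantitative gauge of statistical accuracy the abstract refers to.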
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Review of Literature on Probability of Detection for Liquid Penetrant Nondestructive Testing
2011-11-01
increased maintenance costs, or catastrophic failure of safety-critical structure. Knowledge of the reliability achieved by NDT methods, including... representative components to gather data for statistical analysis, which can be prohibitively expensive. To account for sampling variability inherent in any... Sioux City and Pensacola. (Those recommendations were discussed in Section 3.4.) Drury et al. report on a factorial experiment aimed at identifying the
Multichannel analysis of the surface waves of earth materials in some parts of Lagos State, Nigeria
NASA Astrophysics Data System (ADS)
Adegbola, R. B.; Oyedele, K. F.; Adeoti, L.; Adeloye, A. B.
2016-09-01
We present a method that utilizes multichannel analysis of surface waves (MASW) to measure shear wave velocities, with a view to establishing the probable causes of road failure, subsidence and weakening of structures in some local government areas in Lagos, Nigeria. MASW data were acquired using a 24-channel seismograph. The acquired data were processed and transformed into a two-dimensional (2-D) structure reflective of the depth and surface wave velocity distribution within a depth of 0-15 m beneath the surface using SURFSEIS software. The shear wave velocity data were compared with other geophysical/borehole data that were acquired along the same profile. The comparison and correlation illustrate the accuracy and consistency of MASW-derived shear wave velocity profiles. Rigidity modulus and N-value were also generated. The study showed that the low-velocity and very-low-velocity data are reflective of organic clay/peat materials and are thus likely responsible for the failure, subsidence and weakening of structures within the study areas.
Sundaram, Aparna; Vaughan, Barbara; Kost, Kathryn; Bankole, Akinrinola; Finer, Lawrence; Singh, Susheela; Trussell, James
2017-03-01
Contraceptive failure rates measure a woman's probability of becoming pregnant while using a contraceptive. Information about these rates enables couples to make informed contraceptive choices. Failure rates were last estimated for 2002, and social and economic changes that have occurred since then necessitate a reestimation. To estimate failure rates for the most commonly used reversible methods in the United States, data from the 2006-2010 National Survey of Family Growth were used; some 15,728 contraceptive use intervals, contributed by 6,683 women, were analyzed. Data from the Guttmacher Institute's 2008 Abortion Patient Survey were used to adjust for abortion underreporting. Kaplan-Meier methods were used to estimate the associated single-decrement probability of failure by duration of use. Failure rates were compared with those from 1995 and 2002. Long-acting reversible contraceptives (the IUD and the implant) had the lowest failure rates of all methods (1%), while condoms and withdrawal carried the highest probabilities of failure (13% and 20%, respectively). However, the failure rate for the condom had declined significantly since 1995 (from 18%), as had the failure rate for all hormonal methods combined (from 8% to 6%). The failure rate for all reversible methods combined declined from 12% in 2002 to 10% in 2006-2010. These broad-based declines in failure rates reverse a long-term pattern of minimal change. Future research should explore what lies behind these trends, as well as possibilities for further improvements. © 2017 The Authors. Perspectives on Sexual and Reproductive Health published by Wiley Periodicals, Inc., on behalf of the Guttmacher Institute.
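A minimal single-decrement Kaplan-Meier sketch of the kind of calculation described above, using a handful of hypothetical use intervals; the survey weighting and abortion-underreporting adjustments of the actual study are not reproduced.

```python
# Single-decrement Kaplan-Meier sketch: cumulative probability of contraceptive
# failure by duration of use, with intervals that end for other reasons treated
# as censored. The use intervals below are hypothetical.
# (months_observed, failed) -- failed = 1 if the interval ended in a pregnancy.
intervals = [(3, 0), (12, 1), (6, 0), (12, 0), (9, 1), (12, 0), (2, 1), (12, 0)]

def km_failure_probability(intervals, horizon):
    event_times = sorted({t for t, failed in intervals if failed and t <= horizon})
    surv = 1.0
    for t in event_times:
        at_risk = sum(1 for u, _ in intervals if u >= t)
        events = sum(1 for u, failed in intervals if u == t and failed)
        surv *= (1.0 - events / at_risk)
    return 1.0 - surv   # probability of failure within `horizon` months of use

print(f"12-month failure probability: {km_failure_probability(intervals, 12):.2f}")
```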
Mechanical failure probability of glasses in Earth orbit
NASA Technical Reports Server (NTRS)
Kinser, Donald L.; Wiedlocher, David E.
1992-01-01
Results of five years of Earth-orbital exposure indicate that radiation effects on the mechanical properties of the glasses examined are less than the probable error of measurement. During the five-year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations that were not subjected to effective mechanical testing; hence, limited information on their influence on mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under Earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system S_i, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of SoS given the failure of an individual system. We formulate the problem of ensuring the survival of SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
A multivariate copula-based framework for dealing with hazard scenarios and failure probabilities
NASA Astrophysics Data System (ADS)
Salvadori, G.; Durante, F.; De Michele, C.; Bernardi, M.; Petrella, L.
2016-05-01
This paper is of methodological nature, and deals with the foundations of Risk Assessment. Several international guidelines have recently recommended to select appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and due comparisons are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on the Copula Theory, which turns out to be a fundamental theoretical apparatus for doing multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d≥2) are derived, and worthy analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
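As a toy bivariate illustration of copula-based Hazard Scenario probabilities, the sketch below uses a Gumbel (Archimedean, extreme-value) copula to evaluate the "OR" and "AND" scenario probabilities at given marginal levels; the copula family, dependence parameter, and thresholds are assumptions for illustration, not values from the paper.

```python
import math

# Minimal bivariate sketch of Hazard Scenario probabilities via a copula.
# u and v are the marginal non-exceedance probabilities of the two variables.
def gumbel_copula(u, v, theta):
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def p_or(u, v, theta):
    """P(X > x or Y > y): at least one variable exceeds its threshold."""
    return 1.0 - gumbel_copula(u, v, theta)

def p_and(u, v, theta):
    """P(X > x and Y > y): both variables exceed their thresholds."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

u = v = 0.99          # e.g. thresholds at the marginal 100-year level
theta = 2.0           # Gumbel dependence parameter (theta = 1 gives independence)
print(f"OR-scenario probability:  {p_or(u, v, theta):.4f}")
print(f"AND-scenario probability: {p_and(u, v, theta):.4f}")
```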
A new algorithm for finding survival coefficients employed in reliability equations
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Flehinger, B. J.
1973-01-01
Product reliabilities are predicted from past failure rates and a reasonable estimate of future failure rates. The algorithm is used to calculate the probability that the product will function correctly. It sums, over all possible ways in which the product can survive, the probability of each survival pattern multiplied by the number of permutations for that pattern.
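For the common special case of identical units, the survival-pattern sum described above reduces to a k-of-n calculation: each pattern with i survivors has probability p^i (1-p)^(n-i), and the binomial coefficient counts its permutations. The sketch below is a minimal illustration with hypothetical numbers, not the paper's algorithm for unequal units.

```python
from math import comb

# Minimal sketch of summing survival patterns: the product works if at least k of
# its n identical units survive; comb(n, i) counts the permutations of each pattern.
def reliability_k_of_n(n, k, p):
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 6 units, at least 4 must survive, unit survival probability 0.95.
print(f"{reliability_k_of_n(6, 4, 0.95):.6f}")
```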
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Probabilistic design of fibre concrete structures
NASA Astrophysics Data System (ADS)
Pukl, R.; Novák, D.; Sajdlová, T.; Lehký, D.; Červenka, J.; Červenka, V.
2017-09-01
Advanced computer simulation has recently become a well-established methodology for evaluating the resistance of concrete engineering structures. Nonlinear finite element analysis enables realistic prediction of structural damage, peak load, failure, post-peak response, development of cracks in concrete, yielding of reinforcement, concrete crushing, or shear failure. The nonlinear material models can cover various types of concrete and reinforced concrete: ordinary concrete, plain or reinforced, with or without prestressing, fibre concrete, (ultra) high performance concrete, lightweight concrete, etc. Advanced material models taking into account fibre concrete properties such as the shape of the tensile softening branch, high toughness, and ductility are described in the paper. Since the variability of the fibre concrete material properties is rather high, probabilistic analysis seems to be the most appropriate format for structural design and for evaluation of structural performance, reliability, and safety. The presented combination of nonlinear analysis with advanced probabilistic methods allows evaluation of structural safety characterized by the failure probability or by the reliability index. The authors offer a methodology and computer tools for realistic safety assessment of concrete structures; the approach is based on randomization of the nonlinear finite element analysis of the structural model. Uncertainty in the material properties, or their randomness obtained from material tests, is accounted for in the random distributions. Furthermore, degradation of the reinforced concrete materials, such as carbonation of concrete, corrosion of reinforcement, etc., can be accounted for in order to analyze life-cycle structural performance and to enable prediction of structural reliability and safety over time. The results can serve as a rational basis for the design of fibre concrete engineering structures based on advanced nonlinear computer analysis. The presented methodology is illustrated by results from two probabilistic studies of different types of concrete structures related to practical applications and made from various materials (with the parameters obtained from real material tests).
Study on safety level of RC beam bridges under earthquake
NASA Astrophysics Data System (ADS)
Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia
2017-08-01
Based on reliability theory, this study considers uncertainties in material strengths and in modeling, which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, structural functions and the corresponding reliability were given; then the safety level against earthquakes of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters was analyzed. Using the response surface method to calculate the failure probabilities of the bridge piers under a high-level earthquake, their seismic reliabilities for different damage states within the design reference period were calculated applying a two-stage design, which describes the seismic safety level of the built bridges to some extent.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Fuzzy-information-based robustness of interconnected networks against attacks and failures
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai
2016-09-01
Cascading failure is fatal in applications, and its investigation is essential; it has therefore become a focal topic in the field of complex networks over the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its use of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attacking ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the network model under assortative and random coupling increases with the coupling probability. However, for disassortative coupling, there exists a critical phenomenon in the coupling probability. In addition, a critical value of the attack-information accuracy at which it affects the network robustness is observed. Finally, as a practical example, the interconnected AS-level Internet of South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides some guidelines for preventing cascading failures in the design of architecture and optimization of real-world interconnected networks.
ERIC Educational Resources Information Center
Dougherty, Michael R.; Sprenger, Amber
2006-01-01
This article introduces 2 new sources of bias in probability judgment, discrimination failure and inhibition failure, which are conceptualized as arising from an interaction between error-prone memory processes and a support-theory-like comparison process. Both sources of bias stem from the influence of irrelevant information on participants'…
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, T.L.; Simonen, F.A.
1992-05-01
Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, Masatoshi; Komura, Toshiyuki; Hirotani, Tsutomu
1995-12-01
Annual failure probabilities of buildings and equipment were roughly evaluated for two fusion-reactor-like buildings, with and without seismic base isolation, in order to examine the effectiveness of the base isolation system with regard to siting issues. The probabilities were calculated considering the nonlinearity and rupture of the isolators. While the probabilities of building failure for the two buildings on the same site were almost equal, the functional failure probabilities of the equipment showed that the base-isolated building had higher reliability than the non-isolated building. Even if the base-isolated building alone were located in an area of higher seismic hazard, it could still compete favorably with the ordinary building in terms of equipment reliability.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
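Since the analysis favors a Poisson process with exponentially distributed return times, a hedged sketch of how a mean return time estimated from dated deposits translates into an occurrence probability over an exposure window is shown below. The deposit ages are placeholders, and the paper's full likelihood treatment of age-dating uncertainty and the open intervals before the first and after the last event is not reproduced.

```python
import numpy as np

# Hedged sketch: exponential (Poisson-process) return-time estimate from dated
# deposits. The ages below are placeholders; the paper's likelihood additionally
# handles age-dating uncertainty and the open intervals bounding the record.
ages_ka = np.array([5.0, 12.0, 21.0, 33.0, 41.0])   # hypothetical deposit ages (ka)
inter_event = np.diff(np.sort(ages_ka))              # closed inter-event times
mean_return = inter_event.mean()                     # simple MLE-style estimate

T = 1.0  # exposure window of interest, in the same units (1 kyr)
p_at_least_one = 1.0 - np.exp(-T / mean_return)
print(f"mean return time ~ {mean_return:.1f} kyr, "
      f"P(>=1 failure in {T:.0f} kyr) ~ {p_at_least_one:.2%}")
```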
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
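The method above is benchmarked against Monte Carlo simulation. As a point of reference, a crude-MCS sketch of the target quantity P_f = P[g(X) ≤ 0] is given for a made-up performance function; the paper's six examples and the RQ-SPM/maximum-entropy machinery itself are not reproduced.

```python
import numpy as np

# Crude Monte Carlo benchmark of P_f = P[g(X) <= 0], the quantity the proposed
# RQ-SPM / maximum-entropy method targets. The performance function below is a
# made-up placeholder, not one of the paper's examples.
rng = np.random.default_rng(0)

def g(x1, x2):
    return 3.0 - x1 - 0.5 * x2   # hypothetical limit state

n = 1_000_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
p_f = np.mean(g(x1, x2) <= 0.0)
print(f"MCS estimate of P_f: {p_f:.4e}")
```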
Processor tradeoffs in distributed real-time systems
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.
1987-01-01
The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.
Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...
2015-12-02
The success of modular high temperature gas-cooled reactors is highly dependent on the performance of the tristructural isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated-particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low-enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. These critical manufacturing limits identify the ranges beyond which an increase in fuel particle failure probability is expected to occur.
Lin, Chun-Li; Chang, Yen-Hsiang; Pa, Che-An
2009-10-01
This study evaluated the risk of failure for an endodontically treated premolar with a mesio-occluso-distal-palatal (MODP) preparation and 3 different computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic restoration configurations. Three 3-dimensional finite element (FE) models designed with CAD/CAM ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with the FE analysis to calculate the long-term failure probability under different load conditions. The results indicated that the stress values in the enamel, dentin, and luting cement for the endocrown restoration were the lowest relative to the other 2 restorations. Weibull analysis revealed that the individual failure probabilities in the endocrown enamel, dentin, and luting cement were markedly lower than those for the onlay and conventional crown restorations. The overall failure probabilities were 27.5%, 1%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, under normal occlusal conditions. This numeric investigation suggests that endocrown and conventional crown restorations for endodontically treated premolars with MODP preparation present similar longevity.
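A hedged sketch of the Weibull step, turning a computed stress into a failure probability with a two-parameter Weibull strength model, is given below; the modulus and characteristic strength are illustrative placeholders rather than the ceramic parameters used in the study.

```python
import numpy as np

# Hedged sketch of the Weibull step: converting a computed stress into a
# failure probability with a two-parameter Weibull strength model. The modulus
# m and characteristic strength sigma_0 are illustrative placeholders, not the
# ceramic parameters used in the paper.
def weibull_pof(sigma_mpa: float, m: float = 10.0, sigma_0: float = 400.0) -> float:
    return 1.0 - np.exp(-(sigma_mpa / sigma_0) ** m)

for sigma in (150.0, 250.0, 350.0):
    print(f"stress {sigma:5.0f} MPa -> P_f = {weibull_pof(sigma):.3%}")
```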
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated, so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure, as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
Integrated optimization of nonlinear R/C frames with reliability constraints
NASA Technical Reports Server (NTRS)
Soeiro, Alfredo; Hoit, Marc
1989-01-01
A structural optimization algorithm that includes global displacements as decision variables was developed. The algorithm was applied to planar reinforced concrete frames with nonlinear material behavior subjected to static loading. The flexural performance of the elements was evaluated as a function of the actual stress-strain diagrams of the materials. Formation of rotational hinges with strain hardening was allowed, and the equilibrium constraints were updated accordingly. The adequacy of the frames was guaranteed by imposing as constraints the required reliability indices for the members, maximum global displacements for the structure, and a maximum system probability of failure.
Stochastic damage evolution in textile laminates
NASA Technical Reports Server (NTRS)
Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.
1993-01-01
A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminae consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximum strain failure criterion. Three modes of failure, i.e., fiber breakage, transverse matrix failure, and matrix or interface shear cracking, are taken into account. The computed failure probabilities are used to reduce cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter in fiber orientation on cell properties is discussed. The influence of the weave on damage accumulation is illustrated with an example of a Kevlar/epoxy laminate.
Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification
Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang
2016-01-01
Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Probabilistic safety assessment (PSA) is therefore necessary for biomass gasification systems. Accordingly, a Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and the BN was used to identify the critical nodes of the accidents by introducing three corresponding importance measures. Meanwhile, the occurrence probabilities of failures are needed in PSA. In view of the insufficient failure data for biomass gasification, occurrence probabilities of failure that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved approach that uses expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. With these safety measures, the theoretical one-year occurrence probabilities of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values. PMID:27463975
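A hedged sketch of the expert-weighted aggregation of triangular fuzzy numbers with centroid defuzzification follows; the expert weights and opinions are hypothetical, and the study's further mapping from the defuzzified possibility score to a failure probability is not reproduced.

```python
import numpy as np

# Hedged sketch: expert-weighted aggregation of triangular fuzzy numbers
# (a, b, c) followed by centroid defuzzification. Expert weights and opinions
# are hypothetical; the paper's subsequent conversion of the defuzzified
# possibility score into a failure probability is not reproduced here.
experts = np.array([
    [0.02, 0.05, 0.10],   # expert 1: (a, b, c)
    [0.01, 0.04, 0.08],   # expert 2
    [0.03, 0.06, 0.12],   # expert 3
])
weights = np.array([0.5, 0.3, 0.2])          # normalized expert weights

aggregated = weights @ experts               # weighted triangular fuzzy number
centroid = aggregated.mean()                 # (a + b + c) / 3
print("aggregated fuzzy number:", aggregated, "centroid:", round(centroid, 4))
```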
Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun
2016-09-20
The accelerated aging tests under electric stress for one type of LED lamp are conducted, and the differences between online and offline tests of the degradation of luminous flux are studied in this paper. The transformation of the two test modes is achieved with an adjustable AC voltage stabilized power source. Experimental results show that the exponential fitting of the luminous flux degradation in online tests possesses a higher fitting degree for most lamps, and the degradation rate of the luminous flux by online tests is always lower than that by offline tests. Bayes estimation and Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and then the reliability of the lamps under rated voltage of 220 V is estimated by use of the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5%, at the failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
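A hedged sketch of the inverse power law extrapolation from accelerated voltages to the rated 220 V follows; the voltages, lifetimes and fitted exponent are placeholders, not the measured LED data.

```python
import numpy as np

# Hedged sketch of the inverse power law step: L(V) = C / V^n, fitted from
# accelerated-voltage lifetimes and extrapolated to the rated 220 V. The
# voltages, lifetimes and fitted exponent below are placeholders, not the
# paper's measurements.
V = np.array([250.0, 270.0, 290.0])          # accelerated voltages (V)
L = np.array([9000.0, 6500.0, 4800.0])       # lifetimes at a fixed failure prob. (h)

slope, logC = np.polyfit(np.log(V), np.log(L), 1)
n = -slope                                   # inverse power law exponent
life_220 = np.exp(logC) * 220.0 ** (-n)
print(f"fitted exponent n ~ {n:.1f}, extrapolated life at 220 V ~ {life_220:.0f} h")
```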
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
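A schematic sketch of the two-stage idea is given below with a cheap one-dimensional placeholder "model" and a polynomial surrogate; the actual pipeline (Karhunen-Loève expansion, sliced inverse regression and polynomial chaos on a transport model) is far richer, and only the boundary-correction logic is illustrated.

```python
import numpy as np

# Schematic sketch of the two-stage Monte Carlo idea with a 1-D placeholder
# "model" and a polynomial surrogate; only the correction of samples near the
# failure boundary is illustrated.
rng = np.random.default_rng(1)

def expensive_model(x):                      # stands in for the full simulator
    return np.sin(3.0 * x) + 0.1 * x ** 3

threshold = 1.2                              # "failure" if output exceeds this

# Stage 0: small design set to train a surrogate.
x_train = rng.normal(0.0, 1.0, 50)
surrogate = np.poly1d(np.polyfit(x_train, expensive_model(x_train), 5))

# Stage 1: surrogate-based Monte Carlo.
x_mc = rng.normal(0.0, 1.0, 200_000)
y_sur = surrogate(x_mc)

# Stage 2: re-evaluate only samples near the failure boundary with the original model.
band = np.abs(y_sur - threshold) < 0.1
y_corrected = y_sur.copy()
y_corrected[band] = expensive_model(x_mc[band])

print(f"surrogate-only P_f : {np.mean(y_sur > threshold):.4e}")
print(f"two-stage P_f      : {np.mean(y_corrected > threshold):.4e}")
```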
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape, parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis for the failure probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each parameter set. The method takes into account the uncertainties involved in the rainstorm parameterization. A comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial loss of modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
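A hedged sketch of the rainstorm conceptualization, a synthetic Gaussian hyetograph parameterized by storm depth, duration and peak intensity, is given below; the numbers are placeholders, and the exact convention linking the duration to the Gaussian width is an assumption rather than the paper's definition.

```python
import numpy as np

# Hedged sketch: a synthetic Gaussian hyetograph defined by storm depth,
# duration and peak intensity. How the duration relates to the Gaussian width
# is assumed here; the example numbers are placeholders.
def gaussian_hyetograph(depth_mm, peak_mm_per_h, duration_h, n=200):
    t = np.linspace(0.0, duration_h, n)
    sigma = depth_mm / (peak_mm_per_h * np.sqrt(2.0 * np.pi))  # ties the 3 parameters
    intensity = peak_mm_per_h * np.exp(-0.5 * ((t - duration_h / 2.0) / sigma) ** 2)
    return t, intensity

t, i = gaussian_hyetograph(depth_mm=25.0, peak_mm_per_h=40.0, duration_h=3.0)
dt = t[1] - t[0]
print(f"recovered storm depth ~ {(i * dt).sum():.1f} mm (target 25 mm)")
```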
Kinoshita, Koji; Kawai, Makoto; Minai, Kosuke; Ogawa, Kazuo; Inoue, Yasunori; Yoshimura, Michihiro
2016-07-15
Plasma B-type natriuretic peptide (BNP) levels may vary widely among patients with similar stages of heart failure, in whom obesity might be the only factor reducing plasma BNP levels. We investigated the effect of obesity and body mass index (BMI) on plasma BNP levels using serial measurements before and after treatment (pre- and post-BNP and pre- and post-BMI) in patients with acute heart failure. Multiple regression analysis and covariance structure analysis were performed to study the interactions between clinical factors in 372 patients. The pre-BMI was shown as a combination index of obesity and fluid accumulation, whereas the post-BMI was a conventional index of obesity. There was a significant inverse correlation between BMI and BNP in each condition before and after treatment for heart failure. The direct significant associations of the log pre-BNP with the log post-BNP (β: 0.387), the post-BMI (β: -0.043), and the pre-BMI (β: 0.030) were analyzed by using structural equation modeling. The post-BMI was inversely correlated, but importantly, the pre-BMI was positively correlated, with the log pre-BNP, because the pre-BMI probably entailed an element of fluid accumulation. There were few patients with extremely high levels of pre-BNP among those with high post-BMI, due to suppressed secretion of BNP. The low plasma BNP levels in true obesity patients with acute heart failure are of concern, because plasma BNP cannot increase in such patients.
Ding, Aidong Adam; Hsieh, Jin-Jian; Wang, Weijing
2015-01-01
Bivariate survival analysis has wide applications. In the presence of covariates, most literature focuses on studying their effects on the marginal distributions. However covariates can also affect the association between the two variables. In this article we consider the latter issue by proposing a nonstandard local linear estimator for the concordance probability as a function of covariates. Under the Clayton copula, the conditional concordance probability has a simple one-to-one correspondence with the copula parameter for different data structures including those subject to independent or dependent censoring and dependent truncation. The proposed method can be used to study how covariates affect the Clayton association parameter without specifying marginal regression models. Asymptotic properties of the proposed estimators are derived and their finite-sample performances are examined via simulations. Finally, for illustration, we apply the proposed method to analyze a bone marrow transplant data set.
van der Burg-de Graauw, N; Cobbaert, C M; Middelhoff, C J F M; Bantje, T A; van Guldener, C
2009-05-01
B-type natriuretic peptide (BNP) and its inactive counterpart NT-proBNP can help to identify or rule out heart failure in patients presenting with acute dyspnoea. It is not well known whether measurement of these peptides can be omitted in certain patient groups. We conducted a prospective observational study of 221 patients presenting with acute dyspnoea at the emergency department. The attending physicians estimated the probability of heart failure by clinical judgement. NT-proBNP was measured, but not reported. An independent panel made a final diagnosis of all available data including NT-proBNP level and judged whether and how NT-proBNP would have altered patient management. NT-proBNP levels were highest in patients with heart failure, alone or in combination with pulmonary failure. Additive value of NT-proBNP was present in 40 of 221 (18%) of the patients, and it mostly indicated that a more intensive treatment for heart failure would have been needed. Clinical judgement was an independent predictor of additive value of NT-proBNP with a maximum at a clinical probability of heart failure of 36%. NT-proBNP measurement has additive value in a substantial number of patients presenting with acute dyspnoea, but can possibly be omitted in patients with a clinical probability of heart failure of >70%.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deterioration mechanisms that degrade the integrity of energy pipelines, owing to the transport of corrosive fluids or gases and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. The dependence between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The unknown parameters of the growth models are estimated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to consider defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (referred to as a sub-system), where each sub-system is considered as a series system of the detected and newly generated defects within that sub-system. A sensitivity analysis is also performed to determine the parameters of the growth models to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when, after an inspection, the failure probability for any of the described failure modes exceeds a pre-defined probability threshold. Moreover, this study also investigates the impact of the repair threshold values and the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but lower failure costs, and that the repair cost is less significant compared with the inspection and failure costs.
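A hedged sketch of two central ingredients, power-law growth of defect depth and a Monte Carlo check of a depth-based small-leak criterion, is given below; the coefficients, wall thickness and uncertainty ranges are placeholders, not the values inferred from the ILI data in the thesis.

```python
import numpy as np

# Hedged sketch: power-law growth of defect depth, d(t) = k * (t - t0)^alpha,
# and a Monte Carlo estimate of the small-leak probability P[d(t) > 0.8 * wt].
# Coefficients, wall thickness and uncertainty ranges are placeholders.
rng = np.random.default_rng(2)
n = 100_000

wt = 9.5                                      # wall thickness (mm)
t0 = 0.0                                      # defect initiation time (yr)
k = rng.lognormal(mean=np.log(0.35), sigma=0.3, size=n)   # mm / yr^alpha
alpha = rng.normal(0.9, 0.1, size=n)

for t in (10.0, 20.0, 30.0):
    depth = k * (t - t0) ** alpha
    p_small_leak = np.mean(depth > 0.8 * wt)
    print(f"t = {t:4.0f} yr : P(small leak) ~ {p_small_leak:.3e}")
```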
Alani, Amir M.; Faramarzi, Asaad
2015-01-01
In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effects of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters on the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes. PMID:26068092
NASA Astrophysics Data System (ADS)
Zhang, H.; Guan, Z. W.; Wang, Q. Y.; Liu, Y. J.; Li, J. K.
2018-05-01
The effects of microstructure and stress ratio on the high cycle fatigue of the nickel superalloy Nimonic 80A were investigated. Stress ratios of 0.1, 0.5 and 0.8 were chosen to perform fatigue tests at a frequency of 110 Hz. Cleavage failure was observed, and three competing crack initiation modes were identified by scanning electron microscopy, classified as surface without facets, surface with facets, and subsurface with facets. As the stress ratio increased from 0.1 to 0.8, the occurrence probability of surface and subsurface initiation with facets increased, reaching its maximum value at R = 0.5, while the probability of surface initiation without facets decreased. The effect of microstructure on the fatigue fracture behavior at different stress ratios was also observed and discussed. Based on the Goodman diagram, it was concluded that the fatigue strength at 50% probability of failure for R = 0.1, 0.5 and 0.8 lies below the modified Goodman line.
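A hedged sketch of a modified Goodman check, converting a stress ratio and maximum stress into amplitude and mean stress and comparing them against the Goodman line, follows; the endurance limit and ultimate strength are placeholders, not measured Nimonic 80A properties.

```python
# Hedged sketch: checking a (stress ratio, max stress) pair against the modified
# Goodman line  sigma_a / sigma_e + sigma_m / sigma_u = 1. The endurance limit
# and ultimate strength below are placeholders, not measured Nimonic 80A values.
def goodman_margin(sigma_max: float, R: float,
                   sigma_e: float = 350.0, sigma_u: float = 1250.0) -> float:
    """Return 1 - utilization; a negative value lies above the Goodman line."""
    sigma_a = 0.5 * sigma_max * (1.0 - R)     # stress amplitude
    sigma_m = 0.5 * sigma_max * (1.0 + R)     # mean stress
    return 1.0 - (sigma_a / sigma_e + sigma_m / sigma_u)

for R in (0.1, 0.5, 0.8):
    print(f"R = {R}: margin at sigma_max = 700 MPa -> {goodman_margin(700.0, R):+.2f}")
```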
Deviation from Power Law Behavior in Landslide Phenomenon
NASA Astrophysics Data System (ADS)
Li, L.; Lan, H.; Wu, Y.
2013-12-01
A power-law distribution of magnitude is widely observed for many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that their size distribution is characterized by a power-law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power-law behavior for small landslides, remains a mystery. In this contribution, we grouped the forces applied on landslide bodies into two categories: 1) forces proportional to the volume of the failure mass (gravity and friction), and 2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, for a given mechanical configuration, the ratio of failure volume to failure surface area must exceed a corresponding threshold to produce a failure. Assuming all landslides share a uniform shape, so that the volume-to-surface-area ratio increases regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end can be defined. However, in realistic landslide phenomena, where heterogeneities of landslide shape and mechanical configuration are present, a simple cutoff of the landslide volume distribution does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume-to-surface-area ratio with respect to landslide volume, from which the probability that the ratio exceeds the threshold can be estimated as a function of landslide volume. An experiment based on empirical data showed that this probability can cause the power-law distribution of landslide volume to roll off at the small-size end. We therefore propose that the constraints on the failure volume to failure surface area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power-law behavior in the landslide phenomenon. The accompanying figure shows that a rollover at the small-size end of the landslide size distribution is produced when the probability of V/S (the ratio of failure volume to failure surface area) exceeding the mechanical threshold is applied to the power-law distribution of landslide volume.
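A schematic sketch that reproduces the qualitative rollover mechanism, volumes drawn from a power law and retained only when a shape-perturbed V/S ratio exceeds a threshold, is shown below; all distributions and thresholds are illustrative rather than calibrated to field data.

```python
import numpy as np

# Schematic sketch of the proposed mechanism: draw landslide volumes from a
# power law (Pareto) and keep only those whose volume-to-surface-area ratio,
# perturbed by a random shape factor, exceeds a mechanical threshold. All
# distributions and thresholds are illustrative, not calibrated to field data.
rng = np.random.default_rng(3)
n = 500_000

volume = (1.0 - rng.random(n)) ** (-1.0 / 1.4)               # Pareto tail, exponent ~1.4
ratio = volume ** (1.0 / 3.0) * rng.lognormal(0.0, 0.4, n)   # ~V/S for a uniform shape
failed = volume[ratio > 1.5]                                 # mechanical threshold on V/S

counts, edges = np.histogram(np.log10(failed), bins=40)
print("log10(V) bin counts of 'failures' (rollover visible at the small-size end):")
print(counts[:10], "...")
```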
NASA Astrophysics Data System (ADS)
Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.
Brittle materials today are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing, while using the minimum amount of material, requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This article gives an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand over a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost scales inversely with the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components can repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all of the identical redundant systems. Typical levels of common cause failure will defeat redundancy greater than two. Diverse redundant systems are therefore required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
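A hedged arithmetic sketch of the redundancy argument, with a simple beta-factor term added to show how common-cause failures defeat identical triple redundancy, follows; the beta value is an assumed illustrative figure, not one given in the abstract.

```python
# Hedged arithmetic sketch of the redundancy argument, including a simple
# beta-factor common-cause term. The beta value is an assumption for
# illustration; the abstract only states that common-cause failures defeat
# identical redundancy beyond two units.
p_unit = 0.1            # per-mission failure probability of one unit
beta = 0.05             # assumed fraction of unit failures that are common cause

p_independent = ((1.0 - beta) * p_unit) ** 3      # all three fail independently
p_common = beta * p_unit                          # one shared cause fails all three
p_system = p_common + p_independent

print(f"ideal triple redundancy        : {p_unit ** 3:.1e}")
print(f"with common cause (beta={beta}) : {p_system:.2e}")
```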
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attack and reinforce individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are special cases. At Nash Equilibrium, we derive expressions for the individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
Subcritical crack growth in SiNx thin-film barriers studied by electro-mechanical two-point bending
NASA Astrophysics Data System (ADS)
Guan, Qingling; Laven, Jozua; Bouten, Piet C. P.; de With, Gijsbertus
2013-06-01
Mechanical failure resulting from subcritical crack growth in the SiNx inorganic barrier layer applied on a flexible multilayer structure was studied by an electro-mechanical two-point bending method. A 10 nm conducting tin-doped indium oxide layer was sputtered as an electrical probe to monitor the subcritical crack growth in the 150 nm dielectric SiNx layer carried by a polyethylene naphthalate substrate. In the electro-mechanical two-point bending test, dynamic and static loads were applied to investigate crack propagation in the barrier layer. As a consequence of using the two loading modes, the characteristic failure strain and failure time could both be determined. The failure probability distribution of strain and lifetime under each loading condition was described by Weibull statistics. In this study, the results from the tests in dynamic and static loading modes were linked by a power-law description to determine the critical failure over a range of conditions. The fatigue parameter n from the power law decreases markedly, from 70 to 31, upon correcting for internal strain. The testing method and analysis tools described in the paper can be used to understand the limits of thin-film barriers in terms of their mechanical properties.
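A hedged sketch of the Weibull step, fitting a two-parameter Weibull distribution to failure strains via median ranks and a linearized regression, is shown below; the strain values are placeholders, not the SiNx measurements.

```python
import numpy as np

# Hedged sketch: two-parameter Weibull fit of failure strains using median
# ranks and a linearized regression, a common way such two-point-bending data
# are summarized. The strain values below are placeholders, not the SiNx data.
strains = np.sort(np.array([0.82, 0.90, 0.95, 1.01, 1.05, 1.10, 1.18, 1.25]))  # %
n = strains.size
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # Bernard's median ranks

x = np.log(strains)
y = np.log(-np.log(1.0 - ranks))
m, c = np.polyfit(x, y, 1)                             # slope = Weibull modulus
eta = np.exp(-c / m)                                   # characteristic strain

print(f"Weibull modulus m ~ {m:.1f}, characteristic strain ~ {eta:.2f} %")
```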
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
A simplified fragility analysis of fan type cable stayed bridges
NASA Astrophysics Data System (ADS)
Khan, R. A.; Datta, T. K.; Ahmad, S.
2005-06-01
A simplified fragility analysis of fan-type cable-stayed bridges using the Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. The seismic input to the bridge supports is taken as a risk-consistent response spectrum, which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system, which provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, duly taking into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising from variations in ground motion, material properties, modeling, method of analysis, ductility factor and damage concentration effect. The probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three-span, double-plane, symmetrical fan-type cable-stayed bridge with a total span of 689 m is used as an illustrative example. The fragility curves for bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has a considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations gives a smaller probability of failure than ground motion with a very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
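A hedged sketch of the FOSM step that turns capacity and demand statistics into a deck failure probability at one ground-motion level follows; the moments are placeholders, not the cable-stayed bridge values, and a fragility curve is just this calculation repeated over increasing demand levels.

```python
from scipy.stats import norm

# Hedged sketch of the FOSM step: capacity/demand moments -> reliability index
# -> failure probability at one ground-motion level. The moments below are
# placeholders, not the cable-stayed bridge values.
def fosm_pof(mu_R, sd_R, mu_S, sd_S):
    beta = (mu_R - mu_S) / (sd_R ** 2 + sd_S ** 2) ** 0.5
    return norm.cdf(-beta)

for mu_S in (40.0, 60.0, 80.0):                 # increasing seismic demand
    print(f"demand {mu_S:4.0f} -> P_f = {fosm_pof(100.0, 12.0, mu_S, 0.3 * mu_S):.3e}")
```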
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the probability of failure. While the consequences of failure are often well understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. The approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.
Specifying design conservatism: Worst case versus probabilistic analysis
NASA Technical Reports Server (NTRS)
Miles, Ralph F., Jr.
1993-01-01
Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
NASA Technical Reports Server (NTRS)
Vesely, William E.; Colon, Alfredo E.
2010-01-01
Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.
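A hedged sketch of the classical binomial (zero-failure) demonstration-test calculation that the abstract argues overstates the required number of tests follows; it uses the standard relation n ≥ ln(1-C)/ln(R) and is not the reliability-growth formulation the abstract recommends.

```python
import math

# Hedged sketch of the classical binomial (zero-failure) calculation: with zero
# failures allowed, demonstrating reliability R at confidence C requires
# n >= ln(1 - C) / ln(R) tests. This is the "too many tests" baseline the
# abstract contrasts with reliability-growth-based requirements.
def zero_failure_tests(reliability: float, confidence: float) -> int:
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for R in (0.90, 0.95, 0.99):
    print(f"R = {R:.2f}, C = 0.90 -> {zero_failure_tests(R, 0.90)} tests")
```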
Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy
2013-01-01
Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.
The assessment of low probability containment failure modes using dynamic PRA
NASA Astrophysics Data System (ADS)
Brunett, Acacia Joann
Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e. the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree, however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove the dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and the driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios. A dynamic treatment in the characterization of combustible gas ignition is also presented in this work. In most risk studies, combustion is treated simplistically in that it is assumed an ignition occurs if the gas mixture achieves a concentration favorable for ignition under the premise that an adequate ignition source is available. However, the criteria affecting ignition (such as the magnitude, location and frequency of the ignition sources) are complicated. This work demonstrates a technique for characterizing the properties of an ignition source to determine a probability of ignition. The ignition model developed in this work and implemented within a dynamic framework is utilized to analyze the implications and risk significance of late combustion events. This work also explores the feasibility of using dynamic event trees (DETs) with a deterministic sampling approach to analyze low probability phenomena. The flexibility of this approach is demonstrated through the rediscretization of containment fragility curves used in construction of the DET to show convergence to a true solution. Such a rediscretization also reduces the computational burden introduced through extremely fine fragility curve discretization by subsequent refinement of fragility curve regions of interest. Another advantage of the approach is the ability to perform sensitivity studies on the cumulative distribution functions (CDFs) used to determine branching probabilities without the need for rerunning the simulation code. 
Through review of the NUREG-1150 containment failure modes using the current state of knowledge, it is found that some failure modes, such as Alpha and rocket, can be excluded from further studies; other failure modes, such as failure to isolate, bypass, high pressure melt ejection (HPME), combustion-induced failure and overpressurization, are still concerns to varying degrees. As part of this analysis, scoping studies performed in MELCOR show that HPME and the resulting direct containment heating (DCH) do not impose a significant threat to containment integrity. Additional scoping studies regarding the effect of recovery actions on in-vessel hydrogen generation show that reflooding a partially degraded core does not significantly affect in-vessel hydrogen generation, and the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to produce an energetic deflagration is confirmed. The DET analyses performed in this work show that very late power recovery produces the potential for very energetic combustion events which are capable of failing containment with a non-negligible probability, and that containment cooling systems have a significant impact on core-concrete attack, and therefore on combustible gas generation ex-vessel. Ultimately, the overall risk of combustion-induced containment failure is low, but its conditional likelihood can have a significant effect on accident mitigation strategies. It is also shown in this work that DETs are particularly well suited to examining low probability events because of their ability to rediscretize CDFs and observe solution convergence.
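The fragility-curve rediscretization described above can be pictured as recomputing dynamic-event-tree branch probabilities from the same underlying CDF at different resolutions. The following is a minimal Python sketch of that idea, not the author's MELCOR/DET framework; the lognormal fragility parameters and pressure bins are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def branch_probabilities(fragility_cdf, pressure_bins):
    """Probability mass assigned to each DET branch from a containment fragility CDF.

    fragility_cdf : callable giving P(containment fails at or below pressure p)
    pressure_bins : increasing array of pressures bounding the branches
    """
    cdf_vals = fragility_cdf(np.asarray(pressure_bins, dtype=float))
    return np.diff(cdf_vals)  # mass falling in each interval -> branching probability

# Hypothetical lognormal fragility: median failure pressure 0.9 MPa, log-std 0.25
fragility = lambda p: norm.cdf(np.log(p / 0.9) / 0.25)

coarse = branch_probabilities(fragility, [0.4, 0.7, 1.0, 1.3, 1.6])
fine = branch_probabilities(fragility, np.linspace(0.4, 1.6, 25))
# Refining the discretization redistributes the same probability mass over more
# branches, so aggregate results should converge as the bins are refined.
print(coarse.sum(), fine.sum())
```

Because the CDF is queried only when branch probabilities are assigned, it can be swapped or refined without rerunning the underlying accident simulation, which is the sensitivity-study advantage noted in the abstract.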
Optimization Testbed Cometboards Extended into Stochastic Domain
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software tool. It was originally developed for deterministic calculations and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to a reliability of unity. Weight can be reduced to a small value for the most failure-prone design with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probability integration (FPI) module of the NESSUS software was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.
Brittle fracture in structural steels: perspectives at different size-scales.
Knott, John
2015-03-28
This paper describes characteristics of transgranular cleavage fracture in structural steel, viewed at different size-scales. Initially, consideration is given to structures and the service duty to which they are exposed at the macroscale, highlighting failure by plastic collapse and failure by brittle fracture. This is followed by sections describing the use of fracture mechanics and materials testing in carrying out assessments of structural integrity. Attention then focuses on the microscale, explaining how values of the local fracture stress in notched bars or of fracture toughness in pre-cracked test-pieces are related to features of the microstructure: carbide thicknesses in wrought material; the sizes of oxide/silicate inclusions in weld metals. Effects of a microstructure that is 'heterogeneous' at the mesoscale are treated briefly, with respect to the extraction of test-pieces from thick sections and to extrapolations of data to low failure probabilities. The values of local fracture stress may be used to infer a local 'work-of-fracture' that is found experimentally to be a few times greater than that of two free surfaces. Reasons for this are discussed in the conclusion section on nano-scale events. It is suggested that, ahead of a sharp crack, it is necessary to increase the compliance by a cooperative movement of atoms (involving extra work) to allow the crack-tip bond to displace sufficiently for the energy of attraction between the atoms to reduce to zero. © 2015 The Author(s). Published by the Royal Society. All rights reserved.
1981-05-15
Crane is capable of imagining unicorns -- and we expect he is -- why does he find it relatively difficult to imagine himself avoiding a 30 minute ... probability that the plan will succeed and to evaluate the risk of various causes of failure. We have suggested that the construction of scenarios is ... expect that events will unfold as planned. However, the cumulative probability of at least one fatal failure could be overwhelmingly high even when
1986-04-07
34 "Blackhole" - * Success/failure is too clear cut * The probability of failure is greater than the probability of success The Job Itself (59) • Does not ... indeed, it is not -- or as one officer in the survey commented, "a blackhole." USAHEC is a viable career opportunity; it is career enhancing; and
VHSIC/VHSIC-Like Reliability Prediction Modeling
1989-10-01
prediction would require knowledge of event statistics as well as device robustness. ... Additionally, although this is primarily a theoretical, bottom ... Degradation in Section 5.3; P = Power; PDIP = Plastic DIP; P(f) = Probability of Failure due to EOS or ESD; P(f|c) = Probability of Failure given Contact from an ... the results of those stresses: Device Stress, Part Number, Power Dissipation, Manufacturer, Test Type, Part Description, Junction Temperature, Package Type
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
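The probability of failure-free operation mentioned above is, in the simplest constant-failure-rate setting, just an exponential survival function of the estimated failure rate. A minimal sketch of that relationship (not one of the JSC tools; the failure rate and mission length are hypothetical):

```python
import math

def reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """Probability of failure-free operation over a mission, R(t) = exp(-lambda * t),
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# Hypothetical numbers: 1 failure per 10,000 operating hours, 720-hour mission
print(reliability(1e-4, 720))       # probability of failure-free operation (~0.93)
print(1 - reliability(1e-4, 720))   # probability of at least one failure
```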
Lodi, Sara; Phillips, Andrew; Fidler, Sarah; Hawkins, David; Gilson, Richard; McLean, Ken; Fisher, Martin; Post, Frank; Johnson, Anne M.; Walker-Nthenda, Louise; Dunn, David; Porter, Kholoud
2013-01-01
Background: The development of HIV drug resistance and subsequent virological failure are often cited as potential disadvantages of early cART initiation. However, their long-term probability is not known, and neither is the role of duration of infection at the time of initiation. Methods: Patients enrolled in the UK Register of HIV seroconverters were followed up from cART initiation to the last HIV-RNA measurement. Through survival analysis we examined predictors of virological failure (2 HIV-RNA measurements ≥400 copies/ml while on cART), including CD4 count and HIV duration at initiation. We also estimated the cumulative probabilities of failure and drug resistance (from the available HIV nucleotide sequences) for early initiators (cART within 12 months of seroconversion). Results: Of 1075 patients starting cART at a median (IQR) CD4 count of 272 (190, 370) cells/mm3 and HIV duration of 3 (1, 6) years, virological failure occurred in 163 (15%). Higher CD4 count at initiation, but not HIV infection duration at cART initiation, was independently associated with lower risk of failure (p=0.033 and 0.592, respectively). Among 230 patients initiating cART early, 97 (42%) discontinued it after a median of 7 months; cumulative probabilities of resistance and failure by 8 years were 7% (95% CI 4, 11) and 19% (13, 25), respectively. Conclusion: Although the rate of discontinuation of early cART in our cohort was high, the long-term rate of virological failure was low. Our data do not support early cART initiation being associated with increased risk of failure and drug resistance. PMID:24086588
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
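As a rough illustration of the quantities involved (not the authors' kriging-plus-optimization algorithm), the belief and plausibility bounds on the failure probability under an evidence structure can be obtained by propagating each joint focal element through the performance function and checking whether the element lies entirely or only partly in the failure region. The performance function, focal intervals, and basic probability assignments below are hypothetical, and bounding by corner evaluation is valid only because the example function is monotone; the paper uses interval Monte Carlo simulation and a KKT-based optimization instead.

```python
import numpy as np
from itertools import product

def g(x):
    """Hypothetical performance function; failure when g(x) < 0."""
    return 7.0 - x[0] - x[1]

# Hypothetical evidence structure: focal intervals and BPA masses per variable
focal_x1 = [((0.0, 2.0), 0.6), ((2.0, 4.0), 0.4)]
focal_x2 = [((1.0, 3.0), 0.5), ((3.0, 5.0), 0.5)]

belief = 0.0
plausibility = 0.0
for (i1, m1), (i2, m2) in product(focal_x1, focal_x2):
    mass = m1 * m2
    # Bound g over the joint focal box by evaluating its corners
    # (adequate here because g is monotone in each variable).
    corners = [g((a, b)) for a in i1 for b in i2]
    if max(corners) < 0:      # box lies entirely in the failure region
        belief += mass
    if min(corners) < 0:      # box intersects the failure region
        plausibility += mass

print(belief, plausibility)   # lower and upper bounds on the failure probability
```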
Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.
2013-01-01
In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
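The bias adjustment discussed above rests on the idea that apparent survival confounds fish survival with the probability that a tag is still transmitting when the fish reaches a detection site. A deliberately simplified sketch of that correction, with hypothetical tag-life and travel-time data (the study itself uses a mark-recapture model rather than this simple ratio estimator):

```python
import numpy as np

def adjusted_fish_survival(apparent_survival, tag_life_days, travel_times_days):
    """Adjust an apparent survival estimate for premature tag failure.

    Simplifying assumption: apparent survival = fish survival * P(tag still alive
    at detection), with P(tag alive) estimated empirically from a tag-life study
    and the observed travel times to the detection array.
    """
    tag_life = np.asarray(tag_life_days, dtype=float)
    p_tag_alive = np.mean([np.mean(tag_life > t) for t in travel_times_days])
    return apparent_survival / p_tag_alive, p_tag_alive

# Hypothetical inputs: tag-life study failure times and fish travel times (days)
tag_life_days = [20, 25, 27, 30, 32, 35, 38, 40]
travel_times = [18, 22, 26, 29]
s_adj, p_tag = adjusted_fish_survival(0.15, tag_life_days, travel_times)
print(round(p_tag, 3), round(s_adj, 3))
```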
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
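For context (a hedged sketch, not the paper's simulation): a (255,223) Reed-Solomon code corrects up to t = 16 symbol errors, and the decoder's correction capability is exceeded whenever more than t symbols are in error. Under a simple i.i.d. symbol-error model that exceedance probability is a binomial tail; how that mass splits between detected decoding failures and undetected errors is what the paper tabulates. The symbol error probabilities below are hypothetical.

```python
from math import comb

def prob_more_than_t_errors(n: int, t: int, p_sym: float) -> float:
    """P(more than t of n code symbols are in error) for i.i.d. symbol error prob p_sym.
    For a bounded-distance RS decoder this is the chance the error pattern is
    uncorrectable (ending in either a decoding failure or an undetected error)."""
    return 1.0 - sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i) for i in range(t + 1))

# (255,223) Reed-Solomon corrects t = (255 - 223) // 2 = 16 symbol errors
for p in (0.01, 0.03, 0.05):
    print(p, prob_more_than_t_errors(255, 16, p))
```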
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
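Once a surrogate of the limit state is available, the failure probability itself is cheap to estimate by Monte Carlo on the surrogate. A minimal sketch of that last step with scikit-learn; the limit-state function, input distribution, and training design below are hypothetical, and the training points are not chosen by the paper's Bayesian experimental design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):
    """Hypothetical expensive limit-state function; failure when g(x) < 0."""
    return x[:, 0]**2 + x[:, 1] - 3.0

rng = np.random.default_rng(0)

# Small design of experiments standing in for expensive simulations
X_train = rng.uniform(-3, 3, size=(30, 2))
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate: inputs assumed standard normal
X_mc = rng.standard_normal((100_000, 2))
g_hat = gp.predict(X_mc)
print("estimated failure probability:", np.mean(g_hat < 0.0))
```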
NASA Astrophysics Data System (ADS)
Sorrentino, Valerio; Matasci, Battista; Abellan, Antonio; Jaboyedoff, Michel; Marino, Ermanno; Pignalosa, Antonio; Santo, Antonio
2016-04-01
Rockfalls and other types of landslides are the dominant processes causing the retreat of sea cliffs. Coastal areas are an important tourist attraction, and a large number of people rest beneath the cliffs on a daily basis, considerably increasing the risk associated with rockfalls. We present an approach to assess rockfall susceptibility at the cliff scale based on terrestrial laser scanner (TLS) point clouds. The test area is a coastal cliff situated in the southern part of the Cilento (Centola Municipality, Campania Region), in which a natural arch has formed. The cliff consists of a heavily fractured carbonate rock mass with a strong structural control. In June 2015 TLS data were acquired with a long-range RIEGL VZ1000® scanner. The structural analysis of the cliff was performed in the field and using the Coltop 3D software on the point cloud. As a result, 10 discontinuity sets (joints, faults and bedding planes) were identified and their characteristics, such as orientation, spacing and persistence, were measured. The kinematically unstable areas were highlighted using a script that computes an index of susceptibility to rockfalls based on the spatial distribution of failure mechanisms. The susceptibility index computation is based on the average surface that every joint set (or combination of two joint sets in the case of wedge failure) forms on the topography according to its spacing, trace length, and incidence angle. This susceptibility index also depends on the steepness of the joint set (or of the intersection line in the case of wedge failure). As a result, the most important discontinuity sets in terms of potential planar failure, wedge failure and toppling were identified, and an assessment of rockfall susceptibility at the cliff scale was achieved. Results show that the kinematically feasible failures are not equally distributed along the cliff but concentrated in certain areas. The most susceptible areas for planar failure are related to the discontinuity set K10 (71/097), whereas for toppling the highest susceptibility is reached with K1 (60/218). Concerning wedge failure, the combination of K10 and K1 yields the highest susceptibility values. The results also show clustering with higher density, which is probably related to regional structures. More detailed investigations of the rockfall susceptibility and failure mechanisms will be performed during the forthcoming months. The relationship with regional structures will also be investigated in more detail. Perspectives also include applying the methodology to the other side of the natural arch in order to provide a global susceptibility assessment of the area.
Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.
2008-01-01
High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling, thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up, and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer's part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. Accurately determining the service life of an engine component, with its associated variability, has become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties, to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties are very limited, obtaining a probabilistic distribution with its corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds.
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
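For orientation only (not the FPI-based analysis above): if strength R and stress S are treated as independent normal variables, the probability of failure of the R-S margin has a closed form, which is a convenient sanity check on more elaborate probabilistic results. The strength and stress statistics below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def failure_probability(mu_R, sigma_R, mu_S, sigma_S):
    """P(R < S) for independent normal strength R and stress S.
    The safety margin Z = R - S is normal, so P_f = Phi(-(mu_R - mu_S) / sigma_Z)."""
    sigma_Z = sqrt(sigma_R**2 + sigma_S**2)
    return NormalDist().cdf(-(mu_R - mu_S) / sigma_Z)

# Hypothetical vane numbers (MPa): strength 300 +/- 30, peak stress 210 +/- 25
print(failure_probability(300, 30, 210, 25))   # roughly 1e-2
```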
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.
Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model
NASA Astrophysics Data System (ADS)
Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.
2018-05-01
The sealing performance of aircraft electromechanical systems has a great influence on flight safety, and the reliability of their typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object for structural reliability analysis. Based on finite element numerical simulation, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is established, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model trained with the expected feasibility function (EFF) learning mechanism is used to estimate the failure probability of the seal ring and thus evaluate the reliability of the sealing structure. This article proposes a new numerical approach to the reliability analysis of sealing structures and also provides a theoretical basis for their optimal design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
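The fast-fracture part of such an analysis is usually built on two-parameter Weibull statistics for brittle strength. A minimal illustration of that building block follows, with hypothetical scale and modulus values; it ignores the size/area scaling, multiaxial stress treatment, and slow-crack-growth life prediction that CARES/LIFE performs.

```python
import math

def weibull_failure_probability(stress_mpa, scale_mpa, modulus):
    """Two-parameter Weibull probability of fast fracture at a given applied stress,
    P_f = 1 - exp(-(sigma / sigma_0)**m). Illustration only."""
    return 1.0 - math.exp(-((stress_mpa / scale_mpa) ** modulus))

# Hypothetical ceramic tube parameters: sigma_0 = 400 MPa, Weibull modulus m = 10
for s in (200, 300, 350):
    print(s, weibull_failure_probability(s, 400.0, 10.0))
```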
NASA Astrophysics Data System (ADS)
Heller, R. A.; Thangjitham, S.; Wang, X.
1992-04-01
The state of stress in a cylindrical structure consisting of multiple layers of carbon-carbon composite and subjected to thermal and pressure shock is analyzed using an elasticity approach. The reliability of the structure, based on the weakest link concept and the Weibull distribution, is also calculated. Coupled thermo-elasticity is first assumed and is shown to be unnecessary for the material considered. The effects of external and internal thermal shock as well as a superimposed pressure shock are examined. It is shown that for the geometry chosen, the structure may fail when exposed to thermal shock alone, while a superimposed pressure shock can mitigate the probability of failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
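The Bernstein expansion mentioned above exploits the fact that the Bernstein coefficients of a polynomial enclose its range on a box, which is what makes the failure and safe subsets "readily computable." A minimal univariate sketch of that enclosure property follows (the paper works with multivariate polynomials over hyper-rectangles, but the idea is the same; the example polynomial is hypothetical):

```python
from math import comb

def bernstein_coefficients(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_k a[k] * x**k (degree n = len(a)-1).
    Standard monomial-to-Bernstein conversion: b_i = sum_{k<=i} C(i,k)/C(n,k) * a_k.
    The coefficients enclose the range: min(b) <= p(x) <= max(b) for x in [0, 1]."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1)) for i in range(n + 1)]

# Hypothetical requirement polynomial g(x) = 1 - 4x + 3x^2 on the unit interval
b = bernstein_coefficients([1.0, -4.0, 3.0])
print(b, "range enclosure:", min(b), max(b))
# The true range of g on [0, 1] is [-1/3, 1]; the enclosure [min(b), max(b)] contains it.
```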
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Oman India Pipeline: An operational repair strategy based on a rational assessment of risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
German, P.
1996-12-31
This paper describes the development of a repair strategy for the operational phase of the Oman India Pipeline based upon the probability and consequences of a pipeline failure. The risk analyses and cost-benefit analyses performed provide guidance on the level of deepwater repair development effort appropriate for the Oman India Pipeline project and identify critical areas toward which more intense development effort should be directed. The risk analysis results indicate that the likelihood of a failure of the Oman India Pipeline during its 40-year life is low. Furthermore, the probability of operational failure of the pipeline in deepwater regions is extremely low, the major proportion of operational failure risk being associated with the shallow water regions.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural system performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
Game-theoretic strategies for asymmetric networked systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure consisting of a network of systems, each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
Uncertainty Quantification for Polynomial Systems via Bernstein Expansions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The proposed approach, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
Jahanfar, Ali; Amirmojahedi, Mohsen; Gharabaghi, Bahram; Dubey, Brajesh; McBean, Edward; Kumar, Dinesh
2017-03-01
Rapid population growth in major urban centres of many developing countries has created massive landfills of extraordinary height and steep side-slopes, which are frequently surrounded by illegal low-income residential settlements built too close to the landfills. These extraordinary landfills face a high risk of catastrophic failure with potentially large numbers of fatalities. This study presents a novel method for risk assessment of landfill slope failure, using probabilistic analysis of potential failure scenarios and associated fatalities. The conceptual framework of the method includes selecting appropriate statistical distributions for the municipal solid waste (MSW) material shear strength and rheological properties for potential failure scenario analysis. The MSW material properties for a given scenario are then used to analyse the probability of slope failure and the resulting run-out length to calculate the potential risk of fatalities. In comparison with existing methods, which are based solely on the probability of slope failure, this method provides a more accurate estimate of the risk of fatalities associated with a given landfill slope failure. The application of the new risk assessment method is demonstrated with a case study for a landfill located within a heavily populated area of New Delhi, India.
Effects of footwear and stride length on metatarsal strains and failure in running.
Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent
2017-11-01
The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal-finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes increased strains for all metatarsals by 28.7% (SD 6.4%; p<0.001) and the probability of failure for metatarsals 2-4 by 17.3% (SD 14.3%; p≤0.005). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
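A minimal sketch of 'the logo' idea in code: sample notional load and capacity spectra and estimate the probability that load exceeds capacity. The distributions and numbers below are hypothetical, and independence of load and capacity is assumed purely for illustration, a simplification the passage above explicitly flags.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical probabilistic "load" and "capacity" spectra for one failure mode,
# e.g. peak clad temperature demand vs. the temperature the component can withstand.
load = rng.normal(loc=1900.0, scale=120.0, size=n)      # deg F
capacity = rng.normal(loc=2200.0, scale=80.0, size=n)   # deg F

# Margin characterized probabilistically: probability that load exceeds capacity
p_fail = np.mean(load > capacity)
print("P(load > capacity) =", p_fail)
```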
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA), which requires only code sensitivity and specificity, can correct for this bias but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
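For intuition about the QBA step (a hedged sketch, not the study's code): the classic correction of a prevalence measured with an imperfect classifier uses only sensitivity and specificity, and it can return values outside [0, 1], which is the kind of invalid result the abstract reports. The observed prevalences below are hypothetical; the sensitivity and specificity are the values quoted above.

```python
def corrected_prevalence(observed_prevalence: float, sensitivity: float, specificity: float) -> float:
    """Simple quantitative-bias-style correction of a prevalence measured with an
    imperfect classifier (Rogan-Gladen form):
        pi = (p_obs + Sp - 1) / (Se + Sp - 1).
    The result can fall outside [0, 1] for some inputs, illustrating how QBA
    may return invalid results."""
    return (observed_prevalence + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Code-based prevalence corrected with the reported Se = 71.3%, Sp = 96.2%
print(corrected_prevalence(0.090, 0.713, 0.962))   # plausible corrected estimate
print(corrected_prevalence(0.020, 0.713, 0.962))   # negative -> invalid result
```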
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, Charles R; Gobbato, Maurizio; Conte, Joel
2009-01-01
The extensive use of lightweight advanced composite materials in unmanned aerial vehicles (UAVs) drastically increases the sensitivity to both fatigue- and impact-induced damage of their critical structural components (e.g., wings and tail stabilizers) during service life. The spar-to-skin adhesive joints are considered one of the most fatigue-sensitive subcomponents of a lightweight UAV composite wing, with damage progressively evolving from the wing root. This paper presents a comprehensive probabilistic methodology for predicting the remaining service life of adhesively-bonded joints in laminated composite structural components of UAVs. Non-destructive evaluation techniques and Bayesian inference are used to (i) assess the current state of damage of the system and (ii) update the probability distribution of the damage extent at various locations. A probabilistic model for future loads and a mechanics-based damage model are then used to stochastically propagate damage through the joint. Combined local (e.g., exceedance of a critical damage size) and global (e.g., flutter instability) failure criteria are finally used to compute the probability of component failure at future times. The applicability and the partial validation of the proposed methodology are then briefly discussed by analyzing the debonding propagation, along a pre-defined adhesive interface, in a simply supported laminated composite beam with solid rectangular cross section, subjected to a concentrated load applied at mid-span. A specially developed Euler-Bernoulli beam finite element with interlaminar slip along the damageable interface is used in combination with a cohesive zone model to study the fatigue-induced degradation in the adhesive material. The preliminary numerical results presented are promising for the future validation of the methodology.
Structural vulnerability assessment using reliability of slabs in avalanche area
NASA Astrophysics Data System (ADS)
Favier, Philomène; Bertrand, David; Eckert, Nicolas; Naaim, Mohamed
2013-04-01
Improvement of risk assessment or hazard zoning requires a better understanding of the physical vulnerability of structures. For a natural hazard such as snow avalanches, once the flow is characterized, describing the mechanical behaviour of the structure is a decisive step. A challenging approach is to quantify the physical vulnerability of impacted structures according to various avalanche loadings. The main objective of this presentation is to introduce the methodology and outcomes regarding the assessment of the vulnerability of reinforced concrete buildings using reliability methods. Reinforced concrete has been chosen as it is one of the usual materials used to build structures exposed to potential avalanche loadings. In avalanche blue zones, structures have to resist pressures of up to 30 kPa. Thus, by providing systematic fragility relations linked to the global failure of the structure, this method may serve avalanche risk assessment. To do so, a slab was numerically designed; it represents the avalanche-facing wall of a house. Different configurations of the element at stake have been treated to quantify numerical aspects of the problem, such as the boundary conditions or the mechanical behaviour of the structure. The structure is analysed according to four different limit states; semi-local and global failures are considered to describe the slab behaviour. The first state is attained when cracks appear in the tensile zone, the next two states are defined consistently with the Eurocode, and the final state is the total collapse of the structure characterized by the yield line theory. The failure probability is estimated within the reliability framework. Monte Carlo simulations were conducted to quantify the fragility under different loadings. The sensitivity of the models to the input distributions was assessed with statistical tools such as confidence intervals and Sobol indices. The conclusion and discussion of this work establish the contributions, limits and future needs or developments of the research. First of all, this study provides a spectrum of fragility curves of reinforced concrete structures which could be used to improve risk assessment. Second, the influence of the failure criterion chosen in this survey is discussed. Then, the weight of the choice of statistical distribution is analysed. Finally, the distinction between vulnerability and fragility relations is set out to establish the scope of use of our approach.
Probabilistic simulation of uncertainties in thermal structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Shiao, Michael
1990-01-01
Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.
Congenital nephrotic syndrome.
Hamed, Radi Ma
2003-01-01
The congenital nephrotic syndrome (CNS) is an uncommon disorder with onset of the nephrotic syndrome usually in the first three months of life. Several different diseases may cause the syndrome. These may be inherited, sporadic, acquired or part of a general malformation syndrome. The clinical course is marked by failure to thrive, recurrent life threatening bacterial infections, and early death from sepsis and/or uremia. A characteristic phenotype may be seen in children with CNS. The majority of reported cases of CNS are of the Finnish type (CNF). Although the role of the glomerular basement membrane has been emphasized as the barrier for retaining plasma proteins, recent studies have clearly shown that the slit diaphragm is the structure most likely to be the barrier in the glomerular capillary wall. The gene (NPHS1) was shown to encode a novel protein that was termed nephrin, due to its specific location in the kidney filter barrier, where it seems to form a highly organized filter structure. Nephrin is a transmembrane protein that probably forms the main building block of an isoporous zipper-like slit diaphragm filter structure. Defects in nephrin lead to the abnormal or absent slit diaphragm resulting in massive proteinuria and renal failure.
Semiparametric regression analysis of interval-censored competing risks data.
Mao, Lu; Lin, Dan-Yu; Zeng, Donglin
2017-09-01
Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
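The four PLOAS variants listed above reduce, for given time-to-failure distributions, to comparisons of order statistics of link failure times, which makes a Monte Carlo check straightforward. A minimal sketch with hypothetical Weibull failure-time models (the report itself works with time-dependent link physical and failure properties rather than simple failure-time draws):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
n_WL, n_SL = 2, 2

# Hypothetical time-to-failure models under an accident environment (hours):
# weak links designed to fail early, strong links designed to hold out longer.
wl_times = rng.weibull(2.0, size=(n, n_WL)) * 1.0   # scale ~1 h
sl_times = rng.weibull(2.0, size=(n, n_SL)) * 3.0   # scale ~3 h

# The four PLOAS variants from the abstract, estimated by Monte Carlo:
p1 = np.mean(sl_times.max(axis=1) < wl_times.min(axis=1))  # all SLs before any WL
p2 = np.mean(sl_times.min(axis=1) < wl_times.min(axis=1))  # any SL before any WL
p3 = np.mean(sl_times.max(axis=1) < wl_times.max(axis=1))  # all SLs before all WLs
p4 = np.mean(sl_times.min(axis=1) < wl_times.max(axis=1))  # any SL before all WLs
print(p1, p2, p3, p4)
```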
Enhancing MPLS Protection Method with Adaptive Segment Repair
NASA Astrophysics Data System (ADS)
Chen, Chin-Ling
We propose a novel adaptive segment repair mechanism to improve traditional MPLS (Multi-Protocol Label Switching) failure recovery. The proposed mechanism protects one or more contiguous high failure probability links by dynamic setup of segment protection. Simulations demonstrate that the proposed mechanism reduces failure recovery time while also increasing network resource utilization.
Caballero Morales, Santiago Omar
2013-01-01
Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a small frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082
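For readers unfamiliar with the joint chart, the X-bar and S control limits that such an economic statistical design tunes are computed from subgroup statistics and the standard c4-based constants. A minimal sketch of those textbook three-sigma limits with hypothetical in-control data (not the paper's ESD cost model):

```python
from math import sqrt, gamma
import numpy as np

def xbar_s_limits(data):
    """Control limits for joint X-bar and S charts from subgrouped data
    (rows = subgroups). Uses the textbook constants derived from c4."""
    data = np.asarray(data, dtype=float)
    n = data.shape[1]                              # subgroup size
    xbar_bar = data.mean(axis=1).mean()            # grand mean
    s_bar = data.std(axis=1, ddof=1).mean()        # mean subgroup std deviation
    c4 = sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
    A3 = 3.0 / (c4 * sqrt(n))
    B3 = max(0.0, 1.0 - 3.0 * sqrt(1.0 - c4**2) / c4)
    B4 = 1.0 + 3.0 * sqrt(1.0 - c4**2) / c4
    return {"xbar": (xbar_bar - A3 * s_bar, xbar_bar + A3 * s_bar),
            "s": (B3 * s_bar, B4 * s_bar)}

# Hypothetical 25 subgroups of size 5 from an in-control process
rng = np.random.default_rng(1)
print(xbar_s_limits(rng.normal(10.0, 0.2, size=(25, 5))))
```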
Effect of Preconditioning and Soldering on Failures of Chip Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander A.
2014-01-01
Soldering of molded-case tantalum capacitors can result in damage to the Ta2O5 dielectric and first turn-on failures due to thermo-mechanical stresses caused by the CTE mismatch between materials used in the capacitors. It is also known that the presence of moisture might cause damage to plastic cases due to the pop-corning effect. However, there are only scarce literature data on the effect of moisture content on the probability of post-soldering electrical failures. In this work, which is based on a case history, different groups of similar types of CWR tantalum capacitors from two lots were prepared for soldering by bake, moisture saturation, and long-term storage at room conditions. Results of the testing showed that both factors, the initial quality of the lot and the preconditioning, affect the probability of failures. Baking before soldering was shown to be effective in preventing failures even in lots susceptible to pop-corning damage. The failure mechanism is discussed, and recommendations for pre-soldering bake are suggested based on analysis of the moisture characteristics of materials used in the capacitors' design.
PROCRU: A model for analyzing crew procedures in approach to landing
NASA Technical Reports Server (NTRS)
Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.
1980-01-01
A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.
NASA Technical Reports Server (NTRS)
Bouton, I.; Martin, G. L.
1972-01-01
Criteria to determine the probability of aircraft structural failure were established according to the Quantitative Structural Design Criteria by Statistical Methods (the QSDC Procedure). This method was applied to the design of the space shuttle during this contract. An Applications Guide was developed to demonstrate the utilization of the QSDC Procedure, with examples for a hypothetical space shuttle illustrating its application to specific design problems. Discussions of the basic parameters of the QSDC Procedure (the Limit and Omega Conditions, and the strength scatter) are included. Available data pertinent to the estimation of the strength scatter are also included.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
An Evidence Theoretic Approach to Design of Reliable Low-Cost UAVs
2009-07-28
given period. For complex systems with various stages of missions, “success” becomes hard to define. For a UAV, for example, is success defined as... For this reason, the proposed methods in this thesis investigate probability of failure (PoF) rather than probability of success. Further, failure will... reduction in system PoF. Figure 25 illustrates this; a single component (A) from the original system (Figure 25a) is modified to act in a subsystem with
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and an accounting for the inadequacy of a model or models.
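The definition of risk used in these viewgraphs, the probability of one or more link failures over a lifetime or accounting interval, reduces to simple arithmetic once a per-interval exceedance probability is available. The sketch below works through this with assumed values and an assumed independence between intervals.

```python
# Sketch of "risk" as the probability of one or more link failures over a lifetime,
# assuming independent accounting intervals with a common exceedance probability.
p_exceed = 0.001          # assumed probability attenuation exceeds the threshold in one interval
n_intervals = 365 * 10    # assumed 10-year lifetime with daily accounting intervals

risk = 1.0 - (1.0 - p_exceed) ** n_intervals
print(f"Probability of at least one link failure over the lifetime: {risk:.3f}")
```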
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
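A minimal Monte Carlo sketch of the constant-delay-time case (items ii-iii): precursor occurrence times are sampled, constant delays are added, and PLOAS is approximated by counting realizations in which assured safety is lost. The lognormal precursor distributions, the delay values, and the assumption that assured safety is lost when the strong link fails before the weak link are illustrative choices, not the report's formal representation.

```python
import numpy as np

# Monte Carlo sketch of PLOAS with constant delay times (illustrative assumptions only):
# precursor occurrence times are sampled from assumed distributions, constant delays are
# added to obtain actual failure times, and assured safety is assumed to be lost when
# the strong link (SL) fails before the weak link (WL).
rng = np.random.default_rng(0)
n = 200_000

# Assumed precursor-occurrence-time distributions (lognormal; parameters are placeholders)
t_wl_precursor = rng.lognormal(mean=1.0, sigma=0.3, size=n)
t_sl_precursor = rng.lognormal(mean=1.4, sigma=0.3, size=n)

dt_wl, dt_sl = 0.5, 0.2            # assumed constant delay times from precursor to failure
t_wl_fail = t_wl_precursor + dt_wl
t_sl_fail = t_sl_precursor + dt_sl

ploas = np.mean(t_sl_fail < t_wl_fail)
print(f"Estimated PLOAS (illustrative parameters): {ploas:.4f}")
```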
NASA Astrophysics Data System (ADS)
Popov, V. D.; Khamidullina, N. M.
2006-10-01
In developing radio-electronic devices (RED) for spacecraft operating in the ionizing radiation fields of space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation exposure is the integrated microcircuit (IMC), especially of large-scale (LSI) and very-large-scale (VLSI) degrees of integration. The main characteristic of an IMC that is taken into account when deciding whether to use a particular type of IMC in the onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding the radiation effect. This paper presents the so-called “reliability” approach to determination of the radiation tolerance of IMCs, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... design take-off weight), occurring during retraction and extension at any airspeed up to 1.5 VSR1 (with... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any...
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... design take-off weight), occurring during retraction and extension at any airspeed up to 1.5 VSR1 (with... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any...
NASA Technical Reports Server (NTRS)
Russell, Richard
2005-01-01
Conclusions: The hot gases, having flooded the wing interior, quickly heated the upper and lower wing surfaces, allowing the aluminum honeycomb facesheets and the wing tiles to debond. The thin-wall aluminum truss tubes would soon collapse, and the aerodynamic and structural integrity of the left wing would be effectively destroyed. The forensic evidence is consistent with the observed External Tank foam impact 81 seconds into launch. This is the most probable cause of the damage to the Reinforced Carbon-Carbon (RCC) leading edge.
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
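A commonly used fast-fracture relation in this setting is the two-parameter Weibull weakest-link form for volume flaws; the minimal sketch below evaluates it with assumed element stresses, element volumes, and Weibull parameters rather than actual finite-element output.

```python
import numpy as np

# Sketch of the two-parameter Weibull weakest-link relation often used for
# fast-fracture reliability of ceramics: Pf = 1 - exp(-sum_e V_e * (sigma_e / sigma_0)**m).
# Element stresses/volumes and Weibull parameters below are assumed, not CARES output.
sigma_e = np.array([180.0, 220.0, 150.0, 240.0])   # assumed element stresses, MPa
vol_e   = np.array([2.0, 1.5, 3.0, 1.0])           # assumed element volumes, mm^3
m       = 10.0                                     # assumed Weibull modulus
sigma_0 = 400.0                                    # assumed scale parameter, MPa*mm^(3/m)

risk_of_rupture = np.sum(vol_e * (sigma_e / sigma_0) ** m)
p_fail = 1.0 - np.exp(-risk_of_rupture)
print(f"Fast-fracture failure probability (illustrative): {p_fail:.4e}")
```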
NASA Technical Reports Server (NTRS)
Ryan, Robert S.; Townsend, John S.
1993-01-01
The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. An evaluation is presently made of the current limitations of probabilistic engineering methods. Recommendations are made for specific applications.
Failure Surfaces for the Design of Ceramic-Lined Gun Tubes
2004-12-01
density than steel, making them attractive candidates as gun tube liners. A new design approach is necessary to address the large variability in strength... systems. Having established the failure criterion for the ceramic liner as the Weibull probability of failure, the need for a suitable failure... Report AMMRC SP-82-1, Materials Technology Laboratory, Watertown, Massachusetts, 1982. 7 R. Katz, Ceramic Gun Barrel Liners: Retrospect and Prospect
Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pankaskie, P. J.
A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) Interactions (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on 1) standard statistical methods applied to available PCI fuel failure data and 2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate-dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: 1. pre-transient fuel rod power, P_I; 2. transient increase in fuel rod power, ΔP; 3. fuel burnup, Bu; and 4. the constitutive material property of the Zircaloy cladding, SEAF.
Ayas, Mouhab; Eapen, Mary; Le-Rademacher, Jennifer; Carreras, Jeanette; Abdel-Azim, Hisham; Alter, Blanche P.; Anderlini, Paolo; Battiwalla, Minoo; Bierings, Marc; Buchbinder, David K.; Bonfim, Carmem; Camitta, Bruce M.; Fasth, Anders L.; Gale, Robert Peter; Lee, Michelle A.; Lund, Troy C.; Myers, Kasiani C.; Olsson, Richard F.; Page, Kristin M.; Prestidge, Tim D.; Radhi, Mohamed; Shah, Ami J.; Schultz, Kirk R.; Wirk, Baldeep; Wagner, John E.; Deeg, H. Joachim
2015-01-01
Second allogeneic hematopoietic cell transplantation (HCT) is the only salvage option for those who develop graft failure after their first HCT. Data on outcomes after second HCT in Fanconi anemia (FA) are scarce. We report outcomes after second allogeneic HCT for FA (n=81). The indication for second HCT was graft failure after the first HCT. Transplants occurred between 1990 and 2012. The timing of the second transplantation predicted subsequent graft failure and survival. Graft failure was high when the second transplant occurred less than 3 months from the first. The 3-month probability of graft failure was 69% when the interval between the first and second transplant was less than 3 months, compared to 23% when the interval was longer (p<0.001). Consequently, survival rates were substantially lower when the interval between the first and second transplant was less than 3 months: 23% at 1 year compared to 58% when the interval was longer (p=0.001). The corresponding 5-year probabilities of survival were 16% and 45%, respectively (p=0.006). Taken together, these data suggest that fewer than half of FA patients undergoing a second HCT for graft failure are long-term survivors. There is an urgent need to develop strategies to lower graft failure after first HCT. PMID:26116087
NASA Astrophysics Data System (ADS)
Jackson, Andrew
2015-07-01
On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. Although the opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, the results are rather generic, except in the case when the probability of individual ASM failure is very small; in this case, any arrangement will ensure a successful mission since essentially no failure is expected at all. Since the very design of the lower pair is to enable common-mode rejection of external signals, it is likely that its work can be successfully achieved during the first 5 years of the mission.
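The trade-off described in this abstract can be reproduced with elementary combinatorics. The sketch below assumes independent ASM failures with a common mission-lifetime failure probability p, two ASMs on each fully redundant satellite and one on the affected satellite; the value of p and the independence assumption are illustrative, not the MAG expert-solicitation inputs.

```python
# Combinatorial sketch of the two deployment options (independent ASM failures assumed,
# each ASM failing over the mission with an assumed probability p).
p = 0.2

# Option A: the satellite with the single working ASM flies in the lonely (higher) orbit,
# so both lower-pair satellites keep two ASMs each.
grad_A = (1 - p**2) ** 2                 # both lower satellites retain a working ASM
constel_A = (1 - p) * (1 - p**4)         # lonely orbit AND lower pair each retain one

# Option B: the satellite with the single working ASM flies in the lower pair,
# so the lonely satellite keeps two ASMs.
grad_B = (1 - p) * (1 - p**2)
constel_B = (1 - p**2) * (1 - p**3)

print(f"Gradiometer success:   option A {grad_A:.3f} vs option B {grad_B:.3f}")
print(f"Constellation success: option A {constel_A:.3f} vs option B {constel_B:.3f}")
```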
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study tests an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both the key uniformity in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal has not been verified concretely for many years, while H. P. Yuen and O. Hirota have thrown doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. In this way, we calculated the statistical distance, which corresponds to the trace distance in quantum theory after a quantum measurement is done, and compared it with the claimed failure probability to check whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable to guarantee the security of QKD from the viewpoint of quantum binary decision theory.
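The statistical distance referred to here is the total variation distance between the observed key distribution and the ideal uniform one; a minimal sketch of its empirical estimation is given below, with the key length, sample size, and the bias of the simulated random-number source as assumed values.

```python
import numpy as np

# Sketch: empirical total variation (statistical) distance between the distribution
# of short keys produced by a random-number source and the ideal uniform distribution.
# Key length, sample size, and the biased source are assumptions for illustration.
rng = np.random.default_rng(42)
k_bits = 8                       # assumed key length in bits
n_keys = 100_000                 # assumed number of sampled keys
bias = 0.45                      # assumed per-bit probability of a 1 (slightly non-uniform)

bits = rng.random((n_keys, k_bits)) < bias
keys = bits.astype(np.int64) @ (1 << np.arange(k_bits))   # encode each key as an integer
p_emp = np.bincount(keys, minlength=2**k_bits) / n_keys
p_uniform = np.full(2**k_bits, 1.0 / 2**k_bits)

tv_distance = 0.5 * np.abs(p_emp - p_uniform).sum()
print(f"Estimated statistical distance from uniform: {tv_distance:.4f}")
```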
Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
The data analysis of cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; the preliminary data analysis consisted of a response surface estimate of life. The batteries fail through a low-voltage condition or an internal shorting condition; a competing failure modes analysis was made using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and the selection of the model.
Fatigue Failure of External Hexagon Connections on Cemented Implant-Supported Crowns.
Malta Barbosa, João; Navarro da Rocha, Daniel; Hirata, Ronaldo; Freitas, Gileade; Bonfante, Estevam A; Coelho, Paulo G
2018-01-17
To evaluate the probability of survival and failure modes of different external hexagon connection systems restored with anterior cement-retained single-unit crowns. The postulated null hypothesis was that there would be no differences under accelerated life testing. Fifty-four external hexagon dental implants (∼4 mm diameter) were used for single cement-retained crown replacement and divided into 3 groups: (3i) Full OSSEOTITE, Biomet 3i (n = 18); (OL) OEX P4, Osseolife Implants (n = 18); and (IL) Unihex, Intra-Lock International (n = 18). Abutments were torqued to the implants, and maxillary central incisor crowns were cemented and subjected to step-stress-accelerated life testing in water. Use-level probability Weibull curves and probability of survival for a mission of 100,000 cycles at 200 N (95% 2-sided confidence intervals) were calculated. Stereo and scanning electron microscopes were used for failure inspection. The beta values for 3i, OL, and IL (1.60, 1.69, and 1.23, respectively) indicated that fatigue accelerated the failure of the 3 groups. Reliability for the 3i and OL (41% and 68%, respectively) was not different between each other, but both were significantly lower than IL group (98%). Abutment screw fracture was the failure mode consistently observed in all groups. Because the reliability was significantly different between the 3 groups, our postulated null hypothesis was rejected.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is the lower rank of the Weibull distributions. In this paper, our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach and to present their analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model description covers the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
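For an exponential competing-risks model with independent causes, a non-informative gamma-type prior leads to conjugate gamma posteriors for the cause-specific rates, from which net and crude failure probabilities can be sampled. The sketch below illustrates this; the Jeffreys-style Gamma(0.5, 0) prior, the simulated data, and the mission time are assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Conjugate Bayesian sketch for an exponential competing-risks model with two
# independent causes of failure. The Gamma(0.5, 0)-type prior, the simulated data,
# and the mission time are assumptions for illustration.
rng = np.random.default_rng(7)
lam_true = np.array([0.02, 0.05])              # assumed true cause-specific rates
n_units = 200
t_cause = rng.exponential(1.0 / lam_true, size=(n_units, 2))
t_fail = t_cause.min(axis=1)                   # observed failure time (min over causes)
cause = t_cause.argmin(axis=1)                 # observed failure cause

T = t_fail.sum()                               # total exposure time
d = np.bincount(cause, minlength=2)            # failures attributed to each cause

a_post, b_post = 0.5 + d, T                    # gamma posterior parameters per cause
lam_samples = rng.gamma(a_post, 1.0 / b_post, size=(20_000, 2))

t_mission = 10.0
lam_tot = lam_samples.sum(axis=1)
net_0 = 1 - np.exp(-lam_samples[:, 0] * t_mission)                      # cause 0 acting alone
crude_0 = lam_samples[:, 0] / lam_tot * (1 - np.exp(-lam_tot * t_mission))  # cause 0 with competition

print("Posterior mean rates:", lam_samples.mean(axis=0))
print(f"Net P(fail by t=10, cause 0 only):    {net_0.mean():.3f}")
print(f"Crude P(fail by t=10 due to cause 0): {crude_0.mean():.3f}")
```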
Security Threat Assessment of an Internet Security System Using Attack Tree and Vague Sets
2014-01-01
Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, when the malfunction data of the system's elementary events are incomplete, the traditional approach for calculating reliability is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. In order to effectively solve the problem above, this paper proposes a novel technique integrating attack tree and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with the listed approaches of security threat assessment methods. PMID:25405226
Cascading failures with local load redistribution in interdependent Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol
2016-05-01
Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks firstly decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases with the increment of the coupling strength until it reaches a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.
Security threat assessment of an Internet security system using attack tree and vague sets.
Chang, Kuei-Hu
2014-01-01
Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, when the malfunction data of the system's elementary events are incomplete, the traditional approach for calculating reliability is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. In order to effectively solve the problem above, this paper proposes a novel technique integrating attack tree and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with the listed approaches of security threat assessment methods.
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E. Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data is absent.
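A minimal sketch of the adjectival-to-numeric conversion step, assuming an illustrative logarithmically spaced mapping in the spirit of the human-reliability scale cited; the rating categories and the probabilities assigned to them below are invented placeholders, not the survey's actual conversion table.

```python
# Sketch of converting adjectival MC&A performance ratings into basic-event failure
# probabilities on a logarithmic scale (the mapping values below are illustrative
# assumptions, not the survey's actual conversion table).
RATING_TO_FAILURE_PROB = {
    "excellent":         1e-4,
    "well":              1e-3,
    "adequate":          1e-2,
    "needs improvement": 1e-1,
    "not performed":     1.0,   # task in a state of failure
}

def basic_event_probability(rating: str) -> float:
    """Map a qualitative questionnaire response to a relative failure probability."""
    return RATING_TO_FAILURE_PROB[rating.lower()]

responses = {"access control": "well", "inventory reconciliation": "needs improvement"}
for task, rating in responses.items():
    print(f"{task}: rating={rating!r} -> P(failure)={basic_event_probability(rating):.0e}")
```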
NASA Technical Reports Server (NTRS)
Johnson, E. H.
1975-01-01
The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response, and in the remaining cases, power spectral techniques were employed to find the root mean square values of the responses. Optimal solutions were found by using computer algorithms which combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.
Bažant, Zdeněk P.; Le, Jia-Liang; Bazant, Martin Z.
2009-01-01
The failure probability of engineering structures such as aircraft, bridges, dams, nuclear structures, and ships, as well as microelectronic components and medical implants, must be kept extremely low, typically < 10^-6. The safety factors needed to ensure it have so far been assessed empirically. For perfectly ductile and perfectly brittle structures, the empirical approach is sufficient because the cumulative distribution function (cdf) of random material strength is known and fixed. However, such an approach is insufficient for structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared with the structure size. The reason is that the strength cdf of quasibrittle structure varies from Gaussian to Weibullian as the structure size increases. In this article, a recently proposed theory for the strength cdf of quasibrittle structure is refined by deriving it from fracture mechanics of nanocracks propagating by small, activation-energy-controlled, random jumps through the atomic lattice. This refinement also provides a plausible physical justification of the power law for subcritical creep crack growth, hitherto considered empirical. The theory is further extended to predict the cdf of structural lifetime at constant load, which is shown to be size- and geometry-dependent. The size effects on structure strength and lifetime are shown to be related and the latter to be much stronger. The theory fits previously unexplained deviations of experimental strength and lifetime histograms from the Weibull distribution. Finally, a boundary layer method for numerical calculation of the cdf of structural strength and lifetime is outlined. PMID:19561294
Predicting Quarantine Failure Rates
2004-01-01
Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418
Sensitivity study on durability variables of marine concrete structures
NASA Astrophysics Data System (ADS)
Zhou, Xin'gang; Li, Kefei
2013-06-01
In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was carried out. Using Fick's 2nd law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The results of the analysis show that the impact of the design variables on concrete durability differs: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor were higher than the others. A relatively small error in the chloride diffusion coefficient or its time-dependent attenuation factor therefore induces a larger error in concrete durability design and life prediction. Using probability sensitivity analysis (PSA), the influence of the mean value and variance of the concrete durability design variables on the durability failure probability was studied. The results of the study provide quantitative measures of the importance of concrete durability design and life prediction variables. It was concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have more influence on the reliability of marine concrete structural durability. In durability design and life prediction of marine concrete structures, it is therefore very important to reduce the measurement and statistical error of the durability design variables.
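The durability model behind such analyses is commonly the error-function solution of Fick's 2nd law with a time-dependent diffusion coefficient; the sketch below evaluates the chloride content at the cover depth and a crude one-at-a-time sensitivity to the main variables. All parameter values, and the specific D(t) = D0 (t0/t)^n_att form, are assumptions for illustration.

```python
from math import erf, sqrt

# Sketch of a chloride-ingress model: error-function solution of Fick's 2nd law with a
# time-dependent diffusion coefficient D(t) = D0 * (t0 / t)**n_att.
# All parameter values are illustrative assumptions.
def chloride(x, t, Cs, D0, n_att, t0=28.0 / 365.25):
    D = D0 * (t0 / t) ** n_att                 # attenuated apparent diffusivity at age t
    return Cs * (1.0 - erf(x / (2.0 * sqrt(D * t))))

x_cover = 0.05      # assumed concrete cover depth, m
t_exp = 50.0        # assumed exposure time, years
Cs = 0.5            # assumed apparent surface chloride content, % binder mass
D0 = 5.0e-5         # assumed reference diffusion coefficient, m^2/year
n_att = 0.4         # assumed time-dependent attenuation factor

C_base = chloride(x_cover, t_exp, Cs, D0, n_att)
print(f"Chloride content at cover depth after {t_exp:.0f} years: {C_base:.4f}")

# Crude one-at-a-time sensitivity: relative change in C per +1% change in each variable
for name, perturbed in [("Cs", dict(Cs=1.01 * Cs)),
                        ("D0", dict(D0=1.01 * D0)),
                        ("n_att", dict(n_att=1.01 * n_att))]:
    params = dict(Cs=Cs, D0=D0, n_att=n_att)
    params.update(perturbed)
    dC = chloride(x_cover, t_exp, **params) - C_base
    print(f"Relative sensitivity to {name}: {dC / C_base * 100:+.2f}% per +1% change")
```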
Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A
2018-01-01
Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
Stochastic Nonlinear Response of Woven CMCs
NASA Technical Reports Server (NTRS)
Kuang, C. Liu; Arnold, Steven M.
2013-01-01
It is well known that failure of a material is a locally driven event. In the case of ceramic matrix composites (CMCs), significant variations in the microstructure of the composite exist and their significance on both deformation and life response need to be assessed. Examples of these variations include changes in the fiber tow shape, tow shifting/nesting and voids within and between tows. In the present work, the influence of scale specific architectural features of woven ceramic composite are examined stochastically at both the macroscale (woven repeating unit cell (RUC)) and structural scale (idealized using multiple RUCs). The recently developed MultiScale Generalized Method of Cells methodology is used to determine the overall deformation response, proportional elastic limit (first matrix cracking), and failure under tensile loading conditions and associated probability distribution functions. Prior results showed that the most critical architectural parameter to account for is weave void shape and content with other parameters being less in severity. Current results show that statistically only the post-elastic limit region (secondary hardening modulus and ultimate tensile strength) is impacted by local uncertainties both at the macro and structural level.
Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering
Stein, R.S.; Barka, A.A.; Dieterich, J.H.
1997-01-01
Ten M ≥ 6.7 earthquakes ruptured 1000 km of the North Anatolian fault (Turkey) during 1939-1992, providing an unsurpassed opportunity to study how one large shock sets up the next. We use the mapped surface slip and fault geometry to infer the transfer of stress throughout the sequence. Calculations of the change in Coulomb failure stress reveal that nine out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 1-10 bar, equivalent to 3-30 years of secular stressing. We translate the calculated stress changes into earthquake probability gains using an earthquake-nucleation constitutive relation, which includes both permanent and transient effects of the sudden stress changes. The transient effects of the stress changes dominate during the mean 10 yr period between triggering and subsequent rupturing shocks in the Anatolia sequence. The stress changes result in an average three-fold gain in the net earthquake probability during the decade after each event. Stress is calculated to be high today at several isolated sites along the fault. During the next 30 years, we estimate a 15 per cent probability of a M ≥ 6.7 earthquake east of the major eastern centre of Erzincan, and a 12 per cent probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere.
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2010-12-01
A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services, while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, the differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grid. In this paper, we develop three differentiated protection services provisioning strategies which can provide security level guarantee and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.
Patel, Teresa; Fisher, Stanley P.
2016-01-01
Objective This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest‐ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689
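As an illustration of the RPN arithmetic used in this kind of FMEA (risk priority number = severity x probability of occurrence x detectability), the sketch below ranks a few hypothetical failure modes; the mode descriptions and the 1-10 scores are invented and are not the panel's actual ratings.

```python
# Sketch of FMEA risk-priority-number ranking: RPN = severity * occurrence * detection.
# The failure modes and 1-10 scores below are hypothetical, not the panel's actual ratings.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("catheter tip granuloma missed at follow-up", 9, 4, 6),
    ("pump programming error",                     8, 3, 4),
    ("inappropriate patient selection",            7, 5, 5),
]

ranked = sorted(((s * o * d, desc) for desc, s, o, d in failure_modes), reverse=True)
for rpn, desc in ranked:
    print(f"RPN={rpn:3d}  {desc}")
```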
Stress Transmission and Failure in Disordered Porous Media
NASA Astrophysics Data System (ADS)
Laubie, Hadrien; Radjai, Farhang; Pellenq, Roland; Ulm, Franz-Josef
2017-08-01
By means of extensive lattice-element simulations, we investigate stress transmission and its relation with failure properties in increasingly disordered porous systems. We observe a non-Gaussian broadening of stress probability density functions under tensile loading with increasing porosity and disorder, revealing a gradual transition from a state governed by single-pore stress concentration to a state controlled by multipore interactions and metric disorder. This effect is captured by the excess kurtosis of stress distributions and shown to be nicely correlated with the second moment of local porosity fluctuations, which appears thus as a (dis)order parameter for the system. By generating statistical ensembles of porous textures with varying porosity and disorder, we derive a general expression for the fracture stress as a decreasing function of porosity and disorder. Focusing on critical sites where the local stress is above the global fracture threshold, we also analyze the transition to failure in terms of a coarse-graining length. These findings provide a general framework which can also be more generally applied to multiphase and structural heterogeneous materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
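A minimal sketch of the copula idea described here: dependent failure times for two redundant components are drawn through a Gaussian copula with exponential marginals, and the joint failure probability over a mission is compared with the independence prediction. The rates, the correlation value, the Gaussian-copula choice, and the use of Python/SciPy in place of R and WinBUGS are assumptions for illustration; the Bayesian estimation of the copula parameters is not shown.

```python
import numpy as np
from scipy import stats

# Gaussian-copula sketch for sampling dependent failure times of two redundant
# components with exponential marginals (rates, correlation, and mission time are assumed).
rng = np.random.default_rng(3)
n = 100_000
rho = 0.6                                   # assumed copula correlation (dependence strength)
rates = np.array([1e-3, 1e-3])              # assumed failure rates, per hour

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=n)    # correlated standard normals
u = stats.norm.cdf(z)                                    # uniforms via the Gaussian copula
t_fail = stats.expon.ppf(u, scale=1.0 / rates)           # dependent exponential failure times

mission = 1000.0                                         # assumed mission time, hours
p_both_fail = np.mean((t_fail < mission).all(axis=1))
p_indep = np.prod(1.0 - np.exp(-rates * mission))        # what an independence model predicts
print(f"P(both components fail by {mission:.0f} h): {p_both_fail:.4f} "
      f"(independent model: {p_indep:.4f})")
```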
Failure mode and effects analysis: too little for too much?
Dean Franklin, Bryony; Shebl, Nada Atef; Barber, Nick
2012-07-01
Failure mode and effects analysis (FMEA) is a structured prospective risk assessment method that is widely used within healthcare. FMEA involves a multidisciplinary team mapping out a high-risk process of care, identifying the failures that can occur, and then characterising each of these in terms of probability of occurrence, severity of effects and detectability, to give a risk priority number used to identify failures most in need of attention. One might assume that such a widely used tool would have an established evidence base. This paper considers whether or not this is the case, examining the evidence for the reliability and validity of its outputs, the mathematical principles behind the calculation of a risk priority number, and variation in how it is used in practice. We also consider the likely advantages of this approach, together with the disadvantages in terms of the healthcare professionals' time involved. We conclude that although FMEA is popular and many published studies have reported its use within healthcare, there is little evidence to support its use for the quantitative prioritisation of process failures. It lacks both reliability and validity, and is very time consuming. We would not recommend its use as a quantitative technique to prioritise, promote or study patient safety interventions. However, the stage of FMEA involving the multidisciplinary mapping process seems valuable, and work is now needed to identify the best way of converting this into plans for action.
Numerical simulation of backward erosion piping in heterogeneous fields
NASA Astrophysics Data System (ADS)
Liang, Yue; Yeh, Tian-Chyi Jim; Wang, Yu-Li; Liu, Mingwei; Wang, Junjie; Hao, Yonghong
2017-04-01
Backward erosion piping (BEP) is one of the major causes of seepage failures in levees. Seepage fields dictate the BEP behaviors and are influenced by the heterogeneity of soil properties. To investigate the effects of the heterogeneity on the seepage failures, we develop a numerical algorithm and conduct simulations to study BEP progressions in geologic media with spatially stochastic parameters. Specifically, the void ratio e, the hydraulic conductivity k, and the ratio of the particle contents r of the media are represented as the stochastic variables. They are characterized by means and variances, the spatial correlation structures, and the cross correlation between variables. Results of the simulations reveal that the heterogeneity accelerates the development of preferential flow paths, which profoundly increase the likelihood of seepage failures. To account for unknown heterogeneity, we define the probability of the seepage instability (PI) to evaluate the failure potential of a given site. Using Monte-Carlo simulation (MCS), we demonstrate that the PI value is significantly influenced by the mean and the variance of ln k and its spatial correlation scales. But the other parameters, such as means and variances of e and r, and their cross correlation, have minor impacts. Based on PI analyses, we introduce a risk rating system to classify the field into different regions according to risk levels. This rating system is useful for seepage failures prevention and assists decision making when BEP occurs.
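A stripped-down sketch of the Monte Carlo structure behind a probability-of-instability (PI) estimate: random ln k fields are generated and a crude instability criterion is checked for each realization. The spatially uncorrelated lognormal field, the "connected high-conductivity path" proxy for a preferential flow path, and all parameter values are assumptions, not the paper's coupled seepage and BEP model.

```python
import numpy as np
from scipy import ndimage

# Monte Carlo sketch of a probability-of-instability (PI) estimate over random
# heterogeneous ln k fields. The uncorrelated lognormal field and the
# "connected high-conductivity path" instability proxy are illustrative assumptions.
rng = np.random.default_rng(11)
nx, ny = 40, 20                      # assumed grid size
mean_lnk, std_lnk = -10.0, 1.5       # assumed mean and standard deviation of ln k

def is_unstable(lnk):
    """Crude proxy: unstable if above-median-k cells form an 8-connected path
    from the upstream (left) to the downstream (right) boundary."""
    high_k = lnk > mean_lnk
    labels, _ = ndimage.label(high_k, structure=np.ones((3, 3)))
    left = set(labels[:, 0][labels[:, 0] > 0])
    right = set(labels[:, -1][labels[:, -1] > 0])
    return bool(left & right)

n_real = 2000
unstable = sum(
    is_unstable(rng.normal(mean_lnk, std_lnk, size=(ny, nx))) for _ in range(n_real)
)
print(f"Estimated probability of seepage instability PI: {unstable / n_real:.3f}")
```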
The influence of structural response on sympathetic detonation
NASA Technical Reports Server (NTRS)
Watson, J. L.
1980-01-01
The role that a munition's structural response plays in the ignition process and the development of violent reactions and detonations is explored. The munition's structural response is identified as one of the factors that influences reaction violence. If the structural response of a round is known, this knowledge can be used to reduce the probability that a large explosion would result from the sequential detonation of individual rounds within a large storage array. The response of an acceptor round was studied. The castings fail in the same manner regardless of whether or not there is a fill material present in the round. These failures are caused by stress waves which are transformed from compressive waves to tensile waves by reflection as the impact energy moves around the casting. Since these waves move in opposite directions around the projectile circumference and collide opposite the point of impact, very high tensile forces are developed which can crack the casing.
Research and application of borehole structure optimization based on pre-drill risk assessment
NASA Astrophysics Data System (ADS)
Zhang, Guohui; Liu, Xinyun; Chenrong; Hugui; Yu, Wenhua; Sheng, Yanan; Guan, Zhichuan
2017-11-01
Borehole structure design based on pre-drill risk assessment and considering risks related to drilling operations is the precondition for safe and smooth drilling. Major risks of drilling operations include lost circulation, blowout, sidewall collapse, sticking, and failure of drilling tools. In this study, data from neighboring wells were used to calculate the formation pressure profile and its credibility in the target well; the borehole structure design for the target well was then assessed by using drilling risk assessment to predict engineering risks before drilling. Finally, the prediction results were used to optimize the borehole structure design to prevent such drilling risks. The newly developed technique provides a scientific basis for lowering the probability and frequency of drilling engineering risks and shortening the time required to drill a well, which is of great significance for safe and high-efficiency drilling.
Statistical physics of the yielding transition in amorphous solids.
Karmakar, Smarajit; Lerner, Edan; Procaccia, Itamar
2010-11-01
The art of making structural, polymeric, and metallic glasses is rapidly developing, with many applications. A limitation is that under increasing external strain all amorphous solids (like their crystalline counterparts) have a finite yield stress which cannot be exceeded without effecting a plastic response which typically leads to mechanical failure. Understanding this is crucial for assessing the risk of failure of glassy materials under mechanical loads. Here we show that the statistics of the energy barriers ΔE that need to be surmounted changes from a probability distribution function that goes smoothly to zero as ΔE → 0 to a pdf which is finite at ΔE = 0. This fundamental change implies a dramatic transition in the mechanical stability properties with respect to external strain. We derive exact results for the scaling exponents that characterize the magnitudes of average energy and stress drops in plastic events as a function of system size.
A Probability Problem from Real Life: The Tire Exploded.
ERIC Educational Resources Information Center
Bartlett, Albert A.
1993-01-01
Discusses the probability of seeing a tire explode or disintegrate while traveling down the highway. Suggests that a person observing 10 hours a day would see a failure on the average of once every 300 years. (MVL)
NASA Astrophysics Data System (ADS)
Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei
2017-02-01
The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impacts was studied. The failure behavior of the solder joints, the failure probability, and the failure positions were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution in the solder joint was calculated with ABAQUS. The results revealed that the dominant cause was the tension due to the difference in stiffness between the printed circuit board and the ball grid array; the maximum tension of 121.1 MPa and 31.1 MPa, under the 1000 g and 300 g drop impacts, respectively, was focused on the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four modes: initiation and propagation through the (1) intermetallic compound layer, (2) Ni layer, (3) Cu pad, or (4) Sn-matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of drops to failure of the solder balls under the 300 g drop impact was higher than that under the 1000 g drop impact; the characteristic drop values for failure were 41 and 15,199, respectively, following the statistics.
de Carvalho, Paulo Victor Rodrigues; Gomes, José Orlando; Huber, Gilbert Jacob; Vidal, Mario Cesar
2009-05-01
A fundamental challenge in improving the safety of complex systems is to understand how accidents emerge in normal working situations, with equipment functioning normally in normally structured organizations. We present a field study of the en route mid-air collision between a commercial carrier and an executive jet, in the clear afternoon Amazon sky in which 154 people lost their lives, that illustrates one response to this challenge. Our focus was on how and why the several safety barriers of a well structured air traffic system melted down enabling the occurrence of this tragedy, without any catastrophic component failure, and in a situation where everything was functioning normally. We identify strong consistencies and feedbacks regarding factors of system day-to-day functioning that made monitoring and awareness difficult, and the cognitive strategies that operators have developed to deal with overall system behavior. These findings emphasize the active problem-solving behavior needed in air traffic control work, and highlight how the day-to-day functioning of the system can jeopardize such behavior. An immediate consequence is that safety managers and engineers should review their traditional safety approach and accident models based on equipment failure probability, linear combinations of failures, rules and procedures, and human errors, to deal with complex patterns of coincidence possibilities, unexpected links, resonance among system functions and activities, and system cognition.
One hundred years of return period: Strengths and limitations
NASA Astrophysics Data System (ADS)
Volpi, E.; Fiori, A.; Grimaldi, S.; Lombardo, F.; Koutsoyiannis, D.
2015-10-01
One hundred years from its original definition by Fuller, the probabilistic concept of return period is widely used in hydrology as well as in other disciplines of geosciences to give an indication on critical event rareness. This concept gains its popularity, especially in engineering practice for design and risk assessment, due to its ease of use and understanding; however, return period relies on some basic assumptions that should be satisfied for a correct application of this statistical tool. Indeed, conventional frequency analysis in hydrology is performed by assuming as necessary conditions that extreme events arise from a stationary distribution and are independent of one another. The main objective of this paper is to investigate the properties of return period when the independence condition is omitted; hence, we explore how the different definitions of return period available in literature affect results of frequency analysis for processes correlated in time. We demonstrate that, for stationary processes, the independence condition is not necessary in order to apply the classical equation of return period (i.e., the inverse of exceedance probability). On the other hand, we show that the time-correlation structure of hydrological processes modifies the shape of the distribution function of which the return period represents the first moment. This implies that, in the context of time-dependent processes, the return period might not represent an exhaustive measure of the probability of failure, and that its blind application could lead to misleading results. To overcome this problem, we introduce the concept of Equivalent Return Period, which controls the probability of failure still preserving the virtue of effectively communicating the event rareness.
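Under the classical independence assumptions discussed here, the return period T is the inverse of the exceedance probability, and the probability of at least one exceedance during an n-year design life follows directly; the sketch below works through this arithmetic with assumed numbers.

```python
# Classical return-period arithmetic under the independence assumption discussed above:
# p_exceed = 1 / T, and the probability of at least one exceedance in an n-year design life
# is R = 1 - (1 - 1/T)**n. The numbers below are assumed for illustration.
T = 100.0            # assumed return period, years
n = 50               # assumed design life, years

p_exceed = 1.0 / T
risk_of_failure = 1.0 - (1.0 - p_exceed) ** n
print(f"Annual exceedance probability: {p_exceed:.3f}")
print(f"P(at least one exceedance in {n} years): {risk_of_failure:.3f}")
```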
NASA Astrophysics Data System (ADS)
Guler Yigitoglu, Askin
In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data which means that the true state of a specific plant is not reflected in a realistic manner on aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: The development of a methodology for the incorporation of aging modeling of passive SSC into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics based model is selected to represent the aging process. The model is modified via sojourn time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment and uncertainties associated with both parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of : i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and, vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on predicting failure probabilities for mechanical components and on optimizing reliability through life-cycle cost analysis. This paper reviews the existing methods in the literature and seeks to harness their best features while simplifying the process so that it is applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life-cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods available for predicting these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
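As an illustration of the load-resistance (stress-strength) Monte Carlo step recommended above, a minimal sketch follows; the distributions and parameter values are assumptions for illustration, not data from the paper.

```python
# Minimal load-resistance (stress-strength) Monte Carlo failure probability estimate.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

load = rng.normal(loc=300.0, scale=40.0, size=n)                      # applied stress, MPa
resistance = rng.lognormal(mean=np.log(450.0), sigma=0.08, size=n)    # strength, MPa

pf = np.mean(load > resistance)
se = np.sqrt(pf * (1 - pf) / n)          # standard error of the estimate
print(f"estimated failure probability: {pf:.2e} +/- {1.96 * se:.1e} (95% CI)")
```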
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, is capable of estimating the probability of failure of components under varying loading and environmental conditions. The code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses by means of an example.
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Astrophysics Data System (ADS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-10-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, is capable of estimating the probability of failure of components under varying loading and environmental conditions. The code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses by means of an example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
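The manual check mentioned above (estimating the top-event probability by hand for independent basic events) reduces to multiplying probabilities through AND-gates and combining complements through OR-gates. A minimal sketch, with hypothetical events and values rather than MSET data:

```python
# Fault tree hand-check for independent basic events:
# AND-gate: product of probabilities; OR-gate: 1 - product of complements.
from functools import reduce

def and_gate(*probs: float) -> float:
    return reduce(lambda a, b: a * b, probs, 1.0)

def or_gate(*probs: float) -> float:
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical basic-event failure probabilities
p_procedure_lapse = 0.05
p_measurement_error = 0.02
p_records_gap = 0.01

# Top event fails if the records gap occurs together with either lower-level fault.
p_top = and_gate(or_gate(p_procedure_lapse, p_measurement_error), p_records_gap)
print(f"top-event probability: {p_top:.4f}")   # ~0.0007
```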
Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A
2010-12-01
This work develops a cost estimation analysis for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians and one doctor, and the second (based on an actually functioning clinic) with two units, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of the equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the identification of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
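The unit-cost idea can be sketched with a toy Monte Carlo stand-in for the discrete-event model: examinations arrive, some fail on the image acquisition system and must be repeated, and the unit cost is total cost divided by completed examinations. All rates and costs below are made up, not the clinic data used in the study.

```python
import random

random.seed(1)

def simulate_clinic(days=250, exams_per_day=30, fail_prob=0.05,
                    fixed_cost_per_day=1500.0, variable_cost_per_exam=12.0):
    """Toy stand-in for the DES: returns unit cost per completed examination."""
    completed, attempts = 0, 0
    for _ in range(days):
        for _ in range(exams_per_day):
            attempts += 1
            while random.random() < fail_prob:   # failed acquisition -> repeat exam
                attempts += 1
            completed += 1
    total_cost = days * fixed_cost_per_day + attempts * variable_cost_per_exam
    return total_cost / completed

for p in (0.0, 0.05, 0.10):
    print(f"failure prob {p:.0%}: unit cost ~ US$ {simulate_clinic(fail_prob=p):.2f}")
```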
NASA Technical Reports Server (NTRS)
Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading (overly conservative or optimistic). 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. Two major categories of uncertainty impact the sparing assessment: (a) Aleatory Uncertainty: natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF); (b) Epistemic Uncertainty: lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach that revises the confidence targets and accounts for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.
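A minimal sketch of how the two categories of uncertainty can be combined: epistemic uncertainty in an ORU's MTBF is represented by a distribution over MTBF, and for each sampled MTBF the aleatory failure count over the mission is Poisson. The numbers are illustrative placeholders, not ISS data, and this is not the PACT algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(7)

mission_hours = 5 * 8760.0          # five years of operation (assumed)
spares_on_hand = 2

# Epistemic: uncertain MTBF, e.g. lognormal around a 50,000-hour point estimate
mtbf_samples = rng.lognormal(mean=np.log(50_000.0), sigma=0.4, size=20_000)

# Aleatory: given an MTBF, failures over the mission are Poisson distributed
failures = rng.poisson(lam=mission_hours / mtbf_samples)

p_covered = np.mean(failures <= spares_on_hand)
print(f"probability that {spares_on_hand} spares cover the mission: {p_covered:.3f}")
```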
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first-order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second-order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
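To make the reliability-analysis step concrete, here is a minimal HL-RF iteration for locating the most probable point (MPP) in standard normal space; this is the plain first-order search, not the article's improved stability transformation method or its SR1 Hessian update, and the limit-state function is an assumed example.

```python
import numpy as np
from scipy.stats import norm

def g(u):                              # assumed limit-state function in u-space
    return u[0] ** 2 / 10.0 + 3.0 - u[1]

def grad_g(u):
    return np.array([u[0] / 5.0, -1.0])

u = np.zeros(2)
for _ in range(50):
    gu, grad = g(u), grad_g(u)
    # HL-RF update: project onto the linearized limit state
    u_new = grad * (grad @ u - gu) / (grad @ grad)
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)               # reliability index
pf_form = norm.cdf(-beta)              # first-order failure probability estimate
print(f"MPP={u}, beta={beta:.3f}, Pf(FORM)={pf_form:.2e}")
```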
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
Structural behavior of the space shuttle SRM Tang-Clevis joint
NASA Technical Reports Server (NTRS)
Greene, W. H.; Knight, N. F., Jr.; Stockwell, A. E.
1986-01-01
The space shuttle Challenger accident investigation focused on the failure of a tang-clevis joint on the right solid rocket motor. The existence of relative motion between the inner arm of the clevis and the O-ring sealing surface on the tang has been identified as a potential contributor to this failure. This motion can cause the O-rings to become unseated and therefore lose their sealing capability. Finite element structural analyses have been performed to predict both deflections and stresses in the joint under the primary, pressure loading condition. These analyses have demonstrated the difficulty of accurately predicting the structural behavior of the tang-clevis joint. Stresses in the vicinity of the connecting pins, obtained from elastic analyses, considerably exceed the material yield allowables indicating that inelastic analyses are probably necessary. Two modifications have been proposed to control the relative motion between the inner clevis arm and the tang at the O-ring sealing surface. One modification, referred to as the capture feature, uses additional material on the inside of the tang to restrict motion of the inner clevis arm. The other modification uses external stiffening rings above and below the joint to control the local bending in the shell near the joint. Both of these modifications are shown to be effective in controlling the relative motion in the joint.
Structural behavior of the space shuttle SRM tang-clevis joint
NASA Technical Reports Server (NTRS)
Greene, William H.; Knight, Norman F., Jr.; Stockwell, Alan E.
1988-01-01
The space shuttle Challenger accident investigation focused on the failure of a tang-clevis joint on the right solid rocket motor. The existence of relative motion between the inner arm of the clevis and the O-ring sealing surface on the tang has been identified as a potential contributor to this failure. This motion can cause the O-rings to become unseated and therefore lose their sealing capability. Finite element structural analyses have been performed to predict both deflections and stresses in the joint under the primary, pressure loading condition. These analyses have demonstrated the difficulty of accurately predicting the structural behavior of the tang-clevis joint. Stresses in the vicinity of the connecting pins, obtained from elastic analyses, considerably exceed the material yield allowables indicating that inelastic analyses are probably necessary. Two modifications have been proposed to control the relative motion between the inner clevis arm and the tang at the O-ring sealing surface. One modification, referred to as the capture feature, uses additional material on the inside of the tang to restrict motion of the inner clevis arm. The other modification uses external stiffening rings above and below the joint to control the local bending in the shell near the joint. Both of these modifications are shown to be effective in controlling the relative motion in the joint.
Compounding effects of sea level rise and fluvial flooding.
Moftakhari, Hamed R; Salvadori, Gianfausto; AghaKouchak, Amir; Sanders, Brett F; Matthew, Richard A
2017-09-12
Sea level rise (SLR), a well-documented and urgent aspect of anthropogenic global warming, threatens population and assets located in low-lying coastal regions all around the world. Common flood hazard assessment practices typically account for one driver at a time (e.g., either fluvial flooding only or ocean flooding only), whereas coastal cities vulnerable to SLR are at risk for flooding from multiple drivers (e.g., extreme coastal high tide, storm surge, and river flow). Here, we propose a bivariate flood hazard assessment approach that accounts for compound flooding from river flow and coastal water level, and we show that a univariate approach may not appropriately characterize the flood hazard if there are compounding effects. Using copulas and bivariate dependence analysis, we also quantify the increases in failure probabilities for 2030 and 2050 caused by SLR under representative concentration pathways 4.5 and 8.5. Additionally, the increase in failure probability is shown to be strongly affected by compounding effects. The proposed failure probability method offers an innovative tool for assessing compounding flood hazards in a warming climate.
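To make the compounding effect concrete, a minimal bivariate sketch with a Gaussian copula follows: it compares joint exceedance of river flow and coastal water level under dependence against the independence assumption. The copula family, correlation, and thresholds are assumptions for illustration, not the dependence model or SLR projections fitted in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho = 1_000_000, 0.6

# Gaussian copula: correlated standard normals mapped to uniform marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u_flow, u_level = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])

q = 0.99                                             # marginal 100-year (annual) threshold
p_both = np.mean((u_flow > q) & (u_level > q))       # compound (AND) scenario
p_either = np.mean((u_flow > q) | (u_level > q))     # either-driver (OR) scenario
print(f"P(both exceed): {p_both:.2e}  vs. independence: {(1 - q) ** 2:.2e}")
print(f"P(either exceeds): {p_either:.4f} vs. independence: {1 - q ** 2:.4f}")
```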
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
Statistical Performance Evaluation Of Soft Seat Pressure Relief Valves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Stephen P.; Gross, Robert E.
2013-03-26
Risk-based inspection methods enable estimation of the probability of failure on demand for spring-operated pressure relief valves at the United States Department of Energy's Savannah River Site in Aiken, South Carolina. This paper presents a statistical performance evaluation of soft seat spring operated pressure relief valves. These pressure relief valves are typically smaller and of lower cost than hard seat (metal to metal) pressure relief valves and can provide substantial cost savings in fluid service applications (air, gas, liquid, and steam) providing that probability of failure on demand (the probability that the pressure relief valve fails to perform its intended safety function during a potentially dangerous over pressurization) is at least as good as that for hard seat valves. The research in this paper shows that the proportion of soft seat spring operated pressure relief valves failing is the same or less than that of hard seat valves, and that for failed valves, soft seat valves typically have failure ratios of proof test pressure to set pressure less than that of hard seat valves.
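The kind of proportion comparison described above can be sketched with a simple exact test; the failure counts below are hypothetical placeholders, not Savannah River Site data, and the paper's own statistical procedure may differ.

```python
# Illustrative comparison of soft-seat vs. hard-seat failure proportions.
from scipy.stats import fisher_exact

soft_failed, soft_total = 4, 250       # hypothetical counts
hard_failed, hard_total = 12, 400      # hypothetical counts

table = [[soft_failed, soft_total - soft_failed],
         [hard_failed, hard_total - hard_failed]]
odds_ratio, p_value = fisher_exact(table, alternative="less")   # H1: soft fails less
print(f"soft-seat failure rate: {soft_failed / soft_total:.3f}, "
      f"hard-seat: {hard_failed / hard_total:.3f}, one-sided p = {p_value:.3f}")
```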
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Risk-Based Object Oriented Testing
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert
2000-01-01
Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1992-01-01
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of the means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Rock Slide Risk Assessment: A Semi-Quantitative Approach
NASA Astrophysics Data System (ADS)
Duzgun, H. S. B.
2009-04-01
Rock slides can be better managed by systematic risk assessments. Any risk assessment methodology for rock slides involves identification of rock slide risk components, which are hazard, elements at risk and vulnerability. For a quantitative/semi-quantitative risk assessment for rock slides, a mathematical value of the risk has to be computed and evaluated. The quantitative evaluation of risk for rock slides enables comparison of the computed risk with the risk of other natural and/or human-made hazards, and provides better decision support and easier communication for the decision makers. A quantitative/semi-quantitative risk assessment procedure involves: Danger Identification, Hazard Assessment, Elements at Risk Identification, Vulnerability Assessment, Risk Computation, Risk Evaluation. On the other hand, the steps of this procedure require adaptation of existing or development of new implementation methods depending on the type of landslide, data availability, investigation scale and nature of consequences. In this study, a generic semi-quantitative risk assessment (SQRA) procedure for rock slides is proposed. The procedure has five consecutive stages: data collection and analyses, hazard assessment, analyses of elements at risk and vulnerability, and risk assessment. The implementation of the procedure for a single rock slide case is illustrated for a rock slope in Norway. Rock slides from mountain Ramnefjell to lake Loen are considered to be one of the major geohazards in Norway. Lake Loen is located in the inner part of Nordfjord in Western Norway. Ramnefjell Mountain is heavily jointed, leading to formation of vertical rock slices with height between 400-450 m and width between 7-10 m. These slices threaten the settlements around Loen Valley and tourists visiting the fjord during the summer season, as the released slides have the potential of creating a tsunami. In the past, several rock slides were recorded from Ramnefjell Mountain between 1905 and 1950. Among them, four of the slides caused formation of tsunami waves which washed up to 74 m above the lake level. Two of the slides resulted in many fatalities in the inner part of the Loen Valley as well as great damage. There are three predominant joint structures in Ramnefjell Mountain, which control failure and the geometry of the slides. The first joint set is a foliation plane striking northeast-southwest and dipping 35°-40° to the east-southeast. The second and the third joint sets are almost perpendicular and parallel to the mountain side and scarp, respectively. These three joint sets form slices of rock columns with width ranging between 7-10 m and height of 400-450 m. It is stated that the joints in set II are opened between 1-2 m, which may bring about collection of water during heavy rainfall or snow melt, causing the slices to be pressed out. It is estimated that water in the vertical joints both reduces the shear strength of the sliding plane and causes reduction of normal stress on the sliding plane due to the formation of uplift force. Hence, rock slides in Ramnefjell Mountain occur in a plane failure mode. The quantitative evaluation of rock slide risk requires probabilistic analysis of rock slope stability and identification of consequences if the rock slide occurs. In this study, the failure probability of a rock slice is evaluated by the first-order reliability method (FORM).
Then, in order to use the calculated probability of failure value (Pf) in risk analyses, it is required to associate this Pf with frequency-based probabilities (i.e., Pf/year), since the computed failure probability is a measure of hazard and not a measure of risk unless it is associated with the consequences of the failure. This can be done either by considering the time-dependent behavior of the basic variables in the probabilistic models or by associating the computed Pf with the frequency of failures in the region. In this study, the frequency of rock slides in Ramnefjell during the previous century is used to evaluate the frequency-based probability used in the risk assessment. The major consequence of a rock slide is generation of a tsunami in lake Loen, causing inundation of residential areas around the lake. Risk is assessed by adapting the damage probability matrix approach, which was originally developed for risk assessment of buildings in case of earthquake.
ERIC Educational Resources Information Center
Pitts, Laura; Dymond, Simon
2012-01-01
Research on the high-probability (high-p) request sequence shows that compliance with low-probability (low-p) requests generally increases when preceded by a series of high-p requests. Few studies have conducted formal preference assessments to identify the consequences used for compliance, which may partly explain treatment failures, and still…
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross detection probability, wrong time detection, application of performance tools, and the GLR computer package are discussed.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
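A minimal numerical sketch of the BPT recurrence model: with mean recurrence time μ and aperiodicity α, the BPT distribution is the inverse Gaussian with mean μ and shape μ/α², which in SciPy's parameterization is invgauss(mu=α², scale=μ/α²). The values below are illustrative, not the Parkfield calculation reported in the paper.

```python
import numpy as np
from scipy.stats import invgauss

mu, alpha = 25.0, 0.5                        # mean recurrence (years), aperiodicity
bpt = invgauss(mu=alpha**2, scale=mu / alpha**2)

t = np.array([5.0, 12.5, 25.0, 50.0])        # elapsed time since last event (years)
hazard = bpt.pdf(t) / bpt.sf(t)              # instantaneous failure rate of survivors
print("hazard rate (1/yr):", np.round(hazard, 3))

# Conditional probability of an event in the next 30 years, given survival to t
dt = 30.0
cond_prob = (bpt.cdf(t + dt) - bpt.cdf(t)) / bpt.sf(t)
print("P(event in next 30 yr | quiet so far):", np.round(cond_prob, 3))
```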
Disasters as a necessary part of benefit-cost analyses.
Mark, R K; Stuart-Alexander, D E
1977-09-16
Benefit-cost analyses for water projects generally have not included the expected costs (residual risk) of low-probability disasters such as dam failures, impoundment-induced earthquakes, and landslides. Analysis of the history of these types of events demonstrates that dam failures are not uncommon and that the probability of a reservoir-triggered earthquake increases with increasing reservoir depth. Because the expected costs from such events can be significant and risk is project-specific, estimates should be made for each project. The cost of expected damage from a "high-risk" project in an urban area could be comparable to project benefits.
NASA Technical Reports Server (NTRS)
Naumann, R. J.; Oran, W. A.; Whymark, R. R.; Rey, C.
1981-01-01
The single axis acoustic levitator that was flown on SPAR VI malfunctioned. The results of a series of tests, analyses, and investigation of hypotheses that were undertaken to determine the probable cause of failure are presented, together with recommendations for future flights of the apparatus. The most probable causes of the SPAR VI failure were lower than expected sound intensity due to mechanical degradation of the sound source, and an unexpected external force that caused the experiment sample to move radially and eventually be lost from the acoustic energy well.
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
Bordin, Dimorvan; Bergamo, Edmara T P; Fardin, Vinicius P; Coelho, Paulo G; Bonfante, Estevam A
2017-07-01
To assess the probability of survival (reliability) and failure modes of narrow implants with different diameters. For fatigue testing, 42 implants with the same macrogeometry and internal conical connection were divided, according to diameter, as follows: narrow (Ø3.3×10mm) and extra-narrow (Ø2.9×10mm) (21 per group). Identical abutments were torqued to the implants and standardized maxillary incisor crowns were cemented and subjected to step-stress accelerated life testing (SSALT) in water. The use-level probability Weibull curves and the reliability for missions of 50,000 and 100,000 cycles at 50, 100, 150 and 180 N were calculated. For the finite element analysis (FEA), two virtual models, simulating the samples tested in fatigue, were constructed. Loads of 50 N and 100 N were applied 30° off-axis at the crown. The von-Mises stress was calculated for implant and abutment. The beta (β) values were 0.67 for narrow and 1.32 for extra-narrow implants, indicating that failure rates did not increase with fatigue in the former, but more likely were associated with damage accumulation and wear-out failures in the latter. Both groups showed high reliability (up to 97.5%) at 50 and 100 N. A decreased reliability was observed for both groups at 150 and 180 N (ranging from 0 to 82.3%), but no significant difference was observed between groups. Failure predominantly involved abutment fracture for both groups. In the FEA at the 50 N load, the Ø3.3 mm implant showed higher von-Mises stress in the abutment (7.75%) and implant (2%) when compared to the Ø2.9 mm implant. There was no significant difference between narrow and extra-narrow implants regarding probability of survival. The failure mode was similar for both groups, restricted to abutment fracture. Copyright © 2017 Elsevier Ltd. All rights reserved.
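The reliability-for-a-mission calculation used here follows from the two-parameter Weibull model, R(t) = exp(-(t/η)^β). A minimal sketch: the β values echo the abstract, but the characteristic lives (η) are placeholders, since they are not reported in this record.

```python
import numpy as np

def weibull_reliability(cycles, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(cycles / eta) ** beta)

missions = np.array([50_000, 100_000])
for name, beta, eta in [("narrow (assumed eta)", 0.67, 1.5e6),
                        ("extra-narrow (assumed eta)", 1.32, 8.0e5)]:
    print(name, np.round(weibull_reliability(missions, beta, eta), 3))
```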
NASA Astrophysics Data System (ADS)
Sawant, M.; Christou, A.
2012-12-01
While use of LEDs in Fiber Optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: active layer (where electron-hole recombination occurs to emit light), electrodes (provides electrical contact to the semiconductor chip), Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), plastic encapsulation (protective polymer layer) and packaging failures (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), failure mode ratio (α), failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generation of prior time-to-failure distributions and comparison with our accelerated life test data. To generate the prior distributions, data and results from previous investigations were utilized [5-33] where reliability test results of similar LEDs were reported. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), the semiconductor structures (DH, MQW) and the mode of testing (DC, Pulsed) was carried out. The data was categorized according to the materials system and LED structure such as AlGaInP-DH-DC, AlGaInP-MQW-DC, GaN-DH-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperatures and currents, the reported data was converted to the present application conditions of the medical environment. Comparisons between the model data and accelerated test results carried out in the present work are reported. The use of accelerating agent modeling and regression analysis was also carried out. We have used the Inverse Power Law model with the current density J as the accelerating agent and the Arrhenius model with temperature as the accelerating agent. Finally, our reported methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
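The criticality bookkeeping described above can be sketched in the usual MIL-STD-1629A style, Cm = β·α·λ·t, where β is the failure effect probability, α the failure mode ratio, λ the part failure rate, and t the operating time; whether the paper follows this exact formula is an assumption, and the failure modes and numbers below are hypothetical placeholders.

```python
failure_modes = [
    # (name, beta, alpha, lambda_p [failures/hour], t [hours]) -- hypothetical values
    ("active-layer degradation", 1.0, 0.45, 2e-7, 10_000),
    ("electrode degradation",    0.8, 0.20, 2e-7, 10_000),
    ("ITO layer degradation",    0.6, 0.15, 2e-7, 10_000),
    ("encapsulant yellowing",    0.5, 0.10, 2e-7, 10_000),
    ("packaging/bond-wire",      1.0, 0.10, 2e-7, 10_000),
]

# Rank failure modes by criticality number Cm = beta * alpha * lambda_p * t
for name, beta, alpha, lam, t in sorted(
        failure_modes, key=lambda m: m[1] * m[2] * m[3] * m[4], reverse=True):
    cm = beta * alpha * lam * t
    print(f"{name:28s} criticality Cm = {cm:.2e}")
```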
Effect of Progressive Heart Failure on Cerebral Hemodynamics and Monoamine Metabolism in CNS.
Mamalyga, M L; Mamalyga, L M
2017-07-01
Compensated and decompensated heart failure are characterized by different associations of disorders in the brain and heart. In compensated heart failure, the blood flow in the common carotid and basilar arteries does not change. Exacerbation of heart failure leads to severe decompensation and is accompanied by a decrease in blood flow in the carotid and basilar arteries. Changes in monoamine content occurring in the brain at different stages of heart failure are determined by various factors. The functional exercise test showed unequal monoamine-synthesizing capacities of the brain in compensated and decompensated heart failure. Reduced capacity of the monoaminergic systems in decompensated heart failure probably leads to overstrain of the central regulatory mechanisms, their gradual exhaustion, and failure of the compensatory mechanisms, which contributes to progression of heart failure.
Failure Investigation of Radiant Platen Superheater Tube of Thermal Power Plant Boiler
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Mandal, A.; Roy, H.
2015-04-01
This paper highlights a case study of a typical premature failure of a radiant platen superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement and chemical analysis are conducted as part of the investigations. Apart from these, metallographic analysis and fractography are also conducted to ascertain the probable cause of failure. Finally, it is concluded that the premature failure of the superheater tube can be attributed to localized creep at high temperature. Corrective actions have also been suggested to avoid this type of failure in the near future.
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation of the horizontal angle of the load; however, this sensitivity becomes insignificant as the degree of redundancy increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie; Helton, Jon C.
2015-05-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2. Keywords: Aleatory uncertainty, CPLOAS_2, Epistemic uncertainty, Probability of loss of assured safety, Strong link, Uncertainty analysis, Weak link
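The four PLOAS definitions listed above can be checked with a simple Monte Carlo sketch over sampled link failure times. The Weibull failure-time models below are placeholders; CPLOAS_2 itself works with time-dependent link properties and treats aleatory and epistemic uncertainty separately, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_wl, n_sl = 200_000, 2, 2

# Placeholder failure-time models (arbitrary time units)
t_wl = rng.weibull(2.0, size=(n, n_wl)) * 1.0    # weak links: intended to fail early
t_sl = rng.weibull(2.0, size=(n, n_sl)) * 3.0    # strong links: intended to fail late

first_wl, last_wl = t_wl.min(axis=1), t_wl.max(axis=1)
first_sl, last_sl = t_sl.min(axis=1), t_sl.max(axis=1)

print("P(all SLs fail before any WL):", np.mean(last_sl < first_wl))
print("P(any SL fails before any WL):", np.mean(first_sl < first_wl))
print("P(all SLs fail before all WLs):", np.mean(last_sl < last_wl))
print("P(any SL fails before all WLs):", np.mean(first_sl < last_wl))
```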
NASA Technical Reports Server (NTRS)
Thomas, Leann; Utley, Dawn
2006-01-01
While there has been extensive research in defining project organizational structures for traditional projects, little research exists to support high-technology government projects' organizational structure definition. High-Technology Government projects differ from traditional projects in that they are non-profit, span across Government-Industry organizations, typically require significant integration effort, and are strongly susceptible to a volatile external environment. Systems Integration implementation has been identified as a major contributor to both project success and failure. The literature research bridges program management organizational planning, systems integration, organizational theory, and independent project reports, in order to assess Systems Integration (SI) organizational structure selection for improving the high-technology government project's probability of success. This paper will describe the methodology used to 1) Identify and assess SI organizational structures and their success rate, and 2) Identify key factors to be used in the selection of these SI organizational structures during the acquisition strategy process.
Prognostic Factors in Severe Chagasic Heart Failure
Costa, Sandra de Araújo; Rassi, Salvador; Freitas, Elis Marra da Madeira; Gutierrez, Natália da Silva; Boaventura, Fabiana Miranda; Sampaio, Larissa Pereira da Costa; Silva, João Bastista Masson
2017-01-01
Background Prognostic factors are extensively studied in heart failure; however, their role in severe Chagasic heart failure has not been established. Objectives To identify the association of clinical and laboratory factors with the prognosis of severe Chagasic heart failure, as well as the association of these factors with mortality and survival in a 7.5-year follow-up. Methods 60 patients with severe Chagasic heart failure were evaluated regarding the following variables: age, blood pressure, ejection fraction, serum sodium, creatinine, 6-minute walk test, non-sustained ventricular tachycardia, QRS width, indexed left atrial volume, and functional class. Results 53 (88.3%) patients died during follow-up, and 7 (11.7%) remained alive. Cumulative overall survival probability was approximately 11%. Non-sustained ventricular tachycardia (HR = 2.11; 95% CI: 1.04 - 4.31; p<0.05) and indexed left atrial volume ≥ 72 mL/m2 (HR = 3.51; 95% CI: 1.63 - 7.52; p<0.05) were the only variables that remained as independent predictors of mortality. Conclusions The presence of non-sustained ventricular tachycardia on Holter and indexed left atrial volume > 72 mL/m2 are independent predictors of mortality in severe Chagasic heart failure, with cumulative survival probability of only 11% in 7.5 years. PMID:28443956
Fatigue analysis of composite materials using the fail-safe concept
NASA Technical Reports Server (NTRS)
Stievenard, G.
1982-01-01
If R1 is the probability of having a crack on a flight component and R2 is the probability of seeing this crack propagate between two scheduled inspections, the global failure regulation states that the product R1 x R2 must not exceed 0.0000001.
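A quick worked check of this product criterion, with assumed illustrative probabilities (not values from the paper):

```python
# Fail-safe product criterion: R1 * R2 must not exceed 1e-7.
R1 = 2e-4   # assumed probability of a crack being present on the component
R2 = 5e-4   # assumed probability the crack propagates between inspections
limit = 1e-7
print(R1 * R2, "<=", limit, ":", R1 * R2 <= limit)   # 1e-07 <= 1e-07 : True
```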
Human versus automation in responding to failures: an expected-value analysis
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Parasuraman, R.
2000-01-01
A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both human and automation being considered, as well as factors independent of the choice--namely, costs and benefits of incorrect and correct decisions as well as the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
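A minimal sketch of this expected-value comparison: given miss and false alarm probabilities for the human and for the automation, plus the payoffs and costs of correct and incorrect decisions and the prior failure probability, choose the agent with the higher expected value. All numbers below are illustrative assumptions, not values from the paper.

```python
def expected_value(p_miss, p_fa, p_failure,
                   v_hit=0.0, c_miss=-1000.0, c_fa=-50.0, v_cr=0.0):
    """Expected value per operating period for one detection agent."""
    p_hit = 1.0 - p_miss
    p_cr = 1.0 - p_fa                      # correct rejection
    return (p_failure * (p_hit * v_hit + p_miss * c_miss)
            + (1.0 - p_failure) * (p_fa * c_fa + p_cr * v_cr))

p_failure = 0.01
ev_human = expected_value(p_miss=0.10, p_fa=0.02, p_failure=p_failure)
ev_auto = expected_value(p_miss=0.03, p_fa=0.15, p_failure=p_failure)
print(f"human: {ev_human:.2f}  automation: {ev_auto:.2f}  ->",
      "prefer human" if ev_human > ev_auto else "prefer automation")
```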
Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.
1999-01-01
A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failure. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data except in structural applications where interlaminar stresses are important, since these may cause failure mechanisms such as debonding or delaminations.
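As a small illustration of one of the criteria named above, here is a simplified maximum-strain check; real implementations distinguish tension and compression allowables per ply direction, and the strains and allowables below are hypothetical, not from the COMET analyses.

```python
def max_strain_failed(strains, allowables):
    """Simplified magnitude-based maximum-strain check for one ply."""
    return any(abs(e) > a for e, a in zip(strains, allowables))

# (eps_11, eps_22, gamma_12) versus assumed allowable magnitudes
print(max_strain_failed((0.009, 0.002, 0.012), (0.010, 0.004, 0.015)))  # False
print(max_strain_failed((0.012, 0.002, 0.012), (0.010, 0.004, 0.015)))  # True
```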
Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy.
Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Di Muzio, Nadia; Longobardi, Barbara; Mangili, Paola; Veronese, Ivan
2013-09-06
The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety.
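The RPN bookkeeping can be sketched in the standard FMEA fashion (RPN as the product of occurrence, severity, and detectability scores, each 1-10), using the 125 action threshold quoted above; whether the paper scores exactly these three factors is an assumption, and the failure modes and scores below are hypothetical, not the paper's 74 modes.

```python
THRESHOLD = 125

failure_modes = [
    # (description, occurrence, severity, detectability) -- hypothetical scores
    ("overlap region not contoured",        4, 8, 5),
    ("wrong CT calibration curve selected", 2, 9, 6),
    ("wrong number of fractions entered",   3, 9, 5),
    ("minor naming inconsistency",          5, 2, 3),
]

for desc, o, s, d in failure_modes:
    rpn = o * s * d
    flag = "REVIEW" if rpn > THRESHOLD else "ok"
    print(f"{desc:38s} RPN={rpn:4d}  {flag}")
```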
Paes, P N G; Bastian, F L; Jardim, P M
2017-09-01
The aim was to consider the efficacy of selective glass infiltration etching (SIE) treatment as a procedure to modify the zirconia surface, resulting in higher interfacial fracture toughness. Y-TZP was subjected to 5 different surface treatment conditions consisting of no treatment (G1), SIE followed by hydrofluoric acid treatment (G2), heat treatment at 750°C (G3), hydrofluoric acid treatment (G4) and airborne-particle abrasion with alumina particles (G5). The effect of surface treatment on roughness was evaluated by Atomic Force Microscopy, providing three different parameters: Ra, Rsk and surface area variation. The ceramic/resin cement interface was analyzed by a fracture mechanics KI test, with the failure mode determined by fractographic analysis. Weibull's analysis was also performed to evaluate the structural integrity of the adhesion zone. G2 and G4 specimens showed very similar and high Ra values but different surface area variation (33% for G2 and 13% for G4), and they presented the highest fracture toughness (KIC). Weibull's analysis showed a tendency of G2 (SIE) to exhibit higher KIC values than the other groups, but with more data scatter and a higher early failure probability than G4 specimens. The selective glass infiltration etching surface treatment was effective in modifying the zirconia surface roughness, increasing the bonding area and hence the mechanical imbrication at the zirconia/resin cement interface, resulting in higher fracture toughness (KIC) values, with higher KIC values obtained when a failure probability above 20% was expected (Weibull's distribution) among all the experimental groups. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Sanchez-Alonso, Jose L.; Bhargava, Anamika; O’Hara, Thomas; Glukhov, Alexey V.; Schobesberger, Sophie; Bhogal, Navneet; Sikkel, Markus B.; Mansfield, Catherine; Korchev, Yuri E.; Lyon, Alexander R.; Punjabi, Prakash P.; Nikolaev, Viacheslav O.; Trayanova, Natalia A.
2016-01-01
Rationale: Disruption in subcellular targeting of Ca2+ signaling complexes secondary to changes in cardiac myocyte structure may contribute to the pathophysiology of a variety of cardiac diseases, including heart failure (HF) and certain arrhythmias. Objective: To explore microdomain-targeted remodeling of ventricular L-type Ca2+ channels (LTCCs) in HF. Methods and Results: Super-resolution scanning patch-clamp, confocal and fluorescence microscopy were used to explore the distribution of single LTCCs in different membrane microdomains of nonfailing and failing human and rat ventricular myocytes. Disruption of membrane structure in both species led to the redistribution of functional LTCCs from their canonical location in transversal tubules (T-tubules) to the non-native crest of the sarcolemma, where their open probability was dramatically increased (0.034±0.011 versus 0.154±0.027, P<0.001). High open probability was linked to enhanced calcium–calmodulin kinase II–mediated phosphorylation in non-native microdomains and resulted in an elevated ICa,L window current, which contributed to the development of early afterdepolarizations. A novel model of LTCC function in HF was developed; after its validation with experimental data, the model was used to ascertain how HF-induced T-tubule loss led to altered LTCC function and early afterdepolarizations. The HF myocyte model was then implemented in a 3-dimensional left ventricle model, demonstrating that such early afterdepolarizations can propagate and initiate reentrant arrhythmias. Conclusions: Microdomain-targeted remodeling of LTCC properties is an important event in pathways that may contribute to ventricular arrhythmogenesis in the settings of HF-associated remodeling. This extends beyond the classical concept of electric remodeling in HF and adds a new dimension to cardiovascular disease. PMID:27572487
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
Bowden, Vanessa K; Visser, Troy A W; Loft, Shayne
2017-06-01
It is generally assumed that drivers speed intentionally because of factors such as frustration with the speed limit or general impatience. The current study examined whether speeding following an interruption could be better explained by unintentional prospective memory (PM) failure. In these situations, interrupting drivers may create a PM task, with speeding the result of drivers forgetting their newly encoded intention to travel at a lower speed after interruption. Across 3 simulated driving experiments, corrected or uncorrected speeding in recently reduced speed zones (from 70 km/h to 40 km/h) increased on average from 8% when uninterrupted to 33% when interrupted. Conversely, the probability that participants traveled under their new speed limit in recently increased speed zones (from 40 km/h to 70 km/h) increased from 1% when uninterrupted to 23% when interrupted. Consistent with a PM explanation, this indicates that interruptions lead to a general failure to follow changed speed limits, not just to increased speeding. Further testing a PM explanation, Experiments 2 and 3 manipulated variables expected to influence the probability of PM failures and subsequent speeding after interruptions. Experiment 2 showed that performing a cognitively demanding task during the interruption, when compared with unfilled interruptions, increased the probability of initially speeding from 1% to 11%, but that participants were able to correct (reduce) their speed. In Experiment 3, providing participants with 10s longer to encode the new speed limit before interruption decreased the probability of uncorrected speeding after an unfilled interruption from 30% to 20%. Theoretical implications and implications for road design interventions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Neutron-Induced Failures in Semiconductor Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wender, Stephen Arthur
2017-03-13
Single Event Effects are a very significant failure mode in modern semiconductor devices that may limit their reliability. Accelerated testing is important for the semiconductor industry. Considerably more work is needed in this field to mitigate the problem; mitigation will probably come from physicists and electrical engineers working together.
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2010 CFR
2010-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2012 CFR
2012-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2011 CFR
2011-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2012 CFR
2012-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2011 CFR
2011-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2010 CFR
2010-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to the failure or non-failure region, and surround it with a protection sphere to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not covered by spheres shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
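For orientation, the quantity POF-Darts estimates can be written down with a plain Monte Carlo baseline. The sketch below is not the POF-Darts algorithm (no disk packing, Voronoi decomposition, or surrogate); it only shows the target quantity, the fraction of an assumed uncertain parameter space in which a hypothetical limit-state function signals failure.

```python
# Plain Monte Carlo estimate of a probability of failure; the limit-state
# function and input distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def response(x):
    # hypothetical limit state: failure when the response drops below zero
    return 2.5 - np.sqrt(x[..., 0]**2 + x[..., 1]**2)

n = 100_000
samples = rng.normal(size=(n, 2))          # uncertain parameters ~ N(0, I)
pof = np.mean(response(samples) < 0.0)     # fraction of samples in the failure region
print(f"estimated POF = {pof:.4f}")
```

Methods such as POF-Darts aim to reach comparable accuracy with far fewer expensive function evaluations than this brute-force estimator requires.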
Weibull-Based Design Methodology for Rotating Aircraft Engine Structures
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry
2002-01-01
The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95% probability of survival) of approximately 17,000 and 32,000 hr correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9% probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9%, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
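The life bookkeeping in this abstract follows from the two-parameter Weibull distribution: a life quantile can be rescaled between probability levels, and identical components in series combine through their survival probabilities. The sketch below encodes both steps; the blade count is not stated in the abstract, so the value of 146 used in the example is an assumption (it happens to reproduce the quoted blade lives closely).

```python
# Hedged sketch of Weibull life scaling and series-system combination.
import math

def scale_life(L_ref, q_ref, q_new, beta):
    """Life at failure probability q_new, given life L_ref at probability q_ref,
    for a two-parameter Weibull distribution with slope (shape) beta."""
    return L_ref * (math.log(1.0 - q_new) / math.log(1.0 - q_ref)) ** (1.0 / beta)

def component_life_for_system(L_sys, q, n_components, beta):
    """Life each of n identical series components must reach at probability q so
    that the system (failure = first component failure) reaches L_sys at q."""
    q_comp = 1.0 - (1.0 - q) ** (1.0 / n_components)   # from R_sys = R_comp**n
    return scale_life(L_sys, q_comp, q, beta)

# Assumed 146-blade HPT row, blade system life L(sub 0.1) = 9000 hr (q = 0.001):
for beta in (3, 6, 9):
    print(beta, round(component_life_for_system(9000.0, 0.001, 146, beta)))
```

With that assumed blade count, the printed individual blade lives come out close to the 47,391, 20,652, and 15,658 hr quoted above for slopes of 3, 6, and 9.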
A two-stage model of fracture of rocks
Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.
1996-01-01
In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure is expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales, including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that rely on failure rates derived from similar equipment or simply expert judgment.
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
Reducing the Risk of Human Space Missions with INTEGRITY
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.
2003-01-01
The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology to decompose the system into subsystems and components, to quantify the failure risk as a function of the design elements and their corresponding probability of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding the probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
Defense Strategies for Asymmetric Networked Systems with Discrete Components
Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.
2018-01-01
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588
A method for producing digital probabilistic seismic landslide hazard maps
Jibson, R.W.; Harp, E.L.; Michael, J.A.
2000-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include: (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24 000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10 m grid spacing using ARC/INFO GIS software on a UNIX computer. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure. © 2000 Elsevier Science B.V. All rights reserved.
Jibson, Randall W.; Harp, Edwin L.; Michael, John A.
1998-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24,000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10-m grid spacing in the ARC/INFO GIS platform. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure.
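The calibration step both of these records describe, turning modeled Newmark displacement into a probability of failure, can be sketched as a fit of a saturating curve to binned displacement versus observed failure fraction. The functional form, coefficients, and data below are illustrative assumptions, not the published Northridge regression.

```python
# Hedged sketch: fit a saturating probability-of-failure curve to hypothetical
# binned Newmark-displacement data.
import numpy as np
from scipy.optimize import curve_fit

def prob_failure(disp_cm, a, b, c):
    # Weibull-style curve: P(failure) approaches a as displacement grows
    return a * (1.0 - np.exp(-b * disp_cm**c))

# hypothetical binned data: median modeled displacement (cm) and failure fraction
disp = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
frac = np.array([0.006, 0.016, 0.044, 0.141, 0.262, 0.326, 0.330, 0.330])

params, _ = curve_fit(prob_failure, disp, frac, p0=[0.35, 0.05, 1.5])
print("fitted a, b, c:", np.round(params, 3))
print("P(failure) at 15 cm displacement:", round(prob_failure(15.0, *params), 3))
```

Once such a curve is calibrated, it can be applied cell by cell to any modeled displacement map to produce the hazard maps the abstracts describe.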
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations which enables estimation of probability of failure with significantly reduced number of samples than what is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree of freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
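The core idea of the Girsanov-based approach, changing the probability measure so that failures occur often in simulation and then reweighting each sample by its likelihood ratio, has a simple static analogue in ordinary importance sampling. The sketch below uses a standard normal response with an assumed failure threshold (not the 10-degree-of-freedom or road-load models of the paper) to show the sample-size reduction for a rare failure probability.

```python
# Hedged sketch: importance sampling as the discrete, static analogue of the
# change-of-measure idea. Threshold and sample sizes are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
threshold = 4.0          # failure when a standard normal response exceeds 4
n = 20_000

# direct Monte Carlo: almost no failures at this sample size
x = rng.standard_normal(n)
print("direct MC estimate:        ", np.mean(x > threshold))

# shift the sampling density toward the failure region, correct by likelihood ratio
y = rng.normal(loc=threshold, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold)
print("importance sampling estimate:", np.mean((y > threshold) * weights))
print("exact value:               ", norm.sf(threshold))
```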
Enhanced CARES Software Enables Improved Ceramic Life Prediction
NASA Technical Reports Server (NTRS)
Janosik, Lesley A.
1997-01-01
The NASA Lewis Research Center has developed award-winning software that enables American industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, graphite) structures in a wide variety of 21st century applications. The CARES (Ceramics Analysis and Reliability Evaluation of Structures) series of software is successfully used by numerous engineers in industrial, academic, and government organizations as an essential element of the structural design and material selection processes. The latest version of this software, CARES/Life, provides a general- purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. CARES/Life was recently enhanced by adding new modules designed to improve functionality and user-friendliness. In addition, a beta version of the newly-developed CARES/Creep program (for determining the creep life of monolithic ceramic components) has just been released to selected organizations.
What predicts successful literacy acquisition in a second language?
Frost, Ram; Siegelman, Noam; Narkiss, Alona; Afek, Liron
2013-01-01
We examined whether success (or failure) in assimilating the structure of a second language could be predicted by general statistical learning abilities that are non-linguistic in nature. We employed a visual statistical learning (VSL) task, monitoring our participants’ implicit learning of the transitional probabilities of visual shapes. A pretest revealed that performance in the VSL task is not correlated with abilities related to a general G factor or working memory. We found that native speakers of English who picked up the implicit statistical structure embedded in the continuous stream of shapes, on average, better assimilated the Semitic structure of Hebrew words. Our findings thus suggest that languages and their writing systems are characterized by idiosyncratic correlations of form and meaning, and these are picked up in the process of literacy acquisition, as they are picked up in any other type of learning, for the purpose of making sense of the environment. PMID:23698615
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.
Wang, X-M; Yin, S-H; Du, J; Du, M-L; Wang, P-Y; Wu, J; Horbinski, C M; Wu, M-J; Zheng, H-Q; Xu, X-Q; Shu, W; Zhang, Y-J
2017-07-01
Retreatment of tuberculosis (TB) often fails in China, yet the risk factors associated with the failure remain unclear. To identify risk factors for the treatment failure of retreated pulmonary tuberculosis (PTB) patients, we analyzed the data of 395 retreated PTB patients who received retreatment between July 2009 and July 2011 in China. PTB patients were categorized into 'success' and 'failure' groups by their treatment outcome. Univariable and multivariable logistic regression were used to evaluate the association between treatment outcome and socio-demographic as well as clinical factors. We also created an optimized risk score model to evaluate the predictive values of these risk factors on treatment failure. Of 395 patients, 99 (25·1%) were diagnosed as retreatment failure. Our results showed that risk factors associated with treatment failure included drug resistance, low education level, low body mass index (6 months), standard treatment regimen, retreatment type, positive culture result after 2 months of treatment, and the place where the first medicine was taken. An Optimized Framingham risk model was then used to calculate the risk scores of these factors. Place where first medicine was taken (temporary living places) received a score of 6, which was highest among all the factors. The predicted probability of treatment failure increases as risk score increases. Ten out of 359 patients had a risk score >9, which corresponded to an estimated probability of treatment failure >70%. In conclusion, we have identified multiple clinical and socio-demographic factors that are associated with treatment failure of retreated PTB patients. We also created an optimized risk score model that was effective in predicting the retreatment failure. These results provide novel insights for the prognosis and improvement of treatment for retreated PTB patients.
Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M
2018-04-01
The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess risk of each lesion being the first site of failure. The two models were used in combination to predict probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48, p < 0.001). Multivariable logistic regression of individual lesions at the time of first failure showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17, p < 0.001). Increasing peak standardized uptake value was significantly associated with lesion failure (OR = 1.26 per unit increase, 95% CI: 1.12-1.40, p < 0.001). The electronic model is available at http://bit.ly/LungModelFDG. We developed a failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Landslide Probability Assessment by the Derived Distributions Technique
NASA Astrophysics Data System (ADS)
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological, and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slip" landslides, which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model is used along with the soil characteristic curve (suction vs. moisture) and the Mohr-Coulomb failure criterion in order to calculate the FOS of the slope. Data from two slopes located in steep tropical regions of the cities of Medellín (Colombia) and Rio de Janeiro (Brazil) were used to verify the model's performance. The results indicated significant differences between the obtained FOS values and the behavior observed in the field. The model shows relatively high values of FOS that do not reflect the instability of the analyzed slopes. For the two cases studied, the application of a simpler reliability concept (such as the Probability of Failure - PR and Reliability Index - β) instead of a FOS could lead to more realistic results.
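A stripped-down version of the derived-distributions idea can be sketched by pushing random storm characteristics through a deterministic slope model and reading off P(FOS < 1). The toy FOS function and every parameter value below are assumptions standing in for the Philip infiltration / Mohr-Coulomb chain used in the paper.

```python
# Hedged sketch of the derived-distribution workflow with a toy slope model.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

intensity = rng.exponential(scale=20.0, size=n)               # storm intensity, mm/h
duration = rng.exponential(scale=6.0, size=n)                 # storm duration, h
wetting_depth = np.minimum(0.02 * intensity * duration, 1.5)  # wetting front depth, m

def toy_fos(z_wet, fos_dry=1.8, fos_sat=0.9, slip_depth=1.5):
    # assumed linear loss of safety factor as the wetting front nears the slip surface
    return fos_dry - (fos_dry - fos_sat) * (z_wet / slip_depth)

fos = toy_fos(wetting_depth)
print("P(FOS < 1) per storm:", np.mean(fos < 1.0))
```

The Monte Carlo sample of FOS values stands in for the analytical FOS pdf; the exceedance probability and return period then follow directly, as the abstract notes.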
Cycles till failure of silver-zinc cells with competing failure modes: Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
One hundred twenty-nine cells were run through charge-discharge cycles until failure. The experiment design was a variant of a central composite factorial in five factors. Preliminary data analysis consisted of response surface estimation of life. Batteries fail under two basic modes: a low-voltage condition and an internal shorting condition. A competing failure modes analysis using maximum likelihood estimation for the extreme value life distribution was performed. Extensive diagnostics such as residual plotting and probability plotting were employed to verify data quality and the choice of model.
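The competing-failure-modes structure of such data can be sketched as each cell's observed cycles-to-failure being the minimum of two latent failure times, one per mode, with only the minimum and its mode recorded. The distributions and scale values below are placeholders, not estimates from the 129-cell experiment.

```python
# Hedged sketch of competing failure modes: observed life = min of two latent lives.
import numpy as np

rng = np.random.default_rng(7)
n = 129  # same cell count as the experiment; everything else is illustrative

low_voltage = rng.weibull(2.0, n) * 400.0   # latent cycles to low-voltage failure
short_circ  = rng.weibull(3.5, n) * 350.0   # latent cycles to internal-short failure

cycles = np.minimum(low_voltage, short_circ)          # what is actually observed
mode = np.where(low_voltage < short_circ, "low-voltage", "short")
print("median cycles to failure:", round(np.median(cycles)))
print("share failing by shorting:", np.mean(mode == "short"))
```

A competing-modes likelihood fits each mode's distribution using the observed minimum and the identity of the mode, treating the other mode as censored, which is the role the maximum likelihood estimation plays in the abstract.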
NASA Astrophysics Data System (ADS)
Arnone, E.; Noto, L. V.; Dialynas, Y. G.; Caracciolo, D.; Bras, R. L.
2015-12-01
This work presents the capabilities of the tRIBS-VEGGIE-Landslide model in two versions: one developed within a probabilistic framework and one coupled with a root cohesion module. The probabilistic model treats geotechnical and soil retention curve parameters as random variables across the basin and estimates theoretical probability distributions of slope stability and the associated "factor of safety" commonly used to describe the occurrence of shallow landslides. The derived distributions are used to obtain the spatio-temporal dynamics of the probability of failure, conditioned on soil moisture dynamics at each watershed location. The framework has been tested in the Luquillo Experimental Forest (Puerto Rico), where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of the probability of failure. The second version of the model accounts for the additional cohesion exerted by vegetation roots. The approach is to use the Fiber Bundle Model (FBM) framework, which allows for the evaluation of root strength as a function of the stress-strain relationships of bundles of fibers. The model requires knowledge of the root architecture to evaluate the additional reinforcement from each root diameter class. The root architecture is represented with a branching topology model based on Leonardo's rule. The methodology has been tested on a simple case study to explore the role of both hydrological and mechanical root effects. Results demonstrate that the effects of root water uptake can at times be more significant than the mechanical reinforcement, and that the additional resistance provided by roots depends heavily on the vegetation root structure and length.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statin use, and statin type (lipophilic or hydrophilic), improves long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to time of all-cause, cardiovascular, and HF mortality or end of study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin use compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with a significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
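The simplest case behind the FPI/Hasofer-Lind discussion is a linear limit state with independent normal variables, where the reliability index and failure probability are available in closed form. A minimal sketch with assumed values, checked against Monte Carlo:

```python
# Hedged sketch: Hasofer-Lind reliability index for a linear limit state g = R - S
# with independent normal resistance R and load effect S. Values are assumptions.
import numpy as np
from scipy.stats import norm

mu_R, sd_R = 50.0, 5.0     # resistance (e.g., strength)
mu_S, sd_S = 30.0, 6.0     # load effect (e.g., applied stress)

beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)   # reliability index for linear g
pf_form = norm.cdf(-beta)                     # first-order failure probability

rng = np.random.default_rng(3)
n = 2_000_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
print(f"beta = {beta:.3f}  FORM Pf = {pf_form:.3e}  MC Pf = {np.mean(g < 0):.3e}")
```

For nonlinear limit states or non-normal variables, the Rackwitz-Fiessler transformation and iterative search for the design point take the place of this one-line index, which is the situation FPI is built to handle.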
Diagnostic reasoning techniques for selective monitoring
NASA Technical Reports Server (NTRS)
Homem-De-mello, L. S.; Doyle, R. J.
1991-01-01
An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
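The final step described above, revising component failure probabilities with Bayes' rule once the assertions have been evaluated, reduces for a single component and a single piece of evidence to the update below. All probabilities in the example are assumptions.

```python
# Hedged sketch of a single-component Bayes update from an assertion outcome.
def posterior_failure(prior, p_evidence_given_failed, p_evidence_given_ok):
    """P(component failed | evidence) via Bayes' rule."""
    num = p_evidence_given_failed * prior
    den = num + p_evidence_given_ok * (1.0 - prior)
    return num / den

# a priori failure probability 1%, assertion violated 95% of the time if failed
# and 5% of the time if healthy (illustrative numbers)
print(posterior_failure(prior=0.01,
                        p_evidence_given_failed=0.95,
                        p_evidence_given_ok=0.05))
```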
Probabilistic metrology or how some measurement outcomes render ultra-precise estimates
NASA Astrophysics Data System (ADS)
Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.
2016-10-01
We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Pfenninger, Stefan
In this paper, we propose a strategy to control the self-organizing dynamics of the Bak-Tang-Wiesenfeld (BTW) sandpile model on complex networks by allowing some degree of failure tolerance for the nodes and introducing additional active dissipation while taking the risk of possible node damage. We show that the probability for large cascades significantly increases or decreases respectively when the risk for node damage outweighs the active dissipation and when the active dissipation outweighs the risk for node damage. By considering the potential additional risk from node damage, a non-trivial optimal active dissipation control strategy which minimizes the total cost in the system can be obtained. Under some conditions the introduced control strategy can decrease the total cost in the system compared to the uncontrolled model. Moreover, when the probability of damaging a node experiencing failure tolerance is greater than the critical value, then no matter how successful the active dissipation control is, the total cost of the system will have to increase. This critical damage probability can be used as an indicator of the robustness of a network or system. Copyright (C) EPLA, 2015
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
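The RS word error probability in such an analysis is typically the probability that more symbols are in error than a bounded-distance decoder can correct, t = (n - k)/2. The sketch below uses the common (255, 223) Reed-Solomon configuration as an assumption, and the input symbol error probabilities are placeholders rather than values produced by the Viterbi inner-decoder model of the paper.

```python
# Hedged sketch: word error probability of a bounded-distance RS decoder given an
# input symbol error probability p, assuming independent symbol errors.
from math import comb

def rs_word_error_prob(p, n=255, k=223):
    t = (n - k) // 2                          # correctable symbol errors per word
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

for p in (1e-2, 2e-2, 4e-2):
    print(f"symbol error {p:.0e} -> word error {rs_word_error_prob(p):.3e}")
```

Splitting the undecodable events into decoding failures and decoding errors, and converting them to bit error probabilities, refines this count in the way the abstract describes.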
Role of stress triggering in earthquake migration on the North Anatolian fault
Stein, R.S.; Dieterich, J.H.; Barka, A.A.
1996-01-01
Ten M ≥ 6.7 earthquakes ruptured 1,000 km of the North Anatolian fault (Turkey) during 1939-92, providing an unsurpassed opportunity to study how one large shock sets up the next. Calculations of the change in Coulomb failure stress reveal that 9 out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 5 bars, equivalent to 20 years of secular stressing. We translate the calculated stress changes into earthquake probabilities using an earthquake-nucleation constitutive relation, which includes both permanent and transient stress effects. For the typical 10-year period between triggering and subsequent rupturing shocks in the Anatolia sequence, the stress changes yield an average three-fold gain in the ensuing earthquake probability. Stress is now calculated to be high at several isolated sites along the fault. During the next 30 years, we estimate a 15% probability of a M ≥ 6.7 earthquake east of the major eastern center of Erzincan, and a 12% probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere. © 1997 Elsevier Science Ltd.
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...
2017-10-26
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.
SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Xiao, Y; Wang, J
2014-06-15
Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis method was performed for monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated from the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S, and D values. Results: 15 possible failure modes were identified, the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist for FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the QA efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
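The RPN arithmetic described in the Methods is simply the product O x S x D with a screening threshold. The failure modes and scores in the sketch below are placeholders, not the paper's monthly QA data; only the RPN formula and the RPN > 50 screening rule come from the abstract.

```python
# Hedged sketch of the FMEA risk-priority bookkeeping with placeholder entries.
factors = [
    # (influencing factor, occurrence O, severity S, detectability D)
    ("output constancy drift not caught", 3, 8, 4),
    ("laser alignment check skipped",     2, 6, 3),
    ("wrong tolerance table version",     1, 9, 5),
]

for name, o, s, d in factors:
    rpn = o * s * d                       # RPN = O x S x D, range 1..1000
    flag = "HIGH" if rpn > 50 else "ok"   # screening threshold used in the paper
    print(f"{name:<36} RPN = {rpn:<4} {flag}")
```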
Schmeida, Mary; Savrin, Ronald A
2012-01-01
Heart failure readmission among the elderly is frequent and costly to both the patient and the Medicare trust fund. In this study, the authors explore the factors that are associated with states having heart failure readmission rates that are higher than the U.S. national rate. The setting is acute inpatient hospitals; data for all 50 states and multivariate regression analysis are used. The dependent variable, Heart Failure 30-day Readmission Worse than U.S. Rate, is based on adult Medicare Fee-for-Service patients hospitalized with a primary discharge diagnosis of heart failure and for whom a subsequent inpatient readmission occurred within 30 days of their last discharge. One key variable--a higher share of the state's resident population speaking a primary language other than English at home--is significantly associated with a decreased probability of a state ranking "worse" on heart failure 30-day readmission. In contrast, states with a higher median income, more total days of care per 1,000 Medicare enrollees, and a greater percentage of Medicare enrollees with prescription drug coverage have a greater probability of heart failure 30-day readmission being "worse" than the U.S. national rate. Case management interventions targeting health literacy may be more effective than other factors in improving state-level hospital status on heart failure 30-day readmission. Factors such as total days of care per 1,000 Medicare enrollees and improving patient access to postdischarge medication(s) may not be as important as literacy. Interventions aimed at preventing disparities should consider higher-income population groups as vulnerable to readmission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.
Arancibia, F; Ewig, S; Martinez, J A; Ruiz, M; Bauer, T; Marcos, M A; Mensa, J; Torres, A
2000-07-01
The aim of the study was to determine the causes and prognostic implications of antimicrobial treatment failures in patients with nonresponding and progressive life-threatening, community-acquired pneumonia. Forty-nine patients hospitalized with a presumptive diagnosis of community-acquired pneumonia during a 16-mo period, failure to respond to antimicrobial treatment, and documented repeated microbial investigation ≥ 72 h after initiation of in-hospital antimicrobial treatment were recorded. A definite etiology of treatment failure could be established in 32 of 49 (65%) patients, and nine additional patients (18%) had a probable etiology. Treatment failures were mainly infectious in origin and included primary, persistent, and nosocomial infections (n = 10 [19%], 13 [24%], and 11 [20%] of causes, respectively). Definite but not probable persistent infections were mostly due to microbial resistance to the administered initial empiric antimicrobial treatment. Nosocomial infections were particularly frequent in patients with progressive pneumonia. Definite persistent infections and nosocomial infections had the highest associated mortality rates (75 and 88%, respectively). Nosocomial pneumonia was the only cause of treatment failure independently associated with death in multivariate analysis (RR, 16.7; 95% CI, 1.4 to 194.9; p = 0.03). We conclude that the detection of microbial resistance and the diagnosis of nosocomial pneumonia are the two major challenges in hospitalized patients with community-acquired pneumonia who do not respond to initial antimicrobial treatment. In order to establish these potentially life-threatening etiologies, a regular microbial reinvestigation seems mandatory for all patients presenting with antimicrobial treatment failures.
Predictors of treatment failure in young patients undergoing in vitro fertilization.
Jacobs, Marni B; Klonoff-Cohen, Hillary; Agarwal, Sanjay; Kritz-Silverstein, Donna; Lindsay, Suzanne; Garzo, V Gabriel
2016-08-01
The purpose of the study was to evaluate whether routinely collected clinical factors can predict in vitro fertilization (IVF) failure among young, "good prognosis" patients predominantly with secondary infertility who are less than 35 years of age. Using de-identified clinic records, 414 women <35 years undergoing their first autologous IVF cycle were identified. Logistic regression was used to identify patient-driven clinical factors routinely collected during fertility treatment that could be used to model predicted probability of cycle failure. One hundred ninety-seven patients with both primary and secondary infertility had a failed IVF cycle, and 217 with secondary infertility had a successful live birth. None of the women with primary infertility had a successful live birth. The significant predictors for IVF cycle failure among young patients were fewer previous live births, history of biochemical pregnancies or spontaneous abortions, lower baseline antral follicle count, higher total gonadotropin dose, unknown infertility diagnosis, and lack of at least one fair to good quality embryo. The full model showed good predictive value (c = 0.885) for estimating risk of cycle failure; at ≥80 % predicted probability of failure, sensitivity = 55.4 %, specificity = 97.5 %, positive predictive value = 95.4 %, and negative predictive value = 69.8 %. If this predictive model is validated in future studies, it could be beneficial for predicting IVF failure in good prognosis women under the age of 35 years.
NASA Technical Reports Server (NTRS)
Gyekenyesi, John P.; Nemeth, Noel N.
1987-01-01
The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
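The least-squares Weibull parameter estimation mentioned above can be sketched with median-rank regression on modulus-of-rupture data: rank the strengths, assign median-rank failure probabilities, and fit the linearized two-parameter Weibull CDF. The strength values below are made-up placeholders, Benard's median-rank approximation is an assumed choice, and SCARE's grouped-data and size-scaling treatment is not reproduced.

```python
# Hedged sketch: two-parameter Weibull fit to rupture strengths by least squares.
import numpy as np

strengths = np.sort(np.array(
    [312., 335., 348., 360., 371., 383., 395., 410., 428., 455.]))  # MPa, placeholders
n = strengths.size
ranks = np.arange(1, n + 1)
prob = (ranks - 0.3) / (n + 0.4)              # Benard's median-rank approximation

x = np.log(strengths)
y = np.log(np.log(1.0 / (1.0 - prob)))        # linearized Weibull CDF: y = m*x - m*ln(sigma_0)
m, intercept = np.polyfit(x, y, 1)            # slope = Weibull modulus m
sigma_0 = np.exp(-intercept / m)              # characteristic strength (63.2% failure)

print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")
print("P(failure) at 300 MPa:", round(1.0 - np.exp(-(300.0 / sigma_0) ** m), 4))
```

The fitted modulus and characteristic strength are the inputs that a Weibull/Batdorf reliability analysis of the kind described above then scales to the stressed surface or volume of the actual component.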
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; ...
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
Structural reliability assessment of the Oman India Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Sharif, A.M.; Preston, R.
1996-12-31
Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.
Ruggeri, Annalisa; Labopin, Myriam; Sormani, Maria Pia; Sanz, Guillermo; Sanz, Jaime; Volt, Fernanda; Michel, Gerard; Locatelli, Franco; Diaz De Heredia, Cristina; O'Brien, Tracey; Arcese, William; Iori, Anna Paola; Querol, Sergi; Kogler, Gesine; Lecchi, Lucilla; Pouthier, Fabienne; Garnier, Federico; Navarrete, Cristina; Baudoux, Etienne; Fernandes, Juliana; Kenzey, Chantal; Eapen, Mary; Gluckman, Eliane; Rocha, Vanderson; Saccardi, Riccardo
2014-09-01
Umbilical cord blood transplant recipients are exposed to an increased risk of graft failure, a complication leading to a higher rate of transplant-related mortality. The decision and timing to offer a second transplant after graft failure is challenging. With the aim of addressing this issue, we analyzed engraftment kinetics and outcomes of 1268 patients (73% children) with acute leukemia (64% acute lymphoblastic leukemia, 36% acute myeloid leukemia) in remission who underwent single-unit umbilical cord blood transplantation after a myeloablative conditioning regimen. The median follow-up was 31 months. The overall survival rate at 3 years was 47%; the 100-day cumulative incidence of transplant-related mortality was 16%. Longer time to engraftment was associated with increased transplant-related mortality and shorter overall survival. The cumulative incidence of neutrophil engraftment at day 60 was 86%, while the median time to achieve engraftment was 24 days. Probability density analysis showed that the likelihood of engraftment after umbilical cord blood transplantation increased after day 10, peaked on day 21 and slowly decreased to 21% by day 31. Beyond day 31, the probability of engraftment dropped rapidly, and the residual probability of engrafting after day 42 was 5%. Graft failure was reported in 166 patients, and 66 of them received a second graft (allogeneic, n=45). Rescue actions, such as the search for another graft, should be considered starting after day 21. A diagnosis of graft failure can be established in patients who have not achieved neutrophil recovery by day 42. Moreover, subsequent transplants should not be postponed after day 42. Copyright© Ferrata Storti Foundation.
Cascading failures in ac electricity grids.
Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan
2016-09-01
Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
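The power-law decay of the probability of leaving more than a given number of consumers unsupplied can be summarized from simulated cascade outcomes as sketched below. The cascade sizes here are synthetic Pareto draws standing in for the outputs of an AC cascade model; the tail estimator is the standard maximum-likelihood (Hill-type) fit above a chosen cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in cascade outcomes: number of unsupplied consumers per simulated line failure.
# (The paper obtains these from an AC power-flow cascade model on a square grid.)
cascade_sizes = (rng.pareto(a=1.6, size=5000) + 1.0) * 10.0

# Empirical probability of finding more than a given number of unsupplied consumers
print("P(more than 100 consumers unsupplied) ≈", np.mean(cascade_sizes > 100.0))

# Maximum-likelihood estimate of the tail exponent above x_min,
# i.e. the exponent q in P(size > x) ~ x^(-q)
x_min = 10.0
tail = cascade_sizes[cascade_sizes >= x_min]
q_hat = tail.size / np.sum(np.log(tail / x_min))
print(f"Estimated power-law exponent q ≈ {q_hat:.2f}")
```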
Analysis of composite laminates with multiple fasteners by boundary collocation technique
NASA Astrophysics Data System (ADS)
Sergeev, Boris Anatolievich
Mechanical fasteners remain the primary means of load transfer between structural components made of composite laminates. As, in pursuit of increasing structural efficiency, the operational load continues to grow, the load carried by each fastener increases accordingly. This accelerates the initiation of fatigue-related cracks near the fastener holes and increases the probability of failure. Therefore, the assessment of the stresses around the fastener holes and the stress intensity factors associated with edge cracks becomes critical for damage-tolerant design. Because of the presence of unknown contact stresses and an unknown contact region between the fastener and the laminate, the analysis of a pin-loaded hole is considerably more complex than that of a traction-free hole. The accurate prediction of the contact stress distribution along the hole boundary is critical for determining the stress intensity factors and is essential for reliable strength evaluation and failure prediction. This study concerns the development of an analytical methodology, based on the boundary collocation technique, to determine the contact stresses and stress intensity factors required for strength and life prediction of bolted joints with many fasteners. It provides an analytical capability for determining the non-linear contact stresses in mechanically fastened composite laminates while capturing the effects of finite geometry, presence of edge cracks, interaction among fasteners, material anisotropy, fastener flexibility, fastener-hole clearance, friction between the pin and the laminate, and by-pass loading. The proposed approach also permits the determination of the fastener load distribution, which significantly influences the failure load of a multi-fastener joint. The well-known influence of fastener tightening torque (clamping force) on the load distribution among the different fasteners in a multi-fastener joint is taken into account by means of a bi-linear representation of the elastic fastener deflection. Finally, two different failure criteria, maximum strains averaged over characteristic distances and the Tsai-Wu criterion, were used to predict the failure load and failure mode in two composite-aluminum joints. Comparison of the present predictions with published experimental results shows good agreement.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration, based on the "failure unit model," was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability functions of the failure units in both single-grain segments and polygrain segments are considered, instead of those in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
NDE of ceramics and ceramic composites
NASA Technical Reports Server (NTRS)
Vary, Alex; Klima, Stanley J.
1991-01-01
Although nondestructive evaluation (NDE) techniques for ceramics are fairly well developed, they are difficult to apply in many cases for high-probability detection of the minute flaws that can cause failure in monolithic ceramics. Conventional NDE techniques are available for monolithic and fiber-reinforced ceramic matrix composites, but the more exact quantitative techniques that are needed are still being investigated and developed. Needs range from flaw detection below the 100 micron level in monolithic ceramics to global imaging of fiber architecture and matrix densification anomalies in ceramic composites. NDE techniques that will ultimately be applicable to production and quality control of ceramic structures are still emerging from the laboratory. Needs differ depending on the processing stage, fabrication method, and nature of the finished product. NDE techniques are being developed in concert with materials processing research, where they can provide feedback information for processing development and quality improvement. NDE techniques also serve as research tools for materials characterization and for understanding failure processes, e.g., during thermomechanical testing.
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km2 and 2410 km2 and volumes between 0.002 km3 and 179 km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few tens of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
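A compact way to compare the two candidate size distributions discussed in the abstract is to fit both a log-normal and an inverse power-law (Pareto) model to the scar volumes and compare their log-likelihoods. The volumes below are synthetic stand-ins for the DEM-derived measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in landslide scar volumes (km^3); the study measures these from a bathymetric DEM.
volumes = rng.lognormal(mean=np.log(0.86), sigma=1.5, size=200)

# Log-normal fit (fix loc=0 so the fit is a true two-parameter log-normal)
shape, loc, scale = stats.lognorm.fit(volumes, floc=0.0)
print(f"Log-normal fit: median volume ≈ {scale:.2f} km^3, sigma ≈ {shape:.2f}")

# Maximum-likelihood inverse power-law (Pareto) exponent above a minimum observed volume
v_min = volumes.min()
beta_hat = volumes.size / np.sum(np.log(volumes / v_min))
print(f"Inverse power-law exponent above {v_min:.3f} km^3 ≈ {beta_hat:.2f}")

# Compare the two candidate distributions by log-likelihood
ll_lognorm = np.sum(stats.lognorm.logpdf(volumes, shape, loc=0.0, scale=scale))
ll_pareto = np.sum(stats.pareto.logpdf(volumes, beta_hat, scale=v_min))
print(f"log-likelihood: log-normal {ll_lognorm:.1f} vs power-law {ll_pareto:.1f}")
```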
Koster-Brouwer, Maria E; Verboom, Diana M; Scicluna, Brendon P; van de Groep, Kirsten; Frencken, Jos F; Janssen, Davy; Schuurman, Rob; Schultz, Marcus J; van der Poll, Tom; Bonten, Marc J M; Cremer, Olaf L
2018-03-01
Discrimination between infectious and noninfectious causes of acute respiratory failure is difficult in patients admitted to the ICU after a period of hospitalization. Using a novel biomarker test (SeptiCyte LAB), we aimed to distinguish between infection and inflammation in this population. Nested cohort study. Two tertiary mixed ICUs in the Netherlands. Hospitalized patients with acute respiratory failure requiring mechanical ventilation upon ICU admission from 2011 to 2013. Patients having an established infection diagnosis or an evidently noninfectious reason for intubation were excluded. None. Blood samples were collected upon ICU admission. Test results were categorized into four probability bands (higher bands indicating higher infection probability) and compared with the infection plausibility as rated by post hoc assessment using strict definitions. Of 467 included patients, 373 (80%) were treated for a suspected infection at admission. Infection plausibility was classified as ruled out, undetermined, or confirmed in 135 (29%), 135 (29%), and 197 (42%) patients, respectively. Test results correlated with infection plausibility (Spearman's rho 0.332; p < 0.001). After exclusion of undetermined cases, positive predictive values were 29%, 54%, and 76% for probability bands 2, 3, and 4, respectively, whereas the negative predictive value for band 1 was 76%. Diagnostic discrimination of SeptiCyte LAB and C-reactive protein was similar (p = 0.919). Among hospitalized patients admitted to the ICU with clinical uncertainty regarding the etiology of acute respiratory failure, the diagnostic value of SeptiCyte LAB was limited.
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, those which are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model as well as the error in the distribution parameters on the maintenance interval.
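The core of the risk-based maintenance calculation, risk as the product of failure probability and consequence compared against an acceptance criterion, can be sketched as below. The lognormal time-to-failure parameters, consequence weight, and acceptance threshold are invented for illustration and are not taken from the ethylene oxide case study.

```python
import numpy as np
from scipy import stats

# Hypothetical values, not from the case study: consequence weight for a pipeline
# failure event and an acceptable risk criterion per maintenance interval.
consequence = 2.0e-2
acceptable_risk = 1.0e-5

# Lognormal time-to-failure model (the paper fits a lognormal reliability distribution);
# the parameters below are invented, with time measured in years.
mu, sigma = np.log(15.0), 0.8

def failure_probability(t_years):
    """Cumulative probability that the unit fails within t_years."""
    return stats.lognorm.cdf(t_years, s=sigma, scale=np.exp(mu))

# Pick the longest maintenance interval whose risk (probability x consequence) stays acceptable.
intervals = np.linspace(0.5, 20.0, 400)
risk = failure_probability(intervals) * consequence
acceptable = intervals[risk <= acceptable_risk]
if acceptable.size:
    print(f"Maximum acceptable maintenance interval ≈ {acceptable.max():.2f} years")
else:
    print("No interval meets the acceptance criterion")
```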
NASA Technical Reports Server (NTRS)
Runkle, R.; Henson, K.
1982-01-01
A failure analysis of the parachutes on the Space Transportation System 3 flight's solid rocket boosters is presented. During the reentry phase of the two Solid Rocket Boosters (SRBs), one 115 ft diameter main parachute failed on the right-hand SRB (A12). This parachute failure caused the SRB to impact the ocean at 110 ft/sec instead of the expected three-parachute impact velocity of 88 ft/sec. The higher impact velocity translates directly into greater SRB aft skirt and motor case damage. The cause of the parachute failure, the potential risks of losing an SRB as a result of this failure, and recommendations to ensure that the probability of chute failures of this type will be low in the future are discussed.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values of the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
Kelly, J Robert; Rungruanganunt, Patchnee
2016-01-01
Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10^7 and Straumann 3 × 10^7; and Ti-based, Glidewell 1 × 10^6 and Nobel 1 × 10^21. Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran-out; t test, P = .035). The modified ISO protocol in this study produced failures that were seen clinically. The manufacture matters; differences in design and fabrication that influence performance cannot be discerned clinically.
Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy
Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Muzio, Nadia Di; Longobardi, Barbara; Mangili, Paola
2013-01-01
The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety. PACS number: 87.55.Qr PMID:24036868
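The RPN scoring step is simple enough to sketch directly: each failure mode gets occurrence, severity, and detectability scores on a 1-10 scale and the product is compared against the 125 threshold used in the paper. The failure modes listed and their scores below are illustrative, not the working group's actual ratings.

```python
# Minimal sketch of RPN scoring in FMEA; the occurrence (O), severity (S), and
# detectability (D) scores below are illustrative, not the paper's ratings.
failure_modes = [
    ("Wrong/missing contouring of overlap regions",   4, 8, 5),
    ("Wrong overlap priority assignment",             3, 7, 7),
    ("Wrong CT calibration curve for dose",           2, 9, 8),
    ("Wrong number of fractions at planning station", 3, 8, 6),
]

RPN_THRESHOLD = 125  # upper threshold for "little concern of risk" used in the paper

for name, occurrence, severity, detectability in failure_modes:
    rpn = occurrence * severity * detectability
    flag = "REVIEW" if rpn > RPN_THRESHOLD else "ok"
    print(f"{name:<48} RPN = {rpn:4d}  [{flag}]")
```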
NASA Astrophysics Data System (ADS)
Faulkner, B. R.; Lyon, W. G.
2001-12-01
We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
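A minimal Monte Carlo sketch of the kind of calculation described, estimating the probability of failing to achieve 4-log attenuation by propagating parameter uncertainty through an attenuation response, is shown below. The input distributions and the surrogate attenuation formula are invented stand-ins (only the signs follow the reported sensitivities), not the paper's transport model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions (the study derives these from soil surveys and literature):
log10_Ksat    = rng.normal(-5.0, 1.0, n)     # log10 of saturated hydraulic conductivity
water         = rng.normal(0.30, 0.05, n)    # volumetric water content
mass_transfer = rng.lognormal(-2.0, 0.5, n)  # solid-water mass transfer coefficient
partition     = rng.lognormal(-1.0, 0.5, n)  # solid-water equilibrium partitioning coefficient

# Stand-in attenuation response for a 1 m barrier (an invented surrogate, not the paper's
# transport model); signs follow the reported sensitivities: higher conductivity and water
# content reduce attenuation, higher mass transfer and partitioning increase it.
log_reduction = (6.0
                 - 0.8 * (log10_Ksat + 5.0)
                 - 4.0 * (water - 0.30)
                 + 0.9 * (np.log(mass_transfer) + 2.0)
                 + 0.7 * (np.log(partition) + 1.0))

p_fail = np.mean(log_reduction < 4.0)  # failure = less than 4-log virus attenuation
print(f"Probability of failing to achieve 4-log attenuation ≈ {p_fail:.3f}")
```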
NASA Technical Reports Server (NTRS)
Putcha, Chandra S.; Mikula, D. F. Kip; Dueease, Robert A.; Dang, Lan; Peercy, Robert L.
1997-01-01
This paper deals with the development of a reliability methodology to assess the consequences of using hardware, without failure analysis or corrective action, that has previously demonstrated that it did not perform per specification. The subject of this paper arose from the need to provide a detailed probabilistic analysis to calculate the change in probability of failure with respect to the base, or non-failed, hardware. The methodology used for the analysis is primarily based on principles of Monte Carlo simulation. The random variables in the analysis are the Maximum Time of Operation (MTO) and the Operation Time of each Unit (OTU). The failure of a unit is considered to happen if its OTU is less than the MTO for the Normal Operational Period (NOP) in which the unit is used. The NOP as a whole uses a total of 4 units. Two cases are considered. In the first specialized scenario, a system failure is considered to happen if any of the units used during the NOP fail. In the second specialized scenario, a system failure is considered to happen only if any two of the units used during the NOP fail together. The probability of failure of the units and the system as a whole is determined for 3 kinds of systems: a Perfect System, Imperfect System 1, and Imperfect System 2. In a Perfect System, the operation time of the failed unit is the same as the MTO. In Imperfect System 1, the operation time of the failed unit is assumed to be 1 percent of the MTO. In Imperfect System 2, the operation time of the failed unit is assumed to be zero. In addition, the simulated operation time of failed units is assumed to be 10 percent of that of the corresponding units before the zero value. Monte Carlo simulation analysis is used for this study, and the necessary software has been developed as part of this study to perform the reliability calculations. The results of the analysis showed that the predicted change in failure probability (P(sub F)) for the previously failed units is as high as 49 percent above the baseline (perfect system) for the worst case. The predicted change in system P(sub F) for the previously failed units is as high as 36% for single unit failure without any redundancy. For redundant systems, with dual unit failure, the predicted change in P(sub F) for the previously failed units is as high as 16%. These results will help management to make decisions regarding the consequences of using previously failed units without adequate failure analysis or corrective action.
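The two failure scenarios described, system failure if any unit's operation time falls short of the maximum time of operation, versus failure only if two units fall short together, lend themselves to a short Monte Carlo sketch. The MTO and OTU distributions below are hypothetical, since the paper does not publish its input parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 200_000
n_units = 4  # the Normal Operational Period (NOP) uses 4 units

# Hypothetical distributions for the random variables (hours):
# maximum time of operation (MTO) and operation time of each unit (OTU).
mto = rng.normal(loc=100.0, scale=5.0, size=n_trials)
otu = rng.normal(loc=130.0, scale=15.0, size=(n_trials, n_units))

unit_fails = otu < mto[:, None]  # a unit fails if its OTU is less than the MTO

# Scenario 1: system fails if ANY unit used during the NOP fails
p_any = np.mean(unit_fails.any(axis=1))

# Scenario 2: system fails only if at least TWO units fail together (redundancy)
p_two = np.mean(unit_fails.sum(axis=1) >= 2)

print(f"P(system failure), scenario 1: {p_any:.4f}")
print(f"P(system failure), scenario 2: {p_two:.4f}")
```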
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
NASA Astrophysics Data System (ADS)
Zarekarizi, M.; Moradkhani, H.
2015-12-01
Extreme events are proven to be affected by climate change, influencing hydrologic simulations for which stationarity is usually a main assumption. Studies have discussed that this assumption would lead to large bias in model estimations and consequently to higher flood hazard. Motivated by the importance of non-stationarity, we determined how the exceedance probabilities have changed over time in Johnson Creek River, Oregon. This could help estimate the probability of failure of a structure that was primarily designed to resist less likely floods according to common practice. Therefore, we built a climate-informed Bayesian hierarchical model in which non-stationarity was considered in the modeling framework. Principal component analysis shows that the North Atlantic Oscillation (NAO), Western Pacific Index (WPI) and Eastern Asia (EA) indices mostly affect stream flow in this river. We modeled flood extremes using the peaks-over-threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because it is possible to base the model on more information. We used available threshold selection methods to select a suitable threshold for the study area. Accounting for non-stationarity, model parameters vary through time with the climate indices. We developed a couple of model scenarios and chose the one which could best explain the variation in the data based on performance measures. We also estimated return periods under the non-stationarity condition. Results show that ignoring non-stationarity could increase the flood hazard up to four times, which could increase the probability of an in-stream structure being overtopped.
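A stationary peaks-over-threshold fit, the starting point that the non-stationary, climate-informed model generalizes, can be sketched with a generalized Pareto fit to threshold exceedances. The streamflow record below is synthetic, and the fit ignores the covariate dependence on NAO, WPI and EA that the study adds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in daily streamflow record (m^3/s); the study uses the Johnson Creek gauge record.
flows = rng.gamma(shape=2.0, scale=5.0, size=365 * 40)

# Peaks over threshold: keep exceedances above a high empirical quantile
threshold = np.quantile(flows, 0.99)
excess = flows[flows > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (stationary fit;
# the paper lets the parameters vary with climate indices such as NAO, WPI and EA).
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# Return level for a 100-year event under stationarity:
# exceedances arrive at rate_per_year, so the 100-year level corresponds to the
# (1 - 1/(100*rate_per_year)) quantile of the fitted exceedance distribution.
rate_per_year = excess.size / 40.0
design_flood = threshold + stats.genpareto.ppf(1.0 - 1.0 / (100.0 * rate_per_year),
                                               shape, loc=0.0, scale=scale)
print(f"Threshold = {threshold:.1f}, GPD shape = {shape:.2f}, 100-yr flood ≈ {design_flood:.1f} m^3/s")
```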
Probability of survival of implant-supported metal ceramic and CAD/CAM resin nanoceramic crowns.
Bonfante, Estevam A; Suzuki, Marcelo; Lorenzoni, Fábio C; Sena, Lídia A; Hirata, Ronaldo; Bonfante, Gerson; Coelho, Paulo G
2015-08-01
To evaluate the probability of survival and failure modes of implant-supported resin nanoceramic relative to metal-ceramic crowns. Resin nanoceramic molar crowns (LU) (Lava Ultimate, 3M ESPE, USA) were milled and metal-ceramic (MC) (Co-Cr alloy, Wirobond C+, Bego, USA) with identical anatomy were fabricated (n=21). The metal coping and a burnout-resin veneer were created by CAD/CAM, using an abutment (Stealth-abutment, Bicon LLC, USA) and a milled crown from the LU group as models for porcelain hot-pressing (GC-Initial IQ-Press, GC, USA). Crowns were cemented, the implants (n=42, Bicon) embedded in acrylic-resin for mechanical testing, and subjected to single-load to fracture (SLF, n=3 each) for determination of step-stress profiles for accelerated-life testing in water (n=18 each). Weibull curves (50,000 cycles at 200N, 90% CI) were plotted. Weibull modulus (m) and characteristic strength (η) were calculated and a contour plot used (m versus η) for determining differences between groups. Fractography was performed in SEM and polarized-light microscopy. SLF mean values were 1871N (±54.03) for MC and 1748N (±50.71) for LU. Beta values were 0.11 for MC and 0.49 for LU. Weibull modulus was 9.56 and η=1038.8N for LU, and m=4.57 and η=945.42N for MC (p>0.10). Probability of survival (50,000 and 100,000 cycles at 200 and 300N) was 100% for LU and 99% for MC. Failures were cohesive within LU. In MC crowns, porcelain veneer fractures frequently extended to the supporting metal coping. Probability of survival was not different between crown materials, but failure modes differed. In load bearing regions, similar reliability should be expected for metal ceramics, known as the gold standard, and resin nanoceramic crowns over implants. Failure modes involving porcelain veneer fracture and delamination in MC crowns are less likely to be successfully repaired compared to cohesive failures in resin nanoceramic material. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
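Using the Weibull modulus and characteristic strength reported in the abstract, a two-parameter Weibull strength model gives survival probabilities at 200 N and 300 N consistent with the reported 99-100% figures. This static sketch is only illustrative; the paper's survival estimates come from the full step-stress accelerated-life analysis at specified numbers of cycles.

```python
import numpy as np

# Weibull parameters reported in the abstract (characteristic strength eta in N, modulus m)
groups = {
    "Resin nanoceramic (LU)": dict(m=9.56, eta=1038.8),
    "Metal-ceramic (MC)":     dict(m=4.57, eta=945.42),
}

def weibull_reliability(load_n, m, eta):
    """Probability of survival at a given load under a two-parameter Weibull strength model."""
    return float(np.exp(-(load_n / eta) ** m))

for name, p in groups.items():
    for load in (200.0, 300.0):
        print(f"{name:<24} P(survival) at {load:.0f} N ≈ {weibull_reliability(load, **p):.4f}")
```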
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wierzbicki, T.; Jones, N.
1989-01-01
The book discusses the fragmentation of solids under dynamic loading, the debris-impact protection of space structures, the controlled fracturing of structures by shock-wave interaction and focusing, the tearing of thin metal sheets, the dynamic inelastic failure of beams, and the dynamic rupture of shells. Consideration is also given to investigations of the failure of brittle and composite materials by numerical methods, the energy absorption of polymer matrix composite structures (frictional effects), the mechanics of deep plastic collapse of thin-walled structures, the denting and bending of tubular beams under local loads, the dynamic bending collapse of strain-softening cantilever beams, and the failure of bar structures under repeated loading. Other topics discussed are the behavior of composite and metallic superstructures under blast loading, the catastrophic failure modes of marine structures, and industrial experience with structural failure.
The less familiar side of heart failure: symptomatic diastolic dysfunction.
Morris, Spencer A; Van Swol, Mark; Udani, Bela
2005-06-01
Arrange for echocardiography or radionuclide angiography within 72 hours of a heart failure exacerbation. An ejection fraction >50% in the presence of signs and symptoms of heart failure makes the diagnosis of diastolic heart failure probable. To treat associated hypertension, use angiotensin receptor blockers (ARBs), angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, calcium channel blockers, or diuretics to achieve a blood pressure goal of <130/80 mm Hg. When using beta-blockers to control heart rate, titrate doses more aggressively than would be done for systolic failure, to reach a goal of 60 to 70 bpm. Use ACE inhibitors/ARBs to decrease hospitalizations, decrease symptoms, and prevent left ventricular remodeling.
Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
NASA emphasizes crew safety and system reliability, but several unfortunate failures have occurred. The Apollo 1 fire was not anticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to the deliberate dismantling of traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.
What are the effects of hypertonic saline plus furosemide in acute heart failure?
Zepeda, Patricio; Rain, Carmen; Sepúlveda, Paola
2015-08-27
In search of new therapies to overcome diuretic resistance in acute heart failure, the addition of hypertonic saline has been proposed. Searching the Epistemonikos database, which is maintained by screening 30 databases, we identified two systematic reviews including nine pertinent randomized controlled trials. We combined the evidence and generated a summary of findings following the GRADE approach. We concluded that hypertonic saline combined with furosemide probably decreases mortality, length of hospital stay and hospital readmissions in patients with acute decompensated heart failure.
Of pacemakers and statistics: the actuarial method extended.
Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W
1980-01-01
Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdfnbe(t) and pdfcf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
Failure Analysis by Statistical Techniques (FAST). Volume 1. User’s Manual
1974-10-31
[OCR fragment of the report documentation page] Report number DNA 3336F-1; title: Failure Analysis by Statistical Techniques (FAST), Volume I, User's Manual. The legible abstract fragment refers to a subsystem (SS2) and a facility (SS7), notes that the other three diagrams break down the three critical subsystems, and breaks off at "the median probability of survival of the ..."
[Rare cause of heart failure in an elderly woman in Djibouti: left ventricular non compaction].
Massoure, P L; Lamblin, G; Bertani, A; Eve, O; Kaiser, E
2011-10-01
The purpose of this report is to describe the first case of left ventricular non compaction diagnosed in Djibouti. The patient was a 74-year-old Djiboutian woman with symptomatic heart failure. Echocardiography is the key tool for assessment of left ventricular non compaction. This rare cardiomyopathy is probably underdiagnosed in Africa.
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
A probabilistic-based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderators for high temperature nuclear reactors are fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has therefore been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and for graphite that can be characterized as transversely isotropic are another set of design tools for the next generation of very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data, and aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression, and moreover the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classical plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states; states of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables, a notion supported by a wealth of failure data in the literature. The mathematical integration of a joint probability density function of the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a core component fabricated from graphite. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method.
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, along with a brief comparison to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
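The role of importance sampling in estimating small failure probabilities can be illustrated with a one-dimensional limit state: strength below an applied stress. The strength statistics and load below are hypothetical, and the limit state is far simpler than the multiaxial graphite failure function in the dissertation; the point is only the reweighting of samples drawn near the failure region.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 100_000

# Illustrative limit state: failure when a random strength falls below the applied stress.
# The strength statistics are hypothetical stand-ins, not characterized graphite data.
mean_r, std_r = 25.0, 1.5   # strength, MPa
applied_stress = 18.0       # MPa

# Crude Monte Carlo (likely to see zero failures at this sample size)
r_crude = rng.normal(mean_r, std_r, size=n)
p_crude = np.mean(r_crude < applied_stress)

# Importance sampling: draw strengths from a density shifted toward the failure region,
# then reweight each sample by the likelihood ratio of the true to the sampling density.
r_is = rng.normal(applied_stress, std_r, size=n)
weights = stats.norm.pdf(r_is, mean_r, std_r) / stats.norm.pdf(r_is, applied_stress, std_r)
p_is = np.mean((r_is < applied_stress) * weights)

exact = stats.norm.cdf(applied_stress, mean_r, std_r)
print(f"crude MC: {p_crude:.2e}, importance sampling: {p_is:.2e}, exact: {exact:.2e}")
```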
Mitigating Thermal Runaway Risk in Lithium Ion Batteries
NASA Technical Reports Server (NTRS)
Darcy, Eric; Jeevarajan, Judy; Russell, Samuel
2014-01-01
The JSC/NESC team has successfully demonstrated Thermal Runaway (TR) risk reduction in a lithium-ion battery for human space flight by developing and implementing verifiable design features which interrupt energy transfer between adjacent electrochemical cells. Conventional lithium-ion (Li-ion) batteries can fail catastrophically as a result of a single cell going into thermal runaway. Thermal runaway results when an internal component fails to separate electrode materials, leading to localized heating and complete combustion of the lithium-ion cell. Previously, the greatest control to minimize the probability of cell failure was individual cell screening. Combining thermal runaway propagation mitigation design features with a comprehensive screening program reduces both the probability and the severity of a single cell failure.
Cardiac arrhythmia mechanisms in rats with heart failure induced by pulmonary hypertension
Benoist, David; Stones, Rachel; Drinkhill, Mark J.; Benson, Alan P.; Yang, Zhaokang; Cassan, Cecile; Gilbert, Stephen H.; Saint, David A.; Cazorla, Olivier; Steele, Derek S.; Bernus, Olivier
2012-01-01
Pulmonary hypertension provokes right heart failure and arrhythmias. Better understanding of the mechanisms underlying these arrhythmias is needed to facilitate new therapeutic approaches for the hypertensive, failing right ventricle (RV). The aim of our study was to identify the mechanisms generating arrhythmias in a model of RV failure induced by pulmonary hypertension. Rats were injected with monocrotaline to induce either RV hypertrophy or failure or with saline (control). ECGs were measured in conscious, unrestrained animals by telemetry. In isolated hearts, electrical activity was measured by optical mapping and myofiber orientation by diffusion tensor-MRI. Sarcoplasmic reticular Ca2+ handling was studied in single myocytes. Compared with control animals, the T-wave of the ECG was prolonged and in three of seven heart failure animals, prominent T-wave alternans occurred. Discordant action potential (AP) alternans occurred in isolated failing hearts and Ca2+ transient alternans in failing myocytes. In failing hearts, AP duration and dispersion were increased; conduction velocity and AP restitution were steeper. The latter was intrinsic to failing single myocytes. Failing hearts had greater fiber angle disarray; this correlated with AP duration. Failing myocytes had reduced sarco(endo)plasmic reticular Ca2+-ATPase activity, increased sarcoplasmic reticular Ca2+-release fraction, and increased Ca2+ spark leak. In hypertrophied hearts and myocytes, dysfunctional adaptation had begun, but alternans did not develop. We conclude that increased electrical and structural heterogeneity and dysfunctional sarcoplasmic reticular Ca2+ handling increased the probability of alternans, a proarrhythmic predictor of sudden cardiac death. These mechanisms are potential therapeutic targets for the correction of arrhythmias in hypertensive, failing RVs. PMID:22427523
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, S-K; Kim, J
Purpose: The aim of the study is the application of a Failure Modes and Effects Analysis (FMEA) to assess the risks for patients undergoing a Low Dose Rate (LDR) prostate brachytherapy treatment. Methods: FMEA was applied to identify all the sub-processes involved in the stages of identifying the patient, source handling, treatment preparation, treatment delivery, and post treatment. These processes characterize the radiation treatment associated with LDR prostate brachytherapy. The potential failure modes, together with their causes and effects, were identified and ranked in order of their importance. Three indexes were assigned to each failure mode: the occurrence rating (O), the severity rating (S), and the detection rating (D). A ten-point scale was used to score each category, ten indicating the most severe, most frequent, and least detectable failure mode, respectively. The risk probability number (RPN) was calculated as the product of the three attributes: RPN = O x S x D. The analysis was carried out by a working group (WG) at UPMC. Results: A total of 56 failure modes were identified, including 32 modes before the treatment, 13 modes during the treatment, and 11 modes after the treatment. In addition to the protocols already adopted in the clinical practice, prioritized risk management will be implemented for the high-risk procedures on the basis of the RPN scores. Conclusion: The effectiveness of the FMEA method was established. The FMEA methodology provides a structured and detailed assessment method for the risk analysis of the LDR prostate brachytherapy procedure and can be applied to other radiation treatment modes.
High-Temperature Graphitization Failure of Primary Superheater Tube
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Roy, H.; Mandal, N.; Shukla, A. K.
2015-12-01
Failure of boiler tubes is the main cause of unit outages of the plant, which further affects the reliability, availability and safety of the unit. Failure analysis of boiler tubes is therefore essential to identify the root cause of a failure and to define the remedial actions needed to prevent similar failures in the future. This paper investigates the probable cause or causes of failure of a primary superheater tube in a thermal power plant boiler. Visual inspection, dimensional measurement, chemical analysis, metallographic examination and hardness measurement are conducted as part of the investigative studies. Apart from these tests, mechanical testing and fractographic analysis are also conducted as supplements. Finally, it is concluded that the superheater tube failed due to graphitization resulting from prolonged exposure of the tube to elevated temperature.
Defense strategies for cloud computing multi-site server infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and can also be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can therefore only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).
WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Bhatnagar, J; Bednarz, G
2015-06-15
Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists, neurosurgeons at the University of Pittsburgh Medical Center and an external physicist expert was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection (D) for failure modes were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = OxSxD) as the average scores from all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of the GK radiosurgery. Out of the 86 failure modes identified, 40 failure modes are GK specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, contouring processes that are common for all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information and excessive patient movement during MRI scan. Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoisak, J; Manger, R; Dragojevic, I
Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode's probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery. The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.
ERIC Educational Resources Information Center
Beitzel, Brian D.; Staley, Richard K.; DuBois, Nelson F.
2011-01-01
Previous research has cast doubt on the efficacy of utilizing external representations as an aid to solving word problems. The present study replicates previous findings that concrete representations hinder college students' ability to solve probability word problems, and extends those findings to apply to a multimedia instructional context. Our…
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
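For independent basic events, gate probabilities combine as sketched below. Note that this naive sketch does not handle standby redundancy or the same basic failure appearing in several fault paths, which is exactly what the conditional-probability treatment in the paper addresses; the event names and probabilities are hypothetical.

```python
from math import prod

def and_gate(probs):
    """Probability that all independent input events occur."""
    return prod(probs)

def or_gate(probs):
    """Probability that at least one independent input event occurs."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical basic-event failure probabilities
pump_fails, valve_sticks, sensor_fails, backup_fails = 1e-3, 5e-4, 2e-3, 1e-2

# Top event: (pump OR valve) AND (sensor OR backup) -- a toy tree assuming independence.
top = and_gate([or_gate([pump_fails, valve_sticks]),
                or_gate([sensor_fails, backup_fails])])
print(f"Top-event probability ≈ {top:.3e}")
```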
Effect of risk aversion on prioritizing conservation projects.
Tulloch, Ayesha I T; Maloney, Richard F; Joseph, Liana N; Bennett, Joseph R; Di Fonzo, Martina M I; Probert, William J M; O'Connor, Shaun M; Densem, Jodie P; Possingham, Hugh P
2015-04-01
Conservation outcomes are uncertain. Agencies making decisions about what threat mitigation actions to take to save which species frequently face the dilemma of whether to invest in actions with high probability of success and guaranteed benefits or to choose projects with a greater risk of failure that might provide higher benefits if they succeed. The answer to this dilemma lies in the decision maker's aversion to risk--their unwillingness to accept uncertain outcomes. Little guidance exists on how risk preferences affect conservation investment priorities. Using a prioritization approach based on cost effectiveness, we compared 2 approaches: a conservative probability threshold approach that excludes investment in projects with a risk of management failure greater than a fixed level, and a variance-discounting heuristic used in economics that explicitly accounts for risk tolerance and the probabilities of management success and failure. We applied both approaches to prioritizing projects for 700 of New Zealand's threatened species across 8303 management actions. Both decision makers' risk tolerance and our choice of approach to dealing with risk preferences drove the prioritization solution (i.e., the species selected for management). Use of a probability threshold minimized uncertainty, but more expensive projects were selected than with variance discounting, which maximized expected benefits by selecting the management of species with higher extinction risk and higher conservation value. Explicitly incorporating risk preferences within the decision making process reduced the number of species expected to be safe from extinction because lower risk tolerance resulted in more species being excluded from management, but the approach allowed decision makers to choose a level of acceptable risk that fit with their ability to accommodate failure. We argue for transparency in risk tolerance and recommend that decision makers accept risk in an adaptive management framework to maximize benefits and avoid potential extinctions due to inefficient allocation of limited resources. © 2014 Society for Conservation Biology.
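The two decision rules contrasted above can be sketched in a few lines. The project list, benefit figures, and the particular variance-discounting form used here (expected benefit minus a risk-tolerance multiple of its standard deviation, per unit cost) are illustrative assumptions, not the authors' data or their exact heuristic.

```python
# Sketch of two ways to rank projects under uncertain success, assuming a Bernoulli
# success model: benefit B with probability p, otherwise 0. All numbers are hypothetical.
import math

projects = [  # (name, benefit, probability of success, cost)
    ("species 1", 100.0, 0.95, 40.0),
    ("species 2", 180.0, 0.60, 50.0),
    ("species 3", 140.0, 0.75, 30.0),
]

def threshold_rank(projects, max_failure_risk=0.3):
    """Exclude projects whose failure risk exceeds a fixed level, then rank by E[benefit]/cost."""
    keep = [p for p in projects if (1 - p[2]) <= max_failure_risk]
    return sorted(keep, key=lambda p: p[1] * p[2] / p[3], reverse=True)

def variance_discount_rank(projects, k=1.0):
    """Rank by (E[benefit] - k * SD[benefit]) / cost; k reflects risk aversion."""
    def score(p):
        name, b, prob, cost = p
        mean = prob * b
        sd = math.sqrt(prob * (1 - prob)) * b
        return (mean - k * sd) / cost
    return sorted(projects, key=score, reverse=True)

print([p[0] for p in threshold_rank(projects)])
print([p[0] for p in variance_discount_rank(projects, k=0.5)])
```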
Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles
NASA Astrophysics Data System (ADS)
Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey
2013-09-01
Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.
Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics
Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.
2017-01-01
This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637
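A generic building block behind this kind of prediction is a Weibull weakest-link relation between stress and failure probability, with fatigue handled through a power-law degradation of strength over cycles. The sketch below uses textbook forms and made-up parameters; it is not the calibrated 3D finite element model of the study.

```python
# Generic Weibull weakest-link sketch: probability of failure at stress sigma,
# with cyclic fatigue represented by a power-law degradation of the scale parameter.
# Parameter values are illustrative, not fitted material data.
import math

def failure_probability(sigma, n_cycles, m=10.0, sigma0=800.0, fatigue_n=20.0):
    """P_f = 1 - exp(-(sigma / sigma_N)^m), with sigma_N = sigma0 * n_cycles^(-1/fatigue_n)."""
    sigma_n = sigma0 * n_cycles ** (-1.0 / fatigue_n)
    return 1.0 - math.exp(-((sigma / sigma_n) ** m))

for cycles in (1e3, 1e5, 1e7):   # roughly 10 years of chewing is on the order of 1e7 cycles
    print(f"{cycles:>9.0e} cycles: P_f = {failure_probability(300.0, cycles):.4f}")
```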
[Determinants of pride and shame: outcome, expected success and attribution].
Schützwohl, A
1991-01-01
In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.
Elephantiasis Nostras Verrucosa (ENV): a complication of congestive heart failure and obesity.
Baird, Drew; Bode, David; Akers, Troy; Deyoung, Zachariah
2010-01-01
Congestive heart failure (CHF) and obesity are common medical conditions that have many complications and an increasing incidence in the United States. Presented here is a case of a disfiguring skin condition that visually highlights the dermatologic consequences of poorly controlled CHF and obesity. This condition will probably become more common as CHF and obesity increase in the US.
Strength and life criteria for corrugated fiberboard by three methods
Thomas J. Urbanik
1997-01-01
The conventional test method for determining the stacking life of corrugated containers at a fixed load level does not adequately predict a safe load when storage time is fixed. This study introduced multiple load levels and related the probability of time at failure to load. A statistical analysis of logarithm-of-time failure data varying with load level predicts the...
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings, and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.
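The central quantity in such a study, a failure-to-start probability per demand together with its uncertainty, is commonly obtained by updating a beta prior with observed failure and demand counts. The sketch below assumes a Jeffreys prior and uses invented counts; it does not reproduce the report's data or its full methodology.

```python
# Sketch: Bayesian estimate of an EDG failure-to-start probability per demand.
# A Jeffreys Beta(0.5, 0.5) prior is updated with hypothetical counts of failures
# and demands; none of these numbers come from the report.
from scipy import stats

failures, demands = 3, 1200          # hypothetical industry-wide counts
posterior = stats.beta(0.5 + failures, 0.5 + demands - failures)

print(f"posterior mean P(fail per demand) = {posterior.mean():.2e}")
print(f"90% credible interval             = "
      f"({posterior.ppf(0.05):.2e}, {posterior.ppf(0.95):.2e})")
```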
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.
1997-01-01
Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance on protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
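A minimal version of the Markov idea mentioned above treats "failure to operate" as a latent state that is only found and repaired at preventive maintenance. The two-state sketch below, with invented rates, shows how the steady-state probability of a latent failure shrinks as the maintenance rate increases; the paper's actual models are richer than this.

```python
# Sketch of a tiny Markov model for a protection system whose "failure to operate"
# state is latent and only revealed (and repaired) by periodic preventive maintenance.
# States: 0 = healthy, 1 = latent failed. Rates are hypothetical, per hour.
import numpy as np

lam = 1e-5                        # latent failure rate (would show up as failure to operate on demand)
inspection = 1.0 / (6 * 30 * 24)  # latent failures found and fixed roughly once per 6-month interval

# Generator matrix Q of the continuous-time Markov chain
Q = np.array([[-lam,        lam],
              [inspection, -inspection]])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"steady-state probability of a latent failure to operate: {pi[1]:.3%}")
print("a shorter maintenance interval raises the inspection rate and shrinks this number")
```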
Zhang, Xu; Zhang, Mei-Jie; Fine, Jason
2012-01-01
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D. G.; Arent, D. J.; Johnson, L.
2006-06-01
This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources to provide backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of final best configuration is presented.
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
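The statistical machinery named above (maximum likelihood estimation, goodness-of-fit testing, confidence intervals for failure-distribution parameters) can be sketched on synthetic data, since the maintenance records themselves are not reproduced here. The Weibull choice, sample size, and bootstrap interval below are assumptions for illustration.

```python
# Sketch: maximum-likelihood fit of a Weibull failure distribution to synthetic
# times-to-failure, a goodness-of-fit check, and a bootstrap confidence interval
# for the shape parameter. Synthetic data stand in for the real maintenance records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ttf = stats.weibull_min.rvs(c=1.8, scale=500.0, size=200, random_state=rng)  # synthetic hours

# Two-parameter Weibull MLE (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
print(f"MLE shape = {shape:.2f}, scale = {scale:.1f} hours")

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution
print(stats.kstest(ttf, "weibull_min", args=(shape, loc, scale)))

# Simple percentile bootstrap for the shape parameter
boot = [stats.weibull_min.fit(rng.choice(ttf, size=ttf.size, replace=True), floc=0)[0]
        for _ in range(500)]
print("95% CI for shape:", np.percentile(boot, [2.5, 97.5]))
```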
Unique Association of Rare Cardiovascular Disease in an Athlete With Ventricular Arrhythmias.
Santomauro, V; Contursi, M; Dellegrottaglie, S; Borsellino, G
2015-01-01
Ventricular arrhythmias are a leading cause of non-eligibility for competitive sport. The failure to detect a significant organic substrate in the initial stage of screening does not preclude the identification of structural pathologies in the follow-up by using advanced imaging techniques. Here we report the case of a senior athlete judged not eligible because of an arrhythmia with a morphology consistent with an origin in the left ventricle, in which subsequent execution of a cardiac MR and a thoracic CT scan allowed the identification of a unique association between an area of myocardial damage, the probable site of origin of the arrhythmia, and a rare aortic malformation.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
Self-reported nonadherence to antiretroviral therapy as a predictor of viral failure and mortality.
Glass, Tracy R; Sterne, Jonathan A C; Schneider, Marie-Paule; De Geest, Sabina; Nicca, Dunja; Furrer, Hansjakob; Günthard, Huldrych F; Bernasconi, Enos; Calmy, Alexandra; Rickenbach, Martin; Battegay, Manuel; Bucher, Heiner C
2015-10-23
To determine the effect of nonadherence to antiretroviral therapy (ART) on virologic failure and mortality in naive individuals starting ART. Prospective observational cohort study. Eligible individuals enrolled in the Swiss HIV Cohort Study, started ART between 2003 and 2012, and provided adherence data on at least one biannual clinical visit. Adherence was defined as missed doses (none, one, two, or more than two) and percentage adherence (>95, 90-95, and <90) in the previous 4 weeks. Inverse probability weighting of marginal structural models was used to estimate the effect of nonadherence on viral failure (HIV-1 viral load >500 copies/ml) and mortality. Of 3150 individuals followed for a median 4.7 years, 480 (15.2%) experienced viral failure and 104 (3.3%) died; 1155 (36.6%) reported missing one dose, 414 (13.1%) two doses, and 333 (10.6%) more than two doses of ART. The risk of viral failure increased with each missed dose (one dose: hazard ratio [HR] 1.15, 95% confidence interval 0.79-1.67; two doses: 2.15, 1.31-3.53; more than two doses: 5.21, 2.96-9.18). The risk of death increased with more than two missed doses (HR 4.87, 2.21-10.73). Missing one to two doses of ART increased the risk of viral failure in those starting once-daily (HR 1.67, 1.11-2.50) compared with those starting twice-daily regimens (HR 0.99, 0.64-1.54, interaction P = 0.09). Consistent results were found for percentage adherence. Self-report of two or more missed doses of ART is associated with an increased risk of both viral failure and death. A simple adherence question helps identify patients at risk for negative clinical outcomes and offers opportunities for intervention.
NASA Astrophysics Data System (ADS)
Taylor, Gabriel James
The failure of electrical cables exposed to severe thermal fire conditions is a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection, which resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of the data. Circuit characteristics affecting fire-induced electrical cable failure modes have been evaluated. In addition, thermal failure temperatures corresponding to cable functional failures have been evaluated to develop realistic single point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data was used to evaluate the prediction capabilities of a one-dimensional conductive heat transfer model used to predict cable failure.
NASA Astrophysics Data System (ADS)
Murrad, Muhamad; Leong, M. Salman
Based on the experiences of the Malaysian Armed Forces (MAF), failure of the main rotor gearbox (MRGB) was one of the major contributing factors to helicopter breakdowns. Even though vibration and oil analysis are effective techniques for monitoring the health of helicopter components, these two techniques were rarely combined to form an effective assessment tool in the MAF. Results of the oil analysis were often used only for the oil changing schedule, while assessments of MRGB condition were mainly based on overall vibration readings. A study group was formed and given a mandate to improve the maintenance strategy of the S61-A4 helicopter fleet in the MAF. The improvement consisted of a structured approach to the reassessment/redefinition of suitable maintenance actions that should be taken for the MRGB. Basic and enhanced tools for condition monitoring (CM) are investigated to address the predominant failures of the MRGB. Quantitative accelerated life testing (QALT) was considered in this work with an intent to obtain the required reliability information in a shorter time with tests under normal stress conditions. These tests, when performed correctly, can provide valuable information about MRGB performance under normal operating conditions, which enables maintenance personnel to make decisions more quickly, accurately and economically. The time-to-failure and probability of failure information of the MRGB were generated by applying QALT analysis principles. This study is anticipated to make a dramatic change in its approach to CM, bringing significant savings and various benefits to the MAF.
Life Predicted in a Probabilistic Design Space for Brittle Materials With Transient Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Palfi, Tamas; Reh, Stefan
2005-01-01
Analytical techniques have progressively become more sophisticated, and now we can consider the probabilistic nature of the entire space of random input variables on the lifetime reliability of brittle structures. This was demonstrated with NASA's CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code combined with the commercially available ANSYS/Probabilistic Design System (ANSYS/PDS), a probabilistic analysis tool that is an integral part of the ANSYS finite-element analysis program. ANSYS/PDS allows probabilistic loads, component geometry, and material properties to be considered in the finite-element analysis. CARES/Life predicts the time-dependent probability of failure of brittle material structures under generalized thermomechanical loading, such as that found in a turbine engine hot-section. Glenn researchers coupled ANSYS/PDS with CARES/Life to assess the effects of the stochastic variables of component geometry, loading, and material properties on the predicted life of the component for fully transient thermomechanical loading and cyclic loading.
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
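The core of a GLR detector for a hard-over (step) failure can be sketched for a single residual sequence: the statistic maximizes the likelihood ratio for a mean shift over the unknown onset time. The noise level, simulated bias, and alarm threshold below are invented, and this is not the QCSEE implementation described in the report.

```python
# Sketch of a GLR detector for a hard-over (step) failure in a sensor residual.
# Residuals are assumed zero-mean Gaussian with known variance under no-failure;
# the statistic maximizes the likelihood ratio over the unknown onset time.
# Signals, noise level, and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2
residual = rng.normal(0.0, sigma, size=200)
residual[120:] += 0.5          # simulated hard-over sensor bias starting at sample 120

def glr_step(r, sigma):
    """Return (max GLR statistic, estimated onset index) for a step change in mean."""
    best, k_hat = -np.inf, None
    for k in range(len(r)):
        tail = r[k:]
        stat = tail.sum() ** 2 / (2.0 * sigma ** 2 * tail.size)
        if stat > best:
            best, k_hat = stat, k
    return best, k_hat

stat, onset = glr_step(residual, sigma)
threshold = 10.0               # in practice chosen from the statistic's null distribution
print(f"GLR = {stat:.1f}, estimated onset = sample {onset}, alarm = {stat > threshold}")
```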
Lin, Chun-Li; Chang, Yen-Hsiang; Hsieh, Shih-Kai; Chang, Wen-Jen
2013-03-01
This study evaluated the risk of failure for an endodontically treated premolar with different crack depths, which was shearing toward the pulp chamber and was restored by using 3 different computer-aided design/computer-aided manufacturing ceramic restoration configurations. Three 3-dimensional finite element models designed with computer-aided design/computer-aided manufacturing ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with finite element analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for endocrown restorations exhibited the lowest values relative to the other 2 restoration methods. Weibull analysis revealed that the overall failure probabilities in a shallow cracked premolar were 27%, 2%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, in the normal occlusal condition. The corresponding values were 70%, 10%, and 2% for the deep cracked premolar. This numeric investigation suggests that the endocrown provides sufficient fracture resistance only in a shallow cracked premolar with endodontic treatment. The conventional crown treatment can immobilize the premolar for different crack depths with lower failure risk. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Failure Mode and Effect Analysis for Delivery of Lung Stereotactic Body Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perks, Julian R., E-mail: julian.perks@ucdmc.ucdavis.edu; Stanic, Sinisa; Stern, Robin L.
2012-07-15
Purpose: To improve the quality and safety of our practice of stereotactic body radiation therapy (SBRT), we analyzed the process following the failure mode and effects analysis (FMEA) method. Methods: The FMEA was performed by a multidisciplinary team. For each step in the SBRT delivery process, a potential failure occurrence was derived and three factors were assessed: the probability of each occurrence, the severity if the event occurs, and the probability of detection by the treatment team. A rank of 1 to 10 was assigned to each factor, and then the multiplied ranks yielded the relative risks (risk priority numbers). The failure modes with the highest risk priority numbers were then considered to implement process improvement measures. Results: A total of 28 occurrences were derived, of which nine events scored with significantly high risk priority numbers. The risk priority numbers of the highest ranked events ranged from 20 to 80. These included transcription errors of the stereotactic coordinates and machine failures. Conclusion: Several areas of our SBRT delivery were reconsidered in terms of process improvement, and safety measures, including treatment checklists and a surgical time-out, were added for our practice of gantry-based image-guided SBRT. This study serves as a guide for other users of SBRT to perform FMEA of their own practice.
Continuous infusion or bolus injection of loop diuretics for congestive heart failure?
Zepeda, Patricio; Rain, Carmen; Sepúlveda, Paola
2016-04-22
Loop diuretics are widely used in acute heart failure. However, there is controversy about the superiority of continuous infusion over bolus administration. Searching in Epistemonikos database, which is maintained by screening 30 databases, we identified four systematic reviews including 11 pertinent randomized controlled trials overall. We combined the evidence using meta-analysis and generated a summary of findings following the GRADE approach. We concluded continuous administration of loop diuretics probably reduces mortality and length of stay compared to intermittent administration in patients with acute heart failure.
Model analysis of the link between interest rates and crashes
NASA Astrophysics Data System (ADS)
Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft
2016-09-01
We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas a lack of liquidity tends to be the primary cause of failures under lower rates.
Assessing Aircraft Supply Air to Recommend Compounds for Timely Warning of Contamination
NASA Astrophysics Data System (ADS)
Fox, Richard B.
Taking aircraft out of service for even one day to correct fume-in-cabin events can cost the industry roughly $630 million per year in lost revenue. The quantitative correlation study investigated quantitative relationships between measured concentrations of contaminants in bleed air and probability of odor detectability. Data were collected from 94 aircraft engine and auxiliary power unit (APU) bleed air tests from an archival data set between 1997 and 2011, and no relationships were found. Pearson correlation was followed by regression analysis for individual contaminants. Significant relationships of concentrations of compounds in bleed air to probability of odor detectability were found (p<0.05), as well as between compound concentration and probability of sensory irritancy detectability. Study results may be useful to establish early warning levels. Predictive trend monitoring, a method to identify potential pending failure modes within a mechanical system, may influence scheduled down-time for maintenance as a planned event, rather than repair after a mechanical failure and thereby reduce operational costs associated with odor-in-cabin events. Twenty compounds (independent variables) were found statistically significant as related to probability of odor detectability (dependent variable 1). Seventeen compounds (independent variables) were found statistically significant as related to probability of sensory irritancy detectability (dependent variable 2). Additional research was recommended to further investigate relationships between concentrations of contaminants and probability of odor detectability or probability of sensory irritancy detectability for all turbine oil brands. Further research on implementation of predictive trend monitoring may be warranted to demonstrate how the monitoring process might be applied to in-flight application.
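A standard way to express the relationship the study estimates, from contaminant concentration to the probability of odor detectability, is a logistic regression. The sketch below fits one on synthetic data; the compound, concentrations, and dose-response curve are placeholders, not the archival bleed-air measurements.

```python
# Sketch: logistic regression of odor detectability (0/1) on a single contaminant
# concentration, using synthetic data. The real study drew on archival bleed-air tests
# and examined many compounds; nothing here reproduces those data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
conc = rng.uniform(0.0, 10.0, size=300)                  # ppm, synthetic
p_true = 1.0 / (1.0 + np.exp(-(conc - 5.0)))             # assumed dose-response curve
detected = rng.random(300) < p_true

model = LogisticRegression().fit(conc.reshape(-1, 1), detected)
for c in (2.0, 5.0, 8.0):
    p = model.predict_proba([[c]])[0, 1]
    print(f"concentration {c:4.1f} ppm -> P(odor detectable) ~ {p:.2f}")
```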
Uncertainty and Intelligence in Computational Stochastic Mechanics
NASA Technical Reports Server (NTRS)
Ayyub, Bilal M.
1996-01-01
Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should depend on the underlying level of damage and the uncertainties associated with its definition. A mathematical model for structural reliability assessment that includes both ambiguity and vagueness types of uncertainty was suggested to result in the likelihood of failure over a damage spectrum. The resulting structural reliability estimates properly represent the continuous transition from serviceability to strength limit states over the ultimate time exposure of the structure. In this section, a structural reliability assessment method based on a fuzzy definition of failure is suggested to meet these practical needs. A failure definition can be developed to indicate the relationship between failure level and structural response. In this fuzzy model, a subjective index is introduced to represent all levels of damage (or failure). This index can be interpreted as either a measure of failure level or a measure of a degree of belief in the occurrence of some performance condition (e.g., failure). The index allows expressing the transition state between complete survival and complete failure for some structural response based on subjective evaluation and judgment.
Metallurgical failure analysis of MH-1A reactor core hold-down bolts. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawthorne, J.R.; Watson, H.E.
1976-11-01
The Naval Research Laboratory has performed a failure analysis on two MH-1A reactor core hold-down bolts that broke in service. Adherence to fabrication specifications, post-service properties and possible causes of bolt failure were investigated. The bolt material was verified as 17-4PH precipitation hardening stainless steel. Measured bolt dimensions also were in accordance with fabrication drawing specifications. Bolt failure occurred in the region of a locking pin hole which reduced the bolt net section by 47 percent. The failure analysis indicates that the probable cause of failure was net section overloading resulting from a lateral bending force on the bolt. The analysis indicates that net section overloading could also have resulted from combined tensile stresses (bolt preloading plus differential thermal expansion). Recommendations are made for improved bolting.
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
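For independent single-fault hypotheses with known probabilities and per-test costs, inspecting candidates in decreasing order of probability-to-cost ratio minimizes the expected cost of isolating the failure, which is the kind of ordering the abstract describes. The sketch below illustrates that rule with invented components and numbers; it is not the FTDOTS code.

```python
# Sketch of optimal single-fault test sequencing: inspect candidate failures in
# decreasing order of (probability / cost), which minimizes the expected cost to
# find the true failure. Candidates and numbers are hypothetical.
candidates = [            # (component, probability it is the failed one, minutes to check)
    ("power supply",  0.10, 5.0),
    ("sensor board",  0.45, 20.0),
    ("cable harness", 0.30, 8.0),
    ("pump motor",    0.15, 30.0),
]

def expected_cost(order):
    """Expected total inspection time until the true failure is found."""
    total, spent = 0.0, 0.0
    for name, prob, cost in order:
        spent += cost
        total += prob * spent
    return total

greedy = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
print("suggested sequence:", [c[0] for c in greedy])
print(f"expected time: greedy = {expected_cost(greedy):.1f} min, "
      f"as listed = {expected_cost(candidates):.1f} min")
```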
NASA Astrophysics Data System (ADS)
Hasan, M.; Helal, A.; Gabr, M.
2014-12-01
In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructures such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that the rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of rate of rising and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface to automate input parameters that divides data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
NASA Astrophysics Data System (ADS)
Welty, N.; Rudolph, M.; Schäfer, F.; Apeldoorn, J.; Janovsky, R.
2013-07-01
This paper presents a computational methodology to predict the satellite system-level effects resulting from impacts of untrackable space debris particles. This approach seeks to improve on traditional risk assessment practices by looking beyond the structural penetration of the satellite and predicting the physical damage to internal components and the associated functional impairment caused by untrackable debris impacts. The proposed method combines a debris flux model with the Schäfer-Ryan-Lambert ballistic limit equation (BLE), which accounts for the inherent shielding of components positioned behind the spacecraft structure wall. Individual debris particle impact trajectories and component shadowing effects are considered and the failure probabilities of individual satellite components as a function of mission time are calculated. These results are correlated to expected functional impairment using a Boolean logic model of the system functional architecture considering the functional dependencies and redundancies within the system.
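The probabilistic chain sketched in the abstract, debris flux to component failure probability to system-level effect through a Boolean functional model, can be illustrated compactly. The fluxes, exposed areas, and redundancy structure below are invented, and the Schäfer-Ryan-Lambert ballistic limit equation itself is not modeled.

```python
# Sketch of the probabilistic chain described above: a debris flux above each component's
# ballistic limit gives an expected number of damaging impacts over the mission, a Poisson
# model turns that into a component failure probability, and a simple Boolean model
# (one redundant pair plus one critical unit) combines them. All values are invented.
import math

mission_years = 5.0
components = {               # name: (exposed area [m^2], damaging flux [impacts / m^2 / yr])
    "battery A": (0.30, 2e-4),
    "battery B": (0.30, 2e-4),
    "OBC":       (0.15, 1e-4),
}

def p_fail(area, flux):
    expected_hits = area * flux * mission_years
    return 1.0 - math.exp(-expected_hits)     # P(at least one damaging impact)

p = {name: p_fail(*spec) for name, spec in components.items()}

# System fails if both redundant batteries fail OR the on-board computer fails
p_system = 1.0 - (1.0 - p["battery A"] * p["battery B"]) * (1.0 - p["OBC"])
for name, value in p.items():
    print(f"{name:9s}: P(fail) = {value:.2e}")
print(f"system   : P(functional failure) = {p_system:.2e}")
```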
Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2008-01-01
Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
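Two of the initiation criteria named above can be shown for a single unidirectional ply under in-plane stress. The strength values below are generic placeholders rather than a specific material system, and the ply-discounting degradation step that follows initiation in the full method is not modeled.

```python
# Sketch of two ply-level failure-initiation checks named above (maximum stress and
# the Tsai-Wu polynomial) for in-plane stresses (s1, s2, t12) in MPa.
# Strength values are generic placeholders; the ply-discounting step is not shown.
import math

Xt, Xc = 1500.0, 1200.0      # fiber-direction tensile / compressive strengths
Yt, Yc = 50.0, 200.0         # transverse tensile / compressive strengths
S = 70.0                     # in-plane shear strength

def max_stress_fails(s1, s2, t12):
    return (s1 > Xt or -s1 > Xc or s2 > Yt or -s2 > Yc or abs(t12) > S)

def tsai_wu_index(s1, s2, t12):
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)        # common default for the interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2)

for stresses in [(800.0, 30.0, 20.0), (900.0, 45.0, 60.0)]:
    idx = tsai_wu_index(*stresses)
    print(f"{stresses}: max-stress fails = {max_stress_fails(*stresses)}, "
          f"Tsai-Wu index = {idx:.2f} (>= 1 indicates initiation)")
```

The second stress state illustrates why the choice of criterion matters: it passes the non-interacting maximum stress check but exceeds the Tsai-Wu polynomial through combined transverse and shear terms.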
Apollo 15 mission main parachute failure
NASA Technical Reports Server (NTRS)
1971-01-01
The failure of one of the three main parachutes of the Apollo 15 spacecraft was investigated by studying malfunctions of the forward heat shield, the broken riser, and the firing of fuel expelled from the command module reaction control system. It is concluded that the most probable cause was the burning of raw fuel being expelled during the latter portion of depletion firing. Recommended corrective actions are included.
NASA Astrophysics Data System (ADS)
Sepulveda, S. A.; Serey, A.; Hermanns, R. L.; Redfield, T. F.; Oppikofer, T.; Duhart, P.
2011-12-01
The fjordland of the Chilean Patagonia is subject to active tectonics, with large magnitude subduction earthquakes, such as the M 9.5 1960 earthquake, and shallow crustal earthquakes along the regional Liquiñe-Ofqui Fault Zone (LOFZ). One of the latter (M 6.2) struck the Aysen Fjord region (45.5 S) on the 21st of April 2007, triggering dozens of landslides in the epicentral area along the fjord coast and surroundings. The largest rock slides and rock avalanches induced a local tsunami that together with debris flows caused ten fatalities and severely damaged several salmon farms, the most important economic activity of the area. Multi-scale studies of the landslides triggered during the Aysen earthquake have been carried out, including landslide mapping and classification, slope stability back-analyses and structural and geomorphological mapping of the largest failures from field surveys and high-resolution digital surface models created from terrestrial laser scanning. The failures included rock slides, rock avalanches, rock-soil slides, soil slides and debris flows. The largest rock avalanche had a volume of over 20 million cubic metres. The landslides affected steep slopes of intrusive rocks of the North Patagonian batholith covered by a thin layer of volcanic soils, which supports a high forest. The results of geotechnical analyses suggest a site effect due to topographic amplification on the generation of the landslides, with peak ground accelerations that may have reached between about 1.0 and 2.0 g for rock avalanches and between 0.6 and 1.0 g for shallow rock-soil slides, depending on the amount of assumed vertical acceleration and the applied method (limit equilibrium and Newmark). Attenuation relationships for shallow crustal seismicity indicate accelerations below 0.5 g for earthquakes of a similar magnitude and epicentral distances. Detailed field structural analyses of the largest rock avalanche in Punta Cola indicate a key role in the failure mechanics of brittle faults and jointing related to the LOFZ. The basal failure plane closely followed an older (epidote chlorite facies) thrust fault. Later fracture patterns suggest the thrust relaxed under gravitational stress following rock column uplift. Failure probably utilized a combination of these structures. Digital geomorphic models allowed establishing a sequence of events during failure which together make up the complex rock avalanche deposit. The volume of each individual slide could be more accurately determined. These and ongoing studies will allow a unique characterization of earthquake-induced slope failures in fjord coastal environments, providing new tools for landslide, seismic and tsunami hazard assessment in Patagonia and similar geomorphological settings around the world. This work was funded by Fondecyt project 11070107, the International Center for Geohazards, the Millenium Nucleus 'International Earthquake Research Center Montessus de Ballore', FNDR-Project 'Geological-Mining Environmental Research in Aysen' of the Chilean Government and the Andean Geothermal Center of Excellence.
Stemper, Brian D; Chirvi, Sajal; Doan, Ninh; Baisden, Jamie L; Maiman, Dennis J; Curry, William H; Yoganandan, Narayan; Pintar, Frank A; Paskoff, Glenn; Shender, Barry S
2018-06-01
Quantification of biomechanical tolerance is necessary for injury prediction and protection of vehicular occupants. This study experimentally quantified lumbar spine axial tolerance during accelerative environments simulating a variety of military and civilian scenarios. Intact human lumbar spines (T12-L5) were dynamically loaded using a custom-built drop tower. Twenty-three specimens were tested at sub-failure and failure levels consisting of peak axial forces between 2.6 and 7.9 kN and corresponding peak accelerations between 7 and 57 g. Military aircraft ejection and helicopter crashes fall within these high axial acceleration ranges. Testing was stopped following injury detection. Both peak force and acceleration were significant (p < 0.0001) injury predictors. Injury probability curves using parametric survival analysis were created for peak acceleration and peak force. Fifty-percent probability of injury (95% CI) was 4.5 kN (3.9-5.2 kN) for force and 16 g (13-19 g) for acceleration. A majority of injuries affected the L1 spinal level. Peak axial forces and accelerations were greater for specimens that sustained multiple injuries or injuries at L2-L5 spinal levels. In general, force-based tolerance was consistent with previous shorter-segment lumbar spine testing (3-5 vertebrae), although studies incorporating isolated vertebral bodies reported higher tolerance attributable to a different injury mechanism involving structural failure of the cortical shell. This study identified novel outcomes with regard to injury patterns, wherein more violent exposures produced more injuries in the caudal lumbar spine. This caudal migration was likely attributable to increased injury tolerance at lower lumbar spinal levels and a faster inertial mass recruitment process for high rate load application. Published 2017. This article is a U.S. Government work and is in the public domain in the USA. J Orthop Res 36:1747-1756, 2018.
Yu, Soonyoung; Unger, Andre J A; Parker, Beth; Kim, Taehee
2012-06-15
In this study, we defined risk capital as the contingency fee or insurance premium that a brownfields redeveloper needs to set aside from the sale of each house in case they need to repurchase it at a later date because the indoor air has been detrimentally affected by subsurface contamination. The likelihood that indoor air concentrations will exceed a regulatory level subject to subsurface heterogeneity and source zone location uncertainty is simulated by a physics-based hydrogeological model using Monte Carlo realizations, yielding the probability of failure. The cost of failure is the future value of the house indexed to the stochastic US National Housing index. The risk capital is essentially the probability of failure times the cost of failure with a surcharge to compensate the developer against hydrogeological and financial uncertainty, with the surcharge acting as safety loading reflecting the developers' level of risk aversion. We review five methodologies taken from the actuarial and financial literature to price the risk capital for a highly stylized brownfield redevelopment project, with each method specifically adapted to accommodate our notion of the probability of failure. The objective of this paper is to develop an actuarially consistent approach for combining the hydrogeological and financial uncertainty into a contingency fee that the brownfields developer should reserve (i.e. the risk capital) in order to hedge their risk exposure during the project. Results indicate that the price of the risk capital is much more sensitive to hydrogeological rather than financial uncertainty. We use the Capital Asset Pricing Model to estimate the risk-adjusted discount rate to depreciate all costs to present value for the brownfield redevelopment project. A key outcome of this work is that the presentation of our risk capital valuation methodology is sufficiently generalized for application to a wide variety of engineering projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
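The pricing idea described above, expected repurchase cost (probability of failure times cost of failure) plus a safety loading reflecting risk aversion, can be stripped down to a few lines. The failure probability, house-value distribution, and standard-deviation loading used below are placeholders, not the paper's hydrogeological or housing-index models.

```python
# Stripped-down sketch of the risk-capital idea: expected repurchase cost
# (probability of failure x cost of failure) plus a safety loading proportional to
# the standard deviation of the loss. Distributions and numbers are placeholders;
# the paper's vapor-intrusion model and housing-index dynamics are not reproduced.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
p_failure = 0.03                                   # stand-in for the Monte Carlo indoor-air result
house_value = 300_000 * rng.lognormal(mean=0.05, sigma=0.10, size=n)  # future indexed value

loss = np.where(rng.random(n) < p_failure, house_value, 0.0)  # repurchase only on failure
loading = 1.0                                      # risk-aversion (safety-loading) multiplier
risk_capital = loss.mean() + loading * loss.std()

print(f"expected loss per house   ~ ${loss.mean():,.0f}")
print(f"risk capital (k = {loading}) ~ ${risk_capital:,.0f}")
```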
Stoll, Richard; Cappel, I; Jablonski-Momeni, Anahita; Pieper, K; Stachniss, V
2007-01-01
This study evaluated the long-term survival of inlays and partial crowns made of IPS Empress. For this purpose, the patient data of a prospective study were examined in retrospect and statistically evaluated. All of the inlays and partial crowns fabricated of IPS-Empress within the Department of Operative Dentistry at the School of Dental Medicine of Philipps University, Marburg, Germany, were systematically recorded in a database between 1991 and 2001. The corresponding patient files were revised at the end of 2001. The information gathered in this way was used to evaluate the survival of the restorations using the method described by Kaplan and Meier. A total of n = 1624 restorations were fabricated of IPS-Empress within the observation period. During this time, n = 53 failures were recorded. The remaining restorations were observed for a mean period of 18.77 months. The failures were mainly attributed to fractures, endodontic problems and cementation errors. The last failure was established after 82 months. At this stage, a cumulative survival probability of p = 0.81 was registered with a standard error of 0.04. At this time, n = 30 restorations were still being observed. Restorations on vital teeth (n = 1588) showed 46 failures, with a cumulative survival probability of p = 0.82. Restorations performed on non-vital teeth (n = 36) showed seven failures, with a cumulative survival probability of p = 0.53. Highly significant differences were found between the two groups (p < 0.0001) in a log-rank test. No significant difference (p = 0.41) was found between the patients treated by students (n = 909) and those treated by qualified dentists (n = 715). Likewise, no difference (p = 0.13) was established between the restorations seated with a high viscosity cement (n = 295) and those placed with a low viscosity cement (n = 1329).
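The survival figures quoted above come from the Kaplan-Meier product-limit method. A compact sketch of that estimator on synthetic follow-up data is given below; the restoration data themselves are not reproduced, and the times and failure indicators are randomly generated.

```python
# Compact Kaplan-Meier product-limit sketch on synthetic follow-up data
# (months of observation, failure observed?). The restoration data are not reproduced.
import numpy as np

rng = np.random.default_rng(4)
months = rng.exponential(60.0, size=50).round(1)          # synthetic follow-up times
failed = rng.random(50) < 0.3                             # synthetic failure indicators

def kaplan_meier(times, events):
    """Return (event time, cumulative survival probability) pairs."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv, curve = len(times), 1.0, []
    for t in np.unique(times):
        mask = times == t
        deaths = int(events[mask].sum())
        if deaths:
            surv *= 1.0 - deaths / at_risk                # product-limit update at each event time
            curve.append((float(t), surv))
        at_risk -= int(mask.sum())                        # remove failures and censored cases from risk set
    return curve

for t, s in kaplan_meier(months, failed)[:5]:
    print(f"t = {t:6.1f} months  S(t) = {s:.3f}")
```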
Risk analysis by FMEA as an element of analytical validation.
van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M
2009-12-05
We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs on authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated by Risk Priority Numbers (RPNs)=O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
Temporal-varying failures of nodes in networks
NASA Astrophysics Data System (ADS)
Knight, Georgie; Cristadoro, Giampaolo; Altmann, Eduardo G.
2015-08-01
We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure centrality which are valid for temporal-varying failures, and we dwell on the possibility of externally changing the relative importance of nodes in a given network by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape in a node is larger than the one estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs.
Estimating Kinship in Admixed Populations
Thornton, Timothy; Tang, Hua; Hoffmann, Thomas J.; Ochs-Balcom, Heather M.; Caan, Bette J.; Risch, Neil
2012-01-01
Genome-wide association studies (GWASs) are commonly used for the mapping of genetic loci that influence complex traits. A problem that is often encountered in both population-based and family-based GWASs is that of identifying cryptic relatedness and population stratification because it is well known that failure to appropriately account for both pedigree and population structure can lead to spurious association. A number of methods have been proposed for identifying relatives in samples from homogeneous populations. A strong assumption of population homogeneity, however, is often untenable, and many GWASs include samples from structured populations. Here, we consider the problem of estimating relatedness in structured populations with admixed ancestry. We propose a method, REAP (relatedness estimation in admixed populations), for robust estimation of identity by descent (IBD)-sharing probabilities and kinship coefficients in admixed populations. REAP appropriately accounts for population structure and ancestry-related assortative mating by using individual-specific allele frequencies at SNPs that are calculated on the basis of ancestry derived from whole-genome analysis. In simulation studies with related individuals and admixture from highly divergent populations, we demonstrate that REAP gives accurate IBD-sharing probabilities and kinship coefficients. We apply REAP to the Mexican Americans in Los Angeles, California (MXL) population sample of release 3 of phase III of the International Haplotype Map Project; in this sample, we identify third- and fourth-degree relatives who have not previously been reported. We also apply REAP to the African American and Hispanic samples from the Women's Health Initiative SNP Health Association Resource (WHI-SHARe) study, in which hundreds of pairs of cryptically related individuals have been identified. PMID:22748210
Chua, Daniel T T; Sham, Jonathan S T; Hung, Kwan-Ngai; Leung, Lucullus H T; Au, Gordon K H
2006-12-01
Stereotactic radiosurgery has been employed as a salvage treatment for local failures of nasopharyngeal carcinoma (NPC). To identify patients who would benefit from radiosurgery, we reviewed our data with emphasis on factors that predicted treatment outcome. A total of 48 patients with local failures of NPC were treated by stereotactic radiosurgery between March 1996 and February 2005. Radiosurgery was administered using a modified linear accelerator with single or multiple isocenters to deliver a median dose of 12.5 Gy to the target periphery. Median follow-up was 54 months. The 5-year local failure-free probability after radiosurgery was 47.2% and the 5-year overall survival rate was 46.9%. Neuroendocrine complications occurred in 27% of patients, but there were no treatment-related deaths. Time interval from primary radiotherapy, retreatment T stage, prior local failures, and tumor volume were significant predictive factors of local control and/or survival, whereas age was of marginal significance in predicting survival. A radiosurgery prognostic scoring system was designed based on these predictive factors. The 5-year local failure-free probabilities in patients with good, intermediate, and poor prognostic scores were 100%, 42.5%, and 9.6%, respectively; the corresponding 5-year overall survival rates were 100%, 51.1%, and 0%. Important factors that predicted tumor control and survival after radiosurgery were identified. Patients with a good prognostic score should be treated by radiosurgery in view of the excellent results. Patients with an intermediate prognostic score may also be treated by radiosurgery, but those with a poor prognostic score should receive other salvage treatments.
Raffa, Santi; Fantoni, Cecilia; Restauri, Luigia; Auricchio, Angelo
2005-10-01
We describe the case of a patient with atrioventricular (AV) junction ablation and chronic biventricular pacing in whom intermittent dysfunction of the right ventricular (RV) lead resulted in left ventricular (LV) stimulation alone and the onset of severe right heart failure. Restoration of biventricular pacing by increasing the device output and then performing lead revision resolved the issue. This case provides evidence that LV pacing alone in patients with AV junction ablation may lead to severe right heart failure, most likely as a result of iatrogenic mechanical dyssynchrony within the RV. This pacing mode should therefore probably be avoided in pacemaker-dependent patients with heart failure.
NASA Technical Reports Server (NTRS)
Holanda, R.; Frause, L. M.
1977-01-01
The reliability of 45 state-of-the-art strain gage systems under full scale engine testing was investigated. The flame spray process was used to install 23 systems on the first fan rotor of a YF-100 engine; the others were epoxy cemented. A total of 56 percent of the systems failed in 11 hours of engine operation. Flame spray system failures were primarily due to high gage resistance, probably caused by high stress levels. Epoxy system failures were principally erosion failures, but only on the concave side of the blade. Lead-wire failures between the blade-to-disk jump and the control room could not be analyzed.
Structural Analysis Made 'NESSUSary'
NASA Technical Reports Server (NTRS)
2005-01-01
Everywhere you look, chances are something that was designed and tested by a computer will be in plain view. Computers are now utilized to design and test just about everything imaginable, from automobiles and airplanes to bridges and boats, and elevators and escalators to streets and skyscrapers. Computer-design engineering first emerged in the 1970s, in the automobile and aerospace industries. Since computers were in their infancy, however, architects and engineers at the time were limited to producing only designs similar to hand-drafted drawings. (At the end of the 1970s, a typical computer-aided design system was a 16-bit minicomputer with a price tag of $125,000.) Eventually, computers became more affordable and related software became more sophisticated, offering designers the "bells and whistles" to go beyond the limits of basic drafting and rendering, and venture into more skillful applications. One of the major advancements was the ability to test the objects being designed for the probability of failure. This advancement was especially important for the aerospace industry, where complicated and expensive structures are designed. The ability to perform reliability and risk assessment without using extensive hardware testing is critical to design and certification. In 1984, NASA initiated the Probabilistic Structural Analysis Methods (PSAM) project at Glenn Research Center to develop analysis methods and computer programs for the probabilistic structural analysis of select engine components for the current Space Shuttle and future space propulsion systems. NASA envisioned that these methods and computational tools would play a critical role in establishing increased system performance and durability, and assist in structural system qualification and certification. Not only was the PSAM project beneficial to aerospace, it paved the way for a commercial risk-probability tool that is evaluating risks in diverse, down-to-Earth applications.
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed, and the probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.
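A hedged toy version of such a trade-off (not the paper's formulation) can be written as a short Monte Carlo study in which a tighter coefficient of variation of the fiber modulus raises an assumed manufacturing cost but lowers a toy buckling failure probability; the optimum CoV then shrinks as the cost assigned to failure grows, mirroring only the qualitative trend quoted above:

```python
# Hedged sketch: trade-off between tighter control of fiber-modulus scatter
# and the expected cost of buckling failure. The capacity model, demand, and
# cost terms are all assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

def failure_probability(cov_modulus):
    """P(first buckling load < applied load) for a toy capacity model."""
    E = rng.normal(1.0, cov_modulus, N)          # normalized fiber modulus
    t = rng.normal(1.0, 0.03, N)                 # normalized thickness scatter
    capacity = E * t**3                          # toy buckling capacity
    demand = 0.75                                # normalized applied load
    return np.mean(capacity < demand)

def total_cost(cov_modulus, cost_of_failure):
    manufacturing = 1.0 / cov_modulus            # tighter scatter costs more (assumed)
    return manufacturing + cost_of_failure * failure_probability(cov_modulus)

for c_fail in (1e3, 1e5):
    covs = np.linspace(0.02, 0.15, 14)
    best = min(covs, key=lambda c: total_cost(c, c_fail))
    print(f"failure cost {c_fail:g}: optimum CoV of fiber modulus ~ {best:.2f}")
```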
Assessment of Carrying Capacity of Timber Element Using SBRA Method
NASA Astrophysics Data System (ADS)
Kraus, Michal
2017-10-01
Wood as a building material offers significant potential in the context of nonrenewable energy sources and greenhouse gas emissions. The subject of this paper is to verify the carrying capacity of a timber element using the probabilistic Simulation-Based Reliability Assessment (SBRA) method. The simulation is performed for one million cycles. Key factors that decrease the strength of wood over time include the duration of loads and their combinations; humidity is a further non-negligible factor affecting the strength of wood. A continuous beam with three spans (length 15 m, glued laminated timber, strength class GL 36 according to DIN EN 1194) is placed in an environment with a thermal-humidity regime of the 2nd class according to EC 5. The average service life of a load-bearing timber structure is estimated to be 50 years. The simulation results show that there is no risk of failure of the wood during the first year; the probability of failure becomes appreciable within about 10 years of service life, after which the wooden element meets only a reduced level of reliability.
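A minimal sketch in the spirit of SBRA, with assumed input variables rather than the paper's data, illustrates the mechanics of running one million simulation cycles of a resistance-versus-load-effect check:

```python
# Minimal Monte Carlo sketch in the spirit of SBRA (illustrative variables,
# not the paper's inputs): one million simulation cycles of a
# resistance-versus-load-effect check for a timber beam cross-section.
import numpy as np

rng = np.random.default_rng(7)
N = 1_000_000

# Bending strength of GL 36 reduced by a duration-of-load/moisture factor k_mod.
f_m = rng.normal(36.0, 3.6, N)                  # MPa, assumed scatter
k_mod = rng.uniform(0.6, 0.8, N)                # duration-of-load + humidity effect
W = 4.0e-3                                      # m^3, section modulus (assumed)
resistance = k_mod * f_m * 1e6 * W              # bending resistance, N*m

# Load effect: permanent + variable bending moment (assumed distributions).
M_g = rng.normal(30e3, 3e3, N)                  # N*m
M_q = rng.gumbel(20e3, 4e3, N)                  # N*m
load_effect = M_g + M_q

p_f = np.mean(resistance < load_effect)
print(f"probability of failure ~ {p_f:.2e}")
```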
Effect of stress concentrations in composite structures
NASA Technical Reports Server (NTRS)
Babcock, G. D.; Knauss, W. G.
1984-01-01
The goal is to achieve a better understanding of the failure of complex composite structures. Understanding the failure mechanisms of this type of structure requires a thorough knowledge of its behavior under load on both the macro and micro scales. The two problems being studied are failure at a panel/stiffener interface and a generic problem of failure at a stress concentration.
Report of the NASA Ad Hoc Committee on failure of high strength structural materials
NASA Technical Reports Server (NTRS)
Brown, W. F., Jr. (Editor)
1972-01-01
An analysis of structural failures that have occurred in NASA programs was conducted. Reports of 231 examples of structural failure were reviewed. Attempts were made to identify those factors which contributed to the failures, and recommendations were formulated for actions which would minimize their effects on future NASA programs. Two classes of factors were identified: (1) those associated with deficiencies in existing materials and structures technology and (2) those attributable to inadequate documentation or communication of that technology.
Analysis of Failures of High Speed Shaft Bearing System in a Wind Turbine
NASA Astrophysics Data System (ADS)
Wasilczuk, Michał; Gawarkiewicz, Rafał; Bastian, Bartosz
2018-01-01
During the operation of wind turbines with a gearbox of traditional configuration, consisting of one planetary stage and two helical stages, a high failure rate of high-speed shaft bearings is observed. Such a high failure frequency is not reflected in the results of standard calculations of bearing durability, and most probably it can be attributed to an atypical failure mechanism. The authors studied problems in 1.5 MW wind turbines of a Polish wind farm. The analysis showed that the problems of high failure rate are commonly met all over the world and that the statistics for the analysed turbines were very similar. After a study of the potential failure mechanism and its probable causes, a modification of the existing bearing system was proposed. Various options with different bearing types were investigated and compared in terms of expected durability increase, the extent of necessary gearbox modifications, and the possibility of solving the existing operational problems.
Investigation into Cause of High Temperature Failure of Boiler Superheater Tube
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Roy, H.; Shukla, A. K.
2015-04-01
Boiler tube failures occur for various reasons, such as creep, fatigue, corrosion, and erosion. This paper highlights a case study of a typical premature failure of a final superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement, chemical analysis, oxide scale thickness measurement, and microstructural examination are conducted as part of the investigation. In addition, sulfur printing, energy dispersive spectroscopy (EDS), and X-ray diffraction (XRD) analysis are carried out to ascertain the probable cause of failure of the final superheater tube. It is concluded that the premature failure of the superheater tube can be attributed to a combination of localized high tube metal temperature and loss of metal from the outer surface due to high-temperature corrosion. Corrective actions have also been suggested to avoid this type of failure in the future.
NASA Astrophysics Data System (ADS)
Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin
2008-11-01
A modified Split Hopkinson Pressure Bar (SHPB) technique with aluminum pressure bars and a pulse shaper was used to achieve a closer impedance match between the pressure bars and specimen materials such as thermally degraded POM (polyoxymethylene) and PP (polypropylene). More distinguishable experimental signals were obtained, allowing a more accurate evaluation of the dynamic deformation behavior of the materials under high strain rate loading. The pulse shaping technique reduces non-equilibrium effects in the dynamic material response by modulating the incident wave during the short test period, which increases the rise time of the incident pulse in the SHPB experiment. The Johnson-Cook model is applied as a constitutive equation for the dynamic stress-strain curves obtained from the SHPB experiments, and its applicability is verified using a probabilistic reliability estimation method. Two reliability methodologies, the first-order reliability method (FORM) and the second-order reliability method (SORM), are applied. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses, and allows more statistical flexibility on the yield stress than in a previously published formulation. The failure probability estimated by the SORM is found to be more reliable than that of the FORM, and the failure probability increases as the applied stress increases. According to the sensitivity analysis, the Johnson-Cook parameters A and n and the applied stress affect the failure probability more strongly than the other random variables.
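The probabilistic step can be illustrated with a hedged sketch: a limit state g = σ_JC − σ_applied built on the Johnson-Cook flow stress, with made-up parameter distributions and the failure probability P(g < 0) estimated by crude Monte Carlo standing in for FORM/SORM:

```python
# Hedged sketch (made-up parameter values): a limit state based on the
# Johnson-Cook flow stress, g = sigma_JC - sigma_applied, with the failure
# probability P(g < 0) estimated by Monte Carlo rather than FORM/SORM.
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000

def johnson_cook(A, B, n, C, strain, strain_rate_ratio):
    """Johnson-Cook flow stress at room temperature (thermal term omitted)."""
    return (A + B * strain**n) * (1.0 + C * np.log(strain_rate_ratio))

# Random Johnson-Cook parameters (assumed means/CoVs) and applied stress.
A = rng.normal(60.0, 6.0, N)              # MPa
B = rng.normal(40.0, 4.0, N)              # MPa
n = rng.normal(0.35, 0.035, N)
C = rng.normal(0.05, 0.005, N)
sigma_applied = rng.normal(70.0, 7.0, N)  # MPa

strain, rate_ratio = 0.1, 1000.0
g = johnson_cook(A, B, n, C, strain, rate_ratio) - sigma_applied
print(f"failure probability ~ {np.mean(g < 0):.3e}")
```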
Individual versus systemic risk and the Regulator's Dilemma.
Beale, Nicholas; Rand, David G; Battey, Heather; Croxson, Karen; May, Robert M; Nowak, Martin A
2011-08-02
The global financial crisis of 2007-2009 exposed critical weaknesses in the financial system. Many proposals for financial reform address the need for systemic regulation--that is, regulation focused on the soundness of the whole financial system and not just that of individual institutions. In this paper, we study one particular problem faced by a systemic regulator: the tension between the distribution of assets that individual banks would like to hold and the distribution across banks that best supports system stability if greater weight is given to avoiding multiple bank failures. By diversifying its risks, a bank lowers its own probability of failure. However, if many banks diversify their risks in similar ways, then the probability of multiple failures can increase. As more banks fail simultaneously, the economic disruption tends to increase disproportionately. We show that, in model systems, the expected systemic cost of multiple failures can be largely explained by two global parameters of risk exposure and diversity, which can be assessed in terms of the risk exposures of individual actors. This observation hints at the possibility of regulatory intervention to promote systemic stability by incentivizing a more diverse diversification among banks. Such intervention offers the prospect of an additional lever in the armory of regulators, potentially allowing some combination of improved system stability and reduced need for additional capital.
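A toy two-bank example (not the authors' model) makes the mechanism concrete: identical diversification leaves each bank's individual failure probability unchanged but makes joint failures almost as likely as individual ones, whereas distinct portfolios drive the joint probability toward the product of the individual probabilities:

```python
# Toy illustration: banks that diversify into the same assets have highly
# correlated failures; banks that diversify differently have a similar
# individual failure probability but far fewer joint failures.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_assets = 100_000, 10

returns = rng.normal(0.0, 1.0, size=(n_trials, n_assets))

def bank_failures(asset_indices):
    """A bank fails when the mean return of its asset basket drops below -1."""
    return returns[:, asset_indices].mean(axis=1) < -1.0

# Scenario 1: both banks hold the same 5 assets.
same_a = bank_failures([0, 1, 2, 3, 4])
same_b = bank_failures([0, 1, 2, 3, 4])

# Scenario 2: disjoint baskets of 5 assets each.
diff_a = bank_failures([0, 1, 2, 3, 4])
diff_b = bank_failures([5, 6, 7, 8, 9])

print("individual failure prob:", same_a.mean())
print("joint failures, identical portfolios:", np.mean(same_a & same_b))
print("joint failures, distinct portfolios:", np.mean(diff_a & diff_b))
```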
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
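A hedged sketch of the Monte Carlo step, with assumed distributions rather than the CCM inputs, is shown below; for probabilities near the reported 10^-11, crude sampling of this kind is impractical, which is why FORM and conditional sampling are attractive:

```python
# Hedged sketch (assumed distributions, not the CCM inputs): Monte Carlo
# estimate of the probability that a strength-based failure index exceeds 1,
# with the load scale factor and strength allowable treated as random.
import numpy as np

rng = np.random.default_rng(11)
N = 1_000_000

# Deterministic failure index at nominal load and nominal allowable (assumed).
fi_nominal = 0.7

# Load scale factor and strength allowable with assumed coefficients of variation.
load_scale = rng.normal(1.0, 0.10, N)
strength_scale = rng.normal(1.0, 0.08, N)

failure_index = fi_nominal * load_scale / strength_scale
p_f = np.mean(failure_index > 1.0)
print(f"probability of first-ply failure ~ {p_f:.1e}")
```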
Factors Predicting Meniscal Allograft Transplantation Failure
Parkinson, Ben; Smith, Nicholas; Asplin, Laura; Thompson, Peter; Spalding, Tim
2016-01-01
Background: Meniscal allograft transplantation (MAT) is performed to improve symptoms and function in patients with a meniscal-deficient compartment of the knee. Numerous studies have shown a consistent improvement in patient-reported outcomes, but high failure rates have been reported by some studies. The typical patients undergoing MAT often have multiple other pathologies that require treatment at the time of surgery. The factors that predict failure of a meniscal allograft within this complex patient group are not clearly defined. Purpose: To determine predictors of MAT failure in a large series to refine the indications for surgery and better inform future patients. Study Design: Cohort study; Level of evidence, 3. Methods: All patients undergoing MAT at a single institution between May 2005 and May 2014 with a minimum of 1-year follow-up were prospectively evaluated and included in this study. Failure was defined as removal of the allograft, revision transplantation, or conversion to a joint replacement. Patients were grouped according to the articular cartilage status at the time of the index surgery: group 1, intact or partial-thickness chondral loss; group 2, full-thickness chondral loss 1 condyle; and group 3, full-thickness chondral loss both condyles. The Cox proportional hazards model was used to determine significant predictors of failure, independently of other factors. Kaplan-Meier survival curves were produced for overall survival and significant predictors of failure in the Cox proportional hazards model. Results: There were 125 consecutive MATs performed, with 1 patient lost to follow-up. The median follow-up was 3 years (range, 1-10 years). The 5-year graft survival for the entire cohort was 82% (group 1, 97%; group 2, 82%; group 3, 62%). The probability of failure in group 1 was 85% lower (95% CI, 13%-97%) than in group 3 at any time. The probability of failure with lateral allografts was 76% lower (95% CI, 16%-89%) than medial allografts at any time. Conclusion: This study showed that the presence of severe cartilage damage at the time of MAT and medial allografts were significantly predictive of failure. Surgeons and patients should use this information when considering the risks and benefits of surgery. PMID:27583257
Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems
Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul
2010-01-01
Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
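The RPN-based scheduling rule is easy to express in code; the sketch below uses hypothetical failure modes and scores, not those from the commissioning work described above:

```python
# Minimal sketch of the RPN-threshold scheduling rule described above
# (hypothetical failure modes and 1-10 scores).

def rpn(occurrence, severity, detectability):
    """Risk probability number = O x S x D."""
    return occurrence * severity * detectability

failure_modes = {
    # name: (O, S, D) -- all values assumed for illustration
    "transponder position dropout": (4, 8, 5),
    "coordinate transform misconfigured": (2, 9, 6),
    "MLC leaf lag beyond tolerance": (3, 6, 4),
}

for name, scores in failure_modes.items():
    value = rpn(*scores)
    schedule = "monthly QA" if value >= 125 else "commissioning/annual QA"
    print(f"{name}: RPN={value} -> {schedule}")
```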
An overview of computational simulation methods for composite structures failure and life analysis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1993-01-01
Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.
Follow-up of the original cohort with the Ahmed glaucoma valve implant.
Topouzis, F; Coleman, A L; Choplin, N; Bethlem, M M; Hill, R; Yu, F; Panek, W C; Wilson, M R
1999-08-01
To study the long-term results of the Ahmed glaucoma valve implant in patients with complicated glaucoma in whom short-term results have been reported. In this multicenter study, we analyzed the long-term outcome of a cohort of 60 eyes from 60 patients in whom the Ahmed glaucoma valve was implanted. Failure was characterized by at least one of the following: intraocular pressure greater than 21 mm Hg at both of the last two visits, intraocular pressure less than 6 mm Hg at both of the last two visits, loss of light perception, additional glaucoma surgery, devastating complications, and removal or replacement of the Ahmed glaucoma valve implant. Devastating complications included chronic hypotony, retinal detachment, malignant glaucoma, endophthalmitis, and phthisis bulbi; we also report results that add corneal complications (corneal decompensation or edema, corneal graft failure) as defining a devastating complication. The mean follow-up time for the 60 eyes was 30.5 months (range, 2.1 to 63.5). When corneal complications were included in the definition of failure, 26 eyes (43%) were considered failures. Cumulative probabilities of success at 1, 2, 3, and 4 years were 76%, 68%, 54%, and 45%, respectively. When corneal complications were excluded from the definition of failure, 13 eyes (21.5%) were considered failures. Cumulative probabilities of success at 1, 2, 3, and 4 years were 87%, 82%, 76%, and 76%, respectively. Most of the failures after 12 months of postoperative follow-up were due to corneal complications. The long-term performance of the Ahmed glaucoma valve implant is comparable to that of other drainage devices. More than 12 months after the implantation of the Ahmed glaucoma valve implant, the most frequent adverse outcome was corneal decompensation or corneal graft failure. These corneal problems may be secondary to the type of eyes that have drainage devices or to the drainage device itself. Further investigation is needed to identify the reasons that corneal problems follow drainage device implantation.
Probability of Accurate Heart Failure Diagnosis and the Implications for Hospital Readmissions.
Carey, Sandra A; Bass, Kyle; Saracino, Giovanna; East, Cara A; Felius, Joost; Grayburn, Paul A; Vallabhan, Ravi C; Hall, Shelley A
2017-04-01
Heart failure (HF) is a complex syndrome with inherent diagnostic challenges. We studied the scope of possibly inaccurately documented HF in a large health care system among patients assigned a primary diagnosis of HF at discharge. Through a retrospective record review and a classification schema developed from published guidelines, we assessed the probability of the documented HF diagnosis being accurate and determined factors associated with HF-related and non-HF-related hospital readmissions. An arbitration committee of 3 experts reviewed a subset of records to corroborate the results. We assigned a low probability of accurate diagnosis to 133 (19%) of the 712 patients. A subset of patients were also reviewed by an expert panel, which concluded that 13% to 35% of patients probably did not have HF (inter-rater agreement, kappa = 0.35). Low-probability HF was predictive of being readmitted more frequently for non-HF causes (p = 0.018), as well as documented arrhythmias (p = 0.023), and age >60 years (p = 0.006). Documented sleep apnea (p = 0.035), percutaneous coronary intervention (p = 0.006), non-white race (p = 0.047), and B-type natriuretic peptide >400 pg/ml (p = 0.007) were determined to be predictive of HF readmissions in this cohort. In conclusion, approximately 1 in 5 patients documented to have HF were found to have a low probability of actually having it. Moreover, the determination of low-probability HF was twice as likely to result in readmission for non-HF causes and, thus, should be considered a determinant for all-cause readmissions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of stress concentrations in composite structures
NASA Technical Reports Server (NTRS)
Babcock, C. D.; Waas, A. M.
1985-01-01
Composite structures have found wide use in many engineering fields and a sound understanding of their response under load is important to their utilization. An experimental program is being carried out to gain a fundamental understanding of the failure mechanics of multilayered composite structures at GALCIT. As a part of this continuing study, the performance of laminated composite plates in the presence of a stress gradient and the failure of composite structures at points of thickness discontinuity is assessed. In particular, the questions of initiation of failure and its subsequent growth to complete failure of the structure are addressed.
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
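A minimal illustration of how such a tree is evaluated quantitatively, assuming independent basic events and entirely hypothetical probabilities, is given below:

```python
# Illustrative fault-tree evaluation (hypothetical events and probabilities,
# not the Idaho Nuclear systems): AND/OR gates combined under an assumption
# of independent basic events.

def and_gate(*probs):
    """All inputs must fail: product of probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Any input fails: complement of all-survive."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Basic-event failure probabilities (assumed).
sensor_fails = 1e-3
logic_channel_fails = 5e-4
breaker_fails = 2e-3
operator_misses_alarm = 1e-2

# Top event: reactor trip fails = (both redundant logic channels fail) OR
# (sensor fails AND operator misses the alarm) OR (breaker fails).
top = or_gate(
    and_gate(logic_channel_fails, logic_channel_fails),
    and_gate(sensor_fails, operator_misses_alarm),
    breaker_fails,
)
print(f"top-event probability ~ {top:.2e}")
```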
Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle
NASA Technical Reports Server (NTRS)
Redd, L.
1985-01-01
Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to the main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions/overhaul, failures per mission, and EVA and IVA cost. In conclusion, two or three engines are recommended in view of their highest reliability, minimum life-cycle cost, and fail operational/fail safe capability.
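The engine-number trade can be sketched with assumed per-engine and common-cause failure probabilities (not the study's figures); adding engines reduces the independent-failure term quickly but leaves a floor set by nonindependent failures:

```python
# Hedged sketch (assumed numbers): probability that the main propulsion
# system is disabled for 1-, 2-, and 3-engine configurations, including a
# common-cause term for nonindependent failures.
from math import comb

p_engine = 0.01   # independent failure probability per engine, per mission (assumed)
p_common = 0.001  # probability of a failure that disables all engines (assumed)

def p_system_disabled(n_engines, engines_needed=1):
    """System is disabled if fewer than `engines_needed` engines survive."""
    p_too_few = sum(
        comb(n_engines, k) * p_engine**(n_engines - k) * (1 - p_engine)**k
        for k in range(engines_needed)
    )
    return p_common + (1 - p_common) * p_too_few

for n in (1, 2, 3):
    print(f"{n} engine(s): P(system disabled) ~ {p_system_disabled(n):.2e}")
```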
Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2005-01-01
The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.
PROBABILISTIC RISK ANALYSIS OF RADIOACTIVE WASTE DISPOSALS - a case study
NASA Astrophysics Data System (ADS)
Trinchero, P.; Delos, A.; Tartakovsky, D. M.; Fernandez-Garcia, D.; Bolster, D.; Dentz, M.; Sanchez-Vila, X.; Molinero, J.
2009-12-01
The storage of contaminant material in superficial or sub-superficial repositories, such as tailing piles for mine waste or disposal sites for low- and intermediate-level nuclear waste, poses a potential threat to the surrounding biosphere. These risks can be minimized by supporting decision-makers with quantitative tools capable of incorporating all sources of uncertainty within a rigorous probabilistic framework. A case study is presented in which we assess the risks associated with the superficial storage of hazardous waste close to a populated area. The intrinsic complexity of the problem, involving many events with different spatial and temporal scales and many uncertain parameters, is overcome by using a formal probabilistic risk assessment (PRA) procedure that decomposes the system into a number of key events. The failure of the system is directly linked to the potential contamination of one of three main receptors: the underlying karst aquifer, a superficial stream that flows near the storage piles, and a protection area surrounding a number of wells used for water supply. The minimal cut sets leading to the failure of the system are obtained by defining a fault tree that incorporates different events, including the failure of the engineered system (e.g., the cover of the piles) and the failure of the geological barrier (e.g., the clay layer that separates the bottom of the pile from the karst formation). Finally, the probability of failure is quantitatively assessed by combining individual independent or conditional probabilities that are computed numerically or borrowed from reliability databases.
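Once the minimal cut sets are known, the system failure probability can be approximated from the basic-event probabilities; the sketch below uses hypothetical events, probabilities, and cut sets, with a first-order rare-event bound standing in for the full combination:

```python
# Hedged sketch (hypothetical cut sets and probabilities): combining minimal
# cut sets into a system failure probability, assuming independent basic events.

basic_events = {
    "pile_cover_fails": 1e-2,
    "clay_barrier_fails": 5e-3,
    "well_protection_breached": 2e-3,
    "stream_dilution_insufficient": 1e-2,
}

# Each minimal cut set is a set of basic events that together cause failure.
minimal_cut_sets = [
    {"pile_cover_fails", "clay_barrier_fails"},            # karst aquifer contaminated
    {"pile_cover_fails", "stream_dilution_insufficient"},  # stream contaminated
    {"well_protection_breached"},                          # supply wells contaminated
]

def cut_set_probability(cut_set):
    prob = 1.0
    for event in cut_set:
        prob *= basic_events[event]
    return prob

# First-order rare-event approximation: P(failure) <= sum of cut-set probabilities.
p_failure = sum(cut_set_probability(cs) for cs in minimal_cut_sets)
print(f"system failure probability (rare-event bound) ~ {p_failure:.2e}")
```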
Brain natriuretic peptide-guided therapy in the inpatient management of decompensated heart failure.
Saremi, Adonis; Gopal, Dipika; Maisel, Alan S
2012-02-01
Heart failure is extremely prevalent and is associated with significant mortality, morbidity and cost. Studies have already established mortality benefit with the use of neurohormonal blockade therapy in systolic failure. Unfortunately, physical signs and symptoms of heart failure lack diagnostic sensitivity and specificity, and medication doses proven to improve mortality in clinical trials are often not achieved. Brain natriuretic peptide (BNP) has proven to be of clinical use in the diagnosis and prognosis of heart failure, and recent efforts have been taken to further elucidate its role in guiding heart failure management. Multiple studies have been conducted on outpatient guided management, and although still controversial, there is a trend towards improved outcomes. Inpatient studies are lacking, but preliminary data suggest various BNP cut-off values, as well as percentage changes in BNP, that could be useful in predicting outcomes and improving mortality. In the future, heart failure management will probably involve an algorithm using clinical assessment and a multibiomarker-guided approach.