The assessment of low probability containment failure modes using dynamic PRA
NASA Astrophysics Data System (ADS)
Brunett, Acacia Joann
Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e., the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree; however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove this dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and their driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, the NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios. A dynamic treatment of the characterization of combustible gas ignition is also presented in this work. In most risk studies, combustion is treated simplistically: it is assumed that ignition occurs whenever the gas mixture reaches a concentration favorable for ignition, on the premise that an adequate ignition source is available. However, the criteria affecting ignition (such as the magnitude, location and frequency of the ignition sources) are more complicated. This work demonstrates a technique for characterizing the properties of an ignition source to determine a probability of ignition. The ignition model developed in this work and implemented within a dynamic framework is used to analyze the implications and risk significance of late combustion events. This work also explores the feasibility of using dynamic event trees (DETs) with a deterministic sampling approach to analyze low probability phenomena. The flexibility of this approach is demonstrated through the rediscretization of the containment fragility curves used in construction of the DET to show convergence to a true solution. Such rediscretization also reduces the computational burden of an extremely fine fragility curve discretization by refining only the fragility curve regions of interest. Another advantage of the approach is the ability to perform sensitivity studies on the cumulative distribution functions (CDFs) used to determine branching probabilities without rerunning the simulation code.
Through review of the NUREG-1150 containment failure modes using the current state of knowledge, it is found that some failure modes, such as the alpha and rocket modes, can be excluded from further studies; other failure modes, such as failure to isolate, bypass, high pressure melt ejection (HPME), combustion-induced failure and overpressurization, are still concerns to varying degrees. As part of this analysis, scoping studies performed in MELCOR show that HPME and the resulting direct containment heating (DCH) do not impose a significant threat to containment integrity. Additional scoping studies regarding the effect of recovery actions on in-vessel hydrogen generation show that reflooding a partially degraded core does not significantly affect in-vessel hydrogen generation, and the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to produce an energetic deflagration is confirmed. The DET analyses performed in this work show that very late power recovery produces the potential for very energetic combustion events which are capable of failing containment with a non-negligible probability, and that containment cooling systems have a significant impact on core-concrete attack, and therefore on ex-vessel combustible gas generation. Ultimately, the overall risk of combustion-induced containment failure is low, but its conditional likelihood can have a significant effect on accident mitigation strategies. It is also shown in this work that DETs are particularly well suited to examining low probability events because of their ability to rediscretize CDFs and observe solution convergence.
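The rediscretization idea can be pictured with a minimal sketch (hypothetical numbers; this is not the DET framework used in the work above). A containment fragility curve, assumed here to be a normal CDF in peak pressure, is discretized into branch probabilities, and a region of interest is then refined without rerunning any simulation:

import numpy as np
from scipy.stats import norm

# Hypothetical fragility curve: P(containment fails | peak pressure p),
# assumed normal with median 0.9 MPa and standard deviation 0.12 MPa.
fragility = norm(loc=0.9, scale=0.12)

def branch_probabilities(pressure_edges):
    """Branch probabilities for a DET built on a discretized fragility CDF.

    Each branch corresponds to a failure-pressure interval; its probability
    is the CDF increment over that interval.
    """
    cdf = fragility.cdf(pressure_edges)
    return np.diff(cdf)

# Coarse discretization used to build the initial DET (4 branches).
coarse_edges = np.linspace(0.5, 1.3, 5)
print(branch_probabilities(coarse_edges))

# Rediscretization: refine only the region of interest (here 0.9-1.1 MPa)
# where the low-probability, high-consequence branches live.  Because the
# branch probabilities come from the same CDF, no new simulation runs are
# needed to observe convergence as the discretization is refined.
fine_edges = np.unique(np.concatenate([coarse_edges, np.linspace(0.9, 1.1, 9)]))
print(branch_probabilities(fine_edges))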
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of a final drive, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques for global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of existing approaches. PMID:25722717
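The decision-threshold step can be illustrated with a small sketch (hypothetical probability outputs and labels; the paired sparse Bayesian ELM classifiers of the paper are not reproduced here), using a plain micro-averaged F1 criterion and a grid search over thresholds:

import numpy as np

def f1_micro(y_true, y_pred):
    """Micro-averaged F1 over a binary label matrix (samples x failure modes)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def grid_search_threshold(prob, y_true, grid=np.linspace(0.05, 0.95, 91)):
    """Pick the decision threshold that converts per-mode probability outputs
    into simultaneous failure modes with the best F1 on a validation set."""
    scores = [f1_micro(y_true, (prob >= t).astype(int)) for t in grid]
    best = int(np.argmax(scores))
    return grid[best], scores[best]

# Hypothetical validation set: probability outputs of 4 single-failure-mode
# classifiers on 3 samples, two of which are simultaneous failures.
prob = np.array([[0.91, 0.08, 0.77, 0.12],
                 [0.15, 0.88, 0.10, 0.09],
                 [0.81, 0.05, 0.11, 0.69]])
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 1]])
threshold, score = grid_search_threshold(prob, y_true)
print(f"optimal threshold = {threshold:.2f}, validation F1 = {score:.3f}")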
Chambers, David W
2010-01-01
Every plan contains risk. To proceed without planning some means of managing that risk is to court failure. The basic logic of risk is explained. It consists in identifying a threshold where some corrective action is necessary, the probability of exceeding that threshold, and the attendant cost should the undesired outcome occur. This is the probable cost of failure. Various risk categories in dentistry are identified, including lack of liquidity; poor quality; equipment or procedure failures; employee slips; competitive environments; new regulations; unreliable suppliers, partners, and patients; and threats to one's reputation. It is prudent to make investments in risk management to the extent that the cost of managing the risk is less than the probable loss due to risk failure and when risk management strategies can be matched to type of risk. Four risk management strategies are discussed: insurance, reducing the probability of failure, reducing the costs of failure, and learning. A risk management accounting of the financial meltdown of October 2008 is provided.
Reliability Analysis of Systems Subject to First-Passage Failure
NASA Technical Reports Server (NTRS)
Lutes, Loren D.; Sarkani, Shahram
2009-01-01
An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
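A minimal illustration of first-passage failure estimation (a toy random-walk demand standing in for a structural response; not the report's analyses): the failure probability is the chance that the response crosses a fixed capacity within a finite horizon.

import numpy as np

rng = np.random.default_rng(0)

def first_passage_probability(capacity, n_steps=500, dt=0.01, n_sims=20000):
    """Monte Carlo estimate of P(response exceeds capacity within the horizon)
    for a toy demand process (a scaled random walk standing in for a
    stochastic structural response)."""
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_sims, n_steps))
    paths = np.cumsum(increments, axis=1)
    crossed = paths.max(axis=1) >= capacity
    return crossed.mean()

for c in (2.0, 3.0, 4.0):
    print(f"capacity {c:.1f}: first-passage probability ~ "
          f"{first_passage_probability(c):.4f}")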
Probability of in-vessel steam explosion-induced containment failure for a KWU PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esmaili, H.; Khatib-Rahbar, M.; Zuchuat, O.
During postulated core meltdown accidents in light water reactors, there is a likelihood for an in-vessel steam explosion when the melt contacts the coolant in the lower plenum. The objective of the work described in this paper is to determine the conditional probability of in-vessel steam explosion-induced containment failure for a Kraftwerk Union (KWU) pressurized water reactor (PWR). The energetics of the explosion depends on the mass of the molten fuel that mixes with the coolant and participates in the explosion and on the conversion of fuel thermal energy into mechanical work. The work can result in the generation of dynamic pressures that affect the lower head (and possibly lead to its failure), and it can cause acceleration of a slug (fuel and coolant material) upward that can affect the upper internal structures and vessel head and ultimately cause the failure of the upper head. If the upper head missile has sufficient energy, it can reach the containment shell and penetrate it. The analysis must, therefore, take into account all possible dissipation mechanisms.
Pflug, Irving J
2010-05-01
The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 x 10^-6, based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 x 10^-4 to 1.0 x 10^-6 when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare to have botulism result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food that, in turn, results in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.
Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...
2015-12-02
The success of modular high temperature gas-cooled reactors is highly dependent on the performance of the tristructural isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. As a result, these critical manufacturing limits identify ranges beyond which an increase in fuel particle failure probability is expected to occur.
Strength and life criteria for corrugated fiberboard by three methods
Thomas J. Urbanik
1997-01-01
The conventional test method for determining the stacking life of corrugated containers at a fixed load level does not adequately predict a safe load when storage time is fixed. This study introduced multiple load levels and related the probability of time at failure to load. A statistical analysis of logarithm-of-time failure data varying with load level predicts the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
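The two-stage idea can be sketched in a few lines (a toy two-dimensional model and an ordinary least-squares quadratic surrogate stand in for the Karhunen-Loève/sliced-inverse-regression/polynomial-chaos machinery of the paper):

import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    """Stand-in for a CPU-demanding transport model (hypothetical)."""
    return 9.0 - x[..., 0] ** 2 - 0.5 * x[..., 1]

threshold = 0.0                       # failure: model output < threshold

# Stage 0: fit a cheap surrogate from a small design of expensive runs.
design = rng.normal(size=(50, 2))
y_design = expensive_model(design)
basis_design = np.column_stack([np.ones(len(design)), design,
                                design ** 2, design[:, :1] * design[:, 1:]])
coef, *_ = np.linalg.lstsq(basis_design, y_design, rcond=None)

def surrogate(x):
    basis = np.column_stack([np.ones(len(x)), x, x ** 2, x[:, :1] * x[:, 1:]])
    return basis @ coef

# Stage 1: surrogate-based Monte Carlo.
samples = rng.normal(size=(200_000, 2))
y_surr = surrogate(samples)

# Stage 2: re-evaluate only the samples close to the failure boundary with
# the expensive model, correcting the surrogate-induced bias.
band = np.abs(y_surr - threshold) < 0.2
y_corrected = y_surr.copy()
y_corrected[band] = expensive_model(samples[band])

p_fail = np.mean(y_corrected < threshold)
print(f"re-evaluated {band.sum()} of {len(samples)} samples, "
      f"estimated failure probability ~ {p_fail:.2e}")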
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
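A loose, hypothetical reading of the claimed steps (the distributions, times, and the form of the mitigation model are invented for illustration): the unmitigated risk is the probability of reaching the failure limit within the mission time, and the risk reduction is the part of that probability removed because mitigation completes first.

import numpy as np
from scipy.stats import lognorm

# Hypothetical time-to-failure-limit and time-to-mitigation distributions.
time_to_limit = lognorm(s=0.5, scale=120.0)      # seconds
time_to_mitigation = lognorm(s=0.4, scale=60.0)  # seconds

def risk_reduction(mission_time, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    t_fail = time_to_limit.rvs(n, random_state=rng)
    t_mit = time_to_mitigation.rvs(n, random_state=rng)
    p_unmitigated = np.mean(t_fail <= mission_time)
    # Residual failure probability: the failure limit is reached and the
    # mitigation has not completed in time.
    p_residual = np.mean((t_fail <= mission_time) & (t_mit >= t_fail))
    return p_unmitigated, p_unmitigated - p_residual

p, reduction = risk_reduction(mission_time=150.0)
print(f"P(reach failure limit) = {p:.3f}, risk reduction = {reduction:.3f}")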
Probabilistic Risk Assessment: A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Probabilistic risk analysis is an integration of failure modes and effects analysis (FMEA), fault tree analysis, and other techniques to assess the potential for failure and to find ways to reduce risk. This bibliography references 160 documents in the NASA STI Database that contain the major concepts (probabilistic risk assessment, risk, and probability theory) in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masuda, Y.; Chiba, N.; Matsuo, Y.
This research proposes to investigate the impact behavior of the steel plate of BWR containment vessels against missiles caused by the postulated catastrophic failure of components with a high kinetic energy. Although the probability of the occurrence of missiles inside and outside of containment vessels is extremely low, the following items are required to maintain the integrity of containment vessels: the probability of the occurrence of missiles, the weight and energy of missiles, and the impact behavior of containment vessel steel plate against postulated missiles. In connection with the third item, an actual-scale missile test was conducted. In addition, a computational analysis was performed to confirm the impact behavior against the missiles, in order to search for wide applicability to the various kinds of postulated missiles. This research tries to derive a new empirical formula for assessing the integrity of containment vessels.
Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong
2016-01-01
Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using analytic hierarchy process and fuzzy mathematics theory. A fault tree of third-party damage containing 56 basic events was built by hazard identification of third-party damage. The fuzzy evaluation of basic event probabilities was conducted by the expert judgment method using membership functions of fuzzy sets. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a certain large provincial capital city as an example, the risk assessment results of the method were shown to conform to the actual situation, which provides a basis for safety risk prevention. PMID:27875545
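A minimal sketch of a fuzzy fault-tree calculation of this general kind (triangular fuzzy probabilities, invented expert opinions and weights, an OR gate, and centroid defuzzification; not the 56-event tree of the paper):

import numpy as np

# Each expert gives a triangular fuzzy probability (low, mode, high) for a
# basic event; expert weights here would come from the improved AHP step.
def aggregate_experts(opinions, weights):
    """Weighted average of triangular fuzzy numbers, one per expert."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return tuple(np.average(np.asarray(opinions), axis=0, weights=w))

def or_gate(fuzzy_probs):
    """OR gate on independent basic events, applied component-wise:
    P = 1 - prod(1 - p) for each of (low, mode, high)."""
    arr = np.asarray(fuzzy_probs)
    return tuple(1.0 - np.prod(1.0 - arr, axis=0))

def defuzzify(tri):
    """Centroid of a triangular fuzzy number."""
    return sum(tri) / 3.0

# Hypothetical example with three basic events and three experts.
weights = [0.5, 0.3, 0.2]
events = [
    aggregate_experts([(1e-4, 5e-4, 1e-3), (2e-4, 6e-4, 2e-3), (1e-4, 4e-4, 9e-4)], weights),
    aggregate_experts([(5e-5, 2e-4, 8e-4), (1e-4, 3e-4, 1e-3), (6e-5, 2e-4, 7e-4)], weights),
    aggregate_experts([(1e-3, 3e-3, 8e-3), (8e-4, 2e-3, 6e-3), (1e-3, 4e-3, 9e-3)], weights),
]
top = or_gate(events)
print("top event fuzzy probability:", top)
print("defuzzified failure probability:", defuzzify(top))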
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
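A minimal sketch of the claimed steps, with an invented logistic mapping from monitored conditions to a failure probability (the patent does not specify this model, and the names and coefficients below are hypothetical):

import math

def memory_failure_probability(correctable_errors_per_hour, temperature_c):
    """Hypothetical mapping from monitored conditions to a failure probability
    (a logistic model stands in for whatever the real predictor would be)."""
    score = 0.8 * math.log1p(correctable_errors_per_hour) + 0.05 * (temperature_c - 45.0)
    return 1.0 / (1.0 + math.exp(-(score - 4.0)))

def failure_probability_threshold(criticality, base=0.10):
    """More critical memory regions get a lower (stricter) threshold."""
    return base / criticality

def maybe_notify(correctable_errors_per_hour, temperature_c, criticality=1.0):
    p = memory_failure_probability(correctable_errors_per_hour, temperature_c)
    threshold = failure_probability_threshold(criticality)
    if p > threshold:
        # The signal would let the runtime migrate pages or checkpoint early.
        return f"predicted memory failure: p={p:.2f} > threshold={threshold:.2f}"
    return "no notification"

print(maybe_notify(correctable_errors_per_hour=3, temperature_c=55))
print(maybe_notify(correctable_errors_per_hour=400, temperature_c=70, criticality=2.0))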
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12-15, 2015. Marwan M. Harajli, Graduate Student, Dept. of Civil and Environmental Engineering. ... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure probability.
NASA Astrophysics Data System (ADS)
Massmann, Joel; Freeze, R. Allan
1987-02-01
This paper puts in place a risk-cost-benefit analysis for waste management facilities that explicitly recognizes the adversarial relationship that exists in a regulated market economy between the owner/operator of a waste management facility and the government regulatory agency under whose terms the facility must be licensed. The risk-cost-benefit analysis is set up from the perspective of the owner/operator. It can be used directly by the owner/operator to assess alternative design strategies. It can also be used by the regulatory agency to assess alternative regulatory policy, but only in an indirect manner, by examining the response of an owner/operator to the stimuli of various policies. The objective function is couched in terms of a discounted stream of benefits, costs, and risks over an engineering time horizon. Benefits are in the form of revenues for services provided; costs are those of construction and operation of the facility. Risk is defined as the cost associated with the probability of failure, with failure defined as the occurrence of a groundwater contamination event that violates the licensing requirements established for the facility. Failure requires a breach of the containment structure and contaminant migration through the hydrogeological environment to a compliance surface. The probability of failure can be estimated on the basis of reliability theory for the breach of containment and with a Monte-Carlo finite-element simulation for the advective contaminant transport. In the hydrogeological environment the hydraulic conductivity values are defined stochastically. The probability of failure is reduced by the presence of a monitoring network operated by the owner/operator and located between the source and the regulatory compliance surface. The level of reduction in the probability of failure depends on the probability of detection of the monitoring network, which can be calculated from the stochastic contaminant transport simulations. While the framework is quite general, the development in this paper is specifically suited for a landfill in which the primary design feature is one or more synthetic liners in parallel. Contamination is brought about by the release of a single, inorganic nonradioactive species into a saturated, high-permeability, advective, steady state horizontal flow system which can be analyzed with a two-dimensional analysis. It is possible to carry out sensitivity analyses for a wide variety of influences on this system, including landfill size, liner design, hydrogeological parameters, amount of exploration, extent of monitoring network, nature of remedial schemes, economic factors, and regulatory policy.
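The owner/operator objective described above can be sketched as a discounted sum of benefits, costs, and risk (risk being failure probability times cost of failure); all numbers below are invented for illustration and are not the paper's values:

import numpy as np

def owner_objective(benefits, costs, p_fail, cost_of_failure, rate):
    """Discounted stream of benefits minus costs minus risk, where
    risk(t) = P(failure in year t) * cost of failure (owner/operator view)."""
    t = np.arange(len(benefits))
    discount = (1.0 + rate) ** -t
    risk = p_fail * cost_of_failure
    return np.sum(discount * (benefits - costs - risk))

years = 20
benefits = np.full(years, 5.0e6)                     # revenue for services provided
costs = np.r_[30.0e6, np.full(years - 1, 1.0e6)]     # construction, then operation
p_fail = np.full(years, 1.0e-3)                      # annual probability of a contamination event
p_fail_with_monitoring = np.full(years, 2.0e-4)      # detection network reduces it
cost_of_failure = 200.0e6                            # penalties, remediation, lost licence

for label, p in [("no monitoring", p_fail), ("with monitoring", p_fail_with_monitoring)]:
    print(label, f"{owner_objective(benefits, costs, p, cost_of_failure, 0.05):.3e}")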
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of its every link should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
1984-09-28
... variables before simulation of model; search for reality checks; express uncertainty as a probability density distribution. ... probability that the software contains errors. This prior is updated as test failure data are accumulated. Only a p of 1 (software known to contain ...) ... discussed; both parametric and nonparametric versions are presented. It is shown by the author that the bootstrap underlies the jackknife method and ...
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations including the higher-order reliability methods (HORM) for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter I discusses the fundamental definitions of the probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definition of safety index and most probable point of failure are introduced. Efficient ways of computing the safety index with a fewer number of iterations is emphasized. In chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
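The safety-index and FORM ideas surveyed in the report can be illustrated with a short sketch (not taken from the report itself): a linear limit state g = R - S with independent normal resistance R and load S, the HL-RF iteration for the most probable failure point, and the FORM estimate Pf = Phi(-beta).

import numpy as np
from scipy.stats import norm

# Resistance R ~ N(30, 3), load S ~ N(20, 4); failure when g = R - S < 0.
mu = np.array([30.0, 20.0])
sd = np.array([3.0, 4.0])

def g(u):
    """Limit state written in standard normal space."""
    x = mu + sd * u
    return x[0] - x[1]

def grad_g(u, h=1e-6):
    """Forward-difference gradient (good enough for the sketch)."""
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))])

# HL-RF iteration for the most probable failure point (design point).
u = np.zeros(2)
for _ in range(50):
    grad = grad_g(u)
    u_new = (grad @ u - g(u)) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"safety index beta = {beta:.4f}")
print(f"FORM failure probability = {norm.cdf(-beta):.4e}")
# For this linear g the exact answer is Phi(-(mu_R - mu_S)/sqrt(sd_R^2 + sd_S^2)) = Phi(-2).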
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
The application of the probabilistic risk assessment methodology to a Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. The loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with losses due to Orbiter failure, external tank failure, and landing failure or error.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
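The effect described above can be reproduced with a small Monte Carlo sketch for a lognormal loss (an assumed example, not the article's derivation): the control threshold is set at the estimated (1 - p) quantile from a finite sample, and the expected realized exceedance frequency comes out larger than the nominal p.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def realized_exceedance(n_data=30, nominal_p=0.01, n_trials=20_000,
                        mu=0.0, sigma=1.0):
    """Average true exceedance frequency when the control threshold is set at
    the (1 - nominal_p) quantile of a lognormal fitted to n_data observations.
    True losses are lognormal(mu, sigma); the analyst only sees the sample."""
    z = norm.ppf(1.0 - nominal_p)
    logs = rng.normal(mu, sigma, size=(n_trials, n_data))
    mu_hat = logs.mean(axis=1)
    sigma_hat = logs.std(axis=1, ddof=1)
    log_threshold = mu_hat + sigma_hat * z
    # True probability of the loss exceeding each estimated threshold; the
    # result does not depend on the true mu and sigma (pivotal quantities).
    true_exceedance = 1.0 - norm.cdf((log_threshold - mu) / sigma)
    return true_exceedance.mean()

print("nominal failure probability: 0.0100")
print(f"expected realized failure probability: {realized_exceedance():.4f}")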
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
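A minimal sketch of the idea (a toy two-parameter failure set and a one-sided bounding set whose probability is known analytically; not the authors' bounding constructions): the failure probability factors as P(bounding set) times P(failure | bounding set), and only the conditional part needs sampling.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Toy failure set in two standard normal parameters (hypothetical):
# failure when u1 > 3 and u1 + 0.5*u2 > 3.5.
def failed(u):
    return (u[:, 0] > 3.0) & (u[:, 0] + 0.5 * u[:, 1] > 3.5)

# Analytic bounding set containing all failures: {u1 > 3}.
p_box = 1.0 - norm.cdf(3.0)

def sample_in_box(n):
    """Sample u1 from a standard normal conditioned on u1 > 3 (inverse CDF),
    and u2 unconditionally."""
    v = rng.uniform(norm.cdf(3.0), 1.0, size=n)
    u1 = norm.ppf(v)
    u2 = rng.normal(size=n)
    return np.column_stack([u1, u2])

n = 50_000
p_fail_given_box = failed(sample_in_box(n)).mean()
print(f"P(bounding set)           = {p_box:.3e}")
print(f"P(failure | bounding set) = {p_fail_given_box:.3f}")
print(f"P(failure)                ~ {p_box * p_fail_given_box:.3e}")
# A crude Monte Carlo with the same number of samples would see only a few
# failures here, giving a much noisier estimate and wider confidence intervals.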
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to the application failure probability. In an optical grid, when a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement, improve network resource utilization, and realize a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
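A toy version of the task-based calculation (invented per-task failure probabilities; a task protected by a backup fails only if both executions fail, and the application fails if any task fails):

# Hypothetical per-task failure probabilities (computing resource plus the
# lightpaths it needs), for a small DAG-structured application.
task_fail = {"t1": 0.010, "t2": 0.020, "t3": 0.015, "t4": 0.030}

def application_failure_probability(task_fail, backed_up=()):
    """Tasks fail independently; a backed-up task fails only if both its
    primary and backup executions fail (same probability assumed)."""
    p_ok = 1.0
    for task, p in task_fail.items():
        p_eff = p ** 2 if task in backed_up else p
        p_ok *= (1.0 - p_eff)
    return 1.0 - p_ok

print("no backup:        ",
      f"{application_failure_probability(task_fail):.4f}")
print("backup t4 only:   ",
      f"{application_failure_probability(task_fail, backed_up={'t4'}):.4f}")
print("backup all tasks: ",
      f"{application_failure_probability(task_fail, backed_up=set(task_fail)):.6f}")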
Solid motor diagnostic instrumentation. [Design of self-contained instrumentation]
NASA Technical Reports Server (NTRS)
Nakamura, Y.; Arens, W. E.; Wuest, W. S.
1973-01-01
A review of typical surveillance and monitoring practices followed during the flight phases of representative solid-propellant upper stages and apogee motors was conducted to evaluate the need for improved flight diagnostic instrumentation on future spacecraft. The capabilities of the flight instrumentation package were limited to the detection of whether or not the solid motor was the cause of failure and to the identification of probable primary failure modes. Conceptual designs of self-contained flight instrumentation packages capable of meeting these requirements were generated and their performance, typical cost, and unit characteristics determined. Comparisons of a continuous real time and a thresholded hybrid design were made on the basis of performance, mass, power, cost, and expected life. The results of this analysis substantiated the feasibility of a self-contained independent flight instrumentation module as well as the existence of performance margins by which to exploit growth option applications.
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Lahm, T. G.
1976-01-01
The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed sensor failure-detection voting logic is investigated along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, computer, control circuitry, and input/output circuitry. Gyro/accelerometer data is crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished with a probability approaching 100 percent of the time, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.
The probability of containment failure by direct containment heating in Zion. Supplement 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, M.M.; Allen, M.D.; Stamps, D.W.
1994-12-01
Supplement 1 of NUREG/CR-6075 brings to closure the DCH issue for the Zion plant. It includes the documentation of the peer review process for NUREG/CR-6075, the assessments of four new splinter scenarios defined in working group meetings, and modeling enhancements recommended by the working groups. In the four new scenarios, consistency of the initial conditions has been implemented by using insights from systems-level codes. SCDAP/RELAP5 was used to analyze three short-term station blackout cases with different leak rates. In all three cases, the hot leg or surge line failed well before the lower head, and thus the primary system depressurized to a point where DCH was no longer considered a threat. However, these calculations were continued to lower head failure in order to gain insights that were useful in establishing the initial and boundary conditions. The most useful insights are that the RCS pressure is low at vessel breach, metallic blockages in the core region do not melt and relocate into the lower plenum, and melting of upper plenum steel is correlated with hot leg failure. The SCDAP/RELAP5 output was used as input to CONTAIN to assess the containment conditions at vessel breach. The containment-side conditions predicted by CONTAIN are similar to those originally specified in NUREG/CR-6075.
HOW to Recognize and Reduce Tree Hazards in Recreation Sites
Kathyn Robbins
1986-01-01
An understanding of the many factors affecting tree hazards in recreation sites will help predict which trees are most likely to fail. Hazard tree management deals with probabilities of failure. This guide, written for anyone involved in management or maintenance of public use areas that contain trees, is intended to help minimize the risk associated with hazard trees...
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit, where the failures considered are caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability is determined using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation that is generated is calculated. The fault trees for BLEVE and jet fire on the storage tank component were constructed, yielding failure probabilities of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After this customization, the failure probability was reduced to 4.22 × 10^-6.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
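A small synthetic sketch of why imputing status from model probabilities outperforms categorizing them (the cohort and the model probabilities are simulated here, not the study's data):

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical model-derived probabilities of severe renal failure for a cohort.
n = 50_000
p_model = rng.beta(0.3, 6.0, size=n)        # skewed: most patients are low risk
true_status = rng.random(n) < p_model       # "truth" consistent with the model

true_prevalence = true_status.mean()

# (a) Categorize the probability estimate (call it disease if p >= 0.5).
prev_categorized = (p_model >= 0.5).mean()

# (b) Impute disease status by Bernoulli draws from the model probabilities,
#     repeated over bootstrap replicates and averaged.
reps = 200
prev_imputed = np.mean([(rng.random(n) < p_model).mean() for _ in range(reps)])

print(f"true prevalence        : {true_prevalence:.4f}")
print(f"categorized (p >= 0.5) : {prev_categorized:.4f}  (biased low)")
print(f"bootstrap imputation   : {prev_imputed:.4f}")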
Development of STS/Centaur failure probabilities liftoff to Centaur separation
NASA Technical Reports Server (NTRS)
Hudson, J. M.
1982-01-01
The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.
Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel
NASA Astrophysics Data System (ADS)
Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung
2017-04-01
The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, which is located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties in return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle, and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL; (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system; (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system; and (iv) multiple WLs, multiple SLs and multiple sublinks in each SL, with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail, and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and also on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.
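For the simplest configuration (one WL, one SL), PLOAS can be sketched with a Monte Carlo calculation over the aleatory failure temperatures; the linear temperature ramps and the failure-temperature distributions below are invented for illustration:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Time-dependent temperatures in the fire (simple ramps, values invented):
# T_wl(t) = 300 + 5 t, T_sl(t) = 300 + 4 t, in kelvin with t in minutes.
# For a monotone ramp the failure time is just the ramp inverted at the
# component's (random) failure temperature.
def failure_time(fail_temp, ambient=300.0, rate=1.0):
    return np.maximum(fail_temp - ambient, 0.0) / rate

n = 200_000
# Aleatory variability in the failure temperatures (values invented).
fail_temp_wl = norm(900.0, 50.0).rvs(n, random_state=rng)
fail_temp_sl = norm(1000.0, 80.0).rvs(n, random_state=rng)

t_wl = failure_time(fail_temp_wl, rate=5.0)   # weak link heats faster
t_sl = failure_time(fail_temp_sl, rate=4.0)   # strong link heats more slowly

# Probability of loss of assured safety: the strong link fails before the
# weak link has deactivated the system.
ploas = np.mean(t_sl <= t_wl)
print(f"PLOAS ~ {ploas:.2e}")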
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, M.M.; Allen, M.D.; Klamerus, E.W.
1996-02-01
This report uses the scenarios described in NUREG/CR-6075 and NUREG/CR-6075, Supplement 1, to address the direct containment heating (DCH) issue for all Westinghouse plants with large dry or subatmospheric containments. DCH is considered resolved if the conditional containment failure probability (CCFP) is less than 0.1. Loads versus strength evaluations of the CCFP were performed for each plant using plant-specific information. The DCH issue is considered resolved for a plant if a screening phase results in a CCFP less than 0.01, which is more stringent than the overall success criterion. If the screening phase CCFP for a plant is greater than 0.01, then refined containment loads evaluations must be performed and/or the probability of high pressure at vessel breach must be analyzed. These analyses could be used separately or could be integrated together to recalculate the CCFP for an individual plant to reduce the CCFP to meet the overall success criterion of less than 0.1. The CCFPs for all of the Westinghouse plants with dry containments were less than 0.01 at the screening phase, and thus, the DCH issue is resolved for these plants based on containment loads alone. No additional analyses are required.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
Probabilistic safety analysis of earth retaining structures during earthquakes
NASA Astrophysics Data System (ADS)
Grivas, D. A.; Souflis, C.
1982-07-01
A procedure is presented for determining the probability of failure of earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate for the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure for the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred to a retaining wall during an earthquake to provide an improved estimate for its probability of failure during future seismic events.
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
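A compact sketch of the combined framework described above (toy high- and low-fidelity models, Gaussian biasing densities fitted to low-fidelity failure samples, and inverse-variance fusion of the resulting unbiased estimators, which is the minimal-variance combination if the estimators are treated as independent; the real fusion framework also accounts for correlations):

import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(6)

def model_hi(x):                       # "expensive" model (stand-in)
    return 2.5 - x[:, 0] - 0.3 * x[:, 1] ** 2
def model_lo_a(x):                     # cheap approximations (stand-ins)
    return 2.5 - x[:, 0]
def model_lo_b(x):
    return 2.4 - 0.9 * x[:, 0] - 0.25 * x[:, 1] ** 2

nominal = mvn(mean=[0.0, 0.0], cov=np.eye(2))

def is_failure(g):                     # failure: model output below zero
    return g < 0.0

def mfis_estimate(model_lo, n_explore=100_000, n_is=2_000):
    """Multi-fidelity importance sampling: the low-fidelity model explores the
    space and supplies a biasing density; the high-fidelity model is only
    evaluated on the n_is importance samples."""
    x = nominal.rvs(n_explore, random_state=rng)
    x_fail = x[is_failure(model_lo(x))]
    biasing = mvn(mean=x_fail.mean(axis=0), cov=np.cov(x_fail.T) + 0.01 * np.eye(2))
    z = biasing.rvs(n_is, random_state=rng)
    w = nominal.pdf(z) / biasing.pdf(z)
    vals = is_failure(model_hi(z)) * w
    return vals.mean(), vals.var(ddof=1) / n_is

estimates = [mfis_estimate(model_lo_a), mfis_estimate(model_lo_b)]
# Optimal fusion: inverse-variance weights minimize the fused variance.
weights = np.array([1.0 / (v + 1e-30) for _, v in estimates])
weights /= weights.sum()
fused = sum(w * p for w, (p, _) in zip(weights, estimates))
print("individual estimates:", [f"{p:.3e}" for p, _ in estimates])
print(f"fused failure probability estimate: {fused:.3e}")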
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
Risk-based decision making to manage water quality failures caused by combined sewer overflows
NASA Astrophysics Data System (ADS)
Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.
2017-12-01
Regulatory authorities set environmental permit conditions for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of paying penalties or receiving negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models. Hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failures caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction in the flow contribution of catchment surfaces, are selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability compared to the failure probability is tested by comparing the solutions obtained from mean-pbf and mean-pf optimizations.
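The buffered failure probability used above can be estimated directly from samples: it is the tail fraction whose conditional mean (superquantile) equals the failure level, and it is never smaller than the ordinary failure probability. A sketch with invented spill-count data (not the study's model output):

import numpy as np

rng = np.random.default_rng(7)

def failure_probability(y):
    """Ordinary failure probability P(y > 0), with failure defined as y > 0."""
    return np.mean(y > 0)

def buffered_failure_probability(y):
    """Empirical buffered failure probability (in the spirit of Rockafellar &
    Royset, 2010): the tail fraction whose conditional mean equals the failure
    level 0.  Always >= the ordinary failure probability."""
    s = np.sort(y)[::-1]                             # descending
    tail_means = np.cumsum(s) / np.arange(1, len(s) + 1)
    # Largest k such that the mean of the top-k samples is still >= 0.
    k = np.searchsorted(-tail_means, 0.0, side='right')
    return k / len(s)

# Hypothetical decision alternative: y = (annual count of CSO spill events
# exceeding the ammonia threshold) - (permitted number of such events).
y = rng.poisson(2.0, size=20_000) - 4.0
print(f"failure probability     pf  = {failure_probability(y):.3f}")
print(f"buffered failure prob.  pbf = {buffered_failure_probability(y):.3f}")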
Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System
NASA Technical Reports Server (NTRS)
Dec, John A.; Mitcheltree, Robert A.
2002-01-01
The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics based tools that establish the relationship between engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.
Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method
Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan
2018-01-01
Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting life loss. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined to be suitable for Chinese dams from previous studies. The proposed method and model are applied to one reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. The risk analysis of the dam failure has essential significance for reducing dam failure probability and improving dam risk management level. PMID:29710824
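A minimal sketch of how branch probabilities expressed as triangular fuzzy numbers can be propagated through an event tree using interval arithmetic on alpha-cuts. The tree structure and the fuzzy branch probabilities below are illustrative, not the Jiangxi case study.

```python
import numpy as np

def alpha_cut(tfn, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number tfn = (low, mode, high)."""
    a, m, b = tfn
    return np.array([a + alpha * (m - a), b - alpha * (b - m)])

def interval_mul(u, v):
    """Product of two non-negative intervals."""
    return np.array([u[0] * v[0], u[1] * v[1]])

# Illustrative expert-judged branch probabilities along one event-tree path.
p_flood   = (1e-3, 2e-3, 5e-3)   # annual initiating-event probability
p_overtop = (0.05, 0.10, 0.20)   # overtopping given the flood
p_breach  = (0.20, 0.30, 0.50)   # dam breach given overtopping

for alpha in (0.0, 0.5, 1.0):
    cut = alpha_cut(p_flood, alpha)
    for branch in (p_overtop, p_breach):
        cut = interval_mul(cut, alpha_cut(branch, alpha))
    print(f"alpha={alpha:.1f}: dam-failure probability in [{cut[0]:.2e}, {cut[1]:.2e}]")
```

At alpha = 1 the intervals collapse to the modal values and the result reduces to the crisp event-tree product; lower alpha levels carry the expert-judgement spread through to the failure probability.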
A methodology for estimating risks associated with landslides of contaminated soil into rivers.
Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars
2014-02-15
Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslide in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities for failure are low; however, if a landslide occurs the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.
The nutritional and metabolic support of heart failure in the intensive care unit.
Meltzer, Joseph S; Moitra, Vivek K
2008-03-01
Heart failure and cardiovascular disease are common causes of morbidity and mortality, contributing to many ICU admissions. Nutritional deficiencies have been associated with the development and worsening of chronic heart failure. Nutritional and metabolic support may improve outcomes in critically ill patients with heart failure. This review analyzes the role of this support in the acute care setting of the ICU. Cardiac cachexia is a complex pathophysiologic process. It is characterized by inflammation and anabolic-catabolic imbalance. Nutritional supplements containing selenium, vitamins and antioxidants may provide needed support to the failing myocardium. Evidence shows that there is utility in intensive insulin therapy in the critically ill. Finally, there is an emerging metabolic role for HMG-CoA reductase inhibition, or statin therapy, in the treatment of heart failure. Shifting the metabolic milieu from catabolic to anabolic, reducing free radicals, and quieting inflammation in addition to caloric supplementation may be the key to nutritional support in the heart failure patient. Tight glycemic control with intensive insulin therapy plays an expanding role in the care of the critically ill. Glucose-insulin-potassium therapy probably does not improve the condition of the patient with heart failure or acute myocardial infarction.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung
2017-12-01
The reduction and re-oxidation of the anode have significant effects on the integrity of a solid oxide fuel cell (SOFC) sealed by glass-ceramic (GC). The mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established in this paper to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM), and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability owing to the volumetric contraction and the increase in porosity. The re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of the anode, cathode, electrolyte and GC all increase to 1, which is mainly due to the large linear strain rather than the decrease in porosity. The cathode and electrolyte fail as soon as their linear strains reach about 0.03% and 0.07%, respectively. Therefore, the re-oxidation should be controlled to ensure the integrity, and a lower re-oxidation temperature can decrease the stress and failure probability.
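The abstract does not give the exact Weibull formulation used; a minimal sketch of the standard volume-integrated weakest-link form often used to post-process FEM stresses is shown below. The element stresses, volumes, and Weibull parameters are illustrative assumptions, not the paper's FEM output.

```python
import numpy as np

def weibull_failure_probability(stress, volume, sigma_0, m, v_0=1.0):
    """Weakest-link Weibull failure probability from element stresses.

    P_f = 1 - exp( - sum_e (V_e / V_0) * (sigma_e / sigma_0)^m ),
    where only tensile stresses contribute.
    """
    s = np.clip(stress, 0.0, None)            # compressive stress does not contribute
    risk = np.sum((volume / v_0) * (s / sigma_0) ** m)
    return 1.0 - np.exp(-risk)

# Illustrative element-wise maximum principal stresses (MPa) and volumes (mm^3)
# for an anode layer before and after re-oxidation.
vol = np.full(1000, 0.02)
stress_reduced = np.random.default_rng(2).normal(35.0, 5.0, 1000)
stress_reoxidised = stress_reduced + 60.0      # re-oxidation raises the tensile stress

for label, s in [("reduced", stress_reduced), ("re-oxidised", stress_reoxidised)]:
    pf = weibull_failure_probability(s, vol, sigma_0=120.0, m=7.0)
    print(f"{label:12s} P_f = {pf:.3f}")
```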
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
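A minimal sketch of the random-set bounding idea, without the subset simulation acceleration used in the paper: each draw of the random set is an interval (focal element) obtained by inverting the two bounding CDFs of a p-box at the same uniform sample, and the lower and upper failure probabilities come from counting focal elements that lie entirely inside, versus those that merely touch, the failure domain. The limit state, p-box, and monotonicity assumption are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def g(x):
    """Limit state: failure when g(x) <= 0 (illustrative, monotone decreasing in x)."""
    return 2.5 - x

# P-box for the input: normal CDF with an interval-valued mean (epistemic uncertainty).
mu_lo, mu_hi, sigma = 0.6, 1.0, 0.5

# Each Monte Carlo draw of the random set is an interval (focal element).
alpha = rng.random(200_000)
x_left = norm.ppf(alpha, mu_lo, sigma)
x_right = norm.ppf(alpha, mu_hi, sigma)

# Because g is decreasing, g is smallest at the right endpoint and largest at the left.
upper_pf = np.mean(g(x_right) <= 0.0)   # plausibility: interval touches the failure domain
lower_pf = np.mean(g(x_left) <= 0.0)    # belief: interval lies entirely inside it
print(f"failure probability bounds: [{lower_pf:.2e}, {upper_pf:.2e}]")
```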
Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Hunt, Ronderio LaDavis
In early years of operation, emergency diesel generators had a minimal rate of demand failures. Emergency diesel generators are designed to operate as a backup when the main source of electricity has been disrupted. More recently, EDGs (emergency diesel generators) have been failing at NPPs (nuclear power plants) around the United States, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem of concern to the nuclear industry: a rate of roughly 1 EDG demand failure per year in 1997 rose to an excessive event of 4 EDG demand failures in a single year in 2011. To estimate when the next such excessive event might occur and to identify its possible causes, two analyses were conducted: a statistical analysis and a root cause analysis. In the statistical analysis, an extreme event probability approach was applied to determine the next occurrence year of an excessive event as well as the probability of that excessive event occurring. In the root cause analysis, the potential causes of the excessive event were investigated by evaluating the EDG manufacturers, aging, policy changes and maintenance practices, and failure components, and by examining the correlation between the demand failure data and historical data. Final results from the statistical analysis gave the expectation of an excessive event occurring within a fixed range of probability, and a wider range of probability from the extreme event probability approach. The root cause analysis showed that the demand failure data followed historical statistics for the EDG manufacturer, aging, and policy changes/maintenance practices, but indicated that the failure components were a possible cause of the excessive event. The conclusions showed that predicting the next excessive demand failure year, its probability, and the next occurrence year of such failures with an acceptable confidence level was difficult, but that this type of failure will likely not be a 100 year event. It was notable that, as of 2005, the majority of the EDG demand failures occurred within the main components. The overall analysis of this study indicated, from the computed percentages, that it would be appropriate to state that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will better determine the return period of the excessive event, once it has happened a second time, by applying the extreme event probability approach.
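The thesis abstract does not spell out the extreme event probability approach; a minimal sketch under the common simplifying assumption that yearly demand-failure counts are Poisson is shown below, giving the probability of a year with four or more failures and the implied return period. The baseline rate is an assumption taken loosely from the 1997 figure quoted above.

```python
from scipy.stats import poisson

lam = 1.0          # assumed long-run average EDG demand failures per year
k_extreme = 4      # the 2011 excessive event: four demand failures in one year

# Probability that a given year has at least k_extreme demand failures,
# and the corresponding return period in years.
p_exceed = poisson.sf(k_extreme - 1, lam)     # P(N >= 4)
return_period = 1.0 / p_exceed
print(f"P(N >= {k_extreme}) = {p_exceed:.4f},  return period ~ {return_period:.0f} years")

# Probability of seeing at least one such year within the next 30 years.
p_within_30 = 1.0 - (1.0 - p_exceed) ** 30
print(f"P(at least one such year in 30 yr) = {p_within_30:.2f}")
```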
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
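A minimal discrete sketch of the game structure described above: the attacker chooses how many components to attack and the provider how many to reinforce, and each utility is the product of a survival probability term and a cost term. The survival model and cost coefficients are illustrative stand-ins, not the paper's differential-condition formulation; a pure-strategy Nash equilibrium is found by brute-force best-response checking.

```python
import numpy as np

N = 10                                   # components available to attack / reinforce
c_attack, c_defend = 0.06, 0.05          # per-component cost coefficients (illustrative)

def survival(x, y):
    """Illustrative infrastructure survival probability after x components are
    attacked and y are reinforced (not the paper's model)."""
    return np.exp(-0.5 * x / (1.0 + 0.8 * y))

def u_provider(x, y):
    return survival(x, y) * (1.0 - c_defend * y)        # product-form utility

def u_attacker(x, y):
    return (1.0 - survival(x, y)) * (1.0 - c_attack * x)

# Brute-force search for pure-strategy Nash equilibria on the discrete grid.
U_p = np.array([[u_provider(x, y) for y in range(N + 1)] for x in range(N + 1)])
U_a = np.array([[u_attacker(x, y) for y in range(N + 1)] for x in range(N + 1)])
equilibria = [(x, y)
              for x in range(N + 1) for y in range(N + 1)
              if U_a[x, y] >= U_a[:, y].max() - 1e-12      # attacker cannot improve
              and U_p[x, y] >= U_p[x, :].max() - 1e-12]    # provider cannot improve
print("pure Nash equilibria (x attacked, y reinforced):", equilibria)
print("survival probability at equilibrium:",
      [round(float(survival(x, y)), 3) for x, y in equilibria])
```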
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
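The 27-state RSDIMU model is not reproduced here; a minimal sketch of the same idea on a three-state Markov model (fully operational, one failure detected and isolated, system failed) shows how mission reliability is obtained from the generator matrix. The failure rate and detection/isolation coverage are illustrative.

```python
import numpy as np
from scipy.linalg import expm

lam = 1.0e-4      # per-hour sensor failure rate (illustrative)
cov = 0.98        # probability a failure is detected and correctly isolated

# States: 0 = fully operational, 1 = one failure detected/isolated (fail-op), 2 = system failed.
Q = np.array([
    [-2 * lam, 2 * lam * cov, 2 * lam * (1 - cov)],
    [0.0,      -lam,          lam                ],
    [0.0,      0.0,           0.0                ],   # absorbing failed state
])

p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):                 # mission times in hours
    p = p0 @ expm(Q * t)                        # state probabilities at time t
    print(f"t = {t:6.0f} h   reliability = {1.0 - p[2]:.6f}")
```

Undetected or mis-isolated failures (the 1 - cov branch) send the system directly to the failed state, which is how false alarms and imperfect isolation degrade reliability in this kind of model.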
[Comments on the use of the "life-table method" in orthopedics].
Hassenpflug, J; Hahne, H J; Hedderich, J
1992-01-01
In the description of long term results, e.g. of joint replacements, survivorship analysis is used increasingly in orthopaedic surgery. Survivorship analysis is more useful for describing the frequency of failure than are global statements in percentages. The relative probability of failure for fixed intervals is derived from the number of patients followed up and the frequency of failure. The complementary probabilities of success are linked in their temporal sequence, thus representing the probability of survival at a fixed endpoint. A necessary condition for the use of this procedure is the exact definition of the moment and manner of failure. How to establish survivorship tables is described.
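A minimal actuarial life-table sketch of the procedure described: interval failure probabilities are computed from the number at risk (with withdrawals counted as half-exposed, a standard convention), and the complementary success probabilities are chained to give survival at each endpoint. The cohort numbers are illustrative.

```python
# Actuarial (life-table) survivorship of an implant cohort.
# Each row: (interval, number entering, failures/revisions, withdrawals i.e. lost or censored).
intervals = [
    ("0-1 yr", 250, 3, 10),
    ("1-2 yr", 237, 4, 15),
    ("2-3 yr", 218, 5, 30),
    ("3-4 yr", 183, 6, 40),
]

survival = 1.0
for label, entering, failures, withdrawn in intervals:
    at_risk = entering - withdrawn / 2.0          # half-withdrawal correction
    q = failures / at_risk                        # interval probability of failure
    survival *= (1.0 - q)                         # chain the complementary probabilities
    print(f"{label}: q = {q:.3f}, cumulative survival = {survival:.3f}")
```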
EPRI-NASA Cooperative Project on Stress Corrosion Cracking of Zircaloys. [nuclear fuel failures
NASA Technical Reports Server (NTRS)
Cubicciotti, D.; Jones, R. L.
1978-01-01
Examinations of the inside surface of irradiated fuel cladding from two reactors show the Zircaloy cladding is exposed to a number of aggressive substances, among them iodine, cadmium, and iron-contaminated cesium. Iodine-induced stress corrosion cracking (SCC) of well characterized samples of Zircaloy sheet and tubing was studied. Results indicate that a threshold stress must be exceeded for iodine SCC to occur. The existence of a threshold stress indicates that crack formation probably is the key step in iodine SCC. Investigation of the crack formation process showed that the cracks responsible for SCC failure nucleated at locations in the metal surface that contained higher than average concentrations of alloying elements and impurities. A four-stage model of iodine SCC is proposed based on the experimental results and the relevance of the observations to pellet cladding interaction failures is discussed.
Risk ranking of LANL nuclear material storage containers for repackaging prioritization.
Smith, Paul H; Jordan, Hans; Hoffman, Jenifer A; Eller, P Gary; Balkey, Simon
2007-05-01
Safe handling and storage of nuclear material at U.S. Department of Energy facilities relies on the use of robust containers to prevent container breaches and subsequent worker contamination and uptake. The U.S. Department of Energy has no uniform requirements for packaging and storage of nuclear materials other than those declared excess and packaged to DOE-STD-3013-2000. This report describes a methodology for prioritizing a large inventory of nuclear material containers so that the highest risk containers are repackaged first. The methodology utilizes expert judgment to assign respirable fractions and reactivity factors to accountable levels of nuclear material at Los Alamos National Laboratory. A relative risk factor is assigned to each nuclear material container based on a calculated dose to a worker due to a failed container barrier and a calculated probability of container failure based on material reactivity and container age. This risk-based methodology is being applied at LANL to repackage the highest risk materials first and, thus, accelerate the reduction of risk to nuclear material handlers.
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
Accident hazard evaluation and control decisions on forested recreation sites
Lee A. Paine
1971-01-01
Accident hazard associated with trees on recreation sites is inherently concerned with probabilities. The major factors include the probabilities of mechanical failure and of target impact if failure occurs, the damage potential of the failure, and the target value. Hazard may be evaluated as the product of these factors; i.e., expected loss during the current...
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In order to address the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence-degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed from the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influence degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
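A minimal sketch of the failure-influence step: a directed cascading-failure graph is encoded as an adjacency matrix, and a PageRank-style power iteration on the transposed graph scores how strongly each component's failure propagates to the rest. The component names and graph are illustrative, not the machine-center data.

```python
import numpy as np

# Adjacency of an illustrative cascading-failure graph: A[i, j] = 1 means a failure
# of component i can propagate to component j.
components = ["spindle", "tool changer", "axis drive", "coolant", "controller"]
A = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

def pagerank(adj, d=0.85, tol=1e-10):
    """Power-iteration PageRank; dangling nodes are spread uniformly."""
    n = adj.shape[0]
    rowsum = adj.sum(axis=1, keepdims=True)
    M = np.divide(adj, rowsum, out=np.full_like(adj, 1.0 / n), where=rowsum > 0)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Ranking on the transposed graph scores how strongly a component's failure
# influences the others (edges reversed: "is influenced by" becomes "influences").
influence = pagerank(A.T)
for name, score in sorted(zip(components, influence), key=lambda t: -t[1]):
    print(f"{name:12s} {score:.3f}")
```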
Estimating earthquake-induced failure probability and downtime of critical facilities.
Porter, Keith; Ramer, Kyle
2012-01-01
Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated the failure probabilities of jack-up units in the framework of time dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. The stochastic response was then analyzed with Matlab codes developed on a personal computer to determine the failure probability for excessive deck displacement in the framework of time dependent reliability analysis. Results from the study indicated that the failure probability increases with the severity of the sea state, i.e. with a longer return period. Although the results agree with those of a study of a similar jack-up model using a time independent method at higher values of the maximum allowable deck displacement, they are in contrast at lower values of the criterion, where that study reported that the failure probability decreases with increasing severity of the sea state.
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these approaches in a framework based on a simple, generalized rate change formulation and applied it to both to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, where the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
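A minimal sketch of the conditional-probability calculation referred to above: given a recurrence-time distribution (a lognormal PDF is one common choice) and the time elapsed since the last large earthquake, the conditional probability of failure in the next interval is the usual hazard-style ratio. A positive static stress step is represented crudely as a clock advance; the recurrence parameters and times are illustrative.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative recurrence-time model: mean ~ 200 yr, coefficient of variation ~ 0.5.
mean_T, cov = 200.0, 0.5
sigma = np.sqrt(np.log(1.0 + cov**2))
scale = mean_T / np.exp(0.5 * sigma**2)
recurrence = lognorm(s=sigma, scale=scale)

def conditional_probability(t_elapsed, dT):
    """P(failure in (t, t + dT] | no failure by t)."""
    F = recurrence.cdf
    return (F(t_elapsed + dT) - F(t_elapsed)) / (1.0 - F(t_elapsed))

t, dT = 150.0, 30.0
p0 = conditional_probability(t, dT)
# Crude representation of a static stress step: advance the fault's clock by an
# equivalent time shift (stress change divided by the tectonic stressing rate).
clock_advance = 12.0   # years, illustrative
p1 = conditional_probability(t + clock_advance, dT)
print(f"conditional probability, unperturbed: {p0:.3f}; after stress step: {p1:.3f}")
```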
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
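A minimal sketch of the sequential probability ratio test on a scalar sensor residual: the cumulative log-likelihood ratio between a no-failure hypothesis (zero-mean residual) and a failure hypothesis (biased residual) is compared against the Wald thresholds. The noise level, bias, and error rates are illustrative, not the shuttle entry modeling constants.

```python
import numpy as np

rng = np.random.default_rng(4)

sigma = 1.0        # residual noise standard deviation (illustrative)
bias = 1.5         # residual bias under the failure hypothesis (illustrative)
alpha, beta = 1e-4, 1e-3                     # false-alarm and missed-detection rates
A = np.log((1 - beta) / alpha)               # declare failure above this threshold
B = np.log(beta / (1 - alpha))               # declare healthy below this threshold

def sprt(residuals):
    llr = 0.0
    for k, r in enumerate(residuals, start=1):
        # log-likelihood ratio increment for N(bias, sigma^2) versus N(0, sigma^2)
        llr += (bias / sigma**2) * (r - bias / 2.0)
        if llr >= A:
            return "failure detected", k
        if llr <= B:
            return "no failure", k
    return "undecided", len(residuals)

healthy = rng.normal(0.0, sigma, 200)
failed = rng.normal(bias, sigma, 200)        # failure present from the first sample
print(sprt(healthy))
print(sprt(failed))
```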
Nouri Gharahasanlou, Ali; Mokhtarei, Ashkan; Khodayarei, Aliasqar; Ataei, Mohammad
2014-01-01
Evaluating and analyzing risk in the mining industry is a new approach for improving machinery performance. Reliability, safety, and maintenance management based on risk analysis can enhance the overall availability and utilization of mining technological systems. This study investigates the failure occurrence probability of the crushing and mixing bed hall department at the Azarabadegan Khoy cement plant by using the fault tree analysis (FTA) method. The results of the analysis for a 200 h operating interval show that the probability of failure occurrence for the crushing system, the conveyor system, and the crushing and mixing bed hall department is 73, 64, and 95 percent, respectively, and the conveyor belt subsystem is found to be the most failure-prone subsystem. Finally, maintenance is proposed as a method to control and prevent the occurrence of failures. PMID:26779433
Determination of Turbine Blade Life from Engine Field Data
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Litt, Jonathan S.; Hendricks, Robert C.; Soditus, Sherry M.
2013-01-01
It is probable that no two engine companies determine the life of their engines or their components in the same way or apply the same experience and safety factors to their designs. Knowing the failure mode that is most likely to occur minimizes the amount of uncertainty and simplifies failure and life analysis. Available data regarding failure mode for aircraft engine blades, while favoring low-cycle, thermal-mechanical fatigue (TMF) as the controlling mode of failure, are not definitive. Sixteen high-pressure turbine (HPT) T-1 blade sets were removed from commercial aircraft engines that had been commercially flown by a single airline and inspected for damage. Each set contained 82 blades. The damage was cataloged into three categories related to their mode of failure: (1) TMF, (2) Oxidation/erosion (O/E), and (3) Other. From these field data, the turbine blade life was determined as well as the lives related to individual blade failure modes using Johnson-Weibull analysis. A simplified formula for calculating turbine blade life and reliability was formulated. The L10 blade life was calculated to be 2427 cycles (11 077 hr). The resulting blade life attributed to O/E equaled that attributed to TMF. The category that contributed most to blade failure was Other. If there were no blade failures attributed to O/E and TMF, the overall blade L10 life would increase approximately 11 to 17 percent.
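A minimal sketch of the Weibull life estimate that underlies an L10 figure: a two-parameter Weibull is fitted to failure cycles by median-rank regression and the life at 90 percent reliability is read off the fitted parameters. The failure times below are illustrative and complete (uncensored), whereas the report's Johnson-Weibull analysis handles censored field data.

```python
import numpy as np

# Illustrative blade failure times in engine cycles (not the report's field data).
cycles = np.array([2900., 3400., 3900., 4300., 4700., 5200., 5800., 6500., 7400., 8800.])

# Median-rank regression fit of a 2-parameter Weibull:
# ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
n = len(cycles)
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)          # Benard's median-rank approximation
x = np.log(np.sort(cycles))
y = np.log(-np.log(1.0 - ranks))
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)

L10 = eta * (-np.log(0.9)) ** (1.0 / beta)               # life at 90 % reliability
print(f"Weibull slope beta = {beta:.2f}, scale eta = {eta:.0f} cycles, L10 = {L10:.0f} cycles")
```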
Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, B.G.; Richards, R.E.; Reece, W.J.
1992-10-01
This Reference Guide contains instructions on how to install and use Version 3.5 of the NRC-sponsored Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR). The NUCLARR data management system is contained in compressed files on the floppy diskettes that accompany this Reference Guide. NUCLARR is comprised of hardware component failure data (HCFD) and human error probability (HEP) data, both of which are available via a user-friendly, menu driven retrieval system. The data may be saved to a file in a format compatible with IRRAS 3.0 and commercially available statistical packages, or used to formulate log-plots and reports of data retrieval and aggregation findings.
AGR-3/4 Irradiation Test Predictions using PARFUME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skerjanc, William Frances; Collin, Blaise Paul
2016-03-01
PARFUME, a fuel performance modeling code used for high temperature gas reactors, was used to model the AGR-3/4 irradiation test using as-run physics and thermal hydraulics data. The AGR-3/4 test is the combined third and fourth planned irradiations of the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. The AGR-3/4 test train consists of twelve separate and independently controlled and monitored capsules. Each capsule contains four compacts filled with both uranium oxycarbide (UCO) unaltered “driver” fuel particles and UCO designed-to-fail (DTF) fuel particles. The DTF fraction was specified to be 1×10^-2. This report documents the calculations performed to predict failure probability of TRISO-coated fuel particles during the AGR-3/4 experiment. In addition, this report documents the calculated source term from both the driver fuel and DTF particles. The calculations include the modeling of the AGR-3/4 irradiation that occurred from December 2011 to April 2014 in the Advanced Test Reactor (ATR) over a total of ten ATR cycles including seven normal cycles, one low power cycle, one unplanned outage cycle, and one Power Axial Locator Mechanism cycle. Results show that failure probabilities are predicted to be low, resulting in zero fuel particle failures per capsule. The primary fuel particle failure mechanism occurred as a result of localized stresses induced by the calculated IPyC cracking. Assuming 1,872 driver fuel particles per compact, failure probability calculated by PARFUME leads to no predicted particle failure in the AGR-3/4 driver fuel. In addition, the release fraction of fission products Ag, Cs, and Sr were calculated to vary depending on capsule location and irradiation temperature. The maximum release fraction of Ag occurs in Capsule 7 reaching up to 56% for the driver fuel and 100% for the DTF fuel. The release fraction of the other two fission products, Cs and Sr, are much smaller and in most cases less than 1% for the driver fuel. The notable exception occurs in Capsule 7 where the release fraction for Cs and Sr reach up to 0.73% and 2.4%, respectively, for the driver fuel. For the DTF fuel in Capsule 7, the release fraction for Cs and Sr are estimated to be 100% and 5%, respectively.
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but more importantly to compare different operational and management options, determine the lowest risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including the actual ISS on-orbit failures, to enhance the confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model; in other words, the failure of a spring within a valve was considered a failure of the valve itself.
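A minimal sketch of the Bayesian updating described: a prior on an ORU failure rate (for example, from industry or generic data) is combined with the observed number of on-orbit failures over the accumulated operating time. With a gamma prior and Poisson failures the update is conjugate. The prior parameters, failure count, and hours are illustrative assumptions.

```python
from scipy.stats import gamma

# Gamma prior on the ORU failure rate (per hour), e.g. from generic industry data.
a0, b0 = 2.0, 4.0e5         # shape, rate  ->  prior mean = a0 / b0 = 5e-6 per hour

# Observed on-orbit experience for this ORU type (illustrative).
k_failures = 3
hours = 9.0e5

# Conjugate gamma-Poisson update: posterior is Gamma(a0 + k, b0 + T).
a1, b1 = a0 + k_failures, b0 + hours
post = gamma(a=a1, scale=1.0 / b1)
print(f"prior mean     = {a0 / b0:.2e} per hour")
print(f"posterior mean = {post.mean():.2e} per hour, 90% interval = "
      f"[{post.ppf(0.05):.2e}, {post.ppf(0.95):.2e}]")
```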
NASA Technical Reports Server (NTRS)
McCarty, John P.; Lyles, Garry M.
1997-01-01
Propulsion system quality is defined in this paper as having high reliability; that is, quality is a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is the difference between high and low quality systems. Failures can be described at three levels: the system failure (which is the detectable end of a failure), the failure mode (which is the failure process), and the failure cause (which is the start). Failure causes can be evaluated and classified by type. The results of typing flight history failures show that most failures are in unrecognized modes and result from human error or noise, i.e., failures are when engineers learn how things really work. Although the study is based on US launch vehicles, a sampling of failures from other countries indicates the finding has broad application. The parameters of the design of a propulsion system are not single valued, but have dispersions associated with the manufacturing of parts. Many tests are needed to find failures if the dispersions are large relative to tolerances, which could contribute to the large number of failures in unrecognized modes.
NASA Technical Reports Server (NTRS)
Scalzo, F.
1983-01-01
Sensor redundancy management (SRM) requires a system which will detect failures and reconstruct avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed which will print out tables of values for the cumulative probability of being in the domain of failure; the system reliability; and the false alarm probability, given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
Probability of failure prediction for step-stress fatigue under sine or random stress
NASA Technical Reports Server (NTRS)
Lambert, R. G.
1979-01-01
A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.
Probabilistic analysis on the failure of reactivity control for the PWR
NASA Astrophysics Data System (ADS)
Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.
2018-02-01
The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis, performed by both deterministic and probabilistic methods, is used to ensure that each of these functions is fulfilled by the design. The analysis of reactivity control is important because it affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributors to its failure for a PWR design. The analysis is carried out by determining the intermediate events which cause the failure of reactivity control. Furthermore, the basic events are determined by a deductive method using fault tree analysis. The AP1000 is used as the object of the research. The probability data for component failures and human errors used in the analysis are collected from IAEA, Westinghouse, NRC and other published documents. The results show that there are six intermediate events which can cause the failure of reactivity control: uncontrolled rod bank withdrawal at low power or at full power, malfunction of boron dilution, misalignment of control rod withdrawal, malfunction resulting in an improper position of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The causes of failure that are affected by human factors are boron dilution, misalignment of control rod withdrawal, and malfunction resulting in an improper position of a fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control on the PWR is still within the IAEA criteria.
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
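A minimal sketch of the statistical question raised above: after n load-versus-capacity simulations with k observed failures, a classical (Clopper-Pearson) confidence interval on the failure probability shows how the statistical accuracy tightens as n grows. The load and capacity distributions are illustrative, not the report's models.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)

def run_simulations(n):
    """Toy RISMC-style comparison: failure when the sampled load >= sampled capacity."""
    load = rng.normal(400.0, 40.0, n)        # e.g. peak temperature load, illustrative
    capacity = rng.normal(600.0, 50.0, n)    # e.g. failure temperature, illustrative
    return int(np.sum(load >= capacity))

def clopper_pearson(k, n, conf=0.95):
    a = (1.0 - conf) / 2.0
    lo = 0.0 if k == 0 else beta.ppf(a, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1.0 - a, k + 1, n - k)
    return lo, hi

for n in (100, 1_000, 10_000, 100_000):
    k = run_simulations(n)
    lo, hi = clopper_pearson(k, n)
    print(f"n = {n:6d}: p_hat = {k / n:.2e}, 95% CI = [{lo:.2e}, {hi:.2e}]")
```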
Posttest destructive examination of the steel liner in a 1:6-scale reactor containment model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, L.D.
A 1:6-scale model of a nuclear reactor containment building was built and tested at Sandia National Laboratories as part of a research program sponsored by the Nuclear Regulatory Commission to investigate containment behavior under overpressurization. The overpressure test was terminated due to leakage from a large tear in the steel liner. A limited destructive examination of the liner and anchorage system was conducted to gain information about the failure mechanism and is described. Sections of liner were removed in areas where liner distress was evident or where large strains were indicated by instrumentation during the test. The condition of the liner, anchorage system, and concrete for each of the regions that were investigated is described. The probable cause of the observed posttest condition of the liner is discussed.
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
14 CFR 417.224 - Probability of failure analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Probability of failure analysis. 417.224 Section 417.224 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION ADMINISTRATION... phase of normal flight or when any anomalous condition exhibits the potential for a stage or its debris...
Determination of Turbine Blade Life from Engine Field Data
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Litt, Jonathan S.; Hendricks, Robert C.; Soditus, Sherry M.
2012-01-01
It is probable that no two engine companies determine the life of their engines or their components in the same way or apply the same experience and safety factors to their designs. Knowing the failure mode that is most likely to occur minimizes the amount of uncertainty and simplifies failure and life analysis. Available data regarding failure mode for aircraft engine blades, while favoring low-cycle, thermal mechanical fatigue as the controlling mode of failure, are not definitive. Sixteen high-pressure turbine (HPT) T-1 blade sets were removed from commercial aircraft engines that had been commercially flown by a single airline and inspected for damage. Each set contained 82 blades. The damage was cataloged into three categories related to their mode of failure: (1) Thermal-mechanical fatigue, (2) Oxidation/Erosion, and (3) "Other." From these field data, the turbine blade life was determined as well as the lives related to individual blade failure modes using Johnson-Weibull analysis. A simplified formula for calculating turbine blade life and reliability was formulated. The L10 blade life was calculated to be 2427 cycles (11 077 hr). The resulting blade life attributed to oxidation/erosion equaled that attributed to thermal-mechanical fatigue. The category that contributed most to blade failure was Other. If there were no blade failures attributed to oxidation/erosion and thermal-mechanical fatigue, the overall blade L10 life would increase approximately 11 to 17 percent.
Sundaram, Aparna; Vaughan, Barbara; Kost, Kathryn; Bankole, Akinrinola; Finer, Lawrence; Singh, Susheela; Trussell, James
2017-03-01
Contraceptive failure rates measure a woman's probability of becoming pregnant while using a contraceptive. Information about these rates enables couples to make informed contraceptive choices. Failure rates were last estimated for 2002, and social and economic changes that have occurred since then necessitate a reestimation. To estimate failure rates for the most commonly used reversible methods in the United States, data from the 2006-2010 National Survey of Family Growth were used; some 15,728 contraceptive use intervals, contributed by 6,683 women, were analyzed. Data from the Guttmacher Institute's 2008 Abortion Patient Survey were used to adjust for abortion underreporting. Kaplan-Meier methods were used to estimate the associated single-decrement probability of failure by duration of use. Failure rates were compared with those from 1995 and 2002. Long-acting reversible contraceptives (the IUD and the implant) had the lowest failure rates of all methods (1%), while condoms and withdrawal carried the highest probabilities of failure (13% and 20%, respectively). However, the failure rate for the condom had declined significantly since 1995 (from 18%), as had the failure rate for all hormonal methods combined (from 8% to 6%). The failure rate for all reversible methods combined declined from 12% in 2002 to 10% in 2006-2010. These broad-based declines in failure rates reverse a long-term pattern of minimal change. Future research should explore what lies behind these trends, as well as possibilities for further improvements. © 2017 The Authors. Perspectives on Sexual and Reproductive Health published by Wiley Periodicals, Inc., on behalf of the Guttmacher Institute.
Mechanical failure probability of glasses in Earth orbit
NASA Technical Reports Server (NTRS)
Kinser, Donald L.; Wiedlocher, David E.
1992-01-01
Results of five years of earth-orbital exposure on the mechanical properties of glasses indicate that radiation effects on the mechanical properties of the glasses examined are less than the probable error of measurement. During the 5 year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations which were not subjected to effective mechanical testing; hence only limited information on their influence upon mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system Si, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of SoS given the failure of an individual system. We formulate the problem of ensuring the survival of SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
Estimation of probability of failure for damage-tolerant aerospace structures
NASA Astrophysics Data System (ADS)
Halbert, Keith
The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
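As a rough, self-contained illustration of the flight-by-flight Monte Carlo idea described above (a sketch only: the crack-growth law, probability-of-detection curve, and all parameter values below are invented for illustration and are not taken from the dissertation or its R package):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_component(n_flights, inspection_every=2000):
    """One Monte Carlo life: toy crack growth with periodic inspections."""
    a = rng.lognormal(mean=np.log(0.5), sigma=0.3)        # initial crack size, mm (assumed)
    a_crit = 25.0                                          # critical crack size, mm (assumed)
    growth = rng.lognormal(mean=np.log(2e-3), sigma=0.2)   # base growth, mm per flight (assumed)
    for flight in range(1, n_flights + 1):
        a += growth * (a / 0.5) ** 0.5                     # toy size-dependent growth law
        if a >= a_crit:
            return ("failure", flight)
        if flight % inspection_every == 0:
            pod = 1.0 - np.exp(-a / 5.0)                   # assumed probability-of-detection curve
            if rng.random() < pod:
                return ("repair", flight)
    return ("survived", n_flights)

results = [simulate_component(20_000) for _ in range(5_000)]
n = len(results)
print("P(failure per life) ~", sum(r[0] == "failure" for r in results) / n)
print("P(repair per life)  ~", sum(r[0] == "repair" for r in results) / n)
```

Such a routine plays the role of the baseline estimator mentioned in the abstract: slow but easy to reason about, and usable as a reference for faster particle-based approximations.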
A new algorithm for finding survival coefficients employed in reliability equations
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Flehinger, B. J.
1973-01-01
Product reliabilities are predicted from past failure rates and a reasonable estimate of future failure rates. The algorithm is used to calculate the probability that the product will function correctly. It sums the probabilities of each survival pattern, multiplied by the number of permutations for that pattern, over all possible ways in which the product can survive.
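A hedged sketch of the pattern-summation idea (the abstract does not state the system structure, so a k-out-of-n arrangement of identical units is assumed here purely for illustration):

```python
from math import comb

def survival_probability(n, k, p_unit):
    """P(system survives) for an assumed k-out-of-n system of identical units.

    Sums, over every surviving pattern (j units up, j >= k), the probability of
    one such pattern times the number of permutations of that pattern.
    """
    return sum(comb(n, j) * p_unit**j * (1 - p_unit)**(n - j) for j in range(k, n + 1))

print(survival_probability(n=5, k=3, p_unit=0.9))  # e.g. 3-out-of-5 redundancy
```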
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Rodrigues, Samantha A; Thambyah, Ashvin; Broom, Neil D
2015-03-01
The annulus-endplate anchorage system performs a critical role in the disc, creating a strong structural link between the compliant annulus and the rigid vertebrae. Endplate failure is thought to be associated with disc herniation, a recent study indicating that this failure mode occurs more frequently than annular rupture. The aim was to investigate the structural principles governing annulus-endplate anchorage and the basis of its strength and mechanisms of failure. Loading experiments were performed on ovine lumbar motion segments designed to induce annulus-endplate failure, followed by macro- to micro- to fibril-level structural analyses. The study was funded by a doctoral scholarship from our institution. Samples were loaded to failure in three modes: torsion using intact motion segments, in-plane tension of the anterior annulus-endplate along one of the oblique fiber angles, and axial tension of the anterior annulus-endplate. The anterior region was chosen for its ease of access. Decalcification was used to investigate the mechanical influence of the mineralized component. Structural analysis was conducted on both the intact and failed samples using differential interference contrast optical microscopy and scanning electron microscopy. Two main modes of anchorage failure were observed: failure at the tidemark and failure at the cement line. Samples subjected to axial tension contained more tidemark failures compared with those subjected to torsion and in-plane tension. Samples decalcified before testing frequently contained damage at the cement line, this being more extensive than in fresh samples. Analysis of the intact samples at their anchorage sites revealed that annular subbundle fibrils penetrate beyond the cement line to a limited depth and appear to merge with those in the vertebral and cartilaginous endplates. Annulus-endplate anchorage is more vulnerable to failure in axial tension than in torsion and in-plane tension, a vulnerability probably due to acute fiber bending at the soft-hard interface of the tidemark. This finding is consistent with evidence showing that flexion, which induces a similar pattern of axial tension, increases the risk of herniation involving endplate failure. The study also highlights the important strengthening role of calcification at this junction and provides new evidence of a fibril-based form of structural integration across the cement line. Copyright © 2015 Elsevier Inc. All rights reserved.
Indicator of reliability of power grids and networks for environmental monitoring
NASA Astrophysics Data System (ADS)
Shaptsev, V. A.
2017-10-01
The energy supply of mining enterprises includes, in particular, power networks. Environmental monitoring relies on the data network between observers and facilitators. Weather and operating conditions change randomly over time. Temperature, humidity, wind strength and other stochastic processes interact in different segments of the power grid. The article presents analytical expressions for the probability of failure of the power grid as a whole or of a particular segment. These expressions can contain one or more parameters of the operating conditions, simulated by the Monte Carlo method. In some cases, an explicit mathematical formula suitable for computer calculation can be obtained. In conclusion, an expression involving the probability characteristic function of one random parameter, for example wind, temperature or humidity, is given. The parameters of this characteristic function can be obtained from retrospective or dedicated observations (measurements).
Stochastic Formal Correctness of Numerical Algorithms
NASA Technical Reports Server (NTRS)
Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick
2009-01-01
We provide a framework to bound the probability that accumulated errors never exceeded a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve for a long time. For the first two applications, we compute the number of bits that remain continuously significant with a probability of failure of about one in a billion, where worst case analysis considers that no significant bit remains. We use PVS because such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
Fuzzy-information-based robustness of interconnected networks against attacks and failures
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai
2016-09-01
Cascading failure is fatal in applications and its investigation is essential; it therefore became a focal topic in the field of complex networks in the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its utilization of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attack ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the network model under assortative and random coupling increases with the coupling probability. However, for disassortative coupling, a critical phenomenon exists with respect to the coupling probability. In addition, a critical value of the attack-information accuracy that affects network robustness is observed. Finally, as a practical example, the interconnected AS-level Internet in South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides some guidelines for preventing cascading failures in the design of architecture and optimization of real-world interconnected networks.
ERIC Educational Resources Information Center
Brookhart, Susan M.; And Others
1997-01-01
Process Analysis is described as a method for identifying and measuring the probability of events that could cause the failure of a program, resulting in a cause-and-effect tree structure of events. The method is illustrated through the evaluation of a pilot instructional program at an elementary school. (SLD)
ERIC Educational Resources Information Center
Dougherty, Michael R.; Sprenger, Amber
2006-01-01
This article introduces 2 new sources of bias in probability judgment, discrimination failure and inhibition failure, which are conceptualized as arising from an interaction between error prone memory processes and a support theory like comparison process. Both sources of bias stem from the influence of irrelevant information on participants'…
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, T.L.; Simonen, F.A.
1992-05-01
Probabilistic fracture mechanics analysis is a major element of comprehensive probabilistic methodology on which current NRC regulatory requirements for pressurized water reactor vessel integrity evaluation are based. Computer codes such as OCA-P and VISA-II perform probabilistic fracture analyses to estimate the increase in vessel failure probability that occurs as the vessel material accumulates radiation damage over the operating life of the vessel. The results of such analyses, when compared with limits of acceptable failure probabilities, provide an estimation of the residual life of a vessel. Such codes can be applied to evaluate the potential benefits of plant-specific mitigating actions designed to reduce the probability of failure of a reactor vessel. 10 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, Masatoshi; Komura, Toshiyuki; Hirotani, Tsutomu
1995-12-01
Annual failure probabilities of buildings and equipment were roughly evaluated for two fusion-reactor-like buildings, with and without seismic base isolation, in order to examine the effectiveness of the base isolation system with regard to siting issues. The probabilities were calculated considering nonlinearity and rupture of the isolators. While the probabilities of building failure were almost equal for the two buildings on the same site, the functional failure probabilities for equipment showed that the base-isolated building had higher reliability than the non-isolated building. Even if the base-isolated building alone were located in a higher seismic hazard area, it could compete favorably with the ordinary one in reliability of equipment.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
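A minimal numerical sketch of one point made above, namely that the maximum likelihood mean return time for a Poisson model should include the open intervals before the first and after the last dated deposit (the deposit ages and record span below are invented placeholders, not the IODP data):

```python
import numpy as np

# Hypothetical deposit ages (ka before present) and sediment record span (assumed values)
event_ages = np.array([12.0, 24.5, 31.0, 47.8, 60.2])   # dated mass-transport deposits
record_start, record_end = 70.0, 0.0                     # oldest and youngest recovered sediment, ka

n_events = len(event_ages)
observed_span = record_start - record_end                # includes the open intervals at both ends

# Poisson-process MLE: rate = events / total time, so mean return time = total time / events
rate_mle = n_events / observed_span
print("mean return time ~ %.1f kyr" % (1.0 / rate_mle))

# A naive arithmetic mean of the closed inter-event times ignores the open intervals
closed = np.diff(np.sort(event_ages))
print("naive mean of closed intervals ~ %.1f kyr" % closed.mean())
```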
Quantitative risk analysis of oil storage facilities in seismic areas.
Fabbrocino, Giovanni; Iervolino, Iunio; Orlando, Francesca; Salzano, Ernesto
2005-08-31
Quantitative risk analysis (QRA) of industrial facilities has to take into account multiple hazards threatening critical equipment. Nevertheless, engineering procedures able to evaluate quantitatively the effect of seismic action are not well established. Indeed, relevant industrial accidents may be triggered by loss of containment following ground shaking or other relevant natural hazards, either directly or through cascade effects ('domino effects'). The issue of integrating structural seismic risk into quantitative probabilistic seismic risk analysis (QpsRA) is addressed in this paper by a representative study case regarding an oil storage plant with a number of atmospheric steel tanks containing flammable substances. Empirical seismic fragility curves and probit functions, properly defined both for building-like and non-building-like industrial components, have been crossed with the outcomes of probabilistic seismic hazard analysis (PSHA) for a test site located in south Italy. Once the seismic failure probabilities have been quantified, consequence analysis has been performed for those events which may be triggered by the loss of containment following seismic action. Results are combined by means of a specifically developed code in terms of local risk contour plots, i.e. the contour line for the probability of fatal injuries at any point (x, y) in the analysed area. Finally, a comparison with QRA obtained by considering only process-related top events is reported for reference.
Lin, Chun-Li; Chang, Yen-Hsiang; Pa, Che-An
2009-10-01
This study evaluated the risk of failure for an endodontically treated premolar with mesio-occlusodistal palatal (MODP) preparation and 3 different computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic restoration configurations. Three 3-dimensional finite element (FE) models designed with CAD/CAM ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with FE analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for the endocrown restoration were the lowest relative to the other 2 restorations. Weibull analysis revealed that the individual failure probabilities in the endocrown enamel, dentin, and luting cement were markedly lower than those for the onlay and conventional crown restorations. The overall failure probabilities were 27.5%, 1%, and 1% for onlay, endocrown, and conventional crown restorations, respectively, under normal occlusal conditions. This numeric investigation suggests that endocrown and conventional crown restorations for endodontically treated premolars with MODP preparation present similar longevity.
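A hedged sketch of how a Weibull function can be combined with finite element stress output to obtain an overall failure probability (the weakest-link combination shown here is a common textbook form; the Weibull parameters and element stresses are invented and are not those of the cited study):

```python
import numpy as np

def weibull_failure_probability(stresses, volumes, sigma_0=300.0, m=10.0):
    """Combine element-level Weibull failure probabilities into an overall value.

    stresses: maximum principal stress per finite element (MPa)
    volumes:  element volumes (mm^3); sigma_0, m: Weibull scale and modulus (assumed).
    """
    p_elem = 1.0 - np.exp(-volumes * (np.asarray(stresses) / sigma_0) ** m)
    return 1.0 - np.prod(1.0 - p_elem)            # weakest link: fails if any element fails

stresses = np.array([120.0, 180.0, 240.0, 90.0])  # illustrative FE output
volumes = np.array([0.8, 1.1, 0.6, 1.3])
print("overall failure probability ~", weibull_failure_probability(stresses, volumes))
```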
NASA Astrophysics Data System (ADS)
Vicuña, Cristián Molina; Höweler, Christoph
2017-12-01
The use of acoustic emission (AE) in machine failure diagnosis has increased over recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelops the AE bursts present in the raw AE signal in a triangular shape. The constructed signal - which we call TriSignal - also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the information of the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
Consistency of FMEA used in the validation of analytical procedures.
Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M
2011-02-20
In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define their own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified the failure modes above the 90th percentile of RPN values as failure modes needing urgent corrective action; failure modes falling between the 75th and 90th percentiles of RPN values were identified as failure modes needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action respectively, with two being commonly identified. Of the failure modes needing necessary corrective action, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA is always carried out under the supervision of an experienced FMEA facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
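A small sketch of the RPN ranking and percentile cut-offs described above (the failure modes and the S, O, D ratings are hypothetical examples, not the teams' actual tables):

```python
import numpy as np

# Hypothetical FMEA table: one row per failure mode, with (S, O, D) ratings on a team-defined scale
ratings = {
    "wrong mobile phase":   (8, 3, 4),
    "column degradation":   (6, 5, 3),
    "MS detector drift":    (7, 4, 6),
    "sample mislabelling":  (9, 2, 7),
    "integration error":    (5, 6, 5),
}

rpn = {mode: s * o * d for mode, (s, o, d) in ratings.items()}   # Risk Priority Number = S*O*D
values = np.array(list(rpn.values()))
urgent_cut = np.percentile(values, 90)
necessary_cut = np.percentile(values, 75)

for mode, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    if value >= urgent_cut:
        action = "urgent corrective action"
    elif value >= necessary_cut:
        action = "necessary corrective action"
    else:
        action = "no action required"
    print(f"{mode:20s} RPN={value:3d} -> {action}")
```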
Stochastic damage evolution in textile laminates
NASA Technical Reports Server (NTRS)
Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.
1993-01-01
A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximal strain failure criterion. Three modes of failure, i.e. fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter of fiber orientation on cell properties is discussed. Weave influence on damage accumulation is illustrated with the example of a Kevlar/epoxy laminate.
Differential reliability : probabilistic engineering applied to wood members in bending-tension
Stanley K. Suddarth; Frank E. Woeste; William L. Galligan
1978-01-01
Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...
Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification
Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang
2016-01-01
Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Accordingly, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, the occurrence probability of failure is needed in PSA. In view of the insufficient failure data for biomass gasification, occurrence probabilities of failure that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved approach that uses expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. With these safety measures, the theoretical occurrence probabilities in one year of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values. PMID:27463975
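A minimal sketch of weighted aggregation of expert-elicited triangular/trapezoidal fuzzy numbers followed by centroid defuzzification (the expert opinions and weights are invented; the paper's subsequent conversion of the defuzzified possibility into an occurrence probability is not reproduced here):

```python
import numpy as np

# Each expert gives a trapezoidal fuzzy possibility (a, b, c, d) on [0, 1];
# a triangular number is the special case b == c. Weights reflect assumed expert credibility.
expert_numbers = np.array([
    [0.02, 0.04, 0.04, 0.08],   # expert 1 (triangular)
    [0.01, 0.03, 0.05, 0.09],   # expert 2 (trapezoidal)
    [0.03, 0.05, 0.05, 0.10],   # expert 3 (triangular)
])
weights = np.array([0.5, 0.3, 0.2])

# Weighted aggregation of the four defining points
a, b, c, d = weights @ expert_numbers

# Centroid defuzzification of a trapezoidal membership function
centroid = (d**2 + c**2 + c*d - a**2 - b**2 - a*b) / (3.0 * (d + c - a - b))
print("aggregated fuzzy possibility (a,b,c,d):", np.round([a, b, c, d], 4))
print("defuzzified value:", round(centroid, 4))
```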
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun
2016-09-20
Accelerated aging tests under electrical stress are conducted for one type of LED lamp, and the differences between online and offline tests of luminous flux degradation are studied in this paper. Switching between the two test modes is achieved with an adjustable AC stabilized-voltage power source. Experimental results show that the exponential fit of the luminous flux degradation in online tests has a higher goodness of fit for most lamps, and that the degradation rate of the luminous flux in online tests is always lower than that in offline tests. Bayes estimation and the Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and the reliability of the lamps under the rated voltage of 220 V is then estimated by use of the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5% at failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
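A hedged sketch of the inverse power law extrapolation step (the lifetimes at the accelerated voltages below are invented placeholders for the Weibull/Bayes estimates described in the abstract):

```python
import numpy as np

# Hypothetical lifetime estimates (hours) at accelerated voltages (volts)
voltages = np.array([260.0, 280.0, 300.0])
lifetimes = np.array([9000.0, 6200.0, 4400.0])   # e.g. time to a chosen failure probability

# Inverse power law: L = C / V**n  ->  log L = log C - n * log V (linear fit in log space)
slope, intercept = np.polyfit(np.log(voltages), np.log(lifetimes), 1)
n, C = -slope, np.exp(intercept)

rated = 220.0
print(f"fitted exponent n ~ {n:.2f}")
print(f"extrapolated lifetime at {rated:.0f} V ~ {C / rated**n:,.0f} h")
```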
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph with a Gaussian shape parameterized by rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
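A toy Monte Carlo counterpart to the approach described above (a sketch only: the rainstorm parameter distributions and the simple failure criterion standing in for the hydrodynamic model are assumptions, not the paper's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Synthetic rainstorm parameters (distributions assumed for illustration only)
depth = rng.exponential(scale=8.0, size=n)              # storm depth, mm
duration = rng.gamma(shape=2.0, scale=3.0, size=n)      # storm duration, hours
peak = depth / duration * rng.lognormal(0.0, 0.3, n)    # peak intensity, mm/h, loosely tied to the others

# Stand-in for the hydrodynamic model: surcharge if both peak intensity and depth are high
failure = (peak > 20.0) & (depth > 25.0)

p_f = failure.mean()
storms_per_year = 120                                   # assumed mean number of rain events per year
print("per-storm failure probability ~", p_f)
print("return period ~ %.1f years" % (1.0 / (p_f * storms_per_year)))
```

FORM replaces this brute-force sampling with a search for the most probable failure point in standard normal space, which is why it needs far fewer model runs for the same rainstorm parameterization.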
van der Burg-de Graauw, N; Cobbaert, C M; Middelhoff, C J F M; Bantje, T A; van Guldener, C
2009-05-01
B-type natriuretic peptide (BNP) and its inactive counterpart NT-proBNP can help to identify or rule out heart failure in patients presenting with acute dyspnoea. It is not well known whether measurement of these peptides can be omitted in certain patient groups. We conducted a prospective observational study of 221 patients presenting with acute dyspnoea at the emergency department. The attending physicians estimated the probability of heart failure by clinical judgement. NT-proBNP was measured, but not reported. An independent panel made a final diagnosis of all available data including NT-proBNP level and judged whether and how NT-proBNP would have altered patient management. NT-proBNP levels were highest in patients with heart failure, alone or in combination with pulmonary failure. Additive value of NT-proBNP was present in 40 of 221 (18%) of the patients, and it mostly indicated that a more intensive treatment for heart failure would have been needed. Clinical judgement was an independent predictor of additive value of NT-proBNP with a maximum at a clinical probability of heart failure of 36%. NT-proBNP measurement has additive value in a substantial number of patients presenting with acute dyspnoea, but can possibly be omitted in patients with a clinical probability of heart failure of >70%.
ORCHID - a computer simulation of the reliability of an NDE inspection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moles, M.D.C.
1987-03-01
CANDU pressurized heavy water reactors contain several hundred horizontally-mounted zirconium alloy pressure tubes. Following a pressure tube failure, a pressure tube inspection system called CIGARette was rapidly designed, manufactured and put into operation. Defects called hydride blisters were found to be the cause of the failure, and were detected using a combination of eddy current and ultrasonic scans. A number of improvements were made to CIGARette during the inspection period. The ORCHID computer program models the operation of the delivery system and the eddy current and ultrasonic systems by imitating the on-reactor decision-making procedure. ORCHID predicts that during the early stage of development, less than one blistered tube in three would be detected, while less than one in two would be detected in the middle development stage. However, ORCHID predicts that during the late development stage, the probability of detection will be over 90%, primarily due to the inclusion of axial ultrasonic scans (a procedural modification). Rotational and axial slip could severely reduce the probability of detection. Comparison of CIGARette's inspection data with ORCHID's predictions indicates that the latter are compatible with the actual inspection results, though the numbers are small and the data uncertain. It should be emphasized that the CIGARette system has been essentially replaced with the much more reliable CIGAR system.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, due to the transport of corrosive fluid or gas and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are estimated from the ILI data through the Bayesian updating method with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to consider defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models can predict damage quantities reasonably well and that a strong correlation between defect depth and length exists. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (termed a sub-system), where each sub-system is considered as a series system of detected and newly generated defects within that sub-system. A sensitivity analysis is also performed to determine the growth-model parameters to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. Repair is conducted when, after an inspection, the failure probability for any of the described failure modes exceeds a pre-defined probability threshold. Moreover, this study also investigates, through a parametric study, the impact of repair threshold values and of the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared to the inspection and failure costs.
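A compact sketch of Bayesian updating of a power-law growth model by MCMC, in the spirit of the methodology above (a toy random-walk Metropolis sampler with invented ILI depth data; the study's actual hierarchical model, priors, and defect matching are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matched ILI defect depths (% wall thickness) at two inspections
t_obs = np.array([8.0, 8.0, 8.0, 13.0, 13.0, 13.0])      # years since installation
d_obs = np.array([12.0, 9.5, 14.0, 21.0, 16.5, 24.5])    # measured depth

def log_post(theta):
    """Log-posterior for power-law growth d(t) = a * t**b with lognormal error."""
    log_a, b, log_sigma = theta
    a, sigma = np.exp(log_a), np.exp(log_sigma)
    if not (0.1 < b < 2.0):
        return -np.inf                                    # weakly informative prior on the exponent
    resid = np.log(d_obs) - np.log(a * t_obs**b)
    return -0.5 * np.sum((resid / sigma) ** 2) - len(resid) * np.log(sigma)

# Random-walk Metropolis sampling of (log a, b, log sigma)
theta = np.array([np.log(1.0), 1.0, np.log(0.2)])
lp = log_post(theta)
samples = []
for i in range(20_000):
    prop = theta + rng.normal(scale=[0.1, 0.05, 0.1])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i > 5_000:                                         # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
a_post, b_post = np.exp(samples[:, 0]), samples[:, 1]
print("posterior mean growth exponent b ~", b_post.mean().round(2))
print("predicted mean depth at 20 years ~", (a_post * 20.0**b_post).mean().round(1), "% wt")
```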
Alani, Amir M.; Faramarzi, Asaad
2015-01-01
In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industry concerned (e.g., water companies) to better plan their resources by providing accurate prediction of the remaining safe life of cementitious sewer pipes. PMID:26068092
NASA Astrophysics Data System (ADS)
Zhang, H.; Guan, Z. W.; Wang, Q. Y.; Liu, Y. J.; Li, J. K.
2018-05-01
The effects of microstructure and stress ratio on the high cycle fatigue behavior of the nickel superalloy Nimonic 80A were investigated. Stress ratios of 0.1, 0.5 and 0.8 were chosen for fatigue tests performed at a frequency of 110 Hz. Cleavage failure was observed, and three competing crack initiation modes were identified using a scanning electron microscope, classified as surface without facets, surface with facets, and subsurface with facets. With increasing stress ratio from 0.1 to 0.8, the occurrence probability of surface and subsurface initiation with facets increased, reaching its maximum value at R = 0.5, while the probability of surface initiation without facets decreased. The effect of microstructure on the fatigue fracture behavior at different stress ratios was also observed and discussed. Based on the Goodman diagram, it was concluded that the fatigue strength at 50% probability of failure at R = 0.1, 0.5 and 0.8 is lower than the modified Goodman line.
Deviation from Power Law Behavior in Landslide Phenomenon
NASA Astrophysics Data System (ADS)
Li, L.; Lan, H.; Wu, Y.
2013-12-01
Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that the size distribution of landslides is characterized by a power law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power law behavior for small landslides, remains a mystery. In this contribution, we grouped the forces applied on landslide bodies into two categories: 1) the forces proportional to the volume of the failure mass (gravity and friction), and 2) the forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the failure volume to failure surface area ratio must exceed a corresponding threshold to guarantee a failure. Assuming all landslides share a uniform shape, which means the volume to surface area ratio increases regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end can be defined. However, in realistic landslide phenomena, where heterogeneities of landslide shape and mechanical configuration exist, a simple cutoff of the landslide volume distribution does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume to surface area ratio with respect to landslide volume, with which the probability that the volume to surface area ratio exceeds the threshold can be estimated as a function of landslide volume. An experiment based on empirical data showed that this probability can cause the power law distribution of landslide volume to roll over at the small-size end. We therefore propose that the constraints on the failure volume to failure surface area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power law behavior in the landslide phenomenon. The figure shows that a rollover of the landslide size distribution at the small-size end is produced when the probability that V/S (the failure volume to failure surface area ratio of a landslide) exceeds the mechanical threshold is applied to the power law distribution of landslide volume.
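A toy numerical experiment illustrating the proposed mechanism (the volume distribution, the shape exponent relating surface area to volume, the scatter, and the threshold are all assumed for illustration): sampling volumes from a pure power law and keeping only those whose volume-to-surface ratio exceeds a threshold produces a rollover at the small-size end.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Landslide volumes from a pure power law (Pareto), in arbitrary units with V >= 1
V = rng.pareto(a=1.4, size=n) + 1.0

# For a fixed shape, S scales as V**(2/3); heterogeneity of geometry adds lognormal scatter
shape_scatter = rng.lognormal(mean=0.0, sigma=0.35, size=n)
ratio = V / (V ** (2.0 / 3.0) * shape_scatter)      # failure volume / failure surface area

# Failure is only realized when V/S exceeds an assumed mechanical threshold (cohesion vs gravity)
observed = V[ratio > 1.2]

# Compare counts per log-volume bin: the realized distribution rolls over at small volumes
bins = np.logspace(0, 3, 40)
raw, _ = np.histogram(V, bins=bins)
obs, _ = np.histogram(observed, bins=bins)
for lo, r, o in zip(bins[:-1][:8], raw[:8], obs[:8]):
    print(f"V >= {lo:6.1f}: all={r:7d}  realized={o:7d}")
```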
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions not covered by spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inversely related to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
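Back-of-the-envelope arithmetic for the reasoning above (the beta-factor value is assumed for illustration; the abstract itself does not quote one):

```python
# Redundancy arithmetic for a multi-year mission (illustrative numbers only)
p_unit = 0.1            # each unit's failure probability over the mission
n_units = 3

# Ideal, fully independent triple redundancy
p_independent = p_unit ** n_units
print("independent redundancy:", p_independent)           # 1e-3, meets a 1-in-1000 target

# Identical units with a common-cause fraction (beta-factor style approximation)
beta = 0.05             # assumed fraction of failures that disable all identical units
p_common_cause = beta * p_unit + ((1 - beta) * p_unit) ** n_units
print("identical units with common cause ~", round(p_common_cause, 4))
```

With these assumed numbers the common-cause term (0.005) dominates the independent term, which is the quantitative sense in which typical common-cause levels defeat redundancy beyond two and motivate diverse rather than identical redundant systems.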
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for the individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
Cook, D A
2006-04-01
Models that estimate the probability of death of intensive care unit patients can be used to stratify patients according to the severity of their condition and to control for casemix and severity of illness. These models have been used for risk adjustment in quality monitoring, administration, management and research, and as an aid to clinical decision making. Models such as the Mortality Prediction Model family, SAPS II, APACHE II, APACHE III and the organ system failure models provide estimates of the probability of in-hospital death of ICU patients. This review examines methods to assess the performance of these models. The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of the probability of death reflects the true risk of death). These attributes should be assessed in existing models that predict the probability of patient mortality, and in any subsequent model that is developed for the purpose of estimating these probabilities. The literature contains a range of approaches for assessment, which are reviewed, and a survey of the methodologies used in studies of intensive care mortality models is presented. The systematic approach used by the Standards for Reporting Diagnostic Accuracy provides a framework to incorporate these theoretical considerations of model assessment, and recommendations are made for the evaluation and presentation of the performance of models that estimate the probability of death of intensive care patients.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
A simplified fragility analysis of fan type cable stayed bridges
NASA Astrophysics Data System (ADS)
Khan, R. A.; Datta, T. K.; Ahmad, S.
2005-06-01
A simplified fragility analysis of fan type cable stayed bridges using a Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge support is considered to be a risk-consistent response spectrum, which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system. The analysis provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridge. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising due to the variation in ground motion, material property, modeling, method of analysis, ductility factor and damage concentration effect. The probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three-span, double-plane, symmetrical fan type cable stayed bridge with a total span of 689 m is used as an illustrative example. The fragility curves for bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has a considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations gives a smaller probability of failure than ground motion with a very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
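A minimal sketch of the First Order Second Moment step for a single limit state (capacity minus demand, both treated as normal with invented moments; the paper's actual fragility formulation involves many more uncertainty sources):

```python
from math import sqrt
from statistics import NormalDist

# FOSM estimate for one failure mode of the deck, with a linear limit state g = R - S
# (capacity minus seismic demand); the moments below are illustrative placeholders.
mu_R, sigma_R = 12.0e3, 1.8e3     # capacity: mean and standard deviation (kN*m)
mu_S, sigma_S = 7.5e3, 2.4e3      # demand from response-spectrum analysis (kN*m)

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # reliability index
p_f = NormalDist().cdf(-beta)                           # probability of failure

print(f"reliability index beta = {beta:.2f}, P_f = {p_f:.3e}")
```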
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the risk of failure probability. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.
Specifying design conservatism: Worst case versus probabilistic analysis
NASA Technical Reports Server (NTRS)
Miles, Ralph F., Jr.
1993-01-01
Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
NASA Technical Reports Server (NTRS)
Vesely, William E.; Colon, Alfredo E.
2010-01-01
Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.
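A small sketch of the classical binomial (zero-failure) demonstration count that the note argues is overly demanding when prior assurance and fault-detection probability are credited (the target reliabilities and confidence level are illustrative):

```python
from math import ceil, log

def zero_failure_tests(reliability, confidence):
    """Classical binomial result: number of consecutive failure-free tests needed to
    demonstrate `reliability` at `confidence`, crediting no prior assurance."""
    return ceil(log(1.0 - confidence) / log(reliability))

# The binomial demonstration count grows quickly with the reliability target
for R in (0.90, 0.99, 0.999):
    print(f"R = {R}: {zero_failure_tests(R, confidence=0.90)} failure-free tests at 90% confidence")
```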
Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy
2013-01-01
Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
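As a rough illustration of the two families of methods compared in that report, the sketch below estimates a failure probability for a hypothetical lamina limit state (normal strength and stress, failure when stress exceeds strength) by plain Monte Carlo and by importance sampling centered on the analytically known design point. All distributions and parameter values are invented for illustration and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lamina limit state: failure when applied stress S exceeds strength R.
mu_R, sd_R = 600.0, 60.0    # MPa, assumed normal strength
mu_S, sd_S = 400.0, 40.0    # MPa, assumed normal stress

def g(R, S):                # limit-state function, failure iff g < 0
    return R - S

# --- simple Monte Carlo ---
n = 200_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(g(R, S) < 0)

# --- importance sampling: sample around the most probable failure point ---
# For this linear limit state with normal variables the design point is closed form.
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)                       # reliability index
R_star = mu_R - sd_R**2 * (mu_R - mu_S) / (sd_R**2 + sd_S**2)
S_star = mu_S + sd_S**2 * (mu_R - mu_S) / (sd_R**2 + sd_S**2)

Ri = rng.normal(R_star, sd_R, n)
Si = rng.normal(S_star, sd_S, n)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

w = (normal_pdf(Ri, mu_R, sd_R) * normal_pdf(Si, mu_S, sd_S)) / (
     normal_pdf(Ri, R_star, sd_R) * normal_pdf(Si, S_star, sd_S))
pf_is = np.mean(w * (g(Ri, Si) < 0))

print(f"beta = {beta:.2f},  Pf (MC) = {pf_mc:.2e},  Pf (IS) = {pf_is:.2e}")
```

For this linear limit state the exact answer is Φ(−β), so both sampling estimates can be checked directly against the normal CDF.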
1981-05-15
Crane. is capable of imagining unicorns -- and we expect he is -- why does he find it relatively difficult to imagine himself avoiding a 30 minute...probability that the plan will succeed and to evaluate the risk of various causes of failure. We have suggested that the construction of scenarios is...expect that events will unfold as planned. However, the cumulative probability of at least one fatal failure could be overwhelmingly high even when
1986-04-07
34 Blackhol -" * Success/failure is too clear cut * The probability of failure is greater than the probability of success The Job Itself (59) • Does not...indeed, it is not -- or as one officer in the survey commented "a blackhole." USAHEC is a viable career opportunity; it is career enhancing; and
VHSIC/VHSIC-Like Reliability Prediction Modeling
1989-10-01
prediction would require knowledge of event statistics as well as device robustness. Additionally, although this is primarily a theoretical, bottom...Degradation in Section 5.3 P = Power PDIP = Plastic DIP P(f) = Probability of Failure due to EOS or ESD P(f|c) = Probability of Failure given Contact from an...the results of those stresses: Device Stress Part Number Power Dissipation Manufacturer Test Type Part Description Junction Temperature Package Type
Lodi, Sara; Phillips, Andrew; Fidler, Sarah; Hawkins, David; Gilson, Richard; McLean, Ken; Fisher, Martin; Post, Frank; Johnson, Anne M.; Walker-Nthenda, Louise; Dunn, David; Porter, Kholoud
2013-01-01
Background The development of HIV drug resistance and subsequent virological failure are often cited as potential disadvantages of early cART initiation. However, their long-term probability is not known, and neither is the role of duration of infection at the time of initiation. Methods Patients enrolled in the UK Register of HIV seroconverters were followed up from cART initiation to last HIV-RNA measurement. Through survival analysis we examined predictors of virological failure (two HIV-RNA measurements ≥400 c/mL while on cART), including CD4 count and HIV duration at initiation. We also estimated the cumulative probabilities of failure and drug resistance (from the available HIV nucleotide sequences) for early initiators (cART within 12 months of seroconversion). Results Of 1075 patients starting cART at a median (IQR) CD4 count of 272 (190, 370) cells/mm3 and HIV duration of 3 (1, 6) years, virological failure occurred in 163 (15%). Higher CD4 count at initiation, but not HIV infection duration at cART initiation, was independently associated with lower risk of failure (p=0.033 and 0.592, respectively). Among 230 patients initiating cART early, 97 (42%) discontinued it after a median of 7 months; cumulative probabilities of resistance and failure by 8 years were 7% (95% CI 4, 11) and 19% (13, 25), respectively. Conclusion Although the rate of discontinuation of early cART in our cohort was high, the long-term rate of virological failure was low. Our data do not support early cART initiation being associated with increased risk of failure and drug resistance. PMID:24086588
Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.
2013-01-01
In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
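For context on the magnitudes involved, the sketch below computes the probability that a received (255,223) Reed-Solomon word contains more symbol errors than the t = 16 the code can correct, assuming independent symbol errors. Splitting that event into detected decoding failures versus undetected errors, the ratio studied in the abstract, would additionally require the code's weight distribution, which is not modelled here.

```python
from math import comb

# Probability that a received (255,223) Reed-Solomon word contains more symbol
# errors than the code can correct (t = 16), under i.i.d. symbol errors.
n_sym, k_sym = 255, 223
t = (n_sym - k_sym) // 2          # = 16 correctable symbol errors

def prob_uncorrectable(p_symbol):
    """P(more than t of the n symbols are in error)."""
    return 1.0 - sum(comb(n_sym, e) * p_symbol**e * (1 - p_symbol)**(n_sym - e)
                     for e in range(t + 1))

for p in (0.01, 0.03, 0.05, 0.08):
    print(f"symbol error prob {p:.2f} -> P(uncorrectable) = {prob_uncorrectable(p):.3e}")
```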
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. Moreover, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
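A minimal sketch of the surrogate idea, though not of the Bayesian experimental design itself, is given below: a small Gaussian process regressor is fit to a handful of runs of a stand-in "expensive" model, and the failure probability is then estimated by Monte Carlo on the cheap surrogate mean. The limit state, input distribution, and kernel settings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an expensive model: failure when y(x) < 0 (hypothetical limit state).
def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x + 0.4

# A handful of "expensive" runs used to train the surrogate.
X_train = np.linspace(-2.0, 2.0, 9)
y_train = expensive_model(X_train)

# Plain RBF-kernel GP regression (zero mean, small nugget for stability).
def rbf(a, b, ell=0.5, sig=1.0):
    return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

def gp_mean(x_new):
    return rbf(x_new, X_train) @ alpha

# Failure probability estimated on the cheap surrogate mean.
x_samples = rng.normal(0.0, 1.0, 500_000)                   # assumed input uncertainty
pf_surrogate = np.mean(gp_mean(x_samples) < 0.0)
pf_reference = np.mean(expensive_model(x_samples) < 0.0)    # only possible for this toy model
print(f"Pf via GP surrogate: {pf_surrogate:.4f}   (direct: {pf_reference:.4f})")
```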
The influence of microstructure on the probability of early failure in aluminum-based interconnects
NASA Astrophysics Data System (ADS)
Dwyer, V. M.
2004-09-01
For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
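The sensitivity of Pcf to the grain-size shape factor can be illustrated with a crude Monte Carlo stand-in for the theory-of-runs calculation: draw lognormal grain sizes, treat grains narrower than the linewidth as polygranular, and count lines whose longest polygranular cluster exceeds Lcrit. The sketch below does exactly that; every numerical value (d50, linewidth, line length, Lcrit) is invented for illustration and the run-length bookkeeping is a simplification of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Monte Carlo of polygranular-cluster statistics.  Grain sizes are
# lognormal (median d50, shape sigma_d); a grain narrower than the linewidth is
# treated as part of a polygranular cluster, and a line counts as a potential
# early failure when any such cluster exceeds L_crit.  All numbers are invented.
d50, line_width, line_len, L_crit = 0.6, 0.5, 50.0, 3.0   # micrometres
n_lines = 20_000

def early_failure_fraction(sigma_d):
    failures = 0
    for _ in range(n_lines):
        grains = rng.lognormal(np.log(d50), sigma_d, int(3 * line_len / d50))
        grains = grains[np.cumsum(grains) <= line_len]   # fill the line length
        cluster = longest = 0.0
        for g in grains:
            cluster = cluster + g if g < line_width else 0.0   # extend or reset run
            longest = max(longest, cluster)
        failures += longest > L_crit
    return failures / n_lines

for sd in (0.27, 0.5):
    print(f"sigma_d = {sd:.2f}: fraction of lines with a super-critical "
          f"cluster = {early_failure_fraction(sd):.4f}")
```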
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime of a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Oman India Pipeline: An operational repair strategy based on a rational assessment of risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
German, P.
1996-12-31
This paper describes the development of a repair strategy for the operational phase of the Oman India Pipeline based upon the probability and consequences of a pipeline failure. The risk analyses and cost-benefit analyses performed provide guidance on the level of deepwater repair development effort appropriate for the Oman India Pipeline project and identify critical areas toward which more intense development effort should be directed. The risk analysis results indicate that the likelihood of a failure of the Oman India Pipeline during its 40-year life is low. Furthermore, the probability of operational failure of the pipeline in deepwater regions is extremely low, the major proportion of operational failure risk being associated with the shallow water regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, J; Lukose, R; Bronson, J
2015-06-15
Purpose: To conduct a failure mode and effects analysis (FMEA) as per AAPM Task Group 100 on clinical processes associated with teletherapy, and the development of mitigations for processes with identified high risk. Methods: A FMEA was conducted on clinical processes relating to teletherapy treatment plan development and delivery. Nine major processes were identified for analysis. These steps included CT simulation, data transfer, image registration and segmentation, treatment planning, plan approval and preparation, and initial and subsequent treatments. Process tree mapping was utilized to identify the steps contained within each process. Failure modes (FM) were identified and evaluated with a scale of 1–10 based upon three metrics: the severity of the effect, the probability of occurrence, and the detectability of the cause. The analyzed metrics were scored as follows: severity – no harm = 1, lethal = 10; probability – not likely = 1, certainty = 10; detectability – always detected = 1, undetectable = 10. The three metrics were combined multiplicatively to determine the risk priority number (RPN) which defined the overall score for each FM and the order in which process modifications should be deployed. Results: Eighty-nine procedural steps were identified with 186 FM accompanied by 193 failure effects with 213 potential causes. Eighty-one of the FM were scored with a RPN > 10, and mitigations were developed for FM with RPN values exceeding ten. The initial treatment had the most FM (16) requiring mitigation development followed closely by treatment planning, segmentation, and plan preparation with fourteen each. The maximum RPN was 400 and involved target delineation. Conclusion: The FMEA process proved extremely useful in identifying previously unforeseen risks. New methods were developed and implemented for risk mitigation and error prevention. Similar to findings reported for adult patients, the process leading to the initial treatment has an associated high risk.
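The RPN bookkeeping described in the Methods is simple to mechanize; the sketch below shows the multiplicative scoring and the RPN > 10 action threshold on a few invented failure modes (the modes and their 1-10 scores are placeholders, not values from the study).

```python
# Minimal sketch of FMEA risk-priority-number (RPN) scoring.  The failure modes
# and scores below are invented placeholders, not values from the study.
failure_modes = [
    # (description,                 severity, occurrence, detectability)
    ("wrong target delineation",        10,        5,          8),
    ("image registration error",         8,        4,          6),
    ("plan transferred to wrong chart",  9,        2,          3),
]

ACTION_THRESHOLD = 10   # the study developed mitigations for RPN > 10

ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1], reverse=True)

for desc, rpn in ranked:
    flag = "mitigate" if rpn > ACTION_THRESHOLD else "accept"
    print(f"RPN {rpn:4d}  {flag:8s}  {desc}")
```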
Game-theoretic strategies for asymmetric networked systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure consisting of a network of systems each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
Jahanfar, Ali; Amirmojahedi, Mohsen; Gharabaghi, Bahram; Dubey, Brajesh; McBean, Edward; Kumar, Dinesh
2017-03-01
Rapid population growth of major urban centres in many developing countries has created massive landfills with extraordinary heights and steep side-slopes, which are frequently surrounded by illegal low-income residential settlements developed too close to landfills. These extraordinary landfills are facing high risks of catastrophic failure with potentially large numbers of fatalities. This study presents a novel method for risk assessment of landfill slope failure, using probabilistic analysis of potential failure scenarios and associated fatalities. The conceptual framework of the method includes selecting appropriate statistical distributions for the municipal solid waste (MSW) material shear strength and rheological properties for potential failure scenario analysis. The MSW material properties for a given scenario is then used to analyse the probability of slope failure and the resulting run-out length to calculate the potential risk of fatalities. In comparison with existing methods, which are solely based on the probability of slope failure, this method provides a more accurate estimate of the risk of fatalities associated with a given landfill slope failure. The application of the new risk assessment method is demonstrated with a case study for a landfill located within a heavily populated area of New Delhi, India.
Effects of footwear and stride length on metatarsal strains and failure in running.
Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent
2017-11-01
The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal-finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes increased strains for all metatarsals by 28.7% (SD 6.4%; p<0.001) and probability of failure for metatarsals 2-4 by 17.3% (SD 14.3%; p≤0.005). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias; BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
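For the QBA side of this comparison, the core correction is the Rogan-Gladen estimator, which rescales an apparent prevalence using the code's sensitivity and specificity. The sketch below applies it with the figures quoted in the abstract and flags the out-of-range case that produces the "invalid results" mentioned above; the bootstrap-imputation arm is not sketched.

```python
# Minimal sketch of the prevalence part of quantitative bias analysis: the
# Rogan-Gladen correction for known code sensitivity and specificity.
# Numbers echo the abstract (Se = 0.713, Sp = 0.962, true prevalence 7.4%).
def corrected_prevalence(p_apparent, sensitivity, specificity):
    p_true = (p_apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    if not 0.0 <= p_true <= 1.0:
        raise ValueError("QBA returned an invalid (out-of-range) prevalence")
    return p_true

se, sp = 0.713, 0.962
true_prev = 0.074
apparent = true_prev * se + (1 - true_prev) * (1 - sp)   # what the codes would show
print(f"apparent {apparent:.3f} -> corrected {corrected_prevalence(apparent, se, sp):.3f}")
```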
Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko
2006-10-11
In this paper, a novel methodology for batch plant safety and reliability analysis is proposed using a dynamic simulator. A batch process involving several safety objects (e.g., sensors, controllers, valves) is activated during the operational stage. The performance of the safety objects is evaluated by the dynamic simulation and a fault propagation model is generated. Using the fault propagation model, an improved fault tree analysis (FTA) method based on a switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be treated as unavailability of safety objects that can cause accidents in a plant. Finally, the ranking of safety objects is formulated as a performance index (PI) that can be estimated using importance measures. The PI shows the prioritization of safety objects that should be investigated in a safety improvement program for the plant. The output of this method can be used to set optimal policy for safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process that consumes the hazardous material vinyl chloride monomer (VCM).
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
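A minimal Monte Carlo sketch of the four PLOAS definitions listed above is given below for a two-WL/two-SL configuration. The link failure-time distributions are simple Weibull assumptions chosen for illustration; the report's time-dependent link properties and environment curves are not modelled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch of the four PLOAS definitions for 2 weak links (WLs) and
# 2 strong links (SLs).  Failure times are assumed Weibull; the WLs are given
# a shorter characteristic time so they tend to fail first, as intended.
n_wl, n_sl, n_trials = 2, 2, 1_000_000
wl_times = rng.weibull(2.0, (n_trials, n_wl)) * 10.0
sl_times = rng.weibull(2.0, (n_trials, n_sl)) * 15.0

any_wl, all_wl = wl_times.min(axis=1), wl_times.max(axis=1)
any_sl, all_sl = sl_times.min(axis=1), sl_times.max(axis=1)

ploas = {
    "all SLs before any WL":  np.mean(all_sl < any_wl),
    "any SL before any WL":   np.mean(any_sl < any_wl),
    "all SLs before all WLs": np.mean(all_sl < all_wl),
    "any SL before all WLs":  np.mean(any_sl < all_wl),
}
for name, p in ploas.items():
    print(f"PLOAS ({name}): {p:.4f}")
```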
NASA Technical Reports Server (NTRS)
Watring, Dale A. (Inventor); Johnson, Martin L. (Inventor)
1996-01-01
An ampoule failure system for use in material processing furnaces comprising a containment cartridge and an ampoule failure sensor. The containment cartridge contains an ampoule of toxic material therein and is positioned within a furnace for processing. An ampoule failure probe is positioned in the containment cartridge adjacent the ampoule for detecting a potential harmful release of toxic material therefrom during processing. The failure probe is spaced a predetermined distance from the ampoule and is chemically chosen so as to undergo a timely chemical reaction with the toxic material upon the harmful release thereof. The ampoule failure system further comprises a data acquisition system which is positioned externally of the furnace and is electrically connected to the ampoule failure probe so as to form a communicating electrical circuit. The data acquisition system includes an automatic shutdown device for shutting down the furnace upon the harmful release of toxic material. It also includes a resistance measuring device for measuring the resistance of the failure probe during processing. The chemical reaction causes a step increase in resistance of the failure probe whereupon the automatic shutdown device will responsively shut down the furnace.
Enhancing MPLS Protection Method with Adaptive Segment Repair
NASA Astrophysics Data System (ADS)
Chen, Chin-Ling
We propose a novel adaptive segment repair mechanism to improve traditional MPLS (Multi-Protocol Label Switching) failure recovery. The proposed mechanism protects one or more contiguous high failure probability links by dynamic setup of segment protection. Simulations demonstrate that the proposed mechanism reduces failure recovery time while also increasing network resource utilization.
Probabilistic inspection strategies for minimizing service failures
NASA Technical Reports Server (NTRS)
Brot, Abraham
1994-01-01
The INSIM computer program is described which simulates the 'limited fatigue life' environment in which aircraft structures generally operate. The use of INSIM to develop inspection strategies which aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds and customized inspections are simulated using the probability of failure as the driving parameter.
Caballero Morales, Santiago Omar
2013-01-01
The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) is an important practice to achieve high product quality, a small frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, and reductions in the sampling frequency of units for testing under SPC. PMID:23527082
Effect of Preconditioning and Soldering on Failures of Chip Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander A.
2014-01-01
Soldering of molded case tantalum capacitors can result in damage to the Ta2O5 dielectric and first turn-on failures due to thermo-mechanical stresses caused by CTE mismatch between materials used in the capacitors. It is also known that the presence of moisture might cause damage to plastic cases due to the pop-corning effect. However, there are only scarce literature data on the effect of moisture content on the probability of post-soldering electrical failures. In this work, which is based on a case history, different groups of similar types of CWR tantalum capacitors from two lots were prepared for soldering by bake, moisture saturation, and long-term storage at room conditions. Results of the testing showed that both factors, the initial quality of the lot and the preconditioning, affect the probability of failures. Baking before soldering was shown to be effective in preventing failures even in lots susceptible to pop-corning damage. The mechanism of failures is discussed and recommendations for pre-soldering bake are suggested based on analysis of the moisture characteristics of materials used in the capacitors' design.
An Evidence Theoretic Approach to Design of Reliable Low-Cost UAVs
2009-07-28
given period. For complex systems with various stages of missions, “success” becomes hard to define. For a UAV, for example, is success defined as...For this reason, the proposed methods in this thesis investigate probability of failure (PoF) rather than probability of success. Further, failure will...reduction in system PoF. Figure 25 illustrates this; a single component 43 (A) from the original system (Figure 25a) is modified to act in a subsystem with
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and an accounting for the inadequacy of a model or models.
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10^-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10^-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25 times higher than it is for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber reinforced materials. There is also a strong size effect, partly similar to that of Type 1, while the curves of log-strength versus log-size for different sizes could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link or fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.
NASA Astrophysics Data System (ADS)
Popov, V. D.; Khamidullina, N. M.
2006-10-01
In developing radio-electronic devices (RED) of spacecraft operating in the fields of ionizing radiation in space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation effect is the integrated microcircuits (IMC), especially of large scale (LSI) and very large scale (VLSI) degree of integration. The main characteristic of IMC, which is taken into account when making decisions on using some particular type of IMC in the onboard RED, is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding the radiation effect. This paper presents the so-called “reliability” approach to determination of radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to RED onboard the Spektr-R spacecraft to be launched in 2007.
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... design take-off weight), occurring during retraction and extension at any airspeed up to 1.5 VSR1 (with... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any...
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... design take-off weight), occurring during retraction and extension at any airspeed up to 1.5 VSR1 (with... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any...
Failure Surfaces for the Design of Ceramic-Lined Gun Tubes
2004-12-01
density than steel, making them attractive candidates as gun tube liners. A new design approach is necessary to address the large variability in strength...systems. Having established the failure criterion for the ceramic liner as the Weibull probability of failure, the need for a suitable failure...Report AMMRC SP-82-1, Materials Technology Laboratory, Watertown, Massachusetts, 1982. 7 R. Katz, Ceramic Gun Barrel Liners: Retrospect and Prospect
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Prosser, William H.
2011-01-01
The Director of the NASA Engineering and Safety Center (NESC) requested an independent assessment of the anomalous gaseous hydrogen (GH2) flow incident on the Space Shuttle Program (SSP) Orbiter Vehicle (OV)-105 during the Space Transportation System (STS)-126 mission. The main propulsion system (MPS) engine #2 GH2 flow control valve (FCV) LV-57 transitioned from the low towards the high flow position without being commanded. Post-flight examination revealed that the FCV LV-57 poppet had experienced a fatigue failure that liberated a section of the poppet flange. The NESC assessment provided a peer review of the computational fluid dynamics (CFD), stress analysis, and impact testing. A probability of detection (POD) study was requested by the SSP Orbiter Project for the eddy current (EC) nondestructive evaluation (NDE) techniques that were developed to inspect the flight FCV poppets. This report contains the findings and recommendations from the NESC assessment.
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Prosser, William H.
2011-01-01
The Director of the NASA Engineering and Safety Center (NESC) requested an independent assessment of the anomalous gaseous hydrogen (GH2) flow incident on the Space Shuttle Program (SSP) Orbiter Vehicle (OV)-105 during the Space Transportation System (STS)-126 mission. The main propulsion system (MPS) engine #2 GH2 flow control valve (FCV) LV-57 transitioned from the low towards the high flow position without being commanded. Post-flight examination revealed that the FCV LV-57 poppet had experienced a fatigue failure that liberated a section of the poppet flange. The NESC assessment provided a peer review of the computational fluid dynamics (CFD), stress analysis, and impact testing. A probability of detection (POD) study was requested by the SSP Orbiter Project for the eddy current (EC) nondestructive evaluation (NDE) techniques that were developed to inspect the flight FCV poppets. This report contains the Appendices to the main report.
Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pankaskie, P. J.
A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) Interactions (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on 1) standard statistical methods applied to available PCI fuel failure data and 2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: 1. pre-transient fuel rod power, P_I, 2. transient increase in fuel rod power, ΔP, 3. fuel burnup, Bu, and 4. the constitutive material property of the Zircaloy cladding, SEAF.
Ayas, Mouhab; Eapen, Mary; Le-Rademacher, Jennifer; Carreras, Jeanette; Abdel-Azim, Hisham; Alter, Blanche P.; Anderlini, Paolo; Battiwalla, Minoo; Bierings, Marc; Buchbinder, David K.; Bonfim, Carmem; Camitta, Bruce M.; Fasth, Anders L.; Gale, Robert Peter; Lee, Michelle A.; Lund, Troy C.; Myers, Kasiani C.; Olsson, Richard F.; Page, Kristin M.; Prestidge, Tim D.; Radhi, Mohamed; Shah, Ami J.; Schultz, Kirk R.; Wirk, Baldeep; Wagner, John E.; Deeg, H. Joachim
2015-01-01
Second allogeneic hematopoietic cell transplantation (HCT) is the only salvage option for those who develop graft failure after their first HCT. Data on outcomes after second HCT in Fanconi anemia (FA) are scarce. We report outcomes after second allogeneic HCT for FA (n=81). The indication for second HCT was graft failure after the first HCT. Transplants occurred between 1990 and 2012. The timing of the second transplantation predicted subsequent graft failure and survival. Graft failure was high when the second transplant occurred less than 3 months from the first. The 3-month probability of graft failure was 69% when the interval between first and second transplant was less than 3 months compared to 23% when the interval was longer (p<0.001). Consequently, survival rates were substantially lower when the interval between first and second transplant was less than 3 months, 23% at 1 year compared to 58% when the interval was longer (p=0.001). The corresponding 5-year probabilities of survival were 16% and 45%, respectively (p=0.006). Taken together, these data suggest that fewer than half of FA patients undergoing a second HCT for graft failure are long-term survivors. There is an urgent need to develop strategies to lower graft failure after first HCT. PMID:26116087
NASA Astrophysics Data System (ADS)
Jackson, Andrew
2015-07-01
On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. Although the opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, they are rather generic, except when the probability of individual ASM failure is very small; in that case, any arrangement will ensure a successful mission since essentially no failure is expected at all. Since the very design of the lower pair is to enable common-mode rejection of external signals, it is likely that its work can be successfully achieved during the first 5 years of the mission.
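The comparison can be reproduced at back-of-the-envelope level. The sketch below computes, for an assumed per-ASM mission failure probability q, the chance that both lower-pair satellites retain a working ASM (the gradiometer aspect) and the chance that each orbital plane keeps at least one working ASM (the constellation aspect), for the two candidate allocations of the five working instruments. The values of q are illustrative, not the expert-solicited MAG probabilities.

```python
# Back-of-the-envelope comparison of the two ASM allocations discussed above.
# q is an assumed per-instrument failure probability over the mission.
def science_probabilities(q, lower_pair_asms, lonely_asms):
    """lower_pair_asms: list with the ASM count on each lower-pair satellite."""
    sat_ok = lambda n_asm: 1.0 - q**n_asm           # satellite keeps an ASM alive
    gradiometer = 1.0
    for n in lower_pair_asms:                       # east-west gradient needs both
        gradiometer *= sat_ok(n)
    lower_plane_ok = 1.0 - q**sum(lower_pair_asms)  # any working ASM in the plane
    constellation = lower_plane_ok * sat_ok(lonely_asms)
    return gradiometer, constellation

for q in (0.05, 0.2, 0.5):
    a = science_probabilities(q, [2, 1], 2)   # redundancy on the lonely satellite
    b = science_probabilities(q, [2, 2], 1)   # redundancy in the lower pair
    print(f"q={q:.2f}  lonely-redundant: grad={a[0]:.3f} const={a[1]:.3f}   "
          f"pair-redundant: grad={b[0]:.3f} const={b[1]:.3f}")
```

Under these assumptions the pair-redundant allocation favours the gradiometer while the lonely-redundant allocation favours the constellation, which mirrors the trade-off described in the abstract.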
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study tests an interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both the key uniformity in the context of universal composability and the operational meaning of the failure probability of the key extraction. However, this proposal has not been verified concretely for many years, while H. P. Yuen and O. Hirota have cast doubt on this interpretation since 2009. To examine this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory once a quantum measurement is made, and then compared it with the failure probability to see whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why trace distance is not suitable to guarantee the security in QKD from the viewpoint of quantum binary decision theory.
Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
The data analysis of cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; the preliminary data analysis consisted of a response surface estimate of life. Batteries fail through a low-voltage condition and an internal shorting condition; a competing failure modes analysis was made using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and the selection of the model.
Fatigue Failure of External Hexagon Connections on Cemented Implant-Supported Crowns.
Malta Barbosa, João; Navarro da Rocha, Daniel; Hirata, Ronaldo; Freitas, Gileade; Bonfante, Estevam A; Coelho, Paulo G
2018-01-17
To evaluate the probability of survival and failure modes of different external hexagon connection systems restored with anterior cement-retained single-unit crowns. The postulated null hypothesis was that there would be no differences under accelerated life testing. Fifty-four external hexagon dental implants (∼4 mm diameter) were used for single cement-retained crown replacement and divided into 3 groups: (3i) Full OSSEOTITE, Biomet 3i (n = 18); (OL) OEX P4, Osseolife Implants (n = 18); and (IL) Unihex, Intra-Lock International (n = 18). Abutments were torqued to the implants, and maxillary central incisor crowns were cemented and subjected to step-stress-accelerated life testing in water. Use-level probability Weibull curves and probability of survival for a mission of 100,000 cycles at 200 N (95% 2-sided confidence intervals) were calculated. Stereo and scanning electron microscopes were used for failure inspection. The beta values for 3i, OL, and IL (1.60, 1.69, and 1.23, respectively) indicated that fatigue accelerated the failure of the 3 groups. Reliability for the 3i and OL (41% and 68%, respectively) was not different between each other, but both were significantly lower than IL group (98%). Abutment screw fracture was the failure mode consistently observed in all groups. Because the reliability was significantly different between the 3 groups, our postulated null hypothesis was rejected.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes encountered in many applications and has a simple statistical form; its characteristic feature is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution. In this paper we introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach, and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior distribution and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
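A minimal sketch of such a model, under the stated assumptions of independent exponential causes and a non-informative prior, is given below: with d_j failures attributed to cause j over a total time at risk T, each rate has a Gamma(d_j, T) posterior, and the net and crude failure probabilities follow from posterior draws. The data and mission horizon are synthetic, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bayesian exponential competing risks with independent causes and a
# non-informative (Jeffreys-type) prior.  Synthetic data: d[j] failures
# attributed to cause j over a total time at risk T.
d = np.array([12, 5, 3])        # failures per cause (assumed counts)
T = 400.0                       # total observed unit-hours at risk
t_mission = 20.0                # horizon for the probability statements

# Posterior of each rate is Gamma(d_j, rate=T) under the prior pi(lambda) ~ 1/lambda.
n_draws = 100_000
lam = rng.gamma(shape=d, scale=1.0 / T, size=(n_draws, len(d)))
lam_tot = lam.sum(axis=1)

net   = 1.0 - np.exp(-lam * t_mission)                       # only cause j present
crude = (lam / lam_tot[:, None]) * (1.0 - np.exp(-lam_tot * t_mission))[:, None]

for j in range(len(d)):
    print(f"cause {j}: net P(fail by t) = {net[:, j].mean():.3f}, "
          f"crude P(fail by t) = {crude[:, j].mean():.3f}")
print(f"overall P(fail by t) = {(1 - np.exp(-lam_tot * t_mission)).mean():.3f}")
```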
Security Threat Assessment of an Internet Security System Using Attack Tree and Vague Sets
2014-01-01
Security threat assessment of the Internet security system has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, security threat assessment when the malfunction data of the system's elementary event are incomplete—the traditional approach for calculating reliability—is no longer applicable. Moreover, it does not consider the failure probability of the bottom events suffered in the attack, which may bias conclusions. In order to effectively solve the problem above, this paper proposes a novel technique, integrating attack tree and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system security threat assessment is adopted in this paper. The result of the proposed method is compared with the listing approaches of security threat assessment methods. PMID:25405226
Cascading failures with local load redistribution in interdependent Watts-Strogatz networks
NASA Astrophysics Data System (ADS)
Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol
2016-05-01
Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks firstly decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases with the increment of the coupling strength until it reaches a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
Sequential experimental design based generalised ANOVA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in
Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
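Where only the first two response moments are available from such a surrogate, a common simplification is a Gaussian (first-order) approximation of the failure probability. The sketch below uses that approximation with hypothetical moment values and a hypothetical threshold; it is not the analytical expression developed in the paper.

```python
from statistics import NormalDist

# Hypothetical first two moments of a structural response (e.g., a stress in MPa)
# as a surrogate such as G-ANOVA/polynomial chaos might deliver them.
mean_resp, std_resp = 240.0, 25.0
threshold = 320.0  # allowable value; failure when the response exceeds it

# First-order (Gaussian) approximation: reliability index and failure probability.
beta = (threshold - mean_resp) / std_resp
p_fail = NormalDist().cdf(-beta)
print(f"reliability index beta = {beta:.2f}, P_f ~ {p_fail:.2e}")
```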
Security threat assessment of an Internet security system using attack tree and vague sets.
Chang, Kuei-Hu
2014-01-01
Security threat assessment of Internet security systems has become a greater concern in recent years because of the progress and diversification of information technology. Traditionally, the failure probabilities of the bottom events of an Internet security system are treated as exact values when the failure probability of the entire system is estimated. However, when the malfunction data for the system's elementary events are incomplete, the traditional approach for calculating reliability is no longer applicable. Moreover, it does not consider the failure probability of the bottom events under attack, which may bias conclusions. In order to effectively solve these problems, this paper proposes a novel technique integrating attack trees and vague sets for security threat assessment. For verification of the proposed approach, a numerical example of an Internet security system threat assessment is adopted in this paper. The results of the proposed method are compared with those of existing security threat assessment approaches.
PROCRU: A model for analyzing crew procedures in approach to landing
NASA Technical Reports Server (NTRS)
Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.
1980-01-01
A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until the information can be processed.
Predicting Quarantine Failure Rates
2004-01-01
Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418
Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A
2018-01-01
Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering
Stein, R.S.; Barka, A.A.; Dieterich, J.H.
1997-01-01
Ten M ≥ 6.7 earthquakes ruptured 1000 km of the North Anatolian fault (Turkey) during 1939-1992, providing an unsurpassed opportunity to study how one large shock sets up the next. We use the mapped surface slip and fault geometry to infer the transfer of stress throughout the sequence. Calculations of the change in Coulomb failure stress reveal that nine out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 1-10 bar, equivalent to 3-30 years of secular stressing. We translate the calculated stress changes into earthquake probability gains using an earthquake-nucleation constitutive relation, which includes both permanent and transient effects of the sudden stress changes. The transient effects of the stress changes dominate during the mean 10 yr period between triggering and subsequent rupturing shocks in the Anatolia sequence. The stress changes result in an average three-fold gain in the net earthquake probability during the decade after each event. Stress is calculated to be high today at several isolated sites along the fault. During the next 30 years, we estimate a 15 per cent probability of a M ≥ 6.7 earthquake east of the major eastern centre of Erzincan, and a 12 per cent probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere.
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2010-12-01
A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grids. We develop three differentiated protection service provisioning strategies that provide security-level guarantees and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.
Patel, Teresa; Fisher, Stanley P.
2016-01-01
Objective: This study aimed to utilize failure modes and effects analysis (FMEA) to transform clinical insights into a risk mitigation plan for intrathecal (IT) drug delivery in pain management. Methods: The FMEA methodology, which has been used for quality improvement, was adapted to assess risks (i.e., failure modes) associated with IT therapy. Ten experienced pain physicians scored 37 failure modes in the following categories: patient selection for therapy initiation (efficacy and safety concerns), patient safety during IT therapy, and product selection for IT therapy. Participants assigned severity, probability, and detection scores for each failure mode, from which a risk priority number (RPN) was calculated. Failure modes with the highest RPNs (i.e., most problematic) were discussed, and strategies were proposed to mitigate risks. Results: Strategic discussions focused on 17 failure modes with the most severe outcomes, the highest probabilities of occurrence, and the most challenging detection. The topic of the highest-ranked failure mode (RPN = 144) was manufactured monotherapy versus compounded combination products. Addressing failure modes associated with appropriate patient and product selection was predicted to be clinically important for the success of IT therapy. Conclusions: The methodology of FMEA offers a systematic approach to prioritizing risks in a complex environment such as IT therapy. Unmet needs and information gaps are highlighted through the process. Risk mitigation and strategic planning to prevent and manage critical failure modes can contribute to therapeutic success. PMID:27477689
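The ranking logic behind the RPN is simple to reproduce. The sketch below scores a few hypothetical failure modes (the names and 1-10 scores are illustrative, not those of the study) and ranks them by RPN, taken here as the conventional FMEA product of severity, occurrence, and detection scores.

```python
# Hypothetical failure modes with severity (S), occurrence/probability (O),
# and detection (D) scores on 1-10 scales; RPN = S * O * D.
failure_modes = [
    ("Inappropriate patient selection (efficacy)", 7, 4, 5),
    ("Catheter-tip granuloma undetected",          9, 3, 6),
    ("Compounded product concentration error",     8, 3, 6),
    ("Pump refill programming error",              9, 2, 4),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={s * o * d:4d}  {name}")
```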
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R. K.; Peters, Scott
The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigns to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address the probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. We grouped these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
Cryptographic Key Management and Critical Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K
The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) Cyber Security for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing Cyber Security for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9, to identify the life cycle for cryptographic key management, resulting in a vector that assigns to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address the probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. We grouped these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
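The abstract's description (a stakes matrix and two stochastic matrices combined into a dollars-per-day vector per stakeholder) suggests a chain of matrix products. The sketch below is one such formulation; the matrix dimensions and every number are toy placeholders, and whether this matches the CSES implementation exactly is an assumption.

```python
import numpy as np

# Toy dimensions: 2 stakeholders, 3 security requirements, 3 components, 2 threats.
ST = np.array([[800., 500., 300.],       # stakes ($/day) each stakeholder places
               [400., 900., 200.]])      # on each security requirement
DP = np.array([[0.6, 0.1, 0.2],          # P(requirement violated | component fails)
               [0.2, 0.7, 0.1],
               [0.1, 0.2, 0.5]])         # rows: requirements, cols: components
IM = np.array([[0.3, 0.1],               # P(component fails | threat materializes)
               [0.2, 0.4],
               [0.1, 0.2]])              # rows: components, cols: threats
PT = np.array([0.05, 0.02])              # P(threat materializes) per day of operation

mean_failure_cost = ST @ DP @ IM @ PT    # expected loss ($/day) per stakeholder
print(mean_failure_cost)
```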
ERIC Educational Resources Information Center
Tempel, Tobias; Neumann, Roland
2016-01-01
We investigated processes underlying performance decrements of highly test-anxious persons. Three experiments contrasted conditions that differed in the degree of activation of concepts related to failure. Participants memorized a list of words either containing words related to failure or containing no words related to failure in Experiment 1. In…
A Numerical Round Robin for the Reliability Prediction of Structural Ceramics
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Janosik, Lesley A.
1993-01-01
A round robin has been conducted on integrated fast fracture design programs for brittle materials. An informal working group (WELFEP - WEakest Link failure probability prediction by Finite Element Postprocessors) was formed to discuss and evaluate the implementation of the programs examined in the study. Results from the study have provided insight into the differences between the various programs examined. Conclusions from the study have shown that when brittle materials are used in design, the analyst must understand how to apply the concepts presented herein to failure probability analysis.
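The weakest-link concept these postprocessors implement reduces, in its simplest form, to a two-parameter Weibull expression for the probability of failure at a given stress. The sketch below evaluates it for hypothetical material parameters and a hypothetical stressed-volume ratio; real codes integrate the stress field over the finite element model rather than using a single uniform stress.

```python
import numpy as np

# Two-parameter Weibull weakest-link model for a brittle ceramic component.
m = 10.0            # Weibull modulus (scatter of strength); illustrative
sigma_0 = 350.0     # characteristic strength, MPa; illustrative
volume_ratio = 4.0  # stressed volume relative to the reference specimen volume

def failure_probability(stress_mpa):
    """P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)^m) for a uniformly stressed volume."""
    return 1.0 - np.exp(-volume_ratio * (stress_mpa / sigma_0) ** m)

for s in (150.0, 250.0, 350.0):
    print(f"sigma = {s:5.1f} MPa -> P_f = {failure_probability(s):.3e}")
```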
A Probability Problem from Real Life: The Tire Exploded.
ERIC Educational Resources Information Center
Bartlett, Albert A.
1993-01-01
Discusses the probability of seeing a tire explode or disintegrate while traveling down the highway. Suggests that a person observing 10 hours a day would see a failure on average once every 300 years. (MVL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyer, Brian David; Erpenbeck, Heather H; Miller, Karen A
2010-09-13
Current safeguards approaches used by the IAEA at gas centrifuge enrichment plants (GCEPs) need enhancement in order to verify declared low enriched uranium (LEU) production, detect undeclared LEU production and detect high enriched uranium (HEU) production with adequate probability using non-destructive assay (NDA) techniques. At present, inspectors use attended systems (systems that require the presence of an inspector for operation) during inspections to verify the mass and 235U enrichment of declared cylinders of uranium hexafluoride that are used in the process of enrichment at GCEPs. This paper contains an analysis of how possible improvements in unattended and attended NDA systems, including process monitoring and possible on-site destructive analysis (DA) of samples, could reduce the uncertainty of the inspector's measurements, providing more effective and efficient IAEA GCEP safeguards. We have also studied a few advanced safeguards systems that could be assembled for unattended operation and the level of performance needed from these systems to provide more effective safeguards. The analysis also considers how short notice random inspections, unannounced inspections (UIs), and the concept of information-driven inspections can affect the probability of detecting the diversion of nuclear material when coupled to new GCEP safeguards regimes augmented with unattended systems. We also explore the effects of system failures and operator tampering on meeting safeguards goals for quantity and timeliness and the measures needed to recover from such failures and anomalies.
Atun, Rifat; Gurol-Urganci, Ipek; Hone, Thomas; Pell, Lisa; Stokes, Jonathan; Habicht, Triin; Lukka, Kaija; Raaper, Elin; Habicht, Jarno
2016-12-01
Following independence from the Soviet Union in 1991, Estonia introduced a national insurance system, consolidated the number of health care providers, and introduced family medicine centred primary health care (PHC) to strengthen the health system. Using routinely collected health billing records for 2005-2012, we examine health system utilisation for seven ambulatory care sensitive conditions (ACSCs) (asthma, chronic obstructive pulmonary disease [COPD], depression, Type 2 diabetes, heart failure, hypertension, and ischemic heart disease [IHD]), and by patient characteristics (gender, age, and number of co-morbidities). The data set contained 552 822 individuals. We use patient-level data to test the significance of trends, and employ multivariate regression analysis to evaluate the probability of inpatient admission while controlling for patient characteristics, health system supply-side variables, and PHC use. Over the study period, utilisation of PHC increased, whilst inpatient admissions fell. The service mix in PHC changed, with increases in phone, email, nurse, and follow-up (vs initial) consultations. Healthcare utilisation for diabetes, depression, IHD and hypertension shifted to PHC, whilst for COPD, heart failure and asthma utilisation in outpatient and inpatient settings increased. Multivariate regression indicates a higher probability of inpatient admission for males, older patients and especially those with multimorbidity, but a protective effect for PHC, with significantly lower hospital admission for those utilising PHC services. Our findings suggest health system reforms in Estonia have influenced the shift of ACSCs from secondary to primary care, with PHC having a protective effect in reducing hospital admissions.
[Polio vaccination failure in Italy, years 2006-2010].
Iannazzo, Stefania; Rizzuto, Elvira; Pompa, Maria Grazia
2014-01-01
The purpose of this paper is to describe the absence of polio vaccination and its reasons in the period 2006-2010. STUDY DESIGN, SETTING AND PARTICIPANTS: Until 2014, data on vaccination activities, aggregated at the regional level, were sent to the Ministry of Health using a paper form, which was used to collect the data and then to calculate vaccine coverage (CV) at 24 months. This form contains a section for identifying the reasons for polio vaccination failure. During the reporting period the national CV was always above 95%. The highest rates of non-vaccination were always observed in the same Region. Polio vaccination failure is well explained in 82% of cases, but only three Regions always provided an explanation, while two have extremely low percentages of explanation, less than 50%. The dominant category is «noncompliant» (45.5%), followed by «undetectable» (26.5%). The percentage of explanation of non-vaccination was lower than expected. At the moment we cannot clarify why, but can only speculate that the lack of a computerized immunization registry has been a key element. Probably, the form used was not sufficiently detailed to monitor the phenomenon of non-vaccination and to plan interventions. These and other critical issues were taken into account when the form was updated in 2013.
Cowie, R L; Brink, B A
1990-04-21
The effectiveness of a tablet containing a combination of rifampicin, isoniazid and pyrazinamide (Rifater; Mer-National) in the treatment of pulmonary tuberculosis was examined by comparing it with a previously evaluated four-drug regimen. Of 150 black goldminers with a first case of pulmonary tuberculosis, 69 were randomly allocated to receive the combination tablet (RHZ), 5 tablets per day on weekdays for 100 treatment-days, and 81 the four-drug regimen (streptomycin, rifampicin, isoniazid and pyrazinamide) (RHZS). Non-compliance was detected in 42% of the RHZ group and in 16% of the RHZS group. Two patients in the RHZ group and 4 in the RHZS group had to have their treatment altered because routine investigations revealed drug-resistant mycobacteria. Treatment was unsuccessful in 10 patients in the RHZ group, with 4 men failing to complete the regimen and being lost to follow-up, 3 cases of failure of conversion of sputum on the regimen, and 3 relapses. The results for the RHZS group were similar, with 4 failures to complete the regimen, 2 treatment failures and 4 relapses. Evaluation of RHZ showed it to be comparable with a previously evaluated, successful short-course regimen (RHZS). The high incidence of non-compliance probably reflects reduced supervision of this wholly oral regimen.
NASA Astrophysics Data System (ADS)
Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei
2017-02-01
The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impact levels was studied. The failure behavior of the solder joints, the failure probability and the failure positions were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution in the solder joint was calculated with ABAQUS. The results revealed that the dominant cause was the tension due to the difference in stiffness between the printed circuit board and the ball grid array; the maximum tension of 121.1 MPa and 31.1 MPa under the 1000 g and 300 g drop impacts, respectively, was concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four types: crack initiation and propagation through (1) the intermetallic compound layer, (2) the Ni layer, (3) the Cu pad, or (4) the Sn matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than that under the 1000 g drop impact. The characteristic drop numbers for failure were 41 and 15,199, respectively, according to the statistical analysis.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
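A load-resistance Monte Carlo estimate of this kind takes only a few lines. In the sketch below the resistance and load distributions, their parameters, and the sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical load-resistance model: resistance R (capacity) and load L (demand).
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # e.g., strength in MPa
L = rng.normal(loc=350.0, scale=40.0, size=n)               # e.g., stress in MPa

p_fail = np.mean(R < L)                  # failure whenever demand exceeds capacity
se = np.sqrt(p_fail * (1 - p_fail) / n)  # Monte Carlo standard error of the estimate
print(f"P_f = {p_fail:.2e} +/- {se:.1e}")
```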
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface, and illustrates the stepwise process the interface uses with an example.
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Astrophysics Data System (ADS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-10-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface, and illustrates the stepwise process the interface uses with an example.
Assessing changes in failure probability of dams in a changing climate
NASA Astrophysics Data System (ADS)
Mallakpour, I.; AghaKouchak, A.; Moftakhari, H.; Ragno, E.
2017-12-01
Dams are crucial infrastructure and provide resilience against hydrometeorological extremes (e.g., droughts and floods). In 2017, California experienced a series of flooding events terminating a 5-year drought and leading to incidents such as the structural failure of Oroville Dam's spillway. Because of the large socioeconomic repercussions of such incidents, it is of paramount importance to evaluate dam failure risks associated with projected shifts in the streamflow regime. This becomes even more important as the current procedures for the design of hydraulic structures (e.g., dams, bridges, spillways) are based on the so-called stationarity assumption. Yet, changes in climate are anticipated to result in changes in the statistics of river flow (e.g., more extreme floods), possibly increasing the failure probability of already aging dams. Here, we examine changes in discharge under two representative concentration pathways (RCPs): RCP4.5 and RCP8.5. In this study, we used routed daily streamflow data from ten global climate models (GCMs) in order to investigate possible climate-induced changes in streamflow in northern California. Our results show that while the average flow does not show a significant change, extreme floods are projected to increase in the future. Using extreme value theory, we estimate changes in the return periods of 50-year and 100-year floods in the current and future climates. Finally, we use the historical and future return periods to quantify changes in the failure probability of dams in a warming climate.
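The return-period comparison can be sketched with a generalized extreme value (GEV) fit to annual maxima. In the snippet below, two synthetic annual-maximum series stand in for routed GCM streamflow, and all distribution parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Hypothetical annual-maximum streamflow series (m^3/s) for a "current climate"
# and a "future climate" with heavier extremes; real studies use GCM-routed flows.
current = genextreme.rvs(c=-0.10, loc=800, scale=200, size=60, random_state=rng)
future = genextreme.rvs(c=-0.15, loc=900, scale=260, size=60, random_state=rng)

def return_level(sample, return_period_years):
    """Fit a GEV to annual maxima and return the flow exceeded once per period."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.isf(1.0 / return_period_years, c, loc, scale)

for label, sample in (("current", current), ("future", future)):
    print(f"{label:7s}: 100-year flood = {return_level(sample, 100):,.0f} m^3/s")

# Conversely: how often does the current 100-year flood occur in the future sample?
q100_now = return_level(current, 100)
c, loc, scale = genextreme.fit(future)
new_return_period = 1.0 / genextreme.sf(q100_now, c, loc, scale)
print(f"current 100-year flood recurs every ~{new_return_period:.0f} years in the future sample")
```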
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question. The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values.
None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The RRR for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event. Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The RIR for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
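As the abstract notes, it is often useful to check the code's propagation by hand. For independent basic events the AND/OR gate algebra is straightforward; the sketch below uses hypothetical basic-event probabilities and a hypothetical two-level tree, not the MSET model itself.

```python
# Manual check of fault tree propagation with independent basic events,
# mirroring the AND/OR Boolean-algebra rules used by codes such as SAPHIRE.

def p_and(*probs):
    """All inputs must fail (AND gate): product of the probabilities."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):
    """Any input failing fails the gate (OR gate): 1 - product of survivals."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Example: the top event occurs if (A or B) fails AND (C or D) fails.
A, B, C, D = 0.02, 0.05, 0.01, 0.03   # illustrative basic-event probabilities
top = p_and(p_or(A, B), p_or(C, D))
print(f"top-event probability = {top:.4e}")
```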
Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A
2010-12-01
This work develops a cost analysis for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians and one doctor, and the second (based on an actually functioning clinic) with two units, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, the unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading: overly conservative or optimistic. 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. There are two major categories of uncertainty that impact the sparing assessment: (a) aleatory uncertainty, the natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF), and (b) epistemic uncertainty, the lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach to revise the confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.
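One way to see the two uncertainty categories interact is to nest them in a simple sparing calculation: the epistemic spread of the MTBF is sampled, and for each sample the aleatory failure count is Poisson. The sketch below does this with an assumed lognormal epistemic distribution and illustrative numbers; it is not the PACT method itself.

```python
import numpy as np
from scipy.stats import poisson, lognorm

rng = np.random.default_rng(7)

# Hypothetical ORU sparing question: do `spares` units cover demand over `hours`?
hours = 8760.0           # one year of operation
spares = 2
mtbf_estimate = 20000.0  # point estimate of the ORU MTBF, hours (illustrative)

# Epistemic uncertainty: lack of knowledge about the true MTBF, modelled here
# (as an assumption) by a lognormal spread around the point estimate.
mtbf_samples = lognorm.rvs(s=0.4, scale=mtbf_estimate, size=20000, random_state=rng)

# Aleatory uncertainty: for a given MTBF, the number of failures is Poisson.
p_sufficient = poisson.cdf(spares, hours / mtbf_samples)

print(f"mean probability spares suffice: {p_sufficient.mean():.3f}")
print(f"5th-95th percentile band:        "
      f"{np.percentile(p_sufficient, 5):.3f} - {np.percentile(p_sufficient, 95):.3f}")
```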
Compounding effects of sea level rise and fluvial flooding.
Moftakhari, Hamed R; Salvadori, Gianfausto; AghaKouchak, Amir; Sanders, Brett F; Matthew, Richard A
2017-09-12
Sea level rise (SLR), a well-documented and urgent aspect of anthropogenic global warming, threatens population and assets located in low-lying coastal regions all around the world. Common flood hazard assessment practices typically account for one driver at a time (e.g., either fluvial flooding only or ocean flooding only), whereas coastal cities vulnerable to SLR are at risk for flooding from multiple drivers (e.g., extreme coastal high tide, storm surge, and river flow). Here, we propose a bivariate flood hazard assessment approach that accounts for compound flooding from river flow and coastal water level, and we show that a univariate approach may not appropriately characterize the flood hazard if there are compounding effects. Using copulas and bivariate dependence analysis, we also quantify the increases in failure probabilities for 2030 and 2050 caused by SLR under representative concentration pathways 4.5 and 8.5. Additionally, the increase in failure probability is shown to be strongly affected by compounding effects. The proposed failure probability method offers an innovative tool for assessing compounding flood hazards in a warming climate.
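The effect of dependence between the two flood drivers can be illustrated with a Gaussian copula: the joint probability that both drivers exceed their design thresholds is compared with the independence assumption. The correlation, the marginal probabilities, and the choice of a Gaussian copula below are illustrative assumptions; the study itself selects copulas from data.

```python
from scipy.stats import multivariate_normal, norm

# Joint exceedance of river flow and coastal water level under a Gaussian copula.
rho = 0.6   # assumed dependence between the two flood drivers
u = 0.98    # marginal non-exceedance probability of the flow threshold
v = 0.98    # marginal non-exceedance probability of the sea-level threshold

cov = [[1.0, rho], [rho, 1.0]]
copula_cdf = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([norm.ppf(u), norm.ppf(v)])

# P(X > x, Y > y) = 1 - u - v + C(u, v)
joint_exceed = 1.0 - u - v + copula_cdf
independent = (1.0 - u) * (1.0 - v)
print(f"joint exceedance (copula):      {joint_exceed:.4e}")
print(f"joint exceedance (independent): {independent:.4e}")
```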
Improving online risk assessment with equipment prognostics and health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Liu, Xiaotong; Briere, Chris
The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
Statistical Performance Evaluation Of Soft Seat Pressure Relief Valves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Stephen P.; Gross, Robert E.
2013-03-26
Risk-based inspection methods enable estimation of the probability of failure on demand for spring-operated pressure relief valves at the United States Department of Energy's Savannah River Site in Aiken, South Carolina. This paper presents a statistical performance evaluation of soft seat spring operated pressure relief valves. These pressure relief valves are typically smaller and of lower cost than hard seat (metal to metal) pressure relief valves and can provide substantial cost savings in fluid service applications (air, gas, liquid, and steam) providing that probability of failure on demand (the probability that the pressure relief valve fails to perform its intended safety function during a potentially dangerous over pressurization) is at least as good as that for hard seat valves. The research in this paper shows that the proportion of soft seat spring operated pressure relief valves failing is the same or less than that of hard seat valves, and that for failed valves, soft seat valves typically have failure ratios of proof test pressure to set pressure less than that of hard seat valves.
ERIC Educational Resources Information Center
Pitts, Laura; Dymond, Simon
2012-01-01
Research on the high-probability (high-p) request sequence shows that compliance with low-probability (low-p) requests generally increases when preceded by a series of high-p requests. Few studies have conducted formal preference assessments to identify the consequences used for compliance, which may partly explain treatment failures, and still…
Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico
Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,
2016-08-09
The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers. The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a dam failure of Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and “sunny day” conditions. The Hydrologic Engineering Center’s Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model. Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS resulted in about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions are considered within the model. The results of the hydrologic simulations indicated that for all hydrologic conditions scenarios, the Lago El Guineo Dam would not experience overtopping. For the dam breach hydraulic analysis, failure by piping was the selected hypothetical failure mode for the Lago El Guineo Dam. Results from the simulated dam failure of the Lago El Guineo Dam using the HEC–RAS model for the 6- and 24-hour probable maximum precipitation events indicated peak discharges below the dam of 1,342.43 and 1,434.69 cubic meters per second, respectively. Dam failure during the 24-hour, 100-year recurrence rainfall event resulted in a peak discharge directly downstream from Lago El Guineo Dam of 1,183.12 cubic meters per second. Dam failure during sunny-day conditions (no precipitation) produced a peak discharge at Lago El Guineo Dam of 1,015.31 cubic meters per second assuming the initial water-surface elevation was at the morning-glory spillway invert elevation. The results of the hydraulic analysis indicate that the flood would extend to many inhabited areas along the stream banks from the Lago El Guineo Dam to the mouth of the Río Grande as a result of the simulated failure of the Lago El Guineo Dam. Low-lying regions in the vicinity of Ciales, Manatí, and Barceloneta, Puerto Rico, are among the regions that would be most affected by failure of the Lago El Guineo Dam.
Effects of the flood control (levee) structure constructed in 2000 to provide protection to the low-lying populated areas of Barceloneta, Puerto Rico, were considered in the hydraulic analysis of dam failure. The results indicate that overtopping can be expected in the aforementioned levee during 6- and 24-hour probable maximum precipitation events. The levee was not overtopped during dam failure scenarios under the 24-hour, 100-year recurrence rainfall event or sunny-day conditions.
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross detection probability, wrong time detection, application of performance tools, and the GLR computer package are discussed.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ≈ 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
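The BPT distribution is mathematically the inverse Gaussian distribution, so its hazard behaviour can be checked numerically. In the sketch below the mean recurrence interval is an illustrative value, α = 0.5 follows the abstract, and the mapping from (μ, α) to the scipy parametrisation of the inverse Gaussian is spelled out in the comments.

```python
import numpy as np
from scipy.stats import invgauss

# Brownian passage time (BPT) distribution, i.e. an inverse Gaussian with
# mean mean_rt and coefficient of variation (aperiodicity) alpha.
mean_rt = 25.0   # mean recurrence interval, years (illustrative)
alpha = 0.5      # provisional generic aperiodicity from the abstract

# scipy's invgauss(mu, scale) has mean = mu*scale and CV^2 = mu,
# so mu = alpha**2 and scale = mean_rt / alpha**2 reproduce (mean_rt, alpha).
dist = invgauss(mu=alpha**2, scale=mean_rt / alpha**2)

t = np.linspace(1.0, 100.0, 200)
hazard = dist.pdf(t) / dist.sf(t)        # instantaneous failure rate of survivors

print(f"mean recurrence of the distribution: {dist.mean():.1f} yr")
print(f"hazard at t = mu/2:                  {np.interp(mean_rt / 2, t, hazard):.3f} /yr")
print(f"hazard at t = 3*mu (late time):      {np.interp(3 * mean_rt, t, hazard):.3f} /yr")
print(f"mean rate 1/mu for comparison:       {1 / mean_rt:.3f} /yr")
```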
Performance modeling of Deep Burn TRISO fuel using ZrC as a load-bearing layer and an oxygen getter
NASA Astrophysics Data System (ADS)
Wongsawaeng, Doonyapong
2010-01-01
The effects of design choices for the TRISO particle fuel were explored in order to determine their contribution to attaining high-burnup in Deep Burn modular helium reactor fuels containing transuranics from light water reactor spent fuel. The new design features were: (1) ZrC coating substituted for the SiC, allowing the fuel to survive higher accident temperatures; (2) pyrocarbon/SiC "alloy" substituted for the inner pyrocarbon coating to reduce layer failure and (3) pyrocarbon seal coat and thin ZrC oxygen getter coating on the kernel to eliminate CO. Fuel performance was evaluated using General Atomics Company's PISA code. The only acceptable design has a 200-μm kernel diameter coupled with at least 150-μm thick, 50% porosity buffer, a 15-μm ZrC getter over a 10-μm pyrocarbon seal coat on the kernel, an alloy inner pyrocarbon, and ZrC substituted for SiC. The code predicted that during a 1600 °C postulated accident at 70% FIMA, the ZrC failure probability is <10^-4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Disasters as a necessary part of benefit-cost analyses.
Mark, R K; Stuart-Alexander, D E
1977-09-16
Benefit-cost analyses for water projects generally have not included the expected costs (residual risk) of low-probability disasters such as dam failures, impoundment-induced earthquakes, and landslides. Analysis of the history of these types of events demonstrates that dam failures are not uncommon and that the probability of a reservoir-triggered earthquake increases with increasing reservoir depth. Because the expected costs from such events can be significant and the risk is project-specific, estimates should be made for each project. The cost of expected damage from a "high-risk" project in an urban area could be comparable to the project benefits.
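The residual-risk term the authors argue for is just an expected value. The arithmetic sketch below uses entirely hypothetical figures for the annual failure probability, the damage, and the project benefit to show how the comparison is made.

```python
# Residual-risk term for a benefit-cost comparison: expected annual damage from
# a low-probability failure. All figures are illustrative placeholders.
annual_failure_prob = 1.0e-4     # assumed annual dam-failure probability
damage_if_failure = 2.0e9        # assumed downstream damage ($) if failure occurs
annual_project_benefit = 5.0e6   # claimed annual benefit ($) of the project

expected_annual_damage = annual_failure_prob * damage_if_failure
print(f"expected annual damage: ${expected_annual_damage:,.0f}")
print(f"share of annual benefit: {expected_annual_damage / annual_project_benefit:.1%}")
```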
NASA Technical Reports Server (NTRS)
Naumann, R. J.; Oran, W. A.; Whymark, R. R.; Rey, C.
1981-01-01
The single axis acoustic levitator that was flown on SPAR VI malfunctioned. The results of a series of tests, analyses, and investigation of hypotheses that were undertaken to determine the probable cause of failure are presented, together with recommendations for future flights of the apparatus. The most probable causes of the SPAR VI failure were lower than expected sound intensity due to mechanical degradation of the sound source, and an unexpected external force that caused the experiment sample to move radially and eventually be lost from the acoustic energy well.
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
Bordin, Dimorvan; Bergamo, Edmara T P; Fardin, Vinicius P; Coelho, Paulo G; Bonfante, Estevam A
2017-07-01
To assess the probability of survival (reliability) and the failure modes of narrow implants with different diameters. For fatigue testing, 42 implants with the same macrogeometry and internal conical connection were divided, according to diameter, as follows: narrow (Ø3.3×10mm) and extra-narrow (Ø2.9×10mm) (21 per group). Identical abutments were torqued to the implants, and standardized maxillary incisor crowns were cemented and subjected to step-stress accelerated life testing (SSALT) in water. The use-level probability Weibull curves and the reliability for a mission of 50,000 and 100,000 cycles at 50, 100, 150 and 180 N were calculated. For the finite element analysis (FEA), two virtual models, simulating the samples tested in fatigue, were constructed. Loads of 50 N and 100 N were applied 30° off-axis at the crown. The von Mises stress was calculated for the implant and abutment. The beta (β) values were 0.67 for narrow and 1.32 for extra-narrow implants, indicating that failure rates did not increase with fatigue in the former, but more likely were associated with damage accumulation and wear-out failures in the latter. Both groups showed high reliability (up to 97.5%) at 50 and 100 N. A decreased reliability was observed for both groups at 150 and 180 N (ranging from 0 to 82.3%), but no significant difference was observed between groups. Failure predominantly involved abutment fracture in both groups. In the FEA at a 50 N load, the Ø3.3 mm implant showed higher von Mises stress in the abutment (7.75%) and the implant (2%) than the Ø2.9 mm implant. There was no significant difference between narrow and extra-narrow implants regarding the probability of survival. The failure mode was similar for both groups, restricted to abutment fracture. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fishnet model for failure probability tail of nacre-like imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick, occupying <5% of volume. These properties inspire manmade biomimetic materials. For engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: an approximately 10% strength increase at a tail failure probability of 10^-6 and a 1- to 2-order-of-magnitude decrease in tail probability at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.
Possible consequences of absence of "Jupiters" in planetary systems.
Wetherill, G W
1994-01-01
The formation of the gas giant planets Jupiter and Saturn probably required the growth of massive, approximately 15 Earth-mass cores on a time scale shorter than the approximately 10^7-year time scale for removal of nebular gas. Relatively minor variations in nebular parameters could preclude the growth of full-size gas giants even in systems in which the terrestrial planet region is similar to our own. Systems containing "failed Jupiters," resembling Uranus and Neptune in their failure to capture much nebular gas, would be expected to contain more densely populated cometary source regions. They will also eject a smaller number of comets into interstellar space. If systems of this kind were the norm, observation of hyperbolic comets would be unexpected. Monte Carlo calculations of the orbital evolution of the Kuiper belt region of such systems indicate that throughout Earth history the cometary impact flux in their terrestrial planet regions would be approximately 1000 times greater than in our Solar System. It may be speculated that this could frustrate the evolution of organisms that observe and seek to understand their planetary system. For this reason our observation of these planets in our Solar System may tell us nothing about the probability of similar gas giants occurring in other planetary systems. This situation can be corrected by observation of an unbiased sample of planetary systems.
Effect of Progressive Heart Failure on Cerebral Hemodynamics and Monoamine Metabolism in CNS.
Mamalyga, M L; Mamalyga, L M
2017-07-01
Compensated and decompensated heart failure are characterized by different associations of disorders in the brain and heart. In compensated heart failure, the blood flow in the common carotid and basilar arteries does not change. Exacerbation of heart failure leads to severe decompensation and is accompanied by a decrease in blood flow in the carotid and basilar arteries. Changes in monoamine content occurring in the brain at different stages of heart failure are determined by various factors. The functional exercise test showed unequal monoamine-synthesizing capacities of the brain in compensated and decompensated heart failure. Reduced capacity of the monoaminergic systems in decompensated heart failure probably leads to overstrain of the central regulatory mechanisms, their gradual exhaustion, and failure of the compensatory mechanisms, which contributes to progression of heart failure.
Failure Investigation of Radiant Platen Superheater Tube of Thermal Power Plant Boiler
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Mandal, A.; Roy, H.
2015-04-01
This paper highlights a case study of a typical premature failure of a radiant platen superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement and chemical analysis are conducted as part of the investigation. Apart from these, metallographic analysis and fractography are also conducted to ascertain the probable cause of failure. Finally, it is concluded that the premature failure of the superheater tube can be attributed to localized creep at high temperature. Corrective actions have also been suggested to avoid this type of failure in the future.
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation in the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie; Helton, Jon C.
2015-05-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2. Keywords: Aleatory uncertainty, CPLOAS_2, Epistemic uncertainty, Probability of loss of assured safety, Strong link, Uncertainty analysis, Weak link
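The four PLOAS representations listed above reduce to order statistics on link failure times. The sketch below is a minimal Monte Carlo illustration, not CPLOAS_2 itself: it samples hypothetical weak-link and strong-link failure times from placeholder Weibull distributions and estimates each of the four event probabilities by simple counting.

import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
n_wl, n_sl = 2, 2   # hypothetical link counts

# Placeholder Weibull failure-time models (shape, scale); not the report's data.
wl_times = rng.weibull(2.0, size=(n_trials, n_wl)) * 50.0
sl_times = rng.weibull(3.0, size=(n_trials, n_sl)) * 80.0

ploas = {
    "all SLs before any WL":  np.mean(sl_times.max(axis=1) < wl_times.min(axis=1)),
    "any SL before any WL":   np.mean(sl_times.min(axis=1) < wl_times.min(axis=1)),
    "all SLs before all WLs": np.mean(sl_times.max(axis=1) < wl_times.max(axis=1)),
    "any SL before all WLs":  np.mean(sl_times.min(axis=1) < wl_times.max(axis=1)),
}
for name, p in ploas.items():
    print(f"{name}: {p:.4f}")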
King, C.-Y.; Luo, G.
1990-01-01
Electric resistance and emissions of hydrogen and radon isotopes of concrete (which is somewhat similar to fault-zone materials) under increasing uniaxial compression were continuously monitored to check whether they show any pre- and post-failure changes that may correspond to similar changes reported for earthquakes. The results show that all these parameters generally begin to increase when the applied stresses reach 20% to 90% of the corresponding failure stresses, probably due to the occurrence and growth of dilatant microcracks in the specimens. The prefailure changes have different patterns for different specimens, probably because of differences in spatial and temporal distributions of the microcracks. The resistance shows large co-failure increases, and the gas emissions show large post-failure increases. The post-failure increase of radon persists longer and stays at a higher level than that of hydrogen, suggesting a difference in the emission mechanisms for these two kinds of gases. The H2 increase may be mainly due to chemical reaction at the crack surfaces while they are fresh, whereas the Rn increases may be mainly the result of the increased emanation area of such surfaces. The results suggest that monitoring of resistivity and gas emissions may be useful for predicting earthquakes and failures of concrete structures. © 1990 Birkhäuser Verlag.
Advances on the Failure Analysis of the Dam-Foundation Interface of Concrete Dams.
Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián
2015-12-02
Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties on models and parameters, and a strong non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices and prudent, conservative design approaches based on the safety factor concept. Yet some aspects of this failure mode remain poorly understood, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on spatial variability of strength parameters and data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern.
Prognostic Factors in Severe Chagasic Heart Failure
Costa, Sandra de Araújo; Rassi, Salvador; Freitas, Elis Marra da Madeira; Gutierrez, Natália da Silva; Boaventura, Fabiana Miranda; Sampaio, Larissa Pereira da Costa; Silva, João Bastista Masson
2017-01-01
Background Prognostic factors are extensively studied in heart failure; however, their role in severe Chagasic heart failure has not been established. Objectives To identify the association of clinical and laboratory factors with the prognosis of severe Chagasic heart failure, as well as the association of these factors with mortality and survival in a 7.5-year follow-up. Methods 60 patients with severe Chagasic heart failure were evaluated regarding the following variables: age, blood pressure, ejection fraction, serum sodium, creatinine, 6-minute walk test, non-sustained ventricular tachycardia, QRS width, indexed left atrial volume, and functional class. Results 53 (88.3%) patients died during follow-up, and 7 (11.7%) remained alive. Cumulative overall survival probability was approximately 11%. Non-sustained ventricular tachycardia (HR = 2.11; 95% CI: 1.04 - 4.31; p<0.05) and indexed left atrial volume ≥ 72 mL/m2 (HR = 3.51; 95% CI: 1.63 - 7.52; p<0.05) were the only variables that remained as independent predictors of mortality. Conclusions The presence of non-sustained ventricular tachycardia on Holter and indexed left atrial volume > 72 mL/m2 are independent predictors of mortality in severe Chagasic heart failure, with a cumulative survival probability of only 11% in 7.5 years. PMID:28443956
Advances on the Failure Analysis of the Dam—Foundation Interface of Concrete Dams
Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián
2015-01-01
Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties on models and parameters, and a strong non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices and prudent, conservative design approaches based on the safety factor concept. Yet some aspects of this failure mode remain poorly understood, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on spatial variability of strength parameters and data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern. PMID:28793709
Fatigue analysis of composite materials using the fail-safe concept
NASA Technical Reports Server (NTRS)
Stievenard, G.
1982-01-01
If R1 is the probability of having a crack on a flight component and R2 is the probability of seeing this crack propagate between two scheduled inspections, the global failure regulation states that the product R1 × R2 must not exceed 0.0000001 (10^-7).
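A trivial check of this constraint, with illustrative numbers only (none of the values come from the paper):

def meets_fail_safe_requirement(r1: float, r2: float, limit: float = 1e-7) -> bool:
    """R1: probability of a crack; R2: probability it propagates between inspections."""
    return r1 * r2 <= limit

# Illustrative values, not from the paper.
print(meets_fail_safe_requirement(1e-3, 1e-5))  # True:  1e-8 <= 1e-7
print(meets_fail_safe_requirement(1e-2, 1e-4))  # False: 1e-6 >  1e-7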
Human versus automation in responding to failures: an expected-value analysis
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Parasuraman, R.
2000-01-01
A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both human and automation being considered, as well as factors independent of the choice--namely, costs and benefits of incorrect and correct decisions as well as the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
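A minimal sketch of the expected-value comparison described above, with hypothetical miss/false-alarm probabilities and costs (none of these numbers come from the paper): the agent with the lower expected cost would be preferred.

def expected_cost(p_fail, p_miss, p_false_alarm, cost_miss, cost_false_alarm):
    # Expected cost of assigning the failure-detection task to one agent,
    # combining misses (given a failure) and false alarms (given no failure).
    return p_fail * p_miss * cost_miss + (1.0 - p_fail) * p_false_alarm * cost_false_alarm

p_fail = 0.01  # prior probability of failure (placeholder)
human = expected_cost(p_fail, p_miss=0.10, p_false_alarm=0.02,
                      cost_miss=1000.0, cost_false_alarm=50.0)
auto = expected_cost(p_fail, p_miss=0.02, p_false_alarm=0.15,
                     cost_miss=1000.0, cost_false_alarm=50.0)
print("prefer automation" if auto < human else "prefer human", human, auto)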
Examining risk in mineral exploration
Singer, Donald A.; Kouda, Ryoichi
1999-01-01
Successful mineral exploration strategy requires identification of some of the risk sources and considering them in the decision-making process so that controllable risk can be reduced. Risk is defined as chance of failure or loss. Exploration is an economic activity involving risk and uncertainty, so risk also must be defined in an economic context. Risk reduction can be addressed in three fundamental ways: (1) increasing the number of examinations; (2) increasing success probabilities; and (3) changing success probabilities per test by learning. These provide the framework for examining exploration risk. First, the number of prospects examined is increased, such as by joint venturing, thereby reducing chance of gambler's ruin. Second, success probability is increased by exploring for deposit types more likely to be economic, such as those with a high proportion of world-class deposits. For example, in looking for 100+ ton (>3 million oz) Au deposits, porphyry Cu-Au, or epithermal quartz alunite Au types require examining fewer deposits than Comstock epithermal vein and most other deposit types. For porphyry copper exploration, a strong positive relationship between area of sulfide minerals and deposits' contained Cu can be used to reduce exploration risk by only examining large sulfide systems. In some situations, success probabilities can be increased by examining certain geologic environments. Only 8% of kuroko massive sulfide deposits are world class, but success chances can be increased to about 15% by looking in settings containing sediments and rhyolitic rocks. It is possible to reduce risk of loss during mining by sequentially developing and expanding a mine—thus reducing capital exposed at early stages and reducing present value of risked capital. Because this strategy is easier to apply in some deposit types than in others, the strategy can affect deposit types sought. Third, risk is reduced by using prior information and by changing the independence of trials assumption, that is, by learning. Bayes' formula is used to change the probability of existence of the deposit sought on the basis of successive exploration stages. Perhaps the most important way to reduce exploration risk is to employ personnel with the appropriate experience and yet who are learning.
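The "learning" step described above is a straightforward Bayes update. The sketch below revises the probability that a deposit exists after each exploration stage, given assumed true-positive and false-positive rates for the stage's test; the rates and prior are illustrative placeholders.

def bayes_update(prior, p_pos_given_deposit, p_pos_given_no_deposit, observed_positive):
    # Revise P(deposit exists) after one exploration stage.
    if observed_positive:
        num = prior * p_pos_given_deposit
        den = num + (1 - prior) * p_pos_given_no_deposit
    else:
        num = prior * (1 - p_pos_given_deposit)
        den = num + (1 - prior) * (1 - p_pos_given_no_deposit)
    return num / den

p = 0.05  # placeholder prior from regional favourability
for positive in [True, True, False]:   # hypothetical results of successive stages
    p = bayes_update(p, p_pos_given_deposit=0.8, p_pos_given_no_deposit=0.3,
                     observed_positive=positive)
    print(round(p, 3))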
29 CFR 1910.1045 - Acrylonitrile.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., equipment failure, rupture of containers, or failure of control equipment, which results in an unexpected... decontamination is completed. (l) Waste disposal. AN waste, scrap, debris, bags, containers, or equipment shall be.... (3) Labels. (i) The employer shall assure that precautionary labels are affixed to all containers of...
Bowden, Vanessa K; Visser, Troy A W; Loft, Shayne
2017-06-01
It is generally assumed that drivers speed intentionally because of factors such as frustration with the speed limit or general impatience. The current study examined whether speeding following an interruption could be better explained by unintentional prospective memory (PM) failure. In these situations, interrupting drivers may create a PM task, with speeding the result of drivers forgetting their newly encoded intention to travel at a lower speed after interruption. Across 3 simulated driving experiments, corrected or uncorrected speeding in recently reduced speed zones (from 70 km/h to 40 km/h) increased on average from 8% when uninterrupted to 33% when interrupted. Conversely, the probability that participants traveled under their new speed limit in recently increased speed zones (from 40 km/h to 70 km/h) increased from 1% when uninterrupted to 23% when interrupted. Consistent with a PM explanation, this indicates that interruptions lead to a general failure to follow changed speed limits, not just to increased speeding. Further testing a PM explanation, Experiments 2 and 3 manipulated variables expected to influence the probability of PM failures and subsequent speeding after interruptions. Experiment 2 showed that performing a cognitively demanding task during the interruption, when compared with unfilled interruptions, increased the probability of initially speeding from 1% to 11%, but that participants were able to correct (reduce) their speed. In Experiment 3, providing participants with 10s longer to encode the new speed limit before interruption decreased the probability of uncorrected speeding after an unfilled interruption from 30% to 20%. Theoretical implications and implications for road design interventions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
neutron-Induced Failures in semiconductor Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wender, Stephen Arthur
2017-03-13
Single Event Effects are a very significant failure mode in modern semiconductor devices that may limit their reliability. Accelerated testing is important for the semiconductor industry. Considerably more work is needed in this field to mitigate the problem. Mitigation of this problem will probably come from physicists and electrical engineers working together.
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2010 CFR
2010-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2012 CFR
2012-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2011 CFR
2011-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2013 CFR
2013-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2012 CFR
2012-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2011 CFR
2011-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 27.729 - Retracting mechanism.
Code of Federal Regulations, 2014 CFR
2014-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 27.777 and 27.779. (g...
14 CFR 29.729 - Retracting mechanism.
Code of Federal Regulations, 2010 CFR
2010-01-01
... loads occurring during retraction and extension at any airspeed up to the design maximum landing gear... of— (1) Any reasonably probable failure in the normal retraction system; or (2) The failure of any... location and operation of the retraction control must meet the requirements of §§ 29.777 and 29.779. (g...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
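For orientation, the simplest reference point for any probability-of-failure estimator is crude Monte Carlo on the limit-state function, where failure is defined as g(x) <= 0. The sketch below is that brute-force baseline with a made-up g, not the POF-Darts algorithm itself, which replaces many of these evaluations with disk-packing and a surrogate.

import numpy as np

def g(x):
    # Hypothetical limit-state function: negative values denote failure.
    return 2.5 - np.linalg.norm(x, axis=1)

rng = np.random.default_rng(1)
samples = rng.normal(size=(1_000_000, 2))   # uncertain parameters ~ N(0, I)
pof = np.mean(g(samples) <= 0.0)
print(f"crude Monte Carlo POF estimate: {pof:.5f}")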
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
Reducing the Risk of Human Space Missions with INTEGRITY
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.
2003-01-01
The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology to decompose the system into subsystems and components, to quantify the failure risk as a function of the design elements and their corresponding probability of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding the probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
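As a minimal illustration of the kind of decomposition PRA performs (not the INTEGRITY model itself), the sketch below combines assumed component failure probabilities through simple series and redundant-parallel logic to obtain a system-level failure probability. The component names and values are hypothetical.

from math import prod

def series_failure(p_components):
    # System fails if any component in the series fails.
    return 1.0 - prod(1.0 - p for p in p_components)

def parallel_failure(p_components):
    # A redundant set fails only if every component in it fails.
    return prod(p_components)

# Hypothetical subsystem: two redundant pumps in series with a controller and a valve.
p_pumps = parallel_failure([1e-2, 1e-2])          # 1e-4
p_system = series_failure([p_pumps, 5e-4, 2e-4])  # pumps, controller, valve
print(f"system failure probability: {p_system:.2e}")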
Defense Strategies for Asymmetric Networked Systems with Discrete Components
Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.
2018-01-01
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588
A method for producing digital probabilistic seismic landslide hazard maps
Jibson, R.W.; Harp, E.L.; Michael, J.A.
2000-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include: (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24 000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10 m grid spacing using ARC/INFO GIS software on a UNIX computer. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure. © 2000 Elsevier Science B.V. All rights reserved.
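The last step described above, relating modeled Newmark displacement to probability of failure, amounts to fitting a monotonic curve to the landslide inventory. One common choice is a Weibull-type curve of the form P(f) = a[1 - exp(-b D^c)]; the coefficients in the sketch below are placeholders standing in for whatever values a regression against the inventory would produce.

import numpy as np

def failure_probability(displacement_cm, a=0.3, b=0.05, c=1.5):
    # Weibull-type curve relating Newmark displacement (cm) to probability of failure.
    # a, b, c are placeholder fit coefficients, not the published regression values.
    d = np.asarray(displacement_cm, dtype=float)
    return a * (1.0 - np.exp(-b * d**c))

for d in [1.0, 5.0, 15.0, 50.0]:
    print(f"D = {d:5.1f} cm  ->  P(failure) = {failure_probability(d):.3f}")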
Jibson, Randall W.; Harp, Edwin L.; Michael, John A.
1998-01-01
The 1994 Northridge, California, earthquake is the first earthquake for which we have all of the data sets needed to conduct a rigorous regional analysis of seismic slope instability. These data sets include (1) a comprehensive inventory of triggered landslides, (2) about 200 strong-motion records of the mainshock, (3) 1:24,000-scale geologic mapping of the region, (4) extensive data on engineering properties of geologic units, and (5) high-resolution digital elevation models of the topography. All of these data sets have been digitized and rasterized at 10-m grid spacing in the ARC/INFO GIS platform. Combining these data sets in a dynamic model based on Newmark's permanent-deformation (sliding-block) analysis yields estimates of coseismic landslide displacement in each grid cell from the Northridge earthquake. The modeled displacements are then compared with the digital inventory of landslides triggered by the Northridge earthquake to construct a probability curve relating predicted displacement to probability of failure. This probability function can be applied to predict and map the spatial variability in failure probability in any ground-shaking conditions of interest. We anticipate that this mapping procedure will be used to construct seismic landslide hazard maps that will assist in emergency preparedness planning and in making rational decisions regarding development and construction in areas susceptible to seismic slope failure.
Wang, X-M; Yin, S-H; Du, J; Du, M-L; Wang, P-Y; Wu, J; Horbinski, C M; Wu, M-J; Zheng, H-Q; Xu, X-Q; Shu, W; Zhang, Y-J
2017-07-01
Retreatment of tuberculosis (TB) often fails in China, yet the risk factors associated with the failure remain unclear. To identify risk factors for the treatment failure of retreated pulmonary tuberculosis (PTB) patients, we analyzed the data of 395 retreated PTB patients who received retreatment between July 2009 and July 2011 in China. PTB patients were categorized into 'success' and 'failure' groups by their treatment outcome. Univariable and multivariable logistic regression were used to evaluate the association between treatment outcome and socio-demographic as well as clinical factors. We also created an optimized risk score model to evaluate the predictive values of these risk factors on treatment failure. Of 395 patients, 99 (25·1%) were diagnosed as retreatment failure. Our results showed that risk factors associated with treatment failure included drug resistance, low education level, low body mass index (6 months), standard treatment regimen, retreatment type, positive culture result after 2 months of treatment, and the place where the first medicine was taken. An Optimized Framingham risk model was then used to calculate the risk scores of these factors. Place where first medicine was taken (temporary living places) received a score of 6, which was highest among all the factors. The predicted probability of treatment failure increases as risk score increases. Ten out of 359 patients had a risk score >9, which corresponded to an estimated probability of treatment failure >70%. In conclusion, we have identified multiple clinical and socio-demographic factors that are associated with treatment failure of retreated PTB patients. We also created an optimized risk score model that was effective in predicting the retreatment failure. These results provide novel insights for the prognosis and improvement of treatment for retreated PTB patients.
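A logistic mapping from total risk score to predicted probability of retreatment failure, in the spirit of the optimized risk score model described above; the intercept and slope below are invented placeholders, chosen only so that scores above 9 land above roughly 70%.

import math

def predicted_failure_probability(risk_score, intercept=-3.0, slope=0.45):
    # Logistic model: higher total risk score -> higher predicted probability of failure.
    # Coefficients are hypothetical, not the fitted values from the study.
    return 1.0 / (1.0 + math.exp(-(intercept + slope * risk_score)))

for score in [0, 3, 6, 9, 12]:
    print(score, round(predicted_failure_probability(score), 2))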
Failure Atlas for Rolling Bearings in Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallian, T. E.
2006-01-01
This Atlas is structured as a supplement to the book: T.E. Tallian: Failure Atlas for Hertz Contact Machine Elements, 2nd edition, ASME Press New York, (1999). The content of the atlas comprises plate pages from the book that contain bearing failure images, application data, and descriptions of failure mode, image, and suspected failure causes. Rolling bearings are a critical component of the mainshaft system, gearbox and generator in the rapidly developing technology of power generating wind turbines. The demands for long service life are stringent; the design load, speed and temperature regimes are demanding and the environmental conditions including weather, contamination, impediments to monitoring and maintenance are often unfavorable. As a result, experience has shown that the rolling bearings are prone to a variety of failure modes that may prevent achievement of design lives. Morphological failure diagnosis is extensively used in the failure analysis and improvement of bearing operation. Accumulated experience shows that the failure appearance and mode of failure causation in wind turbine bearings has many distinguishing features. The present Atlas is a first effort to collect an interpreted database of specifically wind turbine related rolling bearing failures and make it widely available. This Atlas is structured as a supplement to the book: T. E. Tallian: Failure Atlas for Hertz Contact Machine Elements, 2nd edition, ASME Press New York, (1999). The main body of that book is a comprehensive collection of self-contained pages called Plates, containing failure images, bearing and application data, and three descriptions: failure mode, image and suspected failure causes. The Plates are sorted by main failure mode into chapters. Each chapter is preceded by a general technical discussion of the failure mode, its appearance and causes. The Plates part is supplemented by an introductory part, describing the appearance classification and failure classification systems used, and by several indexes. The present Atlas is intended as a supplement to the book. It has the same structure but contains only Plate pages, arranged in chapters, each with a chapter heading page giving a short definition of the failure mode illustrated. Each Plate page is self-contained, with images, bearing and application data, and descriptions of the failure mode, the images and the suspected causes. Images are provided in two resolutions: the text page includes 6 by 9 cm images. In addition, high resolution image files are attached, to be retrieved by clicking on their 'push pin' icon. While the material in the present Atlas is self-contained, it is nonetheless a supplement to the book, and the complete interpretation of the terse image descriptions and of the system underlying the failure code presupposes familiarity with the book. Since this Atlas is a supplement to the book, its chapter numbering follows that of the book. Not all failure modes covered in the book have been found among the observed wind turbines. For that reason, and because of the omission of introductory matter, the chapter numbers in this Atlas are not a continuous sequence.
ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
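A stripped-down sketch of the kind of Monte Carlo availability calculation ACARA performs, reduced to a single repairable component: failure times are drawn from a Weibull distribution and repair times from an exponential (an assumption made here for illustration), and availability is the fraction of mission time spent up. All parameters are placeholders; the real program adds capacity states, spares, and resource constraints.

import numpy as np

rng = np.random.default_rng(42)

def simulate_availability(mission_time, weibull_shape, weibull_scale, mean_repair, n_runs=2000):
    up_fraction = []
    for _ in range(n_runs):
        t, uptime = 0.0, 0.0
        while t < mission_time:
            ttf = weibull_scale * rng.weibull(weibull_shape)   # time to failure
            uptime += min(ttf, mission_time - t)
            t += ttf
            if t >= mission_time:
                break
            t += rng.exponential(mean_repair)                  # repair duration
        up_fraction.append(uptime / mission_time)
    return float(np.mean(up_fraction))

print(simulate_availability(mission_time=8760.0, weibull_shape=1.5,
                            weibull_scale=1000.0, mean_repair=48.0))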
Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M
2018-04-01
The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess risk of each lesion being the first site of failure. The two models were used in combination to predict probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48, p < 0.001). Multivariable logistic regression of individual lesions at the time of first failure showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17, p < 0.001). Increasing peak standardized uptake value was significantly associated with lesion failure (OR = 1.26 per unit increase, 95% CI: 1.12-1.40, p < 0.001). The electronic model is available at http://bit.ly/LungModelFDG. We developed a failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Landslide Probability Assessment by the Derived Distributions Technique
NASA Astrophysics Data System (ADS)
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in the suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model is used along with the soil characteristic curve (suction vs. moisture) and the Mohr-Coulomb failure criterion in order to calculate the FOS of the slope. Data from two slopes located in steep tropical regions of the cities of Medellín (Colombia) and Rio de Janeiro (Brazil) were used to verify the model's performance. The results indicated significant differences between the obtained FOS values and the behavior observed in the field. The model shows relatively high values of FOS that do not reflect the instability of the analyzed slopes. For the two cases studied, the application of a simpler reliability concept (such as the Probability of Failure - PR or the Reliability Index - β), instead of the FOS, could lead to more realistic results.
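A heavily simplified sketch of the derived-distribution idea: storm mean intensity and duration are drawn from independent exponential distributions (the RPPP assumption), a placeholder rule converts storm depth into pore pressure in place of the Philip infiltration model, and an infinite-slope Mohr-Coulomb expression gives the factor of safety, whose exceedance probability is then read off the sample. All parameter values and the pore-pressure rule are illustrative, not those of the two study slopes.

import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Rectangular-pulse storms: exponential mean intensity (mm/h) and duration (h).
intensity = rng.exponential(10.0, n)
duration = rng.exponential(6.0, n)

# Placeholder pore-pressure rule: a fraction of storm depth becomes pore pressure (kPa)
# at the failure depth, capped at a saturated value.
pore_pressure = np.minimum(0.05 * intensity * duration, 15.0)

# Infinite-slope factor of safety (Mohr-Coulomb), with illustrative soil parameters.
c, phi, gamma, z, beta = 5.0, np.radians(30.0), 18.0, 1.2, np.radians(35.0)  # kPa, rad, kN/m3, m, rad
normal_stress = gamma * z * np.cos(beta)**2
shear_stress = gamma * z * np.sin(beta) * np.cos(beta)
fos = (c + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

print(f"P(FOS < 1) per storm: {np.mean(fos < 1.0):.4f}")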
Cycles till failure of silver-zinc cells with competing failure modes: Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
One hundred twenty-nine cells were run through charge-discharge cycles until failure. The experiment design was a variant of a central composite factorial in five factors. Preliminary data analysis consisted of response surface estimation of life. Batteries fail under two basic modes: a low-voltage condition and an internal shorting condition. A competing failure modes analysis using maximum likelihood estimation for the extreme value life distribution was performed. Extensive diagnostics such as residual plotting and probability plotting were employed to verify data quality and choice of model.
29 CFR 1910.1051 - 1,3-Butadiene.
Code of Federal Regulations, 2010 CFR
2010-07-01
... mixtures in intact containers or in transportation pipelines sealed in such a manner as to fully contain BD... means any occurrence such as, but not limited to, equipment failure, rupture of containers, or failure... representative employee exposure for that operation from the shift during which the highest exposure is expected...
Atun, Rifat; Gurol–Urganci, Ipek; Hone, Thomas; Pell, Lisa; Stokes, Jonathan; Habicht, Triin; Lukka, Kaija; Raaper, Elin; Habicht, Jarno
2016-01-01
Background Following independence from the Soviet Union in 1991, Estonia introduced a national insurance system, consolidated the number of health care providers, and introduced family medicine centred primary health care (PHC) to strengthen the health system. Methods Using routinely collected health billing records for 2005–2012, we examine health system utilisation for seven ambulatory care sensitive conditions (ACSCs) (asthma, chronic obstructive pulmonary disease [COPD], depression, Type 2 diabetes, heart failure, hypertension, and ischemic heart disease [IHD]), and by patient characteristics (gender, age, and number of co–morbidities). The data set contained 552 822 individuals. We use patient level data to test the significance of trends, and employ multivariate regression analysis to evaluate the probability of inpatient admission while controlling for patient characteristics, health system supply–side variables, and PHC use. Findings Over the study period, utilisation of PHC increased, whilst inpatient admissions fell. Service mix in PHC changed with increases in phone, email, nurse, and follow–up (vs initial) consultations. Healthcare utilisation for diabetes, depression, IHD and hypertension shifted to PHC, whilst for COPD, heart failure and asthma utilisation in outpatient and inpatient settings increased. Multivariate regression indicates higher probability of inpatient admission for males, older patient and especially those with multimorbidity, but protective effect for PHC, with significantly lower hospital admission for those utilising PHC services. Interpretation Our findings suggest health system reforms in Estonia have influenced the shift of ACSCs from secondary to primary care, with PHC having a protective effect in reducing hospital admissions. PMID:27648258
Modeling wildland fire containment with uncertain flame length and fireline width
Romain Mees; David Strauss; Richard Chase
1993-01-01
We describe a mathematical model for the probability that a fireline succeeds in containing a fire. The probability increases as the fireline width increases, and also as the fire's flame length decreases. More interestingly, uncertainties in width and flame length affect the computed containment probabilities, and can thus indirectly affect the optimum allocation...
Diagnostic reasoning techniques for selective monitoring
NASA Technical Reports Server (NTRS)
Homem-De-mello, L. S.; Doyle, R. J.
1991-01-01
An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
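A toy version of the Bayesian step described above: one assertion has been violated, exactly one of the implicated components is assumed to have failed, and the prior failure probabilities are revised with Bayes' rule under an assumed detection model. The component names, priors, and likelihoods are all hypothetical.

# Single-fault diagnosis toy. Names and numbers are hypothetical.
priors = {"valve": 0.01, "sensor": 0.02, "pump": 0.005}
likelihood = {"valve": 0.9, "sensor": 0.7, "pump": 0.3}  # P(assertion violated | that component failed)

unnormalised = {c: priors[c] * likelihood[c] for c in priors}
total = sum(unnormalised.values())
posterior = {c: v / total for c, v in unnormalised.items()}

for c, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{c}: {p:.3f}")   # most likely failed component first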
Probabilistic metrology or how some measurement outcomes render ultra-precise estimates
NASA Astrophysics Data System (ADS)
Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.
2016-10-01
We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Pfenninger, Stefan
In this paper, we propose a strategy to control the self-organizing dynamics of the Bak-Tang-Wiesenfeld (BTW) sandpile model on complex networks by allowing some degree of failure tolerance for the nodes and introducing additional active dissipation while taking the risk of possible node damage. We show that the probability for large cascades significantly increases or decreases respectively when the risk for node damage outweighs the active dissipation and when the active dissipation outweighs the risk for node damage. By considering the potential additional risk from node damage, a non-trivial optimal active dissipation control strategy which minimizes the total cost in the system can be obtained. Under some conditions the introduced control strategy can decrease the total cost in the system compared to the uncontrolled model. Moreover, when the probability of damaging a node experiencing failure tolerance is greater than the critical value, then no matter how successful the active dissipation control is, the total cost of the system will have to increase. This critical damage probability can be used as an indicator of the robustness of a network or system. Copyright (C) EPLA, 2015
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
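For reference, under the usual assumption of independent symbol errors at the Reed-Solomon decoder input, a t-error-correcting RS(n, k) code fails to decode correctly when more than t of its n symbols are in error, so the word error probability is a binomial tail. The sketch below evaluates that tail; the symbol error probability is a placeholder rather than a value derived from the Viterbi inner decoder.

from math import comb

def rs_word_error_probability(n, t, p_symbol):
    # P(more than t of n symbols in error), assuming independent symbol errors.
    return sum(comb(n, i) * p_symbol**i * (1 - p_symbol)**(n - i)
               for i in range(t + 1, n + 1))

# RS(255, 223) corrects t = 16 symbol errors; p_symbol here is illustrative.
print(f"{rs_word_error_probability(n=255, t=16, p_symbol=0.01):.3e}")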
Lifetime Predictions of a Titanium Silicate Glass with Machined Flaws
NASA Technical Reports Server (NTRS)
Tucker, Dennis S.; Nettles, Alan T.; Cagle, Holly
2003-01-01
A dynamic fatigue study was performed on a Titanium Silicate glass to assess its susceptibility to delayed failure and to compare the results with those of a previous study. Fracture mechanics techniques were used to analyze the results for the purpose of making lifetime predictions. The material strength and lifetime were seen to increase due to the removal of residual stress through grinding and polishing. The influence on time-to-failure is addressed for the cases with and without residual stress present. Titanium silicate glass, otherwise known as ultra-low expansion (ULE)* glass, is a candidate for use in applications requiring low thermal expansion characteristics, such as telescope mirrors. The Hubble Space Telescope's primary mirror was manufactured from ULE glass. ULE contains 7.5% titanium dioxide, which in combination with silica results in a homogeneous glass with a linear expansion coefficient near zero. A previous study assessed the susceptibility of ULE to delayed failure; that study was based on a 230/270 grit surface. The grinding and polishing process reduces the surface flaw size and subsurface damage, and relieves residual stress by removing material with successively smaller grinding media. This results in an increase in strength of the optic during the grinding and polishing sequence. Thus, a second study was undertaken using samples with a surface finish typically achieved for mirror elements, to observe the effects of surface finishing on the time-to-failure predictions. An allowable stress can be calculated for this material based upon modulus of rupture data; however, this does not take into account the problem of delayed failure, most likely due to stress corrosion, which can significantly shorten lifetime. Fortunately, a theory based on fracture mechanics has been developed enabling lifetime predictions to be made for brittle materials susceptible to delayed failure. Knowledge of the factors governing the rate of subcritical flaw growth in a given environment enables the development of relations between lifetime, applied stress, and failure probability for the material under study. Dynamic fatigue is one method of obtaining the necessary information to develop these relationships. In this study, the dynamic fatigue method was used to construct a time-to-failure diagram for polished ULE glass.
Role of stress triggering in earthquake migration on the North Anatolian fault
Stein, R.S.; Dieterich, J.H.; Barka, A.A.
1996-01-01
Ten M ≥ 6.7 earthquakes ruptured 1,000 km of the North Anatolian fault (Turkey) during 1939-92, providing an unsurpassed opportunity to study how one large shock sets up the next. Calculations of the change in Coulomb failure stress reveal that 9 out of 10 ruptures were brought closer to failure by the preceding shocks, typically by 5 bars, equivalent to 20 years of secular stressing. We translate the calculated stress changes into earthquake probabilities using an earthquake-nucleation constitutive relation, which includes both permanent and transient stress effects. For the typical 10-year period between triggering and subsequent rupturing shocks in the Anatolia sequence, the stress changes yield an average three-fold gain in the ensuing earthquake probability. Stress is now calculated to be high at several isolated sites along the fault. During the next 30 years, we estimate a 15% probability of a M ≥ 6.7 earthquake east of the major eastern center of Erzincan, and a 12% probability for a large event south of the major western port city of Izmit. Such stress-based probability calculations may thus be useful to assess and update earthquake hazards elsewhere. © 1997 Elsevier Science Ltd.
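A small arithmetic sketch of the first step described above, with hypothetical stress changes and an assumed effective friction coefficient of 0.4; the 0.25 bar/yr figure simply restates the abstract's equivalence of 5 bars to 20 years of secular stressing. The rate/state nucleation calculation used for the probability gain is not reproduced here.

def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (bars); positive d_normal means unclamping."""
    return d_shear + mu_eff * d_normal

d_cfs = coulomb_stress_change(d_shear=4.0, d_normal=2.5)  # hypothetical stress changes, bars
stressing_rate = 0.25  # bar/yr, restating "5 bars ~ 20 years of secular stressing"
print(f"dCFS = {d_cfs:.1f} bars -> clock advance of about {d_cfs / stressing_rate:.0f} years")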
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...
2017-10-26
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.
SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Xiao, Y; Wang, J
2014-06-15
Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated from the product of probability of occurrence (O), the severity of effect (S), and detectability of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S, and D values. Results: 15 possible failure modes were identified, the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and the checklist of FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: The influencing factors with RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the QA efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future.
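A minimal sketch of the scoring step described above: the RPN is the product of the occurrence, severity and detectability scores, and factors scoring above the paper's threshold of 50 are flagged. The factor names and scores below are hypothetical examples, not items from the paper's checklist.

def rpn(occurrence, severity, detectability):
    """Risk probability number: each factor scored 1-10, so RPN ranges from 1 to 1000."""
    return occurrence * severity * detectability

# Hypothetical influencing factors for one monthly-QA failure mode (not items from the paper)
factors = {
    "output constancy drift not noticed": (4, 6, 3),
    "alignment phantom set up incorrectly": (3, 5, 2),
}
for name, (o, s, d) in factors.items():
    score = rpn(o, s, d)
    flag = "highly correlated (RPN > 50)" if score > 50 else "lower priority"
    print(f"{name}: RPN = {score}, {flag}")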
Schmeida, Mary; Savrin, Ronald A
2012-01-01
Heart failure readmission among the elderly is frequent and costly to both the patient and the Medicare trust fund. In this study, the authors explore the factors that are associated with states having heart failure readmission rates that are higher than the U.S. national rate. The setting is acute inpatient hospitals. Data from all 50 states and multivariate regression analysis are used. The dependent variable, Heart Failure 30-day Readmission Worse than U.S. Rate, is based on adult Medicare Fee-for-Service patients hospitalized with a primary discharge diagnosis of heart failure and for whom a subsequent inpatient readmission occurred within 30 days of their last discharge. One key variable, a higher share of the resident population speaking a primary language other than English at home, is significantly associated with a decreased probability of a state ranking "worse" on heart failure 30-day readmission. In contrast, states with a higher median income, more total days of care per 1,000 Medicare enrollees, and a greater percentage of Medicare enrollees with prescription drug coverage have a greater probability of a heart failure 30-day readmission rate "worse" than the U.S. national rate. Case management interventions targeting health literacy may be more effective than other factors in improving state-level hospital status on heart failure 30-day readmission. Factors such as total days of care per 1,000 Medicare enrollees and improving patient access to postdischarge medication(s) may not be as important as literacy. Interventions aimed at preventing disparities should consider higher income population groups as vulnerable for readmission.
Arancibia, F; Ewig, S; Martinez, J A; Ruiz, M; Bauer, T; Marcos, M A; Mensa, J; Torres, A
2000-07-01
The aim of the study was to determine the causes and prognostic implications of antimicrobial treatment failures in patients with nonresponding and progressive life-threatening, community-acquired pneumonia. Forty-nine patients hospitalized during a 16-month period with a presumptive diagnosis of community-acquired pneumonia, failure to respond to antimicrobial treatment, and documented repeat microbial investigation ≥72 h after initiation of in-hospital antimicrobial treatment were recorded. A definite etiology of treatment failure could be established in 32 of 49 (65%) patients, and nine additional patients (18%) had a probable etiology. Treatment failures were mainly infectious in origin and included primary, persistent, and nosocomial infections (n = 10 [19%], 13 [24%], and 11 [20%] of causes, respectively). Definite but not probable persistent infections were mostly due to microbial resistance to the administered initial empiric antimicrobial treatment. Nosocomial infections were particularly frequent in patients with progressive pneumonia. Definite persistent infections and nosocomial infections had the highest associated mortality rates (75 and 88%, respectively). Nosocomial pneumonia was the only cause of treatment failure independently associated with death in multivariate analysis (RR, 16.7; 95% CI, 1.4 to 194.9; p = 0.03). We conclude that the detection of microbial resistance and the diagnosis of nosocomial pneumonia are the two major challenges in hospitalized patients with community-acquired pneumonia who do not respond to initial antimicrobial treatment. In order to establish these potentially life-threatening etiologies, a regular microbial reinvestigation seems mandatory for all patients presenting with antimicrobial treatment failures.
Predictors of treatment failure in young patients undergoing in vitro fertilization.
Jacobs, Marni B; Klonoff-Cohen, Hillary; Agarwal, Sanjay; Kritz-Silverstein, Donna; Lindsay, Suzanne; Garzo, V Gabriel
2016-08-01
The purpose of the study was to evaluate whether routinely collected clinical factors can predict in vitro fertilization (IVF) failure among young, "good prognosis" patients predominantly with secondary infertility who are less than 35 years of age. Using de-identified clinic records, 414 women <35 years undergoing their first autologous IVF cycle were identified. Logistic regression was used to identify patient-driven clinical factors routinely collected during fertility treatment that could be used to model predicted probability of cycle failure. One hundred ninety-seven patients with both primary and secondary infertility had a failed IVF cycle, and 217 with secondary infertility had a successful live birth. None of the women with primary infertility had a successful live birth. The significant predictors for IVF cycle failure among young patients were fewer previous live births, history of biochemical pregnancies or spontaneous abortions, lower baseline antral follicle count, higher total gonadotropin dose, unknown infertility diagnosis, and lack of at least one fair to good quality embryo. The full model showed good predictive value (c = 0.885) for estimating risk of cycle failure; at ≥80 % predicted probability of failure, sensitivity = 55.4 %, specificity = 97.5 %, positive predictive value = 95.4 %, and negative predictive value = 69.8 %. If this predictive model is validated in future studies, it could be beneficial for predicting IVF failure in good prognosis women under the age of 35 years.
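For readers who want the mechanics of the threshold-based performance figures reported above (sensitivity, specificity, PPV and NPV at a ≥80% predicted probability of failure), here is a hedged sketch using synthetic predicted probabilities and outcomes; the clinic data and the fitted logistic model from the study are not reproduced.

import numpy as np

def metrics_at_threshold(p_fail, failed, thr=0.80):
    """Sensitivity, specificity, PPV and NPV for the rule 'predict cycle failure
    when the predicted probability of failure is >= thr'."""
    p_fail = np.asarray(p_fail)
    failed = np.asarray(failed).astype(bool)
    pred = p_fail >= thr
    tp = np.sum(pred & failed)
    fp = np.sum(pred & ~failed)
    fn = np.sum(~pred & failed)
    tn = np.sum(~pred & ~failed)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Synthetic predicted probabilities and outcomes (1 = failed cycle); purely illustrative
rng = np.random.default_rng(1)
failed = rng.integers(0, 2, 400)
p_fail = np.clip(0.5 * failed + rng.normal(0.3, 0.2, 400), 0.0, 1.0)
print(metrics_at_threshold(p_fail, failed))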
Effects of Gas Pressure on the Failure Characteristics of Coal
NASA Astrophysics Data System (ADS)
Xie, Guangxiang; Yin, Zhiqiang; Wang, Lei; Hu, Zuxiang; Zhu, Chuanqi
2017-07-01
Several experiments were conducted using self-developed equipment for visual gas-solid coupling mechanics. The raw coal specimens were stored in a container filled with gas (99% CH4) under different initial gas pressure conditions (0.0, 0.5, 1.0, and 1.5 MPa) for 24 h prior to testing. Then, the specimens were tested in a rock-testing machine, and the mechanical properties, surface deformation and failure modes were recorded using strain gauges, an acoustic emission (AE) system and a camera. An analysis of the fractals of fragments and dissipated energy was performed to understand the changes observed in the stress-strain and crack propagation behaviour of the gas-containing coal specimens. The results demonstrate that increased gas pressure leads to a reduction in the uniaxial compression strength (UCS) of gas-containing coal and the critical dilatancy stress. The AE, surface deformation and fractal analysis results show that the failure mode changes with the gas state. Notably, a higher initial gas pressure causes more severe crack damage and failure of the gas-containing coal samples. The dissipated energy characteristic in the failure process of a gas-containing coal sample is analysed using a combination of fractal theory and energy principles. Based on fracture mechanics theory and the accompanying analyses and calculations, the stress intensity factor at crack tips increases as the gas pressure increases, which is the main cause of the reduction in the UCS and critical dilatancy stress and explains the influence of gas on coal failure. More severe failure occurs in gas-containing coal under a high gas pressure and low exterior load.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly, and these sources often dominate component-level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to determine whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system-level test data or operational data. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
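A hedged sketch of the baseline calculation that the paper cautions against taking at face value: handbook-style component failure rates summed for a series system, with a hypothetical multiplier standing in for unmodelled integration risk. All rates, the mission time and the multiplier values are illustrative assumptions, not handbook entries.

import math

# Hypothetical handbook-style component failure rates (failures per hour)
component_rates = {"avionics box": 2.0e-6, "valve": 5.0e-6, "battery": 1.0e-6}
mission_hours = 10.0

lam_system = sum(component_rates.values())  # series system: constant failure rates add
p_fail_predicted = 1.0 - math.exp(-lam_system * mission_hours)
print(f"predicted mission failure probability: {p_fail_predicted:.2e}")

# One crude way to acknowledge unmodelled integration risk: scale the predicted rate
# by a factor k >= 1 and see how sensitive the acceptance decision would be.
for k in (1, 3, 10):
    p = 1.0 - math.exp(-k * lam_system * mission_hours)
    print(f"  integration-risk multiplier {k:2d}: {p:.2e}")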
A Statistical Perspective on Highly Accelerated Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Edward V.
Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning the assumed relationship between the stress level and performance. In addition, this document contains recommendations for conducting more informative accelerated tests.
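One way to make the "zero failures at high stress demonstrates reliability" claim concrete is the classical success-run bound sketched below. Even when this bound is computed correctly, it applies at the test stress; carrying it back to normal use requires the assumed stress-versus-failure-probability relationship that the document identifies as the weak link. The sample sizes and confidence level are illustrative.

def reliability_lower_bound(n_units, confidence=0.90):
    """One-sided lower confidence bound on reliability after a zero-failure test
    of n_units (classical success-run / binomial bound): R >= (1 - C)**(1/n)."""
    return (1.0 - confidence) ** (1.0 / n_units)

for n in (5, 10, 30):
    print(f"{n:2d} units, 0 failures -> R >= {reliability_lower_bound(n):.3f} at 90% confidence (at the test stress)")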
Ruggeri, Annalisa; Labopin, Myriam; Sormani, Maria Pia; Sanz, Guillermo; Sanz, Jaime; Volt, Fernanda; Michel, Gerard; Locatelli, Franco; Diaz De Heredia, Cristina; O'Brien, Tracey; Arcese, William; Iori, Anna Paola; Querol, Sergi; Kogler, Gesine; Lecchi, Lucilla; Pouthier, Fabienne; Garnier, Federico; Navarrete, Cristina; Baudoux, Etienne; Fernandes, Juliana; Kenzey, Chantal; Eapen, Mary; Gluckman, Eliane; Rocha, Vanderson; Saccardi, Riccardo
2014-09-01
Umbilical cord blood transplant recipients are exposed to an increased risk of graft failure, a complication leading to a higher rate of transplant-related mortality. The decision and timing to offer a second transplant after graft failure is challenging. With the aim of addressing this issue, we analyzed engraftment kinetics and outcomes of 1268 patients (73% children) with acute leukemia (64% acute lymphoblastic leukemia, 36% acute myeloid leukemia) in remission who underwent single-unit umbilical cord blood transplantation after a myeloablative conditioning regimen. The median follow-up was 31 months. The overall survival rate at 3 years was 47%; the 100-day cumulative incidence of transplant-related mortality was 16%. Longer time to engraftment was associated with increased transplant-related mortality and shorter overall survival. The cumulative incidence of neutrophil engraftment at day 60 was 86%, while the median time to achieve engraftment was 24 days. Probability density analysis showed that the likelihood of engraftment after umbilical cord blood transplantation increased after day 10, peaked on day 21 and slowly decreased to 21% by day 31. Beyond day 31, the probability of engraftment dropped rapidly, and the residual probability of engrafting after day 42 was 5%. Graft failure was reported in 166 patients, and 66 of them received a second graft (allogeneic, n=45). Rescue actions, such as the search for another graft, should be considered starting after day 21. A diagnosis of graft failure can be established in patients who have not achieved neutrophil recovery by day 42. Moreover, subsequent transplants should not be postponed after day 42. Copyright© Ferrata Storti Foundation.
Cascading failures in ac electricity grids.
Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan
2016-09-01
Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
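The simplest single-mode case underlying the methods summarized above is the normal stress-strength interference problem; a hedged sketch with hypothetical moments is shown below. The advanced methods discussed in the paper (e.g., for multiple interacting failure modes and system-level problems) generalize this calculation rather than being reproduced by it.

from math import sqrt
from scipy.stats import norm

def failure_probability(mu_R, sd_R, mu_S, sd_S):
    """P(R < S) for independent normal resistance R and load S:
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2), P_f = Phi(-beta)."""
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
    return beta, norm.cdf(-beta)

beta, pf = failure_probability(mu_R=60.0, sd_R=6.0, mu_S=40.0, sd_S=8.0)  # hypothetical units
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.2e}")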
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but it describes them only qualitatively. In our model, the probability functions of failure units in both single-grain segments and polygrain segments are considered, instead of polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km2 and 2410 km2 and volumes between 0.002 km3 and 179 km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
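A hedged sketch of the log-normal description discussed above: fitting log-normal parameters to scar volumes by log-moments and reading off an exceedance probability for a large volume. The volume list below is synthetic and stands in for the published scar catalogue.

import numpy as np
from scipy.stats import norm

# Synthetic failure-scar volumes in km^3 (the published catalogue is not reproduced here)
volumes = np.array([0.01, 0.05, 0.2, 0.4, 0.9, 1.1, 2.5, 4.0, 8.0, 20.0, 60.0, 150.0])

log_v = np.log(volumes)
mu, sigma = log_v.mean(), log_v.std(ddof=1)  # log-normal parameters from log-moments
print(f"median volume ~ {np.exp(mu):.2f} km^3")

v_big = 50.0  # a large, potentially tsunamigenic volume (km^3)
p_exceed = 1.0 - norm.cdf((np.log(v_big) - mu) / sigma)
print(f"P(V > {v_big} km^3) ~ {p_exceed:.3f}")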
Koster-Brouwer, Maria E; Verboom, Diana M; Scicluna, Brendon P; van de Groep, Kirsten; Frencken, Jos F; Janssen, Davy; Schuurman, Rob; Schultz, Marcus J; van der Poll, Tom; Bonten, Marc J M; Cremer, Olaf L
2018-03-01
Discrimination between infectious and noninfectious causes of acute respiratory failure is difficult in patients admitted to the ICU after a period of hospitalization. Using a novel biomarker test (SeptiCyte LAB), we aimed to distinguish between infection and inflammation in this population. Nested cohort study. Two tertiary mixed ICUs in the Netherlands. Hospitalized patients with acute respiratory failure requiring mechanical ventilation upon ICU admission from 2011 to 2013. Patients having an established infection diagnosis or an evidently noninfectious reason for intubation were excluded. None. Blood samples were collected upon ICU admission. Test results were categorized into four probability bands (higher bands indicating higher infection probability) and compared with the infection plausibility as rated by post hoc assessment using strict definitions. Of 467 included patients, 373 (80%) were treated for a suspected infection at admission. Infection plausibility was classified as ruled out, undetermined, or confirmed in 135 (29%), 135 (29%), and 197 (42%) patients, respectively. Test results correlated with infection plausibility (Spearman's rho 0.332; p < 0.001). After exclusion of undetermined cases, positive predictive values were 29%, 54%, and 76% for probability bands 2, 3, and 4, respectively, whereas the negative predictive value for band 1 was 76%. Diagnostic discrimination of SeptiCyte LAB and C-reactive protein was similar (p = 0.919). Among hospitalized patients admitted to the ICU with clinical uncertainty regarding the etiology of acute respiratory failure, the diagnostic value of SeptiCyte LAB was limited.
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Of the many likely failure scenarios, those that are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Of the five most hazardous units considered, the pipeline used for the transportation of the ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model as well as the error in the distribution parameters on the maintenance interval.
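A back-of-the-envelope sketch of the risk comparison and "reverse" step described above, with all frequencies, consequences and the acceptability criterion chosen as hypothetical placeholders; the fault-tree and reliability-model details of the case study are not reproduced.

def annual_risk(failure_frequency, consequence):
    """Risk expressed as expected loss per year."""
    return failure_frequency * consequence

# Hypothetical numbers for a single failure scenario (e.g., a transfer-pipeline leak)
freq = 1.0e-3            # top-event frequency from a fault tree, per year
consequence = 5.0e6      # loss per failure, arbitrary monetary units
acceptable_risk = 1.0e3  # acceptable expected loss per year

risk = annual_risk(freq, consequence)
print(f"estimated risk = {risk:.1f} per year (acceptable: {acceptable_risk:.1f})")

# 'Reverse' step: the top-event frequency that would satisfy the criterion, which the
# inspection/maintenance interval must then be chosen to achieve.
required_freq = acceptable_risk / consequence
print(f"required failure frequency <= {required_freq:.1e} per year")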
NASA Technical Reports Server (NTRS)
Runkle, R.; Henson, K.
1982-01-01
A failure analysis of the parachute on the Space Transportation System 3 flight's solid rocket boosters is presented. During the reentry phase of the two Solid Rocket Boosters (SRBs), one 115 ft diameter main parachute failed on the right hand SRB (A12). This parachute failure caused the SRB to impact the ocean at 110 ft/sec in lieu of the expected three-parachute impact velocity of 88 ft/sec. This higher impact velocity relates directly to greater SRB aft skirt and motor case damage. The cause of the parachute failure, the potential risks of losing an SRB as a result of this failure, and recommendations to ensure that the probability of chute failures of this type in the future will be low are discussed.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
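To show why coverage dominates reliability in such architectures, here is a minimal sketch for a two-unit reconfigurable pair; the textbook approximation R = R^2 + 2cR(1 - R) is used, and the failure rate, mission time and coverage values are assumptions rather than figures from the report.

import math

def duplex_reliability(lam, t, coverage):
    """Two-unit reconfigurable pair with per-unit failure rate lam: the system
    survives if both units survive, or if exactly one fails and that failure is
    successfully detected, isolated and recovered (probability 'coverage')."""
    r = math.exp(-lam * t)  # single-unit reliability over [0, t]
    return r**2 + 2.0 * coverage * r * (1.0 - r)

lam, t = 1.0e-4, 10.0  # assumed failure rate (per hour) and mission time (hours)
for c in (0.90, 0.99, 0.9999):
    print(f"coverage = {c}: R = {duplex_reliability(lam, t, c):.8f}")

Because the uncovered-failure term scales as 2(1 - c)R(1 - R), improving coverage from 0.99 to 0.9999 cuts the dominant contribution to unreliability by roughly two orders of magnitude in this example.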
1980-03-14
[Garbled program-listing excerpt. The recoverable content describes user inputs to the program: the probability of element failure, the standard deviation of the relative error of the weights, the standard deviation of the phase error, the weight structures in the x and y coordinates, and the number of elements.]
NASA Astrophysics Data System (ADS)
Faulkner, B. R.; Lyon, W. G.
2001-12-01
We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
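A hedged Monte Carlo sketch of the headline quantity above, the probability of failing to achieve 4-log attenuation, using made-up normal distributions for the log-removal contributed by three mechanisms; the study derives its input distributions from soil hydraulic, sorption and inactivation data rather than from the placeholders used here.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions for log10 removal contributed by each mechanism;
# the study derives these from soil-specific hydraulic and sorption parameters.
physical = rng.normal(1.5, 0.6, n)
sorption = rng.normal(1.8, 0.8, n)
inactivation = rng.normal(1.2, 0.5, n)

total_log_removal = physical + sorption + inactivation
p_failure = np.mean(total_log_removal < 4.0)  # failure to achieve 4-log attenuation
print(f"P(failure to achieve 4-log attenuation) ~ {p_failure:.3f}")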
NASA Technical Reports Server (NTRS)
Putcha, Chandra S.; Mikula, D. F. Kip; Dueease, Robert A.; Dang, Lan; Peercy, Robert L.
1997-01-01
This paper deals with the development of a reliability methodology to assess the consequences of using hardware, without failure analysis or corrective action, that has previously demonstrated that it did not perform per specification. The subject of this paper arose from the need to provide a detailed probabilistic analysis to calculate the change in probability of failure with respect to the base, or non-failed, hardware. The methodology used for the analysis is primarily based on principles of Monte Carlo simulation. The random variables in the analysis are the Maximum Time of Operation (MTO) and the Operation Time of each Unit (OTU). The failure of a unit is considered to happen if OTU is less than MTO for the Normal Operational Period (NOP) in which this unit is used. The NOP as a whole uses a total of 4 units. Two cases are considered. In the first specialized scenario, an operation or system failure is considered to happen if any of the units used during the NOP fails. In the second specialized scenario, an operation or system failure is considered to happen only if any two of the units used during the NOP fail together. The probability of failure of the units and the system as a whole is determined for 3 kinds of systems: Perfect System, Imperfect System 1, and Imperfect System 2. In a Perfect System, the operation time of the failed unit is the same as the MTO. In an Imperfect System 1, the operation time of the failed unit is assumed to be 1 percent of the MTO. In an Imperfect System 2, the operation time of the failed unit is assumed to be zero. In addition, the simulated operation time of failed units is assumed to be 10 percent of that of the corresponding units before reaching zero. Monte Carlo simulation analysis is used for this study. Necessary software has been developed as part of this study to perform the reliability calculations. The results of the analysis showed that the predicted change in failure probability (P_F) for the previously failed units is as high as 49 percent above the baseline (perfect system) for the worst case. The predicted change in system P_F for the previously failed units is as high as 36% for single unit failure without any redundancy. For redundant systems, with dual unit failure, the predicted change in P_F for the previously failed units is as high as 16%. These results will help management to make decisions regarding the consequences of using previously failed units without adequate failure analysis or corrective action.
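A minimal Monte Carlo sketch in the spirit of the two failure criteria described above (any unit failing versus at least two units failing), with assumed uniform and normal distributions for MTO and OTU; the distributions, sample sizes and perfect/imperfect-system variants from the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(7)
n_sims, n_units = 100_000, 4

# Assumed distributions; the paper's actual MTO and OTU distributions are not given here.
mto = rng.uniform(8.0, 12.0, n_sims)            # Maximum Time of Operation per trial
otu = rng.normal(15.0, 3.0, (n_sims, n_units))  # Operation Time of each Unit

unit_fails = otu < mto[:, None]                 # a unit fails if its OTU is less than MTO
p_any = np.mean(unit_fails.any(axis=1))         # scenario 1: any single unit failure fails the NOP
p_two = np.mean(unit_fails.sum(axis=1) >= 2)    # scenario 2: at least two units must fail
print(f"P(system failure), single-unit criterion: {p_any:.4f}")
print(f"P(system failure), two-unit criterion:    {p_two:.4f}")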
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Probability of survival of implant-supported metal ceramic and CAD/CAM resin nanoceramic crowns.
Bonfante, Estevam A; Suzuki, Marcelo; Lorenzoni, Fábio C; Sena, Lídia A; Hirata, Ronaldo; Bonfante, Gerson; Coelho, Paulo G
2015-08-01
To evaluate the probability of survival and failure modes of implant-supported resin nanoceramic relative to metal-ceramic crowns. Resin nanoceramic molar crowns (LU) (Lava Ultimate, 3M ESPE, USA) were milled and metal-ceramic (MC) (Co-Cr alloy, Wirobond C+, Bego, USA) with identical anatomy were fabricated (n=21). The metal coping and a burnout-resin veneer were created by CAD/CAM, using an abutment (Stealth-abutment, Bicon LLC, USA) and a milled crown from the LU group as models for porcelain hot-pressing (GC-Initial IQ-Press, GC, USA). Crowns were cemented, the implants (n=42, Bicon) embedded in acrylic-resin for mechanical testing, and subjected to single-load to fracture (SLF, n=3 each) for determination of step-stress profiles for accelerated-life testing in water (n=18 each). Weibull curves (50,000 cycles at 200N, 90% CI) were plotted. Weibull modulus (m) and characteristic strength (η) were calculated and a contour plot used (m versus η) for determining differences between groups. Fractography was performed in SEM and polarized-light microscopy. SLF mean values were 1871N (±54.03) for MC and 1748N (±50.71) for LU. Beta values were 0.11 for MC and 0.49 for LU. Weibull modulus was 9.56 and η=1038.8N for LU, and m=4.57 and η=945.42N for MC (p>0.10). Probability of survival (50,000 and 100,000 cycles at 200 and 300N) was 100% for LU and 99% for MC. Failures were cohesive within LU. In MC crowns, porcelain veneer fractures frequently extended to the supporting metal coping. Probability of survival was not different between crown materials, but failure modes differed. In load bearing regions, similar reliability should be expected for metal ceramics, known as the gold standard, and resin nanoceramic crowns over implants. Failure modes involving porcelain veneer fracture and delamination in MC crowns are less likely to be successfully repaired compared to cohesive failures in resin nanoceramic material. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
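A hedged reading of the reported Weibull parameters as a static, single-load survival probability at 200 N (ignoring the cyclic step-stress acceleration used in the study); the two-parameter Weibull form P_s = exp[-(load/eta)^m] and the m and eta values are taken directly from the abstract.

import math

def weibull_survival(load, eta, m):
    """Two-parameter Weibull probability of survival at a given load."""
    return math.exp(-((load / eta) ** m))

# Weibull parameters reported in the abstract (characteristic strength eta in N, modulus m)
crowns = {"resin nanoceramic (LU)": (1038.8, 9.56), "metal ceramic (MC)": (945.42, 4.57)}
for name, (eta, m) in crowns.items():
    print(f"{name}: P(survival at a 200 N load) = {weibull_survival(200.0, eta, m):.4f}")

The near-unity values are consistent with the reported 99-100% survival probabilities, although the study's figures come from use-level analysis of the step-stress data rather than this direct substitution.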
The less familiar side of heart failure: symptomatic diastolic dysfunction.
Morris, Spencer A; Van Swol, Mark; Udani, Bela
2005-06-01
Arrange for echocardiography or radionuclide angiography within 72 hours of a heart failure exacerbation. An ejection fraction >50% in the presence of signs and symptoms of heart failure makes the diagnosis of diastolic heart failure probable. To treat associated hypertension, use angiotensin receptor blockers (ARBs), angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, calcium channel blockers, or diuretics to achieve a blood pressure goal of <130/80 mm Hg. When using beta-blockers to control heart rate, titrate doses more aggressively than would be done for systolic failure, to reach a goal of 60 to 70 bpm. Use ACE inhibitors/ARBs to decrease hospitalizations, decrease symptoms, and prevent left ventricular remodeling.
Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.
What are the effects of hypertonic saline plus furosemide in acute heart failure?
Zepeda, Patricio; Rain, Carmen; Sepúlveda, Paola
2015-08-27
In search of new therapies to solve diuretic resistance in acute heart failure, the addition of hypertonic saline has been proposed. Searching in Epistemonikos database, which is maintained by screening 30 databases, we identified two systematic reviews including nine pertinent randomized controlled trials. We combined the evidence and generated a summary of findings following the GRADE approach. We concluded hypertonic saline associated with furosemide probably decrease mortality, length of hospital stay and hospital readmission in patients with acute decompensated heart failure.
Of pacemakers and statistics: the actuarial method extended.
Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W
1980-01-01
Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdfnbe(t) and pdfcf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.
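A hedged sketch of the separation idea described above, with the overall failure density written as a mixture of a Normal density for natural battery exhaustion and an assumed exponential density for component failure; the weight and parameters below are invented for illustration, and the paper's actuarial estimation procedure is not reproduced.

import numpy as np
from scipy.stats import norm, expon

# Hypothetical mixture: natural battery exhaustion (nbe) roughly Normal in time,
# component failure (cf) assumed exponential; w is the fraction of nbe failures.
w, mu, sd, lam = 0.8, 60.0, 8.0, 1.0 / 120.0  # times in months

t = np.linspace(0.0, 120.0, 241)
pdf_total = w * norm.pdf(t, mu, sd) + (1 - w) * expon.pdf(t, scale=1.0 / lam)
print(f"overall failure density peaks near month {t[np.argmax(pdf_total)]:.0f}")

# Share of failures occurring by month 36 that are attributable to component failure
cdf_cf_36 = (1 - w) * expon.cdf(36.0, scale=1.0 / lam)
cdf_tot_36 = w * norm.cdf(36.0, mu, sd) + cdf_cf_36
print(f"share of failures by month 36 due to component failure: {cdf_cf_36 / cdf_tot_36:.2f}")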
A Fault Tree Approach to Analysis of Behavioral Systems: An Overview.
ERIC Educational Resources Information Center
Stephens, Kent G.
Developed at Brigham Young University, Fault Tree Analysis (FTA) is a technique for enhancing the probability of success in any system by analyzing the most likely modes of failure that could occur. It provides a logical, step-by-step description of possible failure events within a system and their interaction--the combinations of potential…
Failure Analysis by Statistical Techniques (FAST). Volume 1. User’s Manual
1974-10-31
[Garbled excerpt from the report documentation page and fault-tree discussion of report DNA 3336F-1, "Failure Analysis by Statistical Techniques (FAST), Volume I, User's Manual." The recoverable content indicates that fault-tree diagrams are given for critical subsystems and a facility, that three further diagrams break down the three critical subsystems, and that a median probability of survival is reported.]
[Rare cause of heart failure in an elderly woman in Djibouti: left ventricular non compaction].
Massoure, P L; Lamblin, G; Bertani, A; Eve, O; Kaiser, E
2011-10-01
The purpose of this report is to describe the first case of left ventricular non compaction diagnosed in Djibouti. The patient was a 74-year-old Djiboutian woman with symptomatic heart failure. Echocardiography is the key tool for assessment of left ventricular non compaction. This rare cardiomyopathy is probably underdiagnosed in Africa.
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
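Not the report's formula, but a minimal illustration of the same idea: an exact failure probability from a small Markov model (solved with a matrix exponential) compared against a simple closed-form approximation for a two-unit reconfigurable system with imperfect coverage. The failure rate, coverage and mission time are assumptions.

import numpy as np
from scipy.linalg import expm

lam, c, t = 1.0e-4, 0.99, 10.0  # assumed failure rate (per hour), coverage, mission time (hours)

# States: 0 = both units good, 1 = one good after a covered first failure, 2 = failed (absorbing)
Q = np.array([[-2 * lam, 2 * lam * c, 2 * lam * (1 - c)],
              [0.0,      -lam,        lam],
              [0.0,       0.0,        0.0]])

p_fail_exact = expm(Q * t)[0, 2]
p_fail_approx = 2 * (1 - c) * lam * t + (lam * t) ** 2  # crude leading-order approximation
print(f"exact (matrix exponential): {p_fail_exact:.3e}, approximation: {p_fail_approx:.3e}")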
Mitigating Thermal Runaway Risk in Lithium Ion Batteries
NASA Technical Reports Server (NTRS)
Darcy, Eric; Jeevarajan, Judy; Russell, Samuel
2014-01-01
The JSC/NESC team has successfully demonstrated Thermal Runaway (TR) risk reduction in a lithium-ion battery for human space flight by developing and implementing verifiable design features which interrupt energy transfer between adjacent electrochemical cells. Conventional lithium-ion (Li-ion) batteries can fail catastrophically as a result of a single cell going into thermal runaway. Thermal runaway results when an internal component fails to separate electrode materials, leading to localized heating and complete combustion of the lithium-ion cell. Previously, the greatest control to minimize the probability of cell failure was individual cell screening. Combining thermal runaway propagation mitigation design features with a comprehensive screening program reduces both the probability and the severity of a single cell failure.
Brookian stratigraphic plays in the National Petroleum Reserve - Alaska (NPRA)
Houseknecht, David W.
2003-01-01
The Brookian megasequence in the National Petroleum Reserve in Alaska (NPRA) includes bottomset and clinoform seismic facies of the Torok Formation (mostly Albian age) and generally coeval, topset seismic facies of the uppermost Torok Formation and the Nanushuk Group. These strata are part of a composite total petroleum system involving hydrocarbons expelled from three stratigraphic intervals of source rocks, the Lower Cretaceous gamma-ray zone (GRZ), the Lower Jurassic Kingak Shale, and the Triassic Shublik Formation. The potential for undiscovered oil and gas resources in the Brookian megasequence in NPRA was assessed by defining five plays (assessment units), one in the topset seismic facies and four in the bottomset-clinoform seismic facies. The Brookian Topset Play is estimated to contain between 60 (95-percent probability) and 465 (5-percent probability) million barrels of technically recoverable oil, with a mean (expected value) of 239 million barrels. The Brookian Topset Play is estimated to contain between 0 (95-percent probability) and 679 (5-percent probability) billion cubic feet of technically recoverable, nonassociated natural gas, with a mean (expected value) of 192 billion cubic feet. The Brookian Clinoform North Play, which extends across northern NPRA, is estimated to contain between 538 (95-percent probability) and 2,257 (5-percent probability) million barrels of technically recoverable oil, with a mean (expected value) of 1,306 million barrels. The Brookian Clinoform North Play is estimated to contain between 0 (95-percent probability) and 1,969 (5-percent probability) billion cubic feet of technically recoverable, nonassociated natural gas, with a mean (expected value) of 674 billion cubic feet. The Brookian Clinoform Central Play, which extends across central NPRA, is estimated to contain between 299 (95-percent probability) and 1,849 (5-percent probability) million barrels of technically recoverable oil, with a mean (expected value) of 973 million barrels. The Brookian Clinoform Central Play is estimated to contain between 1,806 (95-percent probability) and 10,076 (5-percent probability) billion cubic feet of technically recoverable, nonassociated natural gas, with a mean (expected value) of 5,405 billion cubic feet. The Brookian Clinoform South-Shallow Play is estimated to contain between 0 (95-percent probability) and 1,254 (5-percent probability) million barrels of technically recoverable oil, with a mean (expected value) of 508 million barrels. The Brookian Clinoform South-Shallow Play is estimated to contain between 0 (95-percent probability) and 5,809 (5-percent probability) billion cubic feet of technically recoverable, nonassociated natural gas, with a mean (expected value) of 2,405 billion cubic feet. The Brookian Clinoform South-Deep Play is estimated to contain between 0 (95-percent probability) and 8,796 (5-percent probability) billion cubic feet of technically recoverable, nonassociated natural gas, with a mean (expected value) of 3,788 billion cubic feet. No technically recoverable oil is assessed in the Brookian Clinoform South-Deep Play, as it lies at depths that are entirely in the gas window. 
Among the Brookian stratigraphic plays in NPRA, the Brookian Clinoform North Play and the Brookian Clinoform Central Play are most likely to be objectives of exploration activity in the near-term future because they are estimated to contain multiple oil accumulations larger than 128 million barrels technically recoverable oil, and because some of those accumulations may occur near existing infrastructure in the eastern parts of the plays. The other Brookian stratigraphic plays are not likely to be the focus of exploration activity because they are estimated to contain maximum accumulation sizes that are smaller, but they may be an objective of satellite exploration if infrastructure is extended into the play areas. The total volumes of natural gas estimated to occur in B
An interlaminar tension strength specimen
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; Martin, Roderick H.
1992-01-01
This paper describes a technique to determine the interlaminar tension strength, σ3c, of a fiber reinforced composite material using a curved beam. The specimen was a unidirectional curved beam, bent 90 degrees, with straight arms. Attached to each arm was a hinged loading mechanism which was held by the grips of a tensile testing machine. Geometry effects of the specimen, including the effects of loading arm length, inner radius, thickness, and width, were studied. The data sets fell into two categories: low strength corresponding to a macroscopic flaw related failure and high strength corresponding to a microscopic flaw related failure. From the data available, the loading arm length had no effect on σ3c. The inner radius was not expected to have a significant effect on σ3c, but this conclusion could not be confirmed because of differences in laminate quality for each curve geometry. The thicker specimens had the lowest value of σ3c because of poor laminate quality. Width was found to affect the value of σ3c only slightly. The wider specimens generally had a slightly lower strength since more material was under high stress, and hence, had a larger probability of containing a significant flaw.
Kirsch, L E; Nguyen, L; Moeckly, C S; Gerth, R
1997-01-01
Helium leak rate measurements were quantitatively correlated to the probability of microbial ingress for rubber-stoppered glass vials subjected to immersion challenge. Standard 10-mL tubing glass vials were modified by inserting micropipettes of various sizes (0.1 to 10 microns nominal diameter) into a side wall hole and securing them with epoxy. Butyl rubber closures and aluminum crimps were used to seal the vials. The test units were sealed in a helium-filled glove bag, then the absolute helium leak rates were determined. The test units were disassembled, filled with media, resealed, and autoclaved. The test units were thermally treated to eliminate airlocks within the micropipette lumen and establish a liquid path between microbial challenge media and the test units' contents. Microbial challenge was performed by immersing the test units in a 35 degrees C bath containing magnesium ion and 8 to 10 logs of viable P. diminuta and E. coli for 24 hours. The test units were then incubated at 35 degrees C for an additional 13 days. Microbial ingress was detected by turbidity and plating on blood agar. The elimination of airlocks was confirmed by the presence of magnesium ions in the vial contents by atomic absorption spectrometry. A total of 288 vials were subjected to microbial challenge testing. Those test units whose contents failed to show detectable magnesium ions were eliminated from further analysis. At large leak rates, the probability of microbial ingress approached 100% and at very low leak rates microbial ingress rates were 0%. A dramatic increase in microbial failure occurred in the leak rate region 10^-4.5 to 10^-3 std cc/sec, which roughly corresponded to leak diameters ranging from 0.4 to 2 microns. Below a leak rate of 10^-4.5 std cc/sec the microbial failure rate was < 10%. The critical leak rate in our studies, i.e. the value below which microbial ingress cannot occur because the leak is too small, was observed to be between 10^-5 and 10^-5.8 std cc/sec, which corresponds to an approximate leak diameter of 0.2-0.3 micron.
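One common way to summarize the kind of leak-rate-to-ingress relationship reported above is a logistic fit of ingress probability against log10 leak rate; the sketch below uses synthetic data constructed so that the transition falls near the 10^-4.5 to 10^-3 std cc/sec region, and none of the study's measurements are reproduced.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic data: log10 helium leak rate (std cc/sec) and observed ingress (True/False),
# constructed so the transition falls near the 10^-4.5 to 10^-3 region noted above.
log_leak = rng.uniform(-6.0, -2.0, 300)
p_true = 1.0 / (1.0 + np.exp(-4.0 * (log_leak + 3.75)))
ingress = rng.random(300) < p_true

model = LogisticRegression().fit(log_leak.reshape(-1, 1), ingress)
for lr in (-5.0, -4.5, -4.0, -3.0):
    p = model.predict_proba([[lr]])[0, 1]
    print(f"leak rate 10^{lr:+.1f} std cc/sec -> estimated P(ingress) = {p:.2f}")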
Application of the strain invariant failure theory (SIFT) to metals and fiber-polymer composites
NASA Astrophysics Data System (ADS)
Hart-Smith, L. J.
2010-11-01
The strain invariant failure theory (SIFT) model, developed to predict the onset of irreversible damage of fiber-polymer composite laminates, may be also applied to metals. Indeed, it can be applied to all solid materials. Two initial failure mechanisms are considered - distortion and dilatation. The author's experiences are confined to the structures of transport aircraft; phase changes in metals and self-destruction of laminates during curing are not covered. Doing so would need additional material properties, and probably a different failure theory. SIFT does not cover environmental attack on the interface between fibers and resin; it covers only cohesive failures within the fibers or resin, or within a homogeneous piece of metal. In the SIFT model, each damage mechanism is characterized by its own critical value of a strain invariant. Each mechanism dominates its own portion of the strain domain; there is no interaction between them. Application of SIFT to metals is explained first. Fiber-polymer composites contain two discrete constituents; each material must be characterized independently by its own two invariants. This is why fiber-polymer composites need four invariants whereas metals require only two. There is no such thing as a composite material, only composites of materials. The "composite materials" must not be modeled as homogeneous anisotropic solids because it is then not even possible to differentiate between fiber and matrix failures. The SIFT model uses measured material properties; it does not require that half of them be arbitrarily replaced by unmeasurable properties to fit laminate test data, as so many earlier composite failure criteria have. The biggest difference in using SIFT for metals and fiber-reinforced materials is internal residual thermal and moisture absorption stresses created by the gross dissimilarity in properties between embedded fibers and thermoset resin matrices. These residual stresses consume so much of the strength of unreinforced polymers for typical thermoset resins cured at high temperature, like epoxies, that little strength is available to resist mechanical loads. (Thermoplastic polymers suffer far less in this regard.) The paper explains how SIFT is used via worked examples, which demonstrate the kind of detailed information that SIFT analyses can generate.
High-Temperature Graphitization Failure of Primary Superheater Tube
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Roy, H.; Mandal, N.; Shukla, A. K.
2015-12-01
Failure of boiler tubes is the main cause of unit outages in thermal power plants and directly affects the reliability, availability and safety of the unit. Failure analysis of boiler tubes is therefore essential to identify the root cause of failure so that remedial actions can be taken to prevent recurrence. This paper investigates the probable cause or causes of failure of a primary superheater tube in a thermal power plant boiler. Visual inspection, dimensional measurement, chemical analysis, metallographic examination and hardness measurement are conducted as part of the investigative studies. Mechanical testing and fractographic analysis are also conducted as supplements. Finally, it is concluded that the superheater tube failed due to graphitization caused by prolonged exposure of the tube at elevated temperature.
Defense strategies for cloud computing multi-site server infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for the systems when it is hard, or even impossible, to find the probability functions of the system. The method starts by a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
Ulusoy, Nuran
2017-01-01
The aim of this study was to evaluate the effects of two endocrown designs and computer aided design/manufacturing (CAD/CAM) materials on stress distribution and failure probability of restorations applied to a severely damaged endodontically treated maxillary first premolar tooth (MFP). Two types of designs without and with 3 mm intraradicular extensions, endocrown (E) and modified endocrown (ME), were modeled on a 3D finite element (FE) model of the MFP. Vitablocks Mark II (VMII), Vita Enamic (VE), and Lava Ultimate (LU) CAD/CAM materials were used for each type of design. von Mises stresses and maximum principal stress values were evaluated, and the Weibull function was incorporated with FE analysis to calculate the long-term failure probability. Regarding the stresses in enamel, for each material the ME design transmitted less stress than the endocrown design. During normal occlusal function, the overall failure probability was minimum for ME with VMII. The ME restoration design with VE was the best restorative option for premolar teeth with extensive loss of coronal structure under high occlusal loads. Therefore, the ME design could be a favorable treatment option for MFPs with a missing palatal cusp. Among the CAD/CAM materials tested, VMII and VE were found to be more tooth-friendly than LU. PMID:29119108
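For readers unfamiliar with how a Weibull function is typically combined with finite element output, the sketch below shows a weakest-link calculation over element-wise stresses. The stress and volume arrays and the Weibull parameters are hypothetical placeholders, not values from the study.

```python
import numpy as np

def weibull_failure_probability(stresses, volumes, sigma0, m):
    """Weakest-link (two-parameter Weibull) failure probability from
    element-wise maximum principal stresses and element volumes.

    sigma0 : characteristic strength for the reference volume
    m      : Weibull modulus
    """
    stresses = np.clip(stresses, 0.0, None)            # only tensile stress drives failure
    risk = np.sum(volumes * (stresses / sigma0) ** m)   # discretized risk-of-rupture integral
    return 1.0 - np.exp(-risk)

# Hypothetical element data exported from an FE solve
sigma = np.array([48.0, 62.0, 55.0, 70.0])   # MPa
vol = np.array([0.8, 1.2, 1.0, 0.5])         # mm^3
print(weibull_failure_probability(sigma, vol, sigma0=120.0, m=10.0))
```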
WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Bhatnagar, J; Bednarz, G
2015-06-15
Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists, neurosurgeons at the University of Pittsburgh Medical Center and an external physicist expert was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection (D) for failure modes were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = O x S x D) as the average scores from all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of the GK radiosurgery. Out of the 86 failure modes identified, 40 failure modes are GK specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, contouring processes that are common for all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information and excessive patient movement during MRI scan. Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential weaknesses in the overall process.
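The RPN bookkeeping described in the Methods section is simple to reproduce; the sketch below averages per-rater O, S, D scores and ranks hypothetical failure modes by RPN = O x S x D. The failure modes and scores are invented for illustration only.

```python
from statistics import mean

# Hypothetical per-rater scores (O, S, D, each 1-10) for a few failure modes
scores = {
    "frame adaptor not fully seated": [(3, 9, 6), (4, 8, 7)],
    "fiducial box mis-assembled":     [(2, 9, 5), (3, 9, 6)],
    "target area overlooked":         [(2, 10, 7), (2, 9, 8)],
}

def rpn(ratings):
    """Average O, S, D across raters, then RPN = O * S * D."""
    o = mean(r[0] for r in ratings)
    s = mean(r[1] for r in ratings)
    d = mean(r[2] for r in ratings)
    return o * s * d

# Rank failure modes from highest to lowest risk priority number
for mode in sorted(scores, key=lambda k: rpn(scores[k]), reverse=True):
    print(f"{mode}: RPN = {rpn(scores[mode]):.0f}")
```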
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoisak, J; Manger, R; Dragojevic, I
Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode’s probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.
Guest Editor's Introduction: Special section on dependable distributed systems
NASA Astrophysics Data System (ADS)
Fetzer, Christof
1999-09-01
We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. 
However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. 
References: [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34(2) 56-78; [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI; [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
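The editorial's central quantitative point, that replication helps only if the system tolerates individual crashes, can be made concrete with two lines of arithmetic: a system that survives while at least one of n machines is up fails with probability p^n, whereas a system that halts when any single machine fails does so with probability 1 - (1 - p)^n. A minimal sketch with an assumed per-machine failure probability:

```python
def p_all_fail(p, n):
    """System that keeps running as long as at least one replica is up."""
    return p ** n

def p_any_fail(p, n):
    """System that halts if any single machine fails."""
    return 1.0 - (1.0 - p) ** n

p, n = 0.01, 5   # assumed per-machine failure probability and machine count
print(p_all_fail(p, n))   # 1e-10: far more dependable than a single machine
print(p_any_fail(p, n))   # ~0.049: roughly five times worse than a single machine
```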
ERIC Educational Resources Information Center
Beitzel, Brian D.; Staley, Richard K.; DuBois, Nelson F.
2011-01-01
Previous research has cast doubt on the efficacy of utilizing external representations as an aid to solving word problems. The present study replicates previous findings that concrete representations hinder college students' ability to solve probability word problems, and extends those findings to apply to a multimedia instructional context. Our…
Reliability computation using fault tree analysis
NASA Technical Reports Server (NTRS)
Chelson, P. O.
1971-01-01
A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
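A minimal sketch of the kind of calculation the abstract describes: computing a top-event probability for a small fault tree in which the same basic failure feeds more than one fault path. Exhaustive enumeration of basic-event states is used here purely for clarity; it is not the paper's analytical derivation, and the tree structure and probabilities are hypothetical.

```python
from itertools import product

# Hypothetical basic-event failure probabilities; event "B" feeds two paths.
basic = {"A": 0.01, "B": 0.02, "C": 0.05}

def top_event(state):
    """TOP = (A AND B) OR (B AND C): B appears in more than one fault path."""
    return (state["A"] and state["B"]) or (state["B"] and state["C"])

def top_probability(basic, top):
    """Exact evaluation by enumerating all basic-event states, so a shared
    basic event is handled consistently (no independence-of-gates shortcut)."""
    names = list(basic)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        state = dict(zip(names, bits))
        pr = 1.0
        for name, failed in state.items():
            pr *= basic[name] if failed else (1.0 - basic[name])
        if top(state):
            total += pr
    return total

print(top_probability(basic, top_event))   # P(AB) + P(BC) - P(ABC) = 0.00119
```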
Effect of risk aversion on prioritizing conservation projects.
Tulloch, Ayesha I T; Maloney, Richard F; Joseph, Liana N; Bennett, Joseph R; Di Fonzo, Martina M I; Probert, William J M; O'Connor, Shaun M; Densem, Jodie P; Possingham, Hugh P
2015-04-01
Conservation outcomes are uncertain. Agencies making decisions about what threat mitigation actions to take to save which species frequently face the dilemma of whether to invest in actions with high probability of success and guaranteed benefits or to choose projects with a greater risk of failure that might provide higher benefits if they succeed. The answer to this dilemma lies in the decision maker's aversion to risk--their unwillingness to accept uncertain outcomes. Little guidance exists on how risk preferences affect conservation investment priorities. Using a prioritization approach based on cost effectiveness, we compared 2 approaches: a conservative probability threshold approach that excludes investment in projects with a risk of management failure greater than a fixed level, and a variance-discounting heuristic used in economics that explicitly accounts for risk tolerance and the probabilities of management success and failure. We applied both approaches to prioritizing projects for 700 of New Zealand's threatened species across 8303 management actions. Both decision makers' risk tolerance and our choice of approach to dealing with risk preferences drove the prioritization solution (i.e., the species selected for management). Use of a probability threshold minimized uncertainty, but more expensive projects were selected than with variance discounting, which maximized expected benefits by selecting the management of species with higher extinction risk and higher conservation value. Explicitly incorporating risk preferences within the decision making process reduced the number of species expected to be safe from extinction because lower risk tolerance resulted in more species being excluded from management, but the approach allowed decision makers to choose a level of acceptable risk that fit with their ability to accommodate failure. We argue for transparency in risk tolerance and recommend that decision makers accept risk in an adaptive management framework to maximize benefits and avoid potential extinctions due to inefficient allocation of limited resources. © 2014 Society for Conservation Biology.
Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles
NASA Astrophysics Data System (ADS)
Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey
2013-09-01
Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.
Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics
Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.
2017-01-01
This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637
[Determinants of pride and shame: outcome, expected success and attribution].
Schützwohl, A
1991-01-01
In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.
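For reference, the Atkinson (1957) relations the abstract alludes to are usually written as follows, with P_s the subjective probability of success; this is the standard textbook statement of the model, not a formula taken from the paper itself.

```latex
% Incentive values in Atkinson's (1957) risk-taking model:
\[
  I_{\text{success}} = 1 - P_s, \qquad I_{\text{failure}} = -\,P_s ,
\]
% so the incentive of success (pride) decreases linearly as P_s rises,
% while the incentive of failure (shame) is a negative linear function of P_s.
```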
Elephantiasis Nostras Verrucosa (ENV): a complication of congestive heart failure and obesity.
Baird, Drew; Bode, David; Akers, Troy; Deyoung, Zachariah
2010-01-01
Congestive heart failure (CHF) and obesity are common medical conditions that have many complications and an increasing incidence in the United States. Presented here is a case of a disfiguring skin condition that visually highlights the dermatologic consequences of poorly controlled CHF and obesity. This condition will probably become more common as CHF and obesity increase in the US.
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
NASA Technical Reports Server (NTRS)
1992-01-01
Numerous 'extended impacts' found in both leading and trailing edge capture cells have been successfully analyzed for the chemical composition of projectile residues by secondary ion mass spectrometry (SIMS). Most data have been obtained from the trailing edge cells where 45 of 58 impacts have been classified as 'probably natural' and the remainder as 'possibly man-made debris.' This is in striking contrast to leading edge cells where 9 of 11 impacts so far measured are definitely classified as orbital debris. Although all the leading edge cells had lost their plastic entrance foils during flight, the rate of foil failure was similar to that of the trailing edge cells, 10 percent of which were recovered intact. Ultra-violet embrittlement is suspected as the major cause of failure on both leading and trailing edges. The major impediment to the accurate determination of projectile chemistry is the fractionation of volatile and refractory elements in the hypervelocity impact and redeposition processes. This effect had been noticed in simulation experiment but is more pronounced in the Long Duration Exposure Facility (LDEF) capture cells, probably due to the higher average velocities of the space impacts. Surface contamination of the pure Ge surfaces with a substance rich in Si but also containing Mg and Al provides an additional problem for the accurate determination of impactor chemistry. The effect is variable, being much larger on surfaces that were exposed to space than in those cells that remained intact. Future work will concentrate on the analyses of more leading edge impacts and the development of new SIMS techniques for the measurement of elemental abundances in extended impacts.
NASA Technical Reports Server (NTRS)
Amari, S.; Foote, J.; Swan, P.; Walker, R. M.; Zinner, E.; Lange, G.
1993-01-01
Numerous 'extended impacts' found in both leading and trailing edge capture cells were successfully analyzed for the chemical composition of projectile residues by secondary ion mass spectrometry (SIMS). Most data were obtained from the trailing edge cells where 45 of 58 impacts were classified as 'probably natural' and the remainder as 'possibly man-made debris.' This is in striking contrast to leading edge cells where 9 of 11 impacts so far measured are definitely classified as orbital debris. Although all the leading edge cells had lost their plastic entrance foils during flight, the rate of foil failure was similar to that of the trailing edge cells, 10 percent of which were recovered intact. Ultraviolet embrittlement is suspected as the major cause of failure on both leading and trailing edges. The major impediment to the accurate determination of projectile chemistry is the fractionation of volatile and refractory elements in the hypervelocity impact and redeposition processes. This effect had been noted in a simulation experiment but is more pronounced in the LDEF capture cells, probably due to the higher average velocities of the space impacts. Surface contamination of the pure Ge surfaces with a substance rich in Si, but also containing Mg and Al, provides an additional problem for the accurate determination of impactor chemistry. The effect is variable, being much larger on surfaces that were exposed to space than in those cells that remained intact. Future work will concentrate on the analyses of more leading edge impacts and the development of new SIMS techniques for the measurement of elemental abundances in extended impacts.
Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings, and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.
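A sketch of the kind of estimate such a study produces: a beta prior on the failure-to-start-on-demand probability updated with pooled demand data. The prior parameters and the failure/demand counts below are hypothetical, and this is not the report's actual method or data.

```python
from scipy.stats import beta

# Hypothetical prior for EDG failure to start on demand (mean ~0.005)
a0, b0 = 0.5, 99.5

# Hypothetical pooled demand data for a period of interest
failures, demands = 12, 3000

# Conjugate beta-binomial update
post = beta(a0 + failures, b0 + demands - failures)
print("posterior mean failure probability:", post.mean())
print("90% credible interval            :", post.interval(0.90))
```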
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.
1997-01-01
Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance on protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
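A minimal sketch of a Markov model in this spirit: a protection system that drifts into a hidden "failure to operate" state and is returned to health by preventive maintenance. The rates are hypothetical; the point is that the steady-state probability of sitting in the hidden-failure state falls as the maintenance rate rises.

```python
import numpy as np

# State 0 = healthy, state 1 = failed but undetected (failure to operate is
# only revealed by an actual system fault or by preventive maintenance).
lam = 0.02   # hidden failure rate (per year), hypothetical
mu = 2.0     # preventive-maintenance/inspection rate (per year), hypothetical

Q = np.array([[-lam,  lam],
              [  mu,  -mu]])   # continuous-time Markov generator matrix

# Steady-state distribution: solve pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Equals lam / (lam + mu); doubling mu roughly halves this probability.
print("probability the protection fails to operate on demand:", pi[1])
```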
Application of a Probalistic Sizing Methodology for Ceramic Structures
NASA Astrophysics Data System (ADS)
Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit
2012-07-01
Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads to sizing them with increased safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with mass, the major driver in space architecture. This paper presents a methodology to size ceramic structures based on their failure probability. From failure tests on samples, the Weibull law that characterizes the strength distribution of the material is obtained. The A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
Zhang, Xu; Zhang, Mei-Jie; Fine, Jason
2012-01-01
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical researches. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D. G.; Arent, D. J.; Johnson, L.
2006-06-01
This paper documents a probabilistic risk assessment of existing and alternative power supply systems at a large telecommunications office. The analysis characterizes the increase in the reliability of power supply through the use of two alternative power configurations. Failures in the power systems supporting major telecommunications service nodes are a main contributor to significant telecommunications outages. A logical approach to improving the robustness of telecommunication facilities is to increase the depth and breadth of technologies available to restore power during power outages. Distributed energy resources such as fuel cells and gas turbines could provide additional on-site electric power sources to provide backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
Posttest analysis of the 1:6-scale reinforced concrete containment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.
A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL along with nine other organizations performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper.
NASA Astrophysics Data System (ADS)
Taylor, Gabriel James
The failure of electrical cables exposed to severe thermal fire conditions is a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection, which resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of the data. Circuit characteristics affecting fire-induced electrical cable failure modes have been evaluated. In addition, thermal failure temperatures corresponding to cable functional failures have been evaluated to develop realistic single point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data were used to evaluate the prediction capabilities of a one-dimensional conductive heat transfer model used to predict cable failure.
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
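A minimal sketch of a GLR test for one of the cases listed above, a hard-over (constant bias) failure appearing at an unknown time in a sensor residual sequence, assuming white Gaussian residuals with known variance. This is the textbook mean-shift GLR, not the paper's full time-varying formulation.

```python
import numpy as np

def glr_bias(residuals, sigma=1.0):
    """Generalized likelihood ratio statistic for a constant bias (hard-over)
    of unknown size appearing at an unknown onset time in a white Gaussian
    residual sequence. Returns the maximized statistic and the most likely
    onset index."""
    r = np.asarray(residuals, dtype=float)
    best, onset = 0.0, 0
    for theta in range(len(r)):
        window = r[theta:]
        stat = window.sum() ** 2 / (2.0 * sigma ** 2 * len(window))
        if stat > best:
            best, onset = stat, theta
    return best, onset

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 60)
faulty = clean.copy()
faulty[40:] += 3.0                # hard-over bias injected at sample 40

print(glr_bias(clean))            # small statistic: no alarm
print(glr_bias(faulty))           # large statistic, estimated onset near 40
```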
Design with brittle materials - An interdisciplinary educational program
NASA Technical Reports Server (NTRS)
Mueller, J. I.; Bollard, R. J. H.; Hartz, B. J.; Kobayashi, A. S.; Love, W. J.; Scott, W. D.; Taggart, R.; Whittemore, O. J.
1980-01-01
A series of interdisciplinary design courses being offered to senior and graduate engineering students at the University of Washington is described. Attention is given to the concepts and some of the details on group design projects that have been undertaken during the past two years. It is noted that ceramic materials normally demonstrate a large scatter in strength properties. As a consequence, when designing with these materials, the conventional 'mil standards' design stresses with acceptable margins of safety cannot be employed, and the designer is forced to accept a probable number of failures in structures of a given brittle material. It is this prediction of the probability of failure for structures of given, well-characterized materials that forms the basis for this series of courses.
Lin, Chun-Li; Chang, Yen-Hsiang; Hsieh, Shih-Kai; Chang, Wen-Jen
2013-03-01
This study evaluated the risk of failure for an endodontically treated premolar with different crack depths, which was shearing toward the pulp chamber and was restored by using 3 different computer-aided design/computer-aided manufacturing ceramic restoration configurations. Three 3-dimensional finite element models designed with computer-aided design/computer-aided manufacturing ceramic onlay, endocrown, and conventional crown restorations were constructed to perform simulations. The Weibull function was incorporated with finite element analysis to calculate the long-term failure probability relative to different load conditions. The results indicated that the stress values on the enamel, dentin, and luting cement for endocrown restorations exhibited the lowest values relative to the other 2 restoration methods. Weibull analysis revealed that the overall failure probabilities in a shallow cracked premolar were 27%, 2%, and 1% for the onlay, endocrown, and conventional crown restorations, respectively, in the normal occlusal condition. The corresponding values were 70%, 10%, and 2% for the depth cracked premolar. This numeric investigation suggests that the endocrown provides sufficient fracture resistance only in a shallow cracked premolar with endodontic treatment. The conventional crown treatment can immobilize the premolar for different cracked depths with lower failure risk. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Failure Mode and Effect Analysis for Delivery of Lung Stereotactic Body Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perks, Julian R., E-mail: julian.perks@ucdmc.ucdavis.edu; Stanic, Sinisa; Stern, Robin L.
2012-07-15
Purpose: To improve the quality and safety of our practice of stereotactic body radiation therapy (SBRT), we analyzed the process following the failure mode and effects analysis (FMEA) method. Methods: The FMEA was performed by a multidisciplinary team. For each step in the SBRT delivery process, a potential failure occurrence was derived and three factors were assessed: the probability of each occurrence, the severity if the event occurs, and the probability of detection by the treatment team. A rank of 1 to 10 was assigned to each factor, and then the multiplied ranks yielded the relative risks (risk priority numbers). The failure modes with the highest risk priority numbers were then considered to implement process improvement measures. Results: A total of 28 occurrences were derived, of which nine events scored with significantly high risk priority numbers. The risk priority numbers of the highest ranked events ranged from 20 to 80. These included transcription errors of the stereotactic coordinates and machine failures. Conclusion: Several areas of our SBRT delivery were reconsidered in terms of process improvement, and safety measures, including treatment checklists and a surgical time-out, were added for our practice of gantry-based image-guided SBRT. This study serves as a guide for other users of SBRT to perform FMEA of their own practice.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
Continuous infusion or bolus injection of loop diuretics for congestive heart failure?
Zepeda, Patricio; Rain, Carmen; Sepúlveda, Paola
2016-04-22
Loop diuretics are widely used in acute heart failure. However, there is controversy about the superiority of continuous infusion over bolus administration. Searching in Epistemonikos database, which is maintained by screening 30 databases, we identified four systematic reviews including 11 pertinent randomized controlled trials overall. We combined the evidence using meta-analysis and generated a summary of findings following the GRADE approach. We concluded continuous administration of loop diuretics probably reduces mortality and length of stay compared to intermittent administration in patients with acute heart failure.
Model analysis of the link between interest rates and crashes
NASA Astrophysics Data System (ADS)
Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft
2016-09-01
We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.
Assessing Aircraft Supply Air to Recommend Compounds for Timely Warning of Contamination
NASA Astrophysics Data System (ADS)
Fox, Richard B.
Taking aircraft out of service for even one day to correct fume-in-cabin events can cost the industry roughly $630 million per year in lost revenue. The quantitative correlation study investigated quantitative relationships between measured concentrations of contaminants in bleed air and probability of odor detectability. Data were collected from 94 aircraft engine and auxiliary power unit (APU) bleed air tests from an archival data set between 1997 and 2011, and no relationships were found. Pearson correlation was followed by regression analysis for individual contaminants. Significant relationships of concentrations of compounds in bleed air to probability of odor detectability were found (p<0.05), as well as between compound concentration and probability of sensory irritancy detectability. Study results may be useful to establish early warning levels. Predictive trend monitoring, a method to identify potential pending failure modes within a mechanical system, may influence scheduled down-time for maintenance as a planned event, rather than repair after a mechanical failure and thereby reduce operational costs associated with odor-in-cabin events. Twenty compounds (independent variables) were found statistically significant as related to probability of odor detectability (dependent variable 1). Seventeen compounds (independent variables) were found statistically significant as related to probability of sensory irritancy detectability (dependent variable 2). Additional research was recommended to further investigate relationships between concentrations of contaminants and probability of odor detectability or probability of sensory irritancy detectability for all turbine oil brands. Further research on implementation of predictive trend monitoring may be warranted to demonstrate how the monitoring process might be applied to in-flight application.
Metallurgical failure analysis of MH-1A reactor core hold-down bolts. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawthorne, J.R.; Watson, H.E.
1976-11-01
The Naval Research Laboratory has performed a failure analysis on two MH-1A reactor core hold-down bolts that broke in service. Adherence to fabrication specifications, post-service properties and possible causes of bolt failure were investigated. The bolt material was verified as 17-4PH precipitation hardening stainless steel. Measured bolt dimensions also were in accordance with fabrication drawing specifications. Bolt failure occurred in the region of a locking pin hole which reduced the bolt net section by 47 percent. The failure analysis indicates that the probable cause of failure was net section overloading resulting from a lateral bending force on the bolt. The analysis indicates that net section overloading could also have resulted from combined tensile stresses (bolt preloading plus differential thermal expansion). Recommendations are made for improved bolting.
NASA Technical Reports Server (NTRS)
Yunis, Isam S.; Carney, Kelly S.
1993-01-01
A new aerospace application of structural reliability techniques is presented, where the applied forces depend on many probabilistic variables. This application is the plume impingement loading of the Space Station Freedom Photovoltaic Arrays. When the space shuttle berths with Space Station Freedom it must brake and maneuver towards the berthing point using its primary jets. The jet exhaust, or plume, may cause high loads on the photovoltaic arrays. The many parameters governing this problem are highly uncertain and random. An approach, using techniques from structural reliability, as opposed to the accepted deterministic methods, is presented which assesses the probability of failure of the array mast due to plume impingement loading. A Monte Carlo simulation of the berthing approach is used to determine the probability distribution of the loading. A probability distribution is also determined for the strength of the array. Structural reliability techniques are then used to assess the array mast design. These techniques are found to be superior to the standard deterministic dynamic transient analysis, for this class of problem. The results show that the probability of failure of the current array mast design, during its 15 year life, is minute.
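The load/strength comparison at the heart of that assessment can be sketched as a simple stress-strength Monte Carlo: sample a plume-induced load and a mast strength from their respective distributions and count how often load exceeds strength. The lognormal and normal distributions and their parameters below are hypothetical stand-ins for the simulated berthing loads and array strength, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical distributions standing in for the berthing-simulation load
# and the mast strength (both would come from the actual analyses).
load = rng.lognormal(mean=np.log(40.0), sigma=0.35, size=n)   # plume-induced bending load
strength = rng.normal(loc=120.0, scale=10.0, size=n)          # mast strength, same units

# Probability of failure = P(load > strength), estimated by counting exceedances
p_fail = np.mean(load > strength)
print(f"estimated probability of failure: {p_fail:.2e}")
```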
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
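One standard way to turn probability and isolation-cost estimates into an inspection order, consistent with the description above though not necessarily the exact FTDOTS algorithm, is to test hypotheses in decreasing order of probability-to-cost ratio, which minimizes the expected cost of locating a single independent failure. The hypotheses and numbers below are invented for illustration.

```python
# Hypothetical failure hypotheses: (probability, isolation/repair time in minutes)
hypotheses = {
    "power supply":      (0.30, 20.0),
    "pump motor":        (0.15, 45.0),
    "controller board":  (0.40, 90.0),
    "cable harness":     (0.15, 12.0),
}

# Classical rule for independent checks: investigate in decreasing p/c order
# to minimize the expected time spent before the true failure is found.
order = sorted(hypotheses,
               key=lambda h: hypotheses[h][0] / hypotheses[h][1],
               reverse=True)
print(order)   # ['power supply', 'cable harness', 'controller board', 'pump motor']
```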
Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M
2015-07-01
Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
Problems in shallow land disposal of solid low-level radioactive waste in the united states
Stevens, P.R.; DeBuchananne, G.D.
1976-01-01
Disposal of solid low-level wastes containing radionuclides by burial in shallow trenches was initiated during World War II at several sites as a method of protecting personnel from radiation and isolating the radionuclides from the hydrosphere and biosphere. Today, there are 11 principal shallow-land burial sites in the United States that contain a total of more than 1.4 million cubic meters of solid wastes contaminated with a wide variety of radionuclides. Criteria for burial sites have been few and generalized and have contained only minimal hydrogeologic considerations. Waste-management practices have included the burial of small quantities of long-lived radionuclides with large volumes of wastes contaminated with shorter-lived nuclides at the same site, thereby requiring an assurance of extremely long-time containment for the entire disposal site. Studies at 4 of the 11 sites have documented the migration of radionuclides. Other sites are being studied for evidence of containment failure. Conditions at the 4 sites are summarized. In each documented instance of containment failure, ground water has probably been the medium of transport. Migrating radionuclides that have been identified include 90Sr, 137Cs, 106Ru, 239Pu, 125Sb, 60Co, and 3H. Shallow land burial of solid wastes containing radionuclides can be a viable practice only if a specific site satisfies adequate hydrogeologic criteria. Suggested hydrogeologic criteria and the types of hydrogeologic data necessary for an adequate evaluation of proposed burial sites are given. It is mandatory that a concomitant inventory and classification be made of the longevity, and the physical and chemical form of the waste nuclides to be buried, in order that the anticipated waste types can be matched to the containment capability of the proposed sites. Ongoing field investigations at existing sites will provide data needed to improve containment at these sites and help develop hydrogeologic criteria for new sites. These studies have necessitated the development of special drilling, sampling, well construction, and testing techniques. A recent development in borehole geophysical techniques is downhole spectral gamma-ray analysis, which not only locates but identifies specific radionuclides in the subsurface. Field investigations are being supplemented by laboratory studies of the hydrochemistry of the transuranic elements, the kinetics of solid-liquid phase interactions, and the potential complexing of radionuclides with organic compounds and solvents which mobilize normally highly sorbable nuclides. Theoretical studies of digital predictive solute transport models are being implemented to assure their availability for application to problems and processes identified in the field and laboratory. © 1976 International Association of Engineering Geology.
Apollo 15 mission main parachute failure
NASA Technical Reports Server (NTRS)
1971-01-01
The failure of one of the three main parachutes of the Apollo 15 spacecraft was investigated by studying possible malfunctions of the forward heat shield, the broken riser, and the firing of fuel expelled from the command module reaction control system. It is concluded that the most probable cause was the burning of raw fuel being expelled during the latter portion of depletion firing. Recommended corrective actions are included.
21 CFR 1003.21 - Notification by the manufacturer to affected persons.
Code of Federal Regulations, 2010 CFR
2010-04-01
... HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH NOTIFICATION OF DEFECTS OR FAILURE TO COMPLY Notification... nontechnical terms of the hazards reasonably related to any defect or failure to comply; and (3) The following... the above statement. (b) The envelope containing the notice shall not contain advertising or other...
Yu, Soonyoung; Unger, Andre J A; Parker, Beth; Kim, Taehee
2012-06-15
In this study, we defined risk capital as the contingency fee or insurance premium that a brownfields redeveloper needs to set aside from the sale of each house in case they need to repurchase it at a later date because the indoor air has been detrimentally affected by subsurface contamination. The likelihood that indoor air concentrations will exceed a regulatory level subject to subsurface heterogeneity and source zone location uncertainty is simulated by a physics-based hydrogeological model using Monte Carlo realizations, yielding the probability of failure. The cost of failure is the future value of the house indexed to the stochastic US National Housing index. The risk capital is essentially the probability of failure times the cost of failure with a surcharge to compensate the developer against hydrogeological and financial uncertainty, with the surcharge acting as safety loading reflecting the developers' level of risk aversion. We review five methodologies taken from the actuarial and financial literature to price the risk capital for a highly stylized brownfield redevelopment project, with each method specifically adapted to accommodate our notion of the probability of failure. The objective of this paper is to develop an actuarially consistent approach for combining the hydrogeological and financial uncertainty into a contingency fee that the brownfields developer should reserve (i.e. the risk capital) in order to hedge their risk exposure during the project. Results indicate that the price of the risk capital is much more sensitive to hydrogeological rather than financial uncertainty. We use the Capital Asset Pricing Model to estimate the risk-adjusted discount rate to depreciate all costs to present value for the brownfield redevelopment project. A key outcome of this work is that the presentation of our risk capital valuation methodology is sufficiently generalized for application to a wide variety of engineering projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
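The central quantity described above lends itself to a short numerical illustration. The sketch below, under stated assumptions, assembles a risk capital estimate as (1 + safety loading) times the probability of failure times the expected cost of failure, using the expected-value premium principle; the Monte Carlo indoor-air surrogate, the housing-index growth, and all numbers are placeholders, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo output: indoor-air concentration per hydrogeological
# realization, compared against a regulatory level (units are arbitrary here).
n_real = 10_000
indoor_air = rng.lognormal(mean=-1.0, sigma=1.2, size=n_real)
regulatory_level = 1.0
p_fail = np.mean(indoor_air > regulatory_level)          # probability of failure

# Cost of failure: future house value indexed to a (simulated) housing index.
house_value_today = 250_000.0
annual_index_growth = rng.normal(0.03, 0.05, size=n_real)  # assumed
cost_of_failure = house_value_today * (1 + annual_index_growth)

# Expected-value premium principle: expected loss plus a proportional safety
# loading (theta) reflecting the developer's risk aversion.
theta = 0.25
expected_loss = p_fail * cost_of_failure.mean()
risk_capital = (1 + theta) * expected_loss
print(f"P(failure) = {p_fail:.3f}, risk capital per house = ${risk_capital:,.0f}")
```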
Stoll, Richard; Cappel, I; Jablonski-Momeni, Anahita; Pieper, K; Stachniss, V
2007-01-01
This study evaluated the long-term survival of inlays and partial crowns made of IPS Empress. For this purpose, the patient data of a prospective study were examined in retrospect and statistically evaluated. All of the inlays and partial crowns fabricated of IPS-Empress within the Department of Operative Dentistry at the School of Dental Medicine of Philipps University, Marburg, Germany were systematically recorded in a database between 1991 and 2001. The corresponding patient files were revised at the end of 2001. The information gathered in this way was used to evaluate the survival of the restorations using the method described by Kaplan and Meier. A total of n = 1624 restorations were fabricated of IPS-Empress within the observation period. During this time, n = 53 failures were recorded. The remaining restorations were observed for a mean period of 18.77 months. The failures were mainly attributed to fractures, endodontic problems and cementation errors. The last failure was established after 82 months. At this stage, a cumulative survival probability of p = 0.81 was registered with a standard error of 0.04. At this time, n = 30 restorations were still being observed. Restorations on vital teeth (n = 1588) showed 46 failures, with a cumulative survival probability of p = 0.82. Restorations performed on non-vital teeth (n = 36) showed seven failures, with a cumulative survival probability of p = 0.53. Highly significant differences were found between the two groups (p < 0.0001) in a log-rank test. No significant difference (p = 0.41) was found between the patients treated by students (n = 909) and those treated by qualified dentists (n = 715). Likewise, no difference (p = 0.13) was established between the restorations seated with a high viscosity cement (n = 295) and those placed with a low viscosity cement (n = 1329).
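A minimal sketch of the Kaplan-Meier (product-limit) estimation used in the study, here via the lifelines package on made-up restoration data; the simulated follow-up times and failure indicators are illustrative only, so the printed survival probabilities carry no clinical meaning.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)

# Hypothetical follow-up data: months in service and a failure indicator
# (1 = restoration failed, 0 = censored at the last recall visit).
months = rng.exponential(scale=60, size=200).round(1)
failed = rng.binomial(1, 0.15, size=200)

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=failed, label="IPS Empress (simulated)")

# Cumulative survival probability at selected follow-up times.
print(kmf.survival_function_at_times([12, 36, 82]))
```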
Risk analysis by FMEA as an element of analytical validation.
van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M
2009-12-05
We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs on authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated by Risk Priority Numbers (RPNs)=O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
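The RPN arithmetic described above is simple enough to show directly. The failure modes and 1-10 scores below are invented placeholders rather than those of the NIR procedure; the point is only the O x D x S product and the ranking that drives corrective actions.

```python
# Illustrative FMEA scoring: RPN = occurrence (O) x detectability (D) x severity (S).
# Entries are hypothetical, not taken from the NIR screening study.
failure_modes = [
    {"step": "sample presentation",  "mode": "vial mislabelled",   "O": 6, "D": 7, "S": 8},
    {"step": "spectrum acquisition", "mode": "poor probe contact", "O": 4, "D": 3, "S": 5},
    {"step": "library matching",     "mode": "outdated reference", "O": 3, "D": 8, "S": 9},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["D"] * fm["S"]

# Rank by RPN so corrective actions target the riskiest modes first.
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f'{fm["RPN"]:4d}  {fm["step"]}: {fm["mode"]}')
```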
Temporal-varying failures of nodes in networks
NASA Astrophysics Data System (ADS)
Knight, Georgie; Cristadoro, Giampaolo; Altmann, Eduardo G.
2015-08-01
We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure centrality which are valid for temporal-varying failures, and we dwell on the possibility of externally changing the relative importance of nodes in a given network by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape in a node is larger than the one estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs.
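A toy simulation, under assumptions, of the failure-centrality idea: random walkers move on a small graph and are absorbed when they land on a node during one of its periodic failure steps, and the resulting loss rate is compared across nodes. The graph, the failure period, and the walker counts are arbitrary choices for illustration, not the paper's formalism or data sets.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.karate_club_graph()          # small illustrative network

def loss_rate(G, failing_node, period, n_walkers=5_000, t_max=100):
    """Fraction of random walkers absorbed at `failing_node`, which only
    'fails' (absorbs) on time steps t with t % period == 0 -- a crude
    temporal failure pattern."""
    lost = 0
    nodes = list(G.nodes())
    for _ in range(n_walkers):
        pos = rng.choice(nodes)
        for t in range(t_max):
            pos = rng.choice(list(G.neighbors(pos)))
            if pos == failing_node and t % period == 0:
                lost += 1
                break
    return lost / n_walkers

# Degree alone does not fix the loss rate: nodes with comparable degree can
# differ because the short loops (and their interference with the failure
# period) differ.
for node in (0, 5, 33):
    print(f"node {node:2d}  degree {G.degree[node]:2d}  "
          f"loss rate {loss_rate(G, node, period=3):.3f}")
```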
Chua, Daniel T T; Sham, Jonathan S T; Hung, Kwan-Ngai; Leung, Lucullus H T; Au, Gordon K H
2006-12-01
Stereotactic radiosurgery has been employed as a salvage treatment of local failures of nasopharyngeal carcinoma (NPC). To identify patients that would benefit from radiosurgery, we reviewed our data with emphasis on factors that predicted treatment outcome. A total of 48 patients with local failures of NPC were treated by stereotactic radiosurgery between March 1996 and February 2005. Radiosurgery was administered using a modified linear accelerator with single or multiple isocenters to deliver a median dose of 12.5 Gy to the target periphery. Median follow-up was 54 months. Five-year local failure-free probability after radiosurgery was 47.2% and 5-year overall survival rate was 46.9%. Neuroendocrine complications occurred in 27% of patients but there were no treatment-related deaths. Time interval from primary radiotherapy, retreatment T stage, prior local failures and tumor volume were significant predictive factors of local control and/or survival whereas age was of marginal significance in predicting survival. A radiosurgery prognostic scoring system was designed based on these predictive factors. Five-year local failure-free probabilities in patients with good, intermediate and poor prognostic scores were 100%, 42.5%, and 9.6%. The corresponding five-year overall survival rates were 100%, 51.1%, and 0%. Important factors that predicted tumor control and survival after radiosurgery were identified. Patients with good prognostic score should be treated by radiosurgery in view of the excellent results. Patients with intermediate prognostic score may also be treated by radiosurgery but those with poor prognostic score should receive other salvage treatments.
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase of the variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant for variation of soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved. But if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
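A compressed illustration of the Monte Carlo part of that workflow, using a simplified Rankine sliding check as the limit state. The wall geometry, the distributions, and their scatter are all assumptions made for the sketch, and only one of the wall's several failure modes is shown; the risk-factor and F-test steps of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Assumed random soil properties (means and scatter are illustrative only).
phi1 = np.radians(rng.normal(32.0, 32.0 * 0.07, n))   # backfill friction angle
phi2 = np.radians(rng.normal(28.0, 28.0 * 0.10, n))   # foundation friction angle
c2   = rng.normal(15.0, 4.5, n)                       # foundation cohesion, kPa
g1   = rng.normal(18.0, 0.9, n)                       # backfill unit weight, kN/m^3

H, B, W = 5.0, 2.5, 120.0   # wall height (m), base width (m), weight per metre (kN/m) -- assumed

# Simplified sliding check: Rankine active thrust vs. base sliding resistance.
Ka = np.tan(np.pi / 4.0 - phi1 / 2.0) ** 2
thrust = 0.5 * Ka * g1 * H ** 2
resistance = W * np.tan(phi2) + c2 * B
g = resistance - thrust                # limit state: sliding failure when g < 0

print(f"Monte Carlo P_f (sliding mode) = {np.mean(g < 0):.3e}")
```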
Raffa, Santi; Fantoni, Cecilia; Restauri, Luigia; Auricchio, Angelo
2005-10-01
We describe the case of a patient with atrioventricular (AV) junction ablation and chronic biventricular pacing in which intermittent dysfunction of the right ventricular (RV) lead resulted in left ventricular (LV) stimulation alone and the onset of severe right heart failure. Restoration of biventricular pacing by increasing device output and then performing lead revision resolved the issue. This case provides evidence that LV pacing alone in patients with AV junction ablation may lead to severe right heart failure, most likely as a result of iatrogenic mechanical dyssynchrony within the RV. Thus, this pacing mode should probably be avoided in pacemaker-dependent patients with heart failure.
NASA Technical Reports Server (NTRS)
Holanda, R.; Frause, L. M.
1977-01-01
The reliability of 45 state-of-the-art strain gage systems under full scale engine testing was investigated. The flame spray process was used to install 23 systems on the first fan rotor of a YF-100 engine; the others were epoxy cemented. A total of 56 percent of the systems failed in 11 hours of engine operation. Flame spray system failures were primarily due to high gage resistance, probably caused by high stress levels. Epoxy system failures were principally erosion failures, but only on the concave side of the blade. Lead-wire failures between the blade-to-disk jump and the control room could not be analyzed.
NASA Technical Reports Server (NTRS)
Johnson, W. S.; Bigelow, C. A.; Bahei-El-din, Y. A.
1983-01-01
Experimental results for five laminate orientations of boron/aluminum composites containing either circular holes or crack-like slits are presented. Specimen stress-strain behavior, stress at first fiber failure, and ultimate strength were determined. Radiographs were used to monitor the fracture process. The specimens were analyzed with a three-dimensional elastic-elastic finite-element model. The first fiber failures in notched specimens with laminate orientation occurred at or very near the specimen ultimate strength. For notched unidirectional specimens, the first fiber failure occurred at approximately one-half of the specimen ultimate strength. Acoustic emission events correlated with fiber breaks in unidirectional composites, but did not for other laminates. Circular holes and crack-like slits of the same characteristic length were found to produce approximately the same strength reduction. The predicted stress-strain responses and stress at first fiber failure compared very well with test data for laminates containing 0 deg fibers.
Risk Due to Radiological Terror Attacks With Natural Radionuclides
NASA Astrophysics Data System (ADS)
Friedrich, Steinhäusler; Stan, Rydell; Lyudmila, Zaitseva
2008-08-01
The naturally occurring radionuclides radium (Ra-226) and polonium (Po-210) have the potential to be used for criminal acts. Analysis of international incident data contained in the Database on Nuclear Smuggling, Theft and Orphan Radiation Sources (CSTO), operated at the University of Salzburg, shows that several acts of murder and terrorism with natural radionuclides have already been carried out in Europe and Russia. Five different modes of attack (T) are possible: (1) Covert irradiation of an individual in order to deliver a high individual dose; (2) Covert irradiation of a group of persons delivering a large collective dose; (3) Contamination of food or drink; (4) Generation of radioactive aerosols or solutions; (5) Combination of Ra-226 with conventional explosives (Dirty Bomb). This paper assesses the risk (R) of such criminal acts in terms of: (a) Probability of terrorist motivation deploying a certain attack mode T; (b) Probability of success by the terrorists for the selected attack mode T; (c) Primary damage consequence (C) to the attacked target (activity, dose); (d) Secondary damage consequence (C') to the attacked target (psychological and socio-economic effects); (e) Probability that the consequences (C, C') cannot be brought under control, resulting in a failure to manage successfully the emergency situation due to logistical and/or technical deficits in implementing adequate countermeasures. Extensive computer modelling is used to determine the potential impact of such a criminal attack on directly affected victims and on the environment.
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response to high velocity impact of a composite blade is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluations are given in terms of probability cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
Analysis of Failures of High Speed Shaft Bearing System in a Wind Turbine
NASA Astrophysics Data System (ADS)
Wasilczuk, Michał; Gawarkiewicz, Rafał; Bastian, Bartosz
2018-01-01
During the operation of wind turbines with a gearbox of traditional configuration, consisting of one planetary stage and two helical stages, a high failure rate of high speed shaft bearings is observed. Such a high failure frequency is not reflected in the results of standard calculations of bearing durability. Most probably it can be attributed to an atypical failure mechanism. The authors studied problems in 1.5 MW wind turbines at one of the Polish wind farms. The analysis showed that problems of high failure rate are commonly met all over the world and that the statistics for the analysed turbines were very similar. After a study of the potential failure mechanism and its possible causes, a modification of the existing bearing system was proposed. Various options with different bearing types were investigated. The different versions were examined for expected durability increase, the extent of necessary gearbox modifications, and the possibility of solving existing problems in operation.
Investigation into Cause of High Temperature Failure of Boiler Superheater Tube
NASA Astrophysics Data System (ADS)
Ghosh, D.; Ray, S.; Roy, H.; Shukla, A. K.
2015-04-01
The failure of boiler tubes occurs due to various reasons such as creep, fatigue, corrosion and erosion. This paper highlights a case study of the typical premature failure of a final superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement, chemical analysis, oxide scale thickness measurement and microstructural examination are conducted as part of the investigations. Apart from these investigations, sulfur printing, energy dispersive spectroscopy (EDS) and X-ray diffraction (XRD) analysis are also conducted to ascertain the probable cause of failure of the final superheater tube. Finally, it has been concluded that the premature failure of the superheater tube can be attributed to the combination of localized high tube metal temperature and loss of metal from the outer surface due to high temperature corrosion. Corrective actions have also been suggested to avoid this type of failure in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
A model for predicting embankment slope failures in clay-rich soils; A Louisiana example
NASA Astrophysics Data System (ADS)
Burns, S. F.
2015-12-01
It is well known that smectite-rich soils significantly reduce the stability of slopes. The question is how much smectite in the soil causes slope failures. A study of over 100 sites in north and south Louisiana, USA, compared slopes that failed during a major El Nino winter (heavy rainfall) in 1982-1983 to similar slopes that did not fail. Soils in the slopes were tested for per cent clay, liquid limits, plasticity indices and semi-quantitative clay mineralogy. Slopes with a High Risk of failure (85-90% chance of failure in 8-15 years after construction) contained soils with a liquid limit > 54%, a plasticity index over 29%, and clay contents > 47%. Slopes with an Intermediate Risk (50-55% chance of failure in 8-15 years) contained soils with a liquid limit between 36-54%, plasticity index between 16-19%, and clay content between 32-47%. Slopes with a Low Risk of failure (< 5% chance of failure in 8-15 years after construction) contained soils with a liquid limit < 36%, a plasticity index < 16%, and a clay content < 32%. These data show that if one is constructing embankments and wants to prevent failure of the 3:1 slopes, the above soil characteristics should be checked before construction. If the soils fall into the Low Risk classification, construct the embankment normally. If the soils fall into the High Risk classification, lime stabilization or heat treatments will be needed to prevent failures. Soils in the Intermediate Risk class will have to be evaluated on a case by case basis.
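The published bands translate directly into a screening function. The sketch below is only a screening aid built on the thresholds quoted in the abstract (High: LL > 54, PI > 29, clay > 47%; Low: LL < 36, PI < 16, clay < 32%), with everything in between treated as intermediate; it is not a substitute for the full site evaluation.

```python
def embankment_risk_class(liquid_limit, plasticity_index, clay_pct):
    """Screen a clay-rich embankment soil using the risk bands quoted in
    the abstract. Anything not clearly High or Low is flagged as
    Intermediate for case-by-case evaluation."""
    if liquid_limit > 54 and plasticity_index > 29 and clay_pct > 47:
        return "High risk: lime stabilization or heat treatment advised"
    if liquid_limit < 36 and plasticity_index < 16 and clay_pct < 32:
        return "Low risk: construct normally"
    return "Intermediate risk: evaluate case by case"

print(embankment_risk_class(liquid_limit=58, plasticity_index=31, clay_pct=50))
print(embankment_risk_class(liquid_limit=30, plasticity_index=12, clay_pct=25))
```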
NASA Astrophysics Data System (ADS)
Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin
2008-11-01
A modified Split Hopkinson Pressure Bar (SHPB) technique with aluminum pressure bars and a pulse shaper was used to achieve a closer impedance match between the pressure bars and the specimen materials, such as hot-temperature-degraded POM (Poly Oxy Methylene) and PP (Poly Propylene). More distinguishable experimental signals were obtained to evaluate the more accurate dynamic deformation behavior of materials under a high strain rate loading condition. A pulse shaping technique is introduced to reduce the non-equilibrium in the dynamic material response by modulation of the incident wave during a short period of the test. This increases the rise time of the incident pulse in the SHPB experiment. For the dynamic stress-strain curve obtained from the SHPB experiment, the Johnson-Cook model is applied as a constitutive equation. The applicability of this constitutive equation is verified by using a probabilistic reliability estimation method. Two reliability methodologies, the FORM and the SORM, have been proposed. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses. The LSF in this study allows more statistical flexibility on the yield stress than a previously published formulation. It is found that the failure probability estimated by using the SORM is more reliable than that of the FORM. It is also noted that the failure probability increases with increase of the applied stress. Moreover, the parameters of the Johnson-Cook model, such as A and n, and the applied stress are found to affect the failure probability more severely than the other random variables according to the sensitivity analysis.
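A schematic version of the reliability step: the Johnson-Cook flow stress enters a limit state g = sigma_JC - sigma_applied, and the failure probability is estimated here by plain Monte Carlo rather than FORM/SORM. The parameter values, the distributions on A and n, and the applied stress are placeholders, not the fitted POM/PP constants from the study, and the thermal-softening term is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Johnson-Cook flow stress: sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)),
# thermal term omitted. Parameter values and scatter are illustrative only.
A = rng.normal(60.0, 6.0, n)          # MPa, random (statistical flexibility on yield)
B, C = 25.0, 0.05
n_exp = rng.normal(0.35, 0.03, n)
eps, rate, rate0 = 0.10, 1500.0, 1.0

sigma_jc = (A + B * eps ** n_exp) * (1 + C * np.log(rate / rate0))

sigma_applied = 75.0                  # MPa, an assumed service stress
g = sigma_jc - sigma_applied          # limit state: failure when g < 0
print(f"Monte Carlo failure probability ~ {np.mean(g < 0):.4f}")
```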
Individual versus systemic risk and the Regulator's Dilemma.
Beale, Nicholas; Rand, David G; Battey, Heather; Croxson, Karen; May, Robert M; Nowak, Martin A
2011-08-02
The global financial crisis of 2007-2009 exposed critical weaknesses in the financial system. Many proposals for financial reform address the need for systemic regulation--that is, regulation focused on the soundness of the whole financial system and not just that of individual institutions. In this paper, we study one particular problem faced by a systemic regulator: the tension between the distribution of assets that individual banks would like to hold and the distribution across banks that best supports system stability if greater weight is given to avoiding multiple bank failures. By diversifying its risks, a bank lowers its own probability of failure. However, if many banks diversify their risks in similar ways, then the probability of multiple failures can increase. As more banks fail simultaneously, the economic disruption tends to increase disproportionately. We show that, in model systems, the expected systemic cost of multiple failures can be largely explained by two global parameters of risk exposure and diversity, which can be assessed in terms of the risk exposures of individual actors. This observation hints at the possibility of regulatory intervention to promote systemic stability by incentivizing a more diverse diversification among banks. Such intervention offers the prospect of an additional lever in the armory of regulators, potentially allowing some combination of improved system stability and reduced need for additional capital.
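A toy Monte Carlo version of the paper's core observation: full, identical diversification lowers each bank's individual failure probability but concentrates the system on the same shocks, so simultaneous failures become more likely than when banks hold distinct assets. The asset model, the heavy-tailed shocks, and the failure threshold are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
n_banks, n_assets, n_sims = 10, 10, 200_000
loss_threshold = -4.5        # a bank fails below this portfolio return (assumed)

def failure_counts(weights):
    """weights: (n_banks, n_assets) rows of portfolio weights (variance-
    normalized here). Returns the number of failed banks per simulation."""
    shocks = rng.standard_t(df=4, size=(n_sims, n_assets))   # heavy-tailed asset shocks
    w = weights / np.sqrt((weights ** 2).sum(axis=1, keepdims=True))
    portfolio = shocks @ w.T                                  # (n_sims, n_banks)
    return (portfolio < loss_threshold).sum(axis=1)

identical = np.ones((n_banks, n_assets))   # every bank diversified the same way
distinct = np.eye(n_banks)                 # each bank concentrated in its own asset

for label, w in [("identical diversification", identical), ("distinct holdings", distinct)]:
    fails = failure_counts(w)
    print(f"{label:26s} mean P(bank fails) = {fails.mean() / n_banks:.4f}   "
          f"P(>=5 fail together) = {(fails >= 5).mean():.5f}")
```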
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.
Vulnerability of bridges to scour: insights from an international expert elicitation workshop
NASA Astrophysics Data System (ADS)
Lamb, Rob; Aspinall, Willy; Odbert, Henry; Wagener, Thorsten
2017-08-01
Scour (localised erosion) during flood events is one of the most significant threats to bridges over rivers and estuaries, and has been the cause of numerous bridge failures, with damaging consequences. Mitigation of the risk of bridges being damaged by scour is therefore important to many infrastructure owners, and is supported by industry guidance. Even after mitigation, some residual risk remains, though its extent is difficult to quantify because of the uncertainties inherent in the prediction of scour and the assessment of the scour risk. This paper summarises findings from an international expert workshop on bridge scour risk assessment that explores uncertainties about the vulnerability of bridges to scour. Two specialised structured elicitation methods were applied to explore the factors that experts in the field consider important when assessing scour risk and to derive pooled expert judgements of bridge failure probabilities that are conditional on a range of assumed scenarios describing flood event severity, bridge and watercourse types and risk mitigation protocols. The experts' judgements broadly align with industry good practice, but indicate significant uncertainty about quantitative estimates of bridge failure probabilities, reflecting the difficulty in assessing the residual risk of failure. The data and findings presented here could provide a useful context for the development of generic scour fragility models and their associated uncertainties.
Factors Predicting Meniscal Allograft Transplantation Failure
Parkinson, Ben; Smith, Nicholas; Asplin, Laura; Thompson, Peter; Spalding, Tim
2016-01-01
Background: Meniscal allograft transplantation (MAT) is performed to improve symptoms and function in patients with a meniscal-deficient compartment of the knee. Numerous studies have shown a consistent improvement in patient-reported outcomes, but high failure rates have been reported by some studies. The typical patients undergoing MAT often have multiple other pathologies that require treatment at the time of surgery. The factors that predict failure of a meniscal allograft within this complex patient group are not clearly defined. Purpose: To determine predictors of MAT failure in a large series to refine the indications for surgery and better inform future patients. Study Design: Cohort study; Level of evidence, 3. Methods: All patients undergoing MAT at a single institution between May 2005 and May 2014 with a minimum of 1-year follow-up were prospectively evaluated and included in this study. Failure was defined as removal of the allograft, revision transplantation, or conversion to a joint replacement. Patients were grouped according to the articular cartilage status at the time of the index surgery: group 1, intact or partial-thickness chondral loss; group 2, full-thickness chondral loss 1 condyle; and group 3, full-thickness chondral loss both condyles. The Cox proportional hazards model was used to determine significant predictors of failure, independently of other factors. Kaplan-Meier survival curves were produced for overall survival and significant predictors of failure in the Cox proportional hazards model. Results: There were 125 consecutive MATs performed, with 1 patient lost to follow-up. The median follow-up was 3 years (range, 1-10 years). The 5-year graft survival for the entire cohort was 82% (group 1, 97%; group 2, 82%; group 3, 62%). The probability of failure in group 1 was 85% lower (95% CI, 13%-97%) than in group 3 at any time. The probability of failure with lateral allografts was 76% lower (95% CI, 16%-89%) than medial allografts at any time. Conclusion: This study showed that the presence of severe cartilage damage at the time of MAT and medial allografts were significantly predictive of failure. Surgeons and patients should use this information when considering the risks and benefits of surgery. PMID:27583257
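A schematic of the modelling step with lifelines: a Cox proportional hazards fit on a made-up stand-in for the MAT cohort with the two predictors the study found important (cartilage status group and medial versus lateral graft). The data are simulated, so the fitted hazard ratios carry no clinical meaning; only the workflow is shown.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 124

# Simulated cohort: cartilage group (1-3), graft side (1 = medial), and
# exponential failure times whose hazard depends on both (assumed effects).
group = rng.integers(1, 4, n)
medial = rng.integers(0, 2, n)
hazard = 0.03 * np.exp(0.8 * (group - 1) + 0.9 * medial)
time_to_fail = rng.exponential(1 / hazard)
follow_up = rng.uniform(1, 10, n)

df = pd.DataFrame({
    "years": np.minimum(time_to_fail, follow_up),
    "failed": (time_to_fail <= follow_up).astype(int),
    "cartilage_group": group,
    "medial_graft": medial,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="failed")
cph.print_summary()   # hazard ratios analogous to the paper's predictors of failure
```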
Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems
Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul
2010-01-01
Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802
Follow-up of the original cohort with the Ahmed glaucoma valve implant.
Topouzis, F; Coleman, A L; Choplin, N; Bethlem, M M; Hill, R; Yu, F; Panek, W C; Wilson, M R
1999-08-01
To study the long-term results of the Ahmed glaucoma valve implant in patients with complicated glaucoma in whom short-term results have been reported. In this multicenter study, we analyzed the long-term outcome of a cohort of 60 eyes from 60 patients in whom the Ahmed glaucoma valve was implanted. Failure was characterized by at least one of the following: intraocular pressure greater than 21 mm Hg at both of the last two visits, intraocular pressure less than 6 mm Hg at both of the last two visits, loss of light perception, additional glaucoma surgery, devastating complications, and removal or replacement of the Ahmed glaucoma valve implant. Devastating complications included chronic hypotony, retinal detachment, malignant glaucoma, endophthalmitis, and phthisis bulbi; we also report results that add corneal complications (corneal decompensation or edema, corneal graft failure) as defining a devastating complication. The mean follow-up time for the 60 eyes was 30.5 months (range, 2.1 to 63.5). When corneal complications were included in the definition of failure, 26 eyes (43%) were considered failures. Cumulative probabilities of success at 1, 2, 3, and 4 years were 76%, 68%, 54%, and 45%, respectively. When corneal complications were excluded from the definition of failure, 13 eyes (21.5%) were considered failures. Cumulative probabilities of success at 1, 2, 3, and 4 years were 87%, 82%, 76%, and 76%, respectively. Most of the failures after 12 months of postoperative follow-up were because of corneal complications. The long-term performance of the Ahmed glaucoma valve implant is comparable to other drainage devices. More than 12 months after the implantation of the Ahmed glaucoma valve implant, the most frequent adverse outcome was corneal decompensation or corneal graft failure. These corneal problems may be secondary to the type of eyes that have drainage devices or to the drainage device itself. Further investigation is needed to identify the reasons that corneal problems follow drainage device implantation.
Probability of Accurate Heart Failure Diagnosis and the Implications for Hospital Readmissions.
Carey, Sandra A; Bass, Kyle; Saracino, Giovanna; East, Cara A; Felius, Joost; Grayburn, Paul A; Vallabhan, Ravi C; Hall, Shelley A
2017-04-01
Heart failure (HF) is a complex syndrome with inherent diagnostic challenges. We studied the scope of possibly inaccurately documented HF in a large health care system among patients assigned a primary diagnosis of HF at discharge. Through a retrospective record review and a classification schema developed from published guidelines, we assessed the probability of the documented HF diagnosis being accurate and determined factors associated with HF-related and non-HF-related hospital readmissions. An arbitration committee of 3 experts reviewed a subset of records to corroborate the results. We assigned a low probability of accurate diagnosis to 133 (19%) of the 712 patients. A subset of patients were also reviewed by an expert panel, which concluded that 13% to 35% of patients probably did not have HF (inter-rater agreement, kappa = 0.35). Low-probability HF was predictive of being readmitted more frequently for non-HF causes (p = 0.018), as well as documented arrhythmias (p = 0.023), and age >60 years (p = 0.006). Documented sleep apnea (p = 0.035), percutaneous coronary intervention (p = 0.006), non-white race (p = 0.047), and B-type natriuretic peptide >400 pg/ml (p = 0.007) were determined to be predictive of HF readmissions in this cohort. In conclusion, approximately 1 in 5 patients documented to have HF were found to have a low probability of actually having it. Moreover, the determination of low-probability HF was twice as likely to result in readmission for non-HF causes and, thus, should be considered a determinant for all-cause readmissions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
Fault tree applications within the safety program of Idaho Nuclear Corporation
NASA Technical Reports Server (NTRS)
Vesely, W. E.
1971-01-01
Computerized fault tree analyses are used to obtain both qualitative and quantitative information about the safety and reliability of an electrical control system that shuts the reactor down when certain safety criteria are exceeded, in the design of a nuclear plant protection system, and in an investigation of a backup emergency system for reactor shutdown. The fault tree yields the modes by which the system failure or accident will occur, the most critical failure or accident causing areas, detailed failure probabilities, and the response of safety or reliability to design modifications and maintenance schemes.
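A bare-bones numeric fault-tree evaluation of the kind described: basic-event probabilities combined through AND gates (product, assuming independence) and OR gates (one minus the product of complements). The tree structure and the probabilities are invented, not those of the Idaho Nuclear systems.

```python
# Minimal fault-tree gate arithmetic, assuming independent basic events.
def AND(*p):            # all inputs must fail for the gate to fail
    out = 1.0
    for pi in p:
        out *= pi
    return out

def OR(*p):             # any input failing fails the gate
    out = 1.0
    for pi in p:
        out *= (1.0 - pi)
    return 1.0 - out

# Hypothetical basic-event probabilities (per demand).
sensor_a, sensor_b = 1e-3, 1e-3
logic_unit = 5e-4
breaker_1, breaker_2 = 2e-3, 2e-3
operator_error = 1e-2

# Top event: failure to shut the reactor down. Redundant sensors and redundant
# breakers each form AND gates; any branch failing is enough (OR gate).
top = OR(AND(sensor_a, sensor_b), logic_unit, AND(breaker_1, breaker_2), operator_error)
print(f"Top-event probability ~ {top:.2e}")
```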
Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle
NASA Technical Reports Server (NTRS)
Redd, L.
1985-01-01
Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to the main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions/overhaul, failures per mission, and EVA and IVA cost. In conclusion, two or three engines are recommended in view of their highest reliability, minimum life-cycle cost, and fail operational/fail safe capability.
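A back-of-the-envelope version of the engine-count trade: if each engine fails independently with probability p per mission and any single surviving engine can complete the burn, the main-propulsion-system failure probability is p^n. The p value is an assumption, and the abstract's point is precisely that nonindependent failures break this idealization, so p^n should be read as an optimistic bound rather than the study's result.

```python
# Assumed per-mission engine failure probability; independence assumed,
# and one surviving engine is taken to suffice (fail-op / fail-safe).
p = 0.02
for n_engines in (1, 2, 3, 4):
    print(f"{n_engines} engine(s): P(total MPS failure) = {p ** n_engines:.1e}")
```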
PROBABILISTIC RISK ANALYSIS OF RADIOACTIVE WASTE DISPOSALS - a case study
NASA Astrophysics Data System (ADS)
Trinchero, P.; Delos, A.; Tartakovsky, D. M.; Fernandez-Garcia, D.; Bolster, D.; Dentz, M.; Sanchez-Vila, X.; Molinero, J.
2009-12-01
The storage of contaminant material in superficial or sub-superficial repositories, such as tailing piles for mine waste or disposal sites for low and intermediate nuclear waste, poses a potential threat to the surrounding biosphere. The minimization of these risks can be achieved by supporting decision-makers with quantitative tools capable of incorporating all sources of uncertainty within a rigorous probabilistic framework. A case study is presented where we assess the risks associated with the superficial storage of hazardous waste close to a populated area. The intrinsic complexity of the problem, involving many events with different spatial and time scales and many uncertain parameters, is overcome by using a formal PRA (probabilistic risk assessment) procedure that allows decomposing the system into a number of key events. Hence, the failure of the system is directly linked to the potential contamination of one of the three main receptors: the underlying karst aquifer, a superficial stream that flows near the storage piles and a protection area surrounding a number of wells used for water supply. The minimal cut sets leading to the failure of the system are obtained by defining a fault tree that incorporates different events including the failure of the engineered system (e.g. cover of the piles) and the failure of the geological barrier (e.g. clay layer that separates the bottom of the pile from the karst formation). Finally, the probability of failure is quantitatively assessed by combining individual independent or conditional probabilities that are computed numerically or borrowed from a reliability database.
Brain natriuretic peptide-guided therapy in the inpatient management of decompensated heart failure.
Saremi, Adonis; Gopal, Dipika; Maisel, Alan S
2012-02-01
Heart failure is extremely prevalent and is associated with significant mortality, morbidity and cost. Studies have already established mortality benefit with the use of neurohormonal blockade therapy in systolic failure. Unfortunately, physical signs and symptoms of heart failure lack diagnostic sensitivity and specificity, and medication doses proven to improve mortality in clinical trials are often not achieved. Brain natriuretic peptide (BNP) has proven to be of clinical use in the diagnosis and prognosis of heart failure, and recent efforts have been taken to further elucidate its role in guiding heart failure management. Multiple studies have been conducted on outpatient guided management, and although still controversial, there is a trend towards improved outcomes. Inpatient studies are lacking, but preliminary data suggest various BNP cut-off values, as well as percentage changes in BNP, that could be useful in predicting outcomes and improving mortality. In the future, heart failure management will probably involve an algorithm using clinical assessment and a multibiomarker-guided approach.
NASA Astrophysics Data System (ADS)
Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.
2011-04-01
One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can prevent or even reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is applied to evaluate the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs and better continuity of service.
Ghavami, Behnam; Raji, Mohsen; Pedram, Hossein
2011-08-26
Carbon nanotube field-effect transistors (CNFETs) show great promise as building blocks of future integrated circuits. However, synthesizing single-walled carbon nanotubes (CNTs) with accurate chirality and exact positioning control has been widely acknowledged as an exceedingly complex task. Indeed, density and chirality variations in CNT growth can compromise the reliability of CNFET-based circuits. In this paper, we present a novel statistical compact model to estimate the failure probability of CNFETs to provide some material and process guidelines for the design of CNFETs in gigascale integrated circuits. We use measured CNT spacing distributions within the framework of detailed failure analysis to demonstrate that both the CNT density and the ratio of metallic to semiconducting CNTs play dominant roles in defining the failure probability of CNFETs. Besides, it is argued that the large-scale integration of these devices within an integrated circuit will be feasible only if a specific range of CNT density with an acceptable ratio of semiconducting to metallic CNTs can be adjusted in a typical synthesis process.
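One commonly used simplification, an assumption here rather than the paper's full compact model, treats a CNFET as failed if its channel contains no semiconducting CNT (open) or at least one metallic CNT (short), with the CNT count Poisson-distributed from the density. The sketch below shows how the failure probability then depends on the two quantities the abstract highlights, CNT density and the metallic fraction; the density values and removal-process numbers are illustrative.

```python
import numpy as np

def cnfet_failure_prob(mean_cnts, metallic_fraction):
    """Simplified model: a CNFET works only if it has at least one CNT and
    every CNT under the gate is semiconducting; metallic CNTs short the
    device, zero CNTs leave it open. CNT count ~ Poisson(mean_cnts).
    P(work) = exp(-lambda*m) - exp(-lambda), with m the metallic fraction."""
    p_work = np.exp(-mean_cnts * metallic_fraction) - np.exp(-mean_cnts)
    return 1.0 - p_work

for density in (2, 4, 8):                 # mean CNTs per device (assumed)
    for m in (1 / 3, 0.05):               # as-grown vs. after metallic-CNT removal (assumed)
        print(f"mean CNTs = {density}, metallic fraction = {m:.2f}: "
              f"P(fail) = {cnfet_failure_prob(density, m):.3f}")
```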
Impact of distributed energy resources on the reliability of a critical telecommunications facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, David; Zuffranieri, Jason V.; Atcitty, Christopher B.
2006-03-01
This report documents a probabilistic risk assessment of an existing power supply system at a large telecommunications office. The focus is on characterizing the increase in the reliability of power supply through the use of two alternative power configurations. Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. The analysis is based on a hierarchical Bayesian approach and focuses on the failure probability associated with each of three possible facility configurations, along with assessment of the uncertainty or confidence level in the probability of failure. A risk-based characterization of the final best configuration is presented.
Impacts of geographical locations and sociocultural traits on the Vietnamese entrepreneurship.
Vuong, Quan Hoang
2016-01-01
This paper presents new results obtained from investigating the data from a 2015 Vietnamese entrepreneurs' survey, containing 3071 observations. Evidence from the estimations using multinomial logits was found to support relationships between several sociocultural factors and entrepreneurship-related performance or traits. Specifically, those relationships include: (a) Active participation in entrepreneurs' social networks and reported value of creativity; (b) CSR-willingness and reported entrepreneurs' perseverance; (c) Transforming of sociocultural values and entrepreneurs' decisiveness; and, (d) Lessons learned from others' failures and perceived chance of success. Using geographical locations as the control variate, evaluations of the baseline-category logits models indicate their varying effects on the outcomes when combined with the sociocultural factors that are found to be statistically significant. Empirical probabilities that give further detail about behavioral patterns are provided; and toward the end, the paper offers some conclusions with some striking insights and useful explanations on the Vietnamese entrepreneurship processes.
Viscoelastic analysis of a dental metal-ceramic system
NASA Astrophysics Data System (ADS)
Özüpek, Şebnem; Ünlü, Utku Cemal
2012-11-01
Porcelain-fused-to-metal (PFM) restorations used in prosthetic dentistry contain thermal stresses which develop during the cooling phase after firing. These thermal stresses coupled with the stresses produced by mechanical loads may be the dominant reasons for failures in clinical situations. For an accurate calculation of these stresses, viscoelastic behavior of ceramics at high temperatures should not be ignored. In this study, the finite element technique is used to evaluate the effect of viscoelasticity on stress distributions of a three-point flexure test specimen, which is the current international standard, ISO 9693, to characterize the interfacial bond strength of metal-ceramic restorative systems. Results indicate that the probability of interfacial debonding due to normal tensile stress is higher than that due to shear stress. This conclusion suggests modification of ISO 9693 bond strength definition from one in terms of the shear stress only to that accounting for both normal and shear stresses.
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Background: Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study was aimed to identify and assess the common errors in the reception process by applying the approach of “failure modes and effects analysis” (FMEA). Materials and Methods: In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members’ decision. At first, the admission process was charted through observations and interviewing employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies all over the world and also in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through reviewing and confirmation of the obtained information by the FMEA team, and eventually possible errors, causes, and three indicators of severity of effect, probability of occurrence, and probability of detection were determined and corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Results: Twenty-five errors with RPN ≥ 125 were detected through the admission process, of which six errors had high priority in terms of severity and occurrence probability and were identified as high-risk errors. Conclusions: The team-oriented method of FMEA could be useful for assessment of errors and also to reduce the occurrence probability of errors. PMID:28194208
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbo pump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade were calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations on the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.
Evaluating Micrometeoroid and Orbital Debris Risk Assessments Using Anomaly Data
NASA Technical Reports Server (NTRS)
Squire, Michael
2017-01-01
The accuracy of micrometeoroid and orbital debris (MMOD) risk assessments can be difficult to evaluate. A team from the National Aeronautics and Space Administration (NASA) Engineering and Safety Center (NESC) has completed a study that compared MMOD-related failures on operational satellites to predictions of how many of those failures should occur using NASA's MMOD risk assessment methodology and tools. The study team used the Poisson probability to quantify the degree of inconsistency between the predicted and reported numbers of failures. Many elements go into a risk assessment, and each of those elements represents a possible source of uncertainty or bias that will influence the end result. There are also challenges in obtaining accurate and useful data on MMOD-related failures.
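The consistency test described can be reproduced in a few lines with scipy: given a predicted expected number of MMOD-induced failures over the observation period, the Poisson tail probability of a count at least as extreme as the reported one quantifies the disagreement. The lambda and observed count below are placeholders, not the NESC study's numbers.

```python
from scipy.stats import poisson

predicted = 6.0   # expected number of MMOD-induced failures from the risk tool (assumed)
observed = 1      # failures actually reported for the satellite group (assumed)

# Probability of observing a count this low (or lower) if the prediction were right,
# and of observing a count this high or higher.
p_low = poisson.cdf(observed, predicted)
p_high = poisson.sf(observed - 1, predicted)
print(f"P(X <= {observed} | lambda = {predicted}) = {p_low:.3f}")
print(f"P(X >= {observed} | lambda = {predicted}) = {p_high:.3f}")
```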
Statistical modeling of SRAM yield performance and circuit variability
NASA Astrophysics Data System (ADS)
Cheng, Qi; Chen, Yijian
2015-03-01
In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability of failing in the read/write operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to the 7-nm half pitch and that more robust SRAM circuit design needs to be researched.
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea with DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that the DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers a useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.
Probabilistic structural analysis of aerospace components using NESSUS
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G
2008-01-01
To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT). A discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the specific prostaglandin-timolol sequence LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective at avoiding starting a third line treatment than the L-LT sequence. T-TT treated patients developed less glaucoma progression.
Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.
2014-01-01
Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
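A minimal sketch of this kind of method comparison, using scikit-learn on synthetic data as a stand-in for the clinical cohort; the models are generic defaults, not the authors' tuned settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a clinical dataset with a binary subtype label (e.g., HFPEF vs HFREF).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "classification tree": DecisionTreeClassifier(random_state=1),
    "random forest":       RandomForestClassifier(n_estimators=300, random_state=1),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```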
Failure probability of three designs of zirconia crowns
Ramos, G. Freitas; Monteiro, E. Barbosa Carmona; Bottino, M.A.; Zhang, Y.; de Melo, R. Marques
2015-01-01
Objectives This study utilized a 2-parameter Weibull analysis for evaluation of the lifetime of fully or partially porcelain-/glaze-veneered zirconia crowns after fatigue testing. Methods Sixty first molars were selected and prepared for full-coverage crowns with three different designs (n = 20): Traditional – crowns with a zirconia framework covered with feldspathic porcelain; Modified – crowns partially covered with veneering porcelain; and Monolithic – full-contour zirconia crowns. All specimens were treated with a glaze layer. Specimens were subjected to mechanical cycling (100 N, 3 Hz) with a piston with a hemispherical tip (Ø = 6 mm) until the specimens failed or up to 2×10⁶ cycles. Every 500,000 cycles, the fatigue tests were interrupted, and stereomicroscopy (10X) was used to inspect the specimens for damage. We performed Weibull analysis of the interval data to calculate the number of failures in each interval. Results The types and numbers of failures according to the groups were: cracking (Traditional-13, Modified-6) and chipping (Traditional-4) of the feldspathic porcelain, followed by delamination (Traditional-1) at the veneer/core interface and debonding (Monolithic-2) at the cementation interface. Weibull parameters (beta, shape; and eta, scale), with a two-sided confidence interval of 95%, were: Traditional – 1.25 and 0.9 × 10⁶ cycles; Modified – 0.58 and 11.7 × 10⁶ cycles; and Monolithic – 1.05 and 16.5 × 10⁶ cycles. Traditional crowns showed greater susceptibility to fatigue, the Modified group presented a higher propensity for early failures, and the Monolithic group showed no susceptibility to fatigue. The Modified and Monolithic groups presented the highest number of crowns with no failures after the fatigue test. Conclusions The three crown designs presented significantly different behaviors under fatigue. The Modified and the Monolithic groups presented a lower probability of failure after 2×10⁶ cycles. PMID:26509988
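Given the reported two-parameter Weibull estimates (shape β, scale η), the cumulative failure probability at the 2×10⁶-cycle test limit follows from F(t) = 1 − exp[−(t/η)^β]. The sketch below simply evaluates that expression for the three designs.

```python
from math import exp

def weibull_cdf(t_cycles: float, beta: float, eta_cycles: float) -> float:
    """Two-parameter Weibull cumulative failure probability F(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - exp(-((t_cycles / eta_cycles) ** beta))

# Reported parameters: (shape beta, scale eta in cycles).
designs = {
    "Traditional": (1.25, 0.9e6),
    "Modified":    (0.58, 11.7e6),
    "Monolithic":  (1.05, 16.5e6),
}

t_end = 2.0e6  # fatigue test limit of 2x10^6 cycles
for name, (beta, eta) in designs.items():
    print(f"{name:11s} P(failure by {t_end:.0e} cycles) = {weibull_cdf(t_end, beta, eta):.2f}")
```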
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively nominal way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does capture all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then using them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for review of the PRACA data is then used to determine which reports identify a credible failure. These data are then used to determine inter-arrival times and estimate a reliability metric for repairable components or LRUs. This analysis is used to determine the failure modes of the equipment, estimate the probability of each component failure mode, and support various quantitative techniques for performing repairable system analysis. The result is an effective and concise reliability estimate of components used in manned spaceflight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
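The reliability-metric extraction step can be illustrated with a minimal sketch: once credible-failure dates have been mined for a single LRU, the inter-arrival times give a mean time between failures and, under a homogeneous Poisson assumption, a failure rate. The dates below are fabricated placeholders, not PRACA records.

```python
from datetime import date

# Hypothetical credible-failure dates for a single ground-support LRU (placeholders, not PRACA data).
failure_dates = [date(2005, 3, 14), date(2006, 1, 2), date(2006, 11, 20),
                 date(2008, 2, 5), date(2009, 7, 30)]

# Inter-arrival times in days between successive credible failures.
interarrival_days = [(b - a).days for a, b in zip(failure_dates, failure_dates[1:])]

mtbf_days = sum(interarrival_days) / len(interarrival_days)
failure_rate_per_day = 1.0 / mtbf_days  # homogeneous Poisson process assumption

print("inter-arrival times (days):", interarrival_days)
print(f"MTBF ~ {mtbf_days:.1f} days")
print(f"rate ~ {failure_rate_per_day:.5f} failures/day")
```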
Evaluation of Pad 18 Spent Mercury Gold Trap Stainless Steel Container Failure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skidmore, E.
Failure of the Pad 18 spent mercury gold trap stainless steel waste container is principally attributed to corrosion induced by degradation of plasticized polyvinyl chloride (pPVC) waste packaging material. Dehydrochlorination of pPVC polymer by thermal and/or radiolytic degradation is well-known to evolve HCl gas, which is highly corrosive to stainless steel and other metals in the presence of moisture. Degradation of the pPVC packaging material was likely caused by radiolysis in the presence of tritium gas within the waste container, though other degradation mechanisms (aging, thermo-oxidation, plasticizer migration) over 30 years storage may have contributed. Corrosion was also likely enhanced by the crevice in the container weld design, and may have been enhanced by the presence of tritiated water. Similar non-failed spent mercury gold trap waste containers did not show radiographic evidence of plastic packaging or trapped free liquid within the container. Therefore, those containers are not expected to exhibit similar failures. Halogenated polymers such as pPVC subject to degradation can evolve halide gases such as HCl, which is corrosive in the presence of moisture and can generate pressure in sealed systems.
Schackman, Bruce R; Ribaudo, Heather J; Krambrink, Amy; Hughes, Valery; Kuritzkes, Daniel R; Gulick, Roy M
2007-12-15
Blacks had higher rates of virologic failure than whites on efavirenz-containing regimens in the AIDS Clinical Trials Group (ACTG) A5095 study; preliminary analyses also suggested an association with adherence. We rigorously examined associations over time among race, virologic failure, 4 self-reported adherence metrics, and quality of life (QOL). ACTG A5095 was a double-blind placebo-controlled study of treatment-naive HIV-positive patients randomized to zidovudine/lamivudine/abacavir versus zidovudine/lamivudine plus efavirenz versus zidovudine/lamivudine/abacavir plus efavirenz. Virologic failure was defined as confirmed HIV-1 RNA ≥200 copies/mL at ≥16 weeks on study. The zidovudine/lamivudine/abacavir arm was discontinued early because of virologic inferiority. We examined virologic failure differences for efavirenz-containing arms according to missing 0 (adherent) versus at least 1 dose (nonadherent) during the past 4 days, alternative self-reported adherence metrics, and QOL. Analyses used the Fisher exact, log rank tests, and Cox proportional hazards models. The study population included white (n = 299), black (n = 260), and Hispanic (n = 156) patients with ≥1 adherence evaluation. Virologic failure was associated with week 12 nonadherence during the past 4 days for blacks (53% nonadherent failed vs. 25% adherent; P < 0.001) but not for whites (20% nonadherent failed vs. 20% adherent; P = 0.91). After adjustment for baseline covariates and treatment, there was a significant interaction between race and week 12 adherence (P = 0.02). In time-dependent Cox models using self-reports over time to reflect recent adherence, there was a significantly higher failure risk for nonadherent subjects (hazard ratio [HR] = 2.07; P < 0.001). Significant race-adherence interactions were seen in additional models of adherence: missing at least 1 medication dose ever (P = 0.04), past month (P < 0.01), or past weekend (P = 0.05). Lower QOL was significantly associated with virologic failure (P < 0.001); there was no evidence of an interaction between QOL and race (P = 0.39) or adherence (P = 0.51) in predicting virologic failure. There was a greater effect of nonadherence on virologic failure in blacks given efavirenz-containing regimens than in whites. Self-reported adherence and QOL are independent predictors of virologic failure.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Lu, Chin-Shan; Lai, Kee-hung; Lun, Y H Venus; Cheng, T C E
2012-11-01
Recent reports on work safety in container shipping operations highlight high frequencies of human failures. In this study, we empirically examine the effects of seafarers' perceptions of national culture on the occurrence of human failures affecting work safety in shipping operations. We develop a model adopting Hofstede's national culture construct, which comprises five dimensions, namely power distance, collectivism/individualism, uncertainty avoidance, masculinity/femininity, and Confucian dynamism. We then formulate research hypotheses from theory and test the hypotheses using survey data collected from 608 seafarers who work on global container carriers. Using a point scale for evaluating seafarers' perception of the five national culture dimensions, we find that Filipino seafarers score highest on collectivism, whereas Chinese and Taiwanese seafarers score highest on Confucian dynamism, followed by collectivism, masculinity, power distance, and uncertainty avoidance. The results also indicate that Taiwanese seafarers have a propensity for uncertainty avoidance and masculinity, whereas Filipino seafarers lean more towards power distance, masculinity, and collectivism, which are consistent with the findings of Hofstede and Bond (1988). The results suggest that there will be fewer human failures in container shipping operations when power distance is low, and collectivism and uncertainty avoidance are high. Specifically, this study finds that Confucian dynamism plays an important moderating role as it affects the strength of associations between some national culture dimensions and human failures. Finally, we discuss our findings' contribution to the development of national culture theory and their managerial implications for reducing the occurrence of human failures in shipping operations. Copyright © 2012 Elsevier Ltd. All rights reserved.
Does my high blood pressure improve your survival? Overall and subgroup learning curves in health.
Van Gestel, Raf; Müller, Tobias; Bosmans, Johan
2017-09-01
Learning curves in health are of interest for a wide range of medical disciplines, healthcare providers, and policy makers. In this paper, we distinguish between three types of learning when identifying overall learning curves: economies of scale, learning from cumulative experience, and human capital depreciation. In addition, we approach the question of how treating more patients with specific characteristics predicts provider performance. To soften collinearity problems, we explore the use of least absolute shrinkage and selection operator regression as a variable selection method and Theil-Goldberger mixed estimation to augment the available information. We use data from the Belgian Transcatheter Aorta Valve Implantation (TAVI) registry, containing information on the first 860 TAVI procedures in Belgium. We find that treating an additional TAVI patient is associated with an increase in the probability of 2-year survival by about 0.16%-points. For adverse events like renal failure and stroke, we find that an extra day between procedures is associated with an increase in the probability for these events by 0.12%-points and 0.07%-points, respectively. Furthermore, we find evidence for positive learning effects from physicians' experience with defibrillation, treating patients with hypertension, and the use of certain types of replacement valves during the TAVI procedure. Copyright © 2017 John Wiley & Sons, Ltd.
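A minimal sketch of the LASSO-based variable selection described above, run on synthetic provider/patient covariates; the data and variable layout are placeholders, not the Belgian TAVI registry.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n = 860  # roughly the registry size mentioned above

# Hypothetical predictors: cumulative procedures, days since last case, patient-mix indicators, etc.
X = rng.normal(size=(n, 10))
true_coef = np.array([0.4, -0.2, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_coef + rng.normal(scale=1.0, size=n)   # stand-in outcome (e.g., a survival score)

# Cross-validated LASSO shrinks collinear or uninformative coefficients toward zero.
lasso = LassoCV(cv=5, random_state=2).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)
print("selected predictor indices:", selected)
print("coefficients:", np.round(lasso.coef_, 3))
```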
Sung, Ki Hyuk; Chung, Chin Youb; Lee, Kyoung Min; Lee, Seung Yeol; Choi, In Ho; Cho, Tae-Joon; Yoo, Won Joon; Park, Moon Seok
2014-01-01
This study aimed to determine the best treatment modality for coronal angular deformity of the knee joint in growing children using decision analysis. A decision tree was created to evaluate 3 treatment modalities for coronal angular deformity in growing children: temporary hemiepiphysiodesis using staples, percutaneous screws, or a tension band plate. A decision analysis model was constructed containing the final outcome score, probability of metal failure, and incomplete correction of deformity. The final outcome was defined as health-related quality of life and was used as a utility in the decision tree. The probabilities associated with each case were obtained by literature review, and health-related quality of life was evaluated by a questionnaire completed by 25 pediatric orthopedic experts. Our decision analysis model favored temporary hemiepiphysiodesis using a tension band plate over temporary hemiepiphysiodesis using percutaneous screws or stapling, with utilities of 0.969, 0.957, and 0.962, respectively. One-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was better than temporary hemiepiphysiodesis using percutaneous screws, when the overall complication rate of hemiepiphysiodesis using a tension band plate was lower than 15.7%. Two-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was more beneficial than temporary hemiepiphysiodesis using percutaneous screws. PMID:25276801
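The decision-analysis logic amounts to a probability-weighted expected utility per treatment branch, which the study then sweeps in sensitivity analyses. The sketch below shows that calculation; the branch probabilities and branch utilities are illustrative placeholders, and only the overall ranking (plate 0.969, staples 0.962, screws 0.957) comes from the abstract.

```python
# Hypothetical decision-tree evaluation for one treatment modality (placeholder numbers).
def expected_utility(p_metal_failure: float, p_incomplete: float,
                     u_success: float, u_metal_failure: float, u_incomplete: float) -> float:
    """Probability-weighted utility over three mutually exclusive outcomes."""
    p_success = 1.0 - p_metal_failure - p_incomplete
    return (p_success * u_success
            + p_metal_failure * u_metal_failure
            + p_incomplete * u_incomplete)

# Placeholder inputs for a tension-band-plate branch (not the study's elicited values).
u = expected_utility(p_metal_failure=0.03, p_incomplete=0.05,
                     u_success=0.98, u_metal_failure=0.85, u_incomplete=0.90)
print(f"expected utility ~ {u:.3f}")

# One-way sensitivity analysis: sweep the overall complication rate and watch the utility fall.
for p_fail in (0.05, 0.10, 0.157, 0.20):
    u_plate = expected_utility(p_fail, 0.0, 0.98, 0.85, 0.90)
    print(f"complication rate {p_fail:.3f} -> expected utility {u_plate:.3f}")
```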
Small and large wetland fragments are equally suited breeding sites for a ground-nesting passerine.
Pasinelli, Gilberto; Mayer, Christian; Gouskov, Alexandre; Schiegg, Karin
2008-06-01
Large habitat fragments are generally thought to host more species and to offer more diverse and/or better quality habitats than small fragments. However, the importance of small fragments for population dynamics in general and for reproductive performance in particular is highly controversial. Using an information-theoretic approach, we examined reproductive performance and probability of local recruitment of color-banded reed buntings Emberiza schoeniclus in relation to the size of 18 wetland fragments in northeastern Switzerland over 4 years. We also investigated if reproductive performance and recruitment probability were density-dependent. None of the four measures of reproductive performance (laying date, nest failure probability, fledgling production per territory, fledgling condition) nor recruitment probability were found to be related to wetland fragment size. In terms of fledgling production, however, fragment size interacted with year, indicating that small fragments were better reproductive grounds in some years than large fragments. Reproductive performance and recruitment probability were not density-dependent. Our results suggest that small fragments are equally suited as breeding grounds for the reed bunting as large fragments and should therefore be managed to provide a habitat for this and other specialists occurring in the same habitat. Moreover, large fragments may represent sinks in specific years because a substantial percentage of all breeding pairs in our study area breed in large fragments, and reproductive failure in these fragments due to the regularly occurring floods may have a much stronger impact on regional population dynamics than comparable events in small fragments.
The SAS4A/SASSYS-1 Safety Analysis Code System, Version 5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanning, T. H.; Brunett, A. J.; Sumner, T.
The SAS4A/SASSYS-1 computer code is developed by Argonne National Laboratory for thermal, hydraulic, and neutronic analysis of power and flow transients in liquid-metal-cooled nuclear reactors (LMRs). SAS4A was developed to analyze severe core disruption accidents with coolant boiling and fuel melting and relocation, initiated by a very low probability coincidence of an accident precursor and failure of one or more safety systems. SASSYS-1, originally developed to address loss-of-decay-heat-removal accidents, has evolved into a tool for margin assessment in design basis accident (DBA) analysis and for consequence assessment in beyond-design-basis accident (BDBA) analysis. SAS4A contains detailed, mechanistic models of transient thermal, hydraulic, neutronic, and mechanical phenomena to describe the response of the reactor core, its coolant, fuel elements, and structural members to accident conditions. The core channel models in SAS4A provide the capability to analyze the initial phase of core disruptive accidents, through coolant heat-up and boiling, fuel element failure, and fuel melting and relocation. Originally developed to analyze oxide fuel clad with stainless steel, the models in SAS4A have been extended and specialized to metallic fuel with advanced alloy cladding. SASSYS-1 provides the capability to perform a detailed thermal/hydraulic simulation of the primary and secondary sodium coolant circuits and the balance-of-plant steam/water circuit. These sodium and steam circuit models include component models for heat exchangers, pumps, valves, turbines, and condensers, and thermal/hydraulic models of pipes and plena. SASSYS-1 also contains a plant protection and control system modeling capability, which provides digital representations of reactor, pump, and valve controllers and their response to input signal changes.
NASA Astrophysics Data System (ADS)
Lugauer, F. P.; Stiehl, T. H.; Zaeh, M. F.
Modern laser systems are widely used in industry due to their excellent flexibility and high beam intensities. This leads to an increased hazard potential, because conventional laser safety barriers only offer a short protection time when illuminated with high laser powers. For that reason, active systems are increasingly used to prevent accidents with laser machines. These systems must fulfil the requirements of functional safety, e.g. according to IEC 61508, which causes high costs. The safety provided by common passive barriers is usually not considered in this context. In the presented approach, active and passive systems are evaluated from a holistic perspective. To assess the functional safety of hybrid safety systems, the failure probability of passive barriers is analysed and added to the failure probability of the active system.
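One simple way to credit both layers, used in this sketch as an assumption rather than as the paper's combination rule, is to treat the active system and the passive barrier as independent and require both to fail before a hazardous exposure occurs. The probabilities are arbitrary placeholders.

```python
# Hypothetical per-demand failure probabilities (placeholders).
p_active_fails  = 1e-3   # active protective system fails to shut the laser off
p_passive_fails = 5e-2   # passive barrier is penetrated before shutdown would matter

# If a hazardous exposure requires both layers to fail, and the failures are independent:
p_hybrid_fails = p_active_fails * p_passive_fails
print(f"active system alone : {p_active_fails:.1e}")
print(f"hybrid system       : {p_hybrid_fails:.1e}")
# Crediting the passive barrier relaxes the reliability target the active system alone must meet.
```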
Pérez, M A
2012-12-01
Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter on the fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as the B-model. After 6 million loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (diameter of 3.75 mm). The influence of the diameter for the GMI implant was studied and the results predicted a local failure probability of 0.91 and 0.1 for the 3.75 mm and 5 mm diameters, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolaczkowski, A.M.; Lambright, J.A.; Ferrell, W.L.
This document contains the internal event initiated accident sequence analyses for Peach Bottom, Unit 2; one of the reference plants being examined as part of the NUREG-1150 effort by the Nuclear Regulatory Commission. NUREG-1150 will document the risk of a selected group of nuclear power plants. As part of that work, this report contains the overall core damage frequency estimate for Peach Bottom, Unit 2, and the accompanying plant damage state frequencies. Sensitivity and uncertainty analyses provided additional insights regarding the dominant contributors to the Peach Bottom core damage frequency estimate. The mean core damage frequency at Peach Bottom was calculated to be 8.2E-6. Station blackout type accidents (loss of all ac power) were found to dominate the overall results. Anticipated Transient Without Scram accidents were also found to be non-negligible contributors. The numerical results are largely driven by common mode failure probability estimates and to some extent, human error. Because of significant data and analysis uncertainties in these two areas (important, for instance, to the most dominant scenario in this study), it is recommended that the results of the uncertainty and sensitivity analyses be considered before any actions are taken based on this analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, R.L.; Gallaher, R.B.
1977-08-02
This bibliography contains 100-word abstracts of reports to the U.S. Nuclear Regulatory Commission concerning operational events that occurred at boiling-water reactor nuclear power plants in 1976. The report includes 1,253 abstracts that describe incidents, failures, and design or construction deficiencies that were experienced at the facilities. They are arranged alphabetically by reactor name and then chronologically for each reactor. Key-word and permuted-title indexes are provided to facilitate location of the subjects of interest, and tables that summarize the information contained in the bibliography are provided. The information listed in the tables includes instrument failures, equipment failures, system failures, causes of failures, deficiencies noted, and the time of occurrence (i.e., during refueling, operation, testing, or construction). Three of the unique events that occurred during the year are reviewed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, R.L.; Gallaher, R.B.
1976-07-01
The bibliography presented contains 100-word abstracts of reports to the U.S. Nuclear Regulatory Commission concerning operational events that occurred at boiling-water reactor nuclear power plants in 1975. The report includes 1169 abstracts, arranged alphabetically by reactor name and then chronologically for each reactor, that describe incidents, failures, and design or construction deficiencies that were experienced at the facilities. Key-word and permuted-title indexes are provided to facilitate location of the subjects of interest, and tables that summarize the information contained in the bibliography are provided. The information listed in the tables includes instrument failures, equipment failures, system failures, causes of failures, deficiencies noted, and the time of occurrence (i.e., during refueling, operation, testing, or construction). Seven of the unique events that occurred during the year are reviewed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, R.L.; Gallaher, R.B.
1976-07-01
The bibliography presented contains 100-word abstracts of reports to the U.S. Nuclear Regulatory Commission concerning operational events that occurred at pressurized-water reactor nuclear power plants in 1975. The report includes 1097 abstracts, arranged alphabetically by reactor name and then chronologically for each reactor, that describe incidents, failures, and design or construction deficiencies experienced at the facilities. Key-word and permuted-title indexes are provided to facilitate location of the subjects of interest, and tables summarizing the information contained in the bibliography are presented. The information listed in the tables includes instrument failures, equipment failures, system failures, causes of failures, deficiencies noted, and the time of occurrence (i.e., during refueling, operation, testing, or construction). A few of the unique events that occurred during the year are reviewed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, R.L.; Gallaher, R.B.
1977-08-01
The bibliography contains 100-word abstracts of reports to the U.S. Nuclear Regulatory Commission concerning operational events that occurred at pressurized-water reactor nuclear power plants in 1976. Included are 1264 abstracts that describe incidents, failures, and design or construction deficiencies experienced at the facilities. They are arranged alphabetically by reactor name and then chronologically for each reactor. Key-word and permuted-title indexes are provided to facilitate location of the subjects of interest, and tables summarizing the information contained in the bibliography are presented. The information listed in the tables includes instrument failures, equipment failures, system failures, causes of failures, deficiencies noted, and the time of occurrence (i.e., during refueling, operation, testing, or construction). A few of the unique events that occurred during the year are reviewed in detail.
Occupational kidney disease among Chinese herbalists exposed to herbs containing aristolochic acids.
Yang, Hsiao-Yu; Wang, Jung-Der; Lo, Tsai-Chang; Chen, Pau-Chung
2011-04-01
Many Chinese herbs contain aristolochic acids (ALAs) which are nephrotoxic and carcinogenic. The objective of this study was to identify whether exposure to herbs containing ALAs increased the risk of kidney disease among Chinese herbalists. A nested case-control study was carried out on 6538 Chinese herbalists registered between 1985 and 1998. All incident cases of chronic renal failure reported to the Database of Catastrophic Illness of the National Health Insurance Bureau between 1995 and 2000 were defined as the case group. Up to four controls without renal failure were randomly matched to each case by sex and year of birth. A structured questionnaire survey was administered between November and December 2002. The Mantel-Haenszel method and conditional logistic regression were used to estimate the risks. 40 cases and 98 matched controls were included in the final analysis. After adjusting for age, frequent analgesic use, and habitual consumption of alcohol, fermented or smoked food, we found manufacturing and selling Chinese herbal medicine (OR 3.43, 95% CI 1.16 to 10.19), processing, selling or dispensing herbal medicines containing Fangji (OR 4.17, 95% CI 1.36 to 12.81), living in the workplace (OR 3.14, 95% CI 1.11 to 8.84) and a history of taking of herbal medicines containing Fangji (frequently or occasionally) (OR 5.42, 95% CI 1.18 to 24.96) were significantly associated with renal failure. Occupational exposure to and consumption of herbs containing ALAs increases the risk of renal failure in Chinese herbalists.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
Model-OA wind turbine generator - Failure modes and effects analysis
NASA Technical Reports Server (NTRS)
Klein, William E.; Lali, Vincent R.
1990-01-01
The results of a failure modes and effects analysis (FMEA) conducted for wind turbine generators are presented. The FMEA was performed for the functional modes of each system, subsystem, or component. Single-point failures were eliminated for most of the systems; the blade system was the only exception. The qualitative probability of a blade separating was estimated at level D-remote. Many changes were made to the hardware as a result of this analysis. The most significant change was the addition of the safety system. Operational experience and the need to improve machine availability have resulted in subsequent changes to the various systems, which are also reflected in this FMEA.
Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...
2017-04-24
Obtaining multi-process hard failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures when compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed in a local manner, further scalability is enabled, not only due to the intrinsically lower costs of recovering locally, but also due to derived effects when using some application types. In this paper we model one such effect, namely multiple failure masking, which manifests when running Stencil parallel computations in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple failures recovered locally is modeled to enable several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.
Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.
Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H
2015-09-21
Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, into the dynamic processes of networks. We illustrate the network model by selected examples from biology and social science.
Failure mechanisms of thermal barrier coatings exposed to elevated temperatures
NASA Technical Reports Server (NTRS)
Miller, R. A.; Lowell, C. E.
1982-01-01
The failure of a ZrO2-8%Y2O3/Ni-14% Al-0.1% Zr coating system on Rene 41 in Mach 0.3 burner rig tests was characterized. High flame and metal temperatures were employed in order to accelerate coating failure. Failure by delamination was shown to precede surface cracking or spalling. This type of failure could be duplicated by cooling down the specimen after a single long duration isothermal high temperature cycle in a burner rig or a furnace, but only if the atmosphere was oxidizing. Stresses due to thermal expansion mismatch on cooling coupled with the effects of plastic deformation of the bond coat and oxidation of the irregular bond coat are the probable life limiting factors. Heat up stresses alone could not fail the coating in the burner rig tests. Spalling eventually occurs on heat up but only after the coating has already failed through delamination.
A new statistical methodology predicting chip failure probability considering electromigration
NASA Astrophysics Data System (ADS)
Sun, Ted
In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena occurring in different materials are also presented. The new approach utilizes the statistical nature of EM failure in order to assess overall EM risk. It includes within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both the power estimation and the thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze a design involving 6 metal and 5 via layers with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis, the analysis coupled with a temperature map, and a comparison between the results with and without the temperature map are presented in this research. The comparison between these two sets of results confirms that using a temperature map yields a less pessimistic estimation of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map and different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model accounts for scaling through the traditional Black equation and four major conditions. The statistical result comparisons are within our expectations. The results of this statistical analysis confirm that the chip-level failure probability is higher i) at higher use-condition frequencies for all use-condition voltages, and ii) when a single temperature instead of a temperature map across the chip is considered. The thesis starts with an overall review of current design types, common flows, and the necessary verification and reliability checking steps used in the IC design industry. Furthermore, the key concepts of scripting automation, used to integrate the diverse EDA tools in this research, are described in detail with several examples, and the completed code is included in the appendix for reference. This structure should give readers a thorough understanding of the research, from the automation of EDA tools to statistical data generation, from the nature of EM to the construction of the statistical model, and the comparisons between the traditional and statistical EM analysis approaches.
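The Black-equation ingredient of such an analysis can be sketched as follows: the median time to failure scales as MTTF = A·J^(−n)·exp(Ea/kT), and a lognormal spread around that median gives a per-segment failure probability at a target lifetime. The prefactor, current-density exponent, activation energy, and sigma below are generic placeholders, not the thesis' calibrated values.

```python
from math import exp, log, sqrt, erf

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j_amps_per_cm2: float, temp_k: float,
               a_const: float = 1e7, n_exp: float = 2.0, ea_ev: float = 0.9) -> float:
    """Median time to EM failure (hours) from Black's equation; constants are placeholders."""
    return a_const * j_amps_per_cm2**(-n_exp) * exp(ea_ev / (K_BOLTZ_EV * temp_k))

def lognormal_fail_prob(t_hours: float, mttf_hours: float, sigma: float = 0.7) -> float:
    """P(failure by t) assuming lognormal lifetimes with median mttf_hours."""
    z = (log(t_hours) - log(mttf_hours)) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Compare a hot spot from a temperature map against a cooler region of the same chip.
for temp_k in (358.0, 398.0):  # 85 C vs 125 C
    mttf = black_mttf(j_amps_per_cm2=1e6, temp_k=temp_k)
    p = lognormal_fail_prob(t_hours=10 * 365 * 24, mttf_hours=mttf)
    print(f"T = {temp_k - 273.15:5.1f} C  MTTF ~ {mttf:.3e} h  P(fail in 10 y) ~ {p:.3e}")
```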
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
2011-05-01
As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
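The quoted screening thresholds translate directly into code. A minimal sketch, conservatively treating common degradation as common cause failure for a population between 100 and 5000 items:

```python
def reliability_concern(n_population: int, n_susceptible: int, required_reliability: float) -> bool:
    """Reliability screen: concern if more than 100*(1-x)% of the population is susceptible."""
    return n_susceptible / n_population > (1.0 - required_reliability)

def subsystem_safety_concern(n_population: int, n_susceptible: int) -> bool:
    """Weapon-subsystem safety screen: concern if more than 0.1% of the population is susceptible."""
    return n_susceptible / n_population > 0.001

def component_or_system_safety_concern(n_susceptible: int) -> bool:
    """Component / overall-system safety screen: concern if two or more items are susceptible."""
    return n_susceptible >= 2

# Example: 1000 items, 30 susceptible to a common degradation, required reliability x = 0.98.
print(reliability_concern(1000, 30, 0.98))        # True: 3% susceptible exceeds 2%
print(subsystem_safety_concern(1000, 30))         # True: 3% exceeds 0.1%
print(component_or_system_safety_concern(30))     # True: at least two susceptible items
```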
Failure Analysis of a Missile Locking Hook from the F-14 Jet
1989-09-01
MTL) to determine the probable cause of failure. The component is one of two launcher housing support points for the Sparrow Missile and is located...reference Raytheon Drawing No. 685029, Figure 3). Atomic absorption and inductively coupled argon plasma emission spectroscopy were used to determine ...microscopy, while Figure 16 is a SEM fractograph taken of the same region. The crack initiation site was determined by tracing the radial marks indicative of
Individual versus systemic risk and the Regulator's Dilemma
Beale, Nicholas; Rand, David G.; Battey, Heather; Croxson, Karen; May, Robert M.; Nowak, Martin A.
2011-01-01
The global financial crisis of 2007–2009 exposed critical weaknesses in the financial system. Many proposals for financial reform address the need for systemic regulation—that is, regulation focused on the soundness of the whole financial system and not just that of individual institutions. In this paper, we study one particular problem faced by a systemic regulator: the tension between the distribution of assets that individual banks would like to hold and the distribution across banks that best supports system stability if greater weight is given to avoiding multiple bank failures. By diversifying its risks, a bank lowers its own probability of failure. However, if many banks diversify their risks in similar ways, then the probability of multiple failures can increase. As more banks fail simultaneously, the economic disruption tends to increase disproportionately. We show that, in model systems, the expected systemic cost of multiple failures can be largely explained by two global parameters of risk exposure and diversity, which can be assessed in terms of the risk exposures of individual actors. This observation hints at the possibility of regulatory intervention to promote systemic stability by incentivizing a more diverse diversification among banks. Such intervention offers the prospect of an additional lever in the armory of regulators, potentially allowing some combination of improved system stability and reduced need for additional capital. PMID:21768387
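The individual-versus-systemic tension is easy to reproduce in a toy Monte Carlo: banks that all hold the same diversified portfolio fail rarely but fail together, while banks concentrated in distinct assets fail more often individually yet essentially never all at once. The asset model and failure threshold below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_banks, n_assets = 200_000, 10, 10
loss_threshold = 0.8  # a bank fails if its portfolio loss exceeds this (arbitrary units)

asset_losses = rng.normal(0.0, 1.0, size=(n_sims, n_assets))

# Scenario A: every bank holds the same fully diversified portfolio (equal weights on all assets).
portfolio_loss_a = asset_losses.mean(axis=1)                  # one common loss per simulation
fails_a = np.tile(portfolio_loss_a[:, None] > loss_threshold, (1, n_banks))

# Scenario B: each bank concentrates in its own distinct asset.
fails_b = asset_losses > loss_threshold

for label, fails in [("identical diversified portfolios", fails_a),
                     ("distinct concentrated portfolios", fails_b)]:
    p_single = fails.mean()               # probability that a given bank fails
    p_all = fails.all(axis=1).mean()      # probability that every bank fails in the same draw
    print(f"{label:33s} P(single failure) ~ {p_single:.4f}   P(all fail together) ~ {p_all:.6f}")
```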
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
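The Brown-Proschan imperfect repair rule itself is straightforward to simulate: at each failure the unit is repaired perfectly with probability p and minimally otherwise, a minimal repair leaving the unit at the age it had when it failed. A minimal sketch under an assumed Weibull (increasing failure rate) lifetime, with p and the Weibull parameters chosen arbitrarily:

```python
import random
from math import log

def simulate_brown_proschan(p_perfect: float, beta: float, eta: float,
                            horizon: float, rng: random.Random) -> list[float]:
    """Simulate failure times under Brown-Proschan imperfect repair with Weibull(beta, eta) lifetimes."""
    times, clock, age = [], 0.0, 0.0   # 'age' is the unit's virtual age after the last repair
    while True:
        u = rng.random()
        # Inverse-transform draw of the next failure age given the current virtual age.
        next_age = eta * ((age / eta) ** beta - log(u)) ** (1.0 / beta)
        clock += next_age - age
        if clock > horizon:
            return times
        times.append(clock)
        # Perfect repair (good-as-new) with probability p, minimal repair (same age) otherwise.
        age = 0.0 if rng.random() < p_perfect else next_age

rng = random.Random(7)
failures = simulate_brown_proschan(p_perfect=0.3, beta=2.0, eta=100.0, horizon=1000.0, rng=rng)
print(f"{len(failures)} failures in 1000 time units; first few: {[round(t, 1) for t in failures[:5]]}")
```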
Systems Biology and Biomechanical Model of Heart Failure
Louridas, George E; Lourida, Katerina G
2012-01-01
Heart failure is seen as a complex disease caused by a combination of a mechanical disorder, cardiac remodeling and neurohormonal activation. To define heart failure, the systems biology approach integrates genes and molecules, interprets the relationship of the molecular networks with modular functional units, and explains the interaction between mechanical dysfunction and cardiac remodeling. The biomechanical model of heart failure explains satisfactorily the progression of myocardial dysfunction and the development of clinical phenotypes. The earliest mechanical changes and stresses applied in myocardial cells and/or myocardial loss or dysfunction activate left ventricular cavity remodeling and other neurohormonal regulatory mechanisms, such as early release of natriuretic peptides followed by SAS and RAAS mobilization. Eventually, neurohormonal activation and the left ventricular remodeling process lead to clinical deterioration of heart failure towards multi-organ damage. It is hypothesized that approaching heart failure with the methodology of systems biology will promote the elucidation of its complex pathophysiology and most probably enable the invention of new therapeutic strategies. PMID:22935019
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2012 CFR
2012-01-01
... takeoff weight), occurring during retraction and extension at any airspeed up to 1.6 V s 1 (with the flaps... extending the landing gear in the event of— (1) Any reasonably probable failure in the normal retraction...
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2010 CFR
2010-01-01
... takeoff weight), occurring during retraction and extension at any airspeed up to 1.6 V s 1 (with the flaps... extending the landing gear in the event of— (1) Any reasonably probable failure in the normal retraction...
14 CFR 25.729 - Retracting mechanism.
Code of Federal Regulations, 2011 CFR
2011-01-01
... takeoff weight), occurring during retraction and extension at any airspeed up to 1.6 V s 1 (with the flaps... extending the landing gear in the event of— (1) Any reasonably probable failure in the normal retraction...
30 CFR 282.15 - Cancellation of leases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... lease would probably cause serious harm or damage to life (including fish and other aquatic life), to... due to the failure of one or more partners to exercise due diligence, the innocent parties shall have...
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2016-01-01
The stress rupture strength of silicon carbide fiber-reinforced silicon carbide composites with a boron nitride fiber coating decreases with time within the intermediate temperature range of 700 to 950 degrees Celsius. Various theories have been proposed to explain the cause of the time-dependent stress rupture strength. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of silicon carbide fiber-reinforced silicon carbide composites. This is achieved through the development of a numerically based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time-marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow-growing flaws as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength is attributed almost entirely to the slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time-dependent behavior.
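The time-marching structure can be sketched schematically: within each step, the fiber failure probability (driven by slowly degrading fiber strength) and the per-fiber stress (global load sharing over surviving fibers) are iterated to mutual consistency before time advances. The Weibull and slow-crack-growth parameters, the rupture criterion, and the tolerances below are placeholders, not the paper's calibrated model.

```python
import math

# Placeholder material parameters (not calibrated values).
WEIBULL_M = 5.0     # fiber Weibull modulus
SIGMA_0   = 2000.0  # fiber reference strength, MPa
SCG_N     = 20.0    # slow-crack-growth exponent
T_REF_H   = 1.0     # reference time for flaw growth, hours

def fiber_failure_probability(stress_mpa: float, time_h: float) -> float:
    """Schematic fiber failure probability combining Weibull strength with slow flaw growth."""
    # Flaw growth is represented by an effective strength that degrades with time under stress.
    degraded_strength = SIGMA_0 * (1.0 + time_h / T_REF_H) ** (-1.0 / SCG_N)
    return 1.0 - math.exp(-((stress_mpa / degraded_strength) ** WEIBULL_M))

def simulate_stress_rupture(applied_fiber_stress_mpa: float, dt_h: float = 1.0,
                            max_h: float = 10_000.0) -> float:
    """Time-march with an inner loop between fiber survival and global load sharing."""
    time_h, q_failed = 0.0, 0.0
    while time_h < max_h:
        time_h += dt_h
        # Inner iteration: failed-fiber fraction and per-fiber stress must be mutually consistent.
        for _ in range(100):
            stress_per_fiber = applied_fiber_stress_mpa / max(1.0 - q_failed, 1e-9)
            q_new = fiber_failure_probability(stress_per_fiber, time_h)
            if abs(q_new - q_failed) < 1e-6:
                break
            q_failed = q_new
        if q_failed > 0.9:           # treat near-total fiber failure as composite rupture
            return time_h
    return math.inf                  # no rupture within the simulated window

for stress in (800.0, 1000.0, 1200.0):
    print(f"applied fiber stress {stress:6.0f} MPa -> rupture time ~ {simulate_stress_rupture(stress):g} h")
```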
Energy drink-induced acute kidney injury.
Greene, Elisa; Oman, Kristy; Lefler, Mary
2014-10-01
To report a case of acute renal failure possibly induced by Red Bull. A 40-year-old man presented with various complaints, including a recent hypoglycemic episode. Assessment revealed that serum creatinine was elevated at 5.5 mg/dL, from a baseline of 0.9 mg/dL. An interview revealed a 2- to 3-week history of daily ingestion of 100 to 120 oz of Red Bull energy drink. Resolution of renal dysfunction occurred within 2 days of discontinuation of Red Bull and persisted through 10 months of follow-up. Rechallenge was not attempted. Energy-drink-induced renal failure has been reported infrequently. We identified 2 case reports via a search of MEDLINE, one of which occurred in combination with alcohol and the other of which was not available in English. According to the Food and Drug Administration's (FDA's) Center for Food Safety and Applied Nutrition Adverse Event Reporting System, between 2004 and 2012 the FDA received 166 reports of adverse events associated with energy drink consumption. Only 3 of the 166 (1.8%) described renal failure, and none were reported with Red Bull specifically. A defined mechanism for injury is unknown. Assessment with the Naranjo adverse drug reaction probability scale indicates a probable relationship between the development of acute renal failure and Red Bull ingestion in our patient. Acute kidney injury has rarely been reported with energy drink consumption. Our report describes the first English-language report of acute renal failure occurring in the context of ingestion of large quantities of energy drink without concomitant alcohol. © The Author(s) 2014.
Clark, Renee M; Besterfield-Sacre, Mary E
2009-03-01
We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walston, S; Rowland, M; Campbell, K
It is difficult to track the location of a melted core in a GE BWR with Mark I containment during a beyond-design-basis accident. The Cooper Nuclear Station provided a baseline of normal material distributions and shielding configurations for the GE BWR with Mark I containment. Starting with source terms for a design-basis accident, methods and remote observation points were investigated to allow tracking of a melted core during a beyond-design-basis accident. The design of the GE BWR with Mark I containment highlights an amazing poverty of expectations regarding a common mode failure of all reactor core cooling systems resulting in a beyond-design-basis accident from the simple loss of electric power. The station blackout accident scenario has been consistently identified as the leading contributor to calculated probabilities for core damage. While NRC-approved models and calculations provide guidance for indirect methods to assess core damage during a beyond-design-basis loss-of-coolant accident (LOCA), there appears to be no established method to track the location of the core directly should the LOCA include a degree of fuel melt. We came to the conclusion that - starting with detailed calculations which estimate the release and movement of gaseous and soluble fission products from the fuel - selected dose readings in specific rooms of the reactor building should allow the location of the core to be verified.
Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik
2013-06-01
Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding to which markets a currently available product prototype should be targeted. Copyright © 2013. Published by Elsevier Inc.
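The propagation step described above, turning elicited parameter distributions into a distribution of headroom per measurement, can be sketched with a simple Monte Carlo. The distributions and values below (delta_ly, delta_cost, measurements_per_patient) are purely illustrative stand-ins for the elicited inputs, not the study's fitted Markov model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
wtp = 10_000.0                    # willingness to pay per life-year, EUR

# Stand-ins for the elicited distributions (illustrative, not the study's values):
# incremental life-years gained per patient and incremental non-device costs
delta_ly = rng.gamma(shape=4.0, scale=0.02, size=n)       # ~0.08 life-years on average
delta_cost = rng.normal(loc=-300.0, scale=150.0, size=n)  # EUR (negative = savings)

measurements_per_patient = 365.0  # daily measurement over one year (assumption)

# Headroom per measurement: monetised health gain minus other incremental costs,
# spread over the number of measurements
headroom = (wtp * delta_ly - delta_cost) / measurements_per_patient

q25, q50, q75 = np.percentile(headroom, [25, 50, 75])
print(f"median headroom per measurement: EUR {q50:.2f} (IQR {q25:.2f}-{q75:.2f})")
```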
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures, occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
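The 7-out-of-7 and 3-out-of-7 comparison corresponds to a standard k-out-of-n reliability calculation. A minimal sketch is given below, assuming identical, independent machines with a common survival probability p_up; the paper's reward models were instead built from measured error and recovery data, so the numbers here are only illustrative.

```python
from math import comb

def k_out_of_n_reliability(p_up, k, n):
    """Probability that at least k of n identical, independent machines are up."""
    return sum(comb(n, i) * p_up**i * (1.0 - p_up)**(n - i) for i in range(k, n + 1))

# Invented per-machine survival probabilities at a few mission times
for p_up in (0.99, 0.95, 0.90, 0.75):
    r_all = k_out_of_n_reliability(p_up, 7, 7)    # all seven machines required
    r_three = k_out_of_n_reliability(p_up, 3, 7)  # any three of seven sufficient
    print(f"p_up = {p_up:.2f}: R(7-of-7) = {r_all:.4f}, R(3-of-7) = {r_three:.6f}")
```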
Tenofovir in second-line ART in Zambia and South Africa: Collaborative analysis of cohort studies
Wandeler, Gilles; Keiser, Olivia; Mulenga, Lloyd; Hoffmann, Christopher J; Wood, Robin; Chaweza, Thom; Brennan, Alana; Prozesky, Hans; Garone, Daniela; Giddy, Janet; Chimbetete, Cleophas; Boulle, Andrew; Egger, Matthias
2012-01-01
Objectives Tenofovir (TDF) is increasingly used in second-line antiretroviral treatment (ART) in sub-Saharan Africa. We compared outcomes of second-line ART containing and not containing TDF in cohort studies from Zambia and the Republic of South Africa (RSA). Methods Patients aged ≥ 16 years starting protease inhibitor-based second-line ART in Zambia (1 cohort) and RSA (5 cohorts) were included. We compared mortality, immunological failure (all cohorts) and virological failure (RSA only) between patients receiving and not receiving TDF. Competing risk models and Cox models adjusted for age, sex, CD4 count, time on first-line ART and calendar year were used to analyse mortality and treatment failure, respectively. Hazard ratios (HRs) were combined in fixed-effects meta-analysis. Findings 1,687 patients from Zambia and 1,556 patients from RSA, including 1,350 (80.0%) and 206 (13.2%) patients starting TDF, were followed over 4,471 person-years. Patients on TDF were more likely to have started second-line ART in recent years, and had slightly higher baseline CD4 counts than patients not on TDF. Overall 127 patients died, 532 were lost to follow-up and 240 patients developed immunological failure. In RSA 94 patients had virologic failure. Combined HRs comparing tenofovir with other regimens were 0.60 (95% CI 0.41–0.87) for immunologic failure and 0.63 (0.38–1.05) for mortality. The HR for virologic failure in RSA was 0.28 (0.09–0.90). Conclusions In this observational study patients on TDF-containing second-line ART were less likely to develop treatment failure than patients on other regimens. TDF seems to be an effective component of second-line ART in southern Africa. PMID:22743595
High Energy Failure Containment for Spacecraft
NASA Technical Reports Server (NTRS)
Pektas, Pete; Baker, Christopher
2011-01-01
Objective: The objective of this paper will be to investigate advancements and any commonality between spacecraft debris containment and the improvements being made in ballistic protection. Scope: This paper will focus on cross application of protection devices and methods, and how they relate to protecting humans from failures in spacecraft. The potential gain is to reduce the risk associated with hardware failure, while decreasing the weight and size of energy containment methods currently being used by the government and commercial industry. Method of Approach: This paper will examine testing that has already been accomplished with regard to the failure of high energy rotating hardware and compare it to advancements in ballistic protection. Examples are: DOT research and testing of turbine containment as documented in DOT/FAA/AR-96/110, DOT/FAA/AR-97/82, DOT/FAA/AR-98/22. It will also look at work accomplished by companies such as ApNano and IBD Deisenroth in the development of nano ceramics and nanometric steels. Other forms of energy absorbent materials and composites will also be considered and discussed. New Advances in State of the Art: There have been numerous advances in technology with regard to high energy debris containment and in the similar field of ballistic protection. This paper will discuss methods such as using impregnated or dry Kevlar, ceramic, and nano-technology which have been successfully tested but are yet to be utilized in spacecraft. Reports on tungsten disulfide nanotubes claim that they are 4-5 times stronger than steel; reported improvements over Kevlar vary, but appear to be somewhere in the range of 2-6 times stronger. This technology could also have applications in the protection of pressure vessels, motor housings, and hydraulic component failures.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
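The core PFA idea, starting from an analysis-based failure probability distribution and modifying it with test or flight experience, can be illustrated with a simple Bayesian update. The sketch below uses a Beta prior and failure-free operating experience; the prior parameters and success count are hypothetical, and the actual PFA statistical structure is considerably more elaborate.

```python
from scipy import stats

# Beta prior loosely representing the spread of an engineering analysis of one
# failure mode's per-flight failure probability (purely illustrative values)
a0, b0 = 0.5, 200.0
prior = stats.beta(a0, b0)

# Hypothetical operating experience: 150 failure-free firings
successes, failures = 150, 0
posterior = stats.beta(a0 + failures, b0 + successes)

for name, dist in (("prior", prior), ("posterior", posterior)):
    print(f"{name:9s}: median p_f = {dist.median():.2e}, "
          f"95th percentile = {dist.ppf(0.95):.2e}")
```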
Nucleation, growth and localisation of microcracks: implications for predictability of rock failure
NASA Astrophysics Data System (ADS)
Main, I. G.; Kun, F.; Pál, G.; Jánosi, Z.
2016-12-01
The spontaneous emergence of localized co-operative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uniaxial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of pre-existing heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1992-01-01
This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure-free operation of a computer program for a specified time and environment.
Fracture of Reduced-Diameter Zirconia Dental Implants Following Repeated Insertion.
Karl, Matthias; Scherg, Stefan; Grobecker-Karl, Tanja
Achievement of high insertion torque values indicating good primary stability is a goal during dental implant placement. The objective of this study was to evaluate whether or not two-piece implants made from zirconia ceramic may be damaged as a result of torque application. A total of 10 two-piece zirconia implants were repeatedly inserted into polyurethane foam material with increasing density and decreasing osteotomy size. The insertion torque applied was measured, and implants were checked for fractures by applying the fluorescent penetrant method. Weibull probability of failure was calculated based on the recorded insertion torque values. Catastrophic failures could be seen in five of the implants from two different batches at insertion torques ranging from 46.0 to 70.5 Ncm, while the remaining implants (all belonging to one batch) survived. Weibull probability of failure seems to be low at the manufacturer-recommended maximum insertion torque of 35 Ncm. Chipping fractures at the thread tips as well as tool marks were the only otherwise observed irregularities. While high insertion torques may be desirable for immediate loading protocols, zirconia implants may fracture when manufacturer-recommended insertion torques are exceeded. Evaluating bone quality prior to implant insertion may be useful.
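A Weibull evaluation of the recorded insertion torques, similar in spirit to the one described above, might look like the following sketch. The torque values are made-up points within the reported 46.0-70.5 Ncm range, and the surviving implants are ignored rather than treated as right-censored observations, so the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

# Made-up fracture torques (Ncm) within the reported 46.0-70.5 Ncm range
fracture_torque = np.array([46.0, 52.5, 58.0, 64.0, 70.5])

# Two-parameter Weibull fit with the location fixed at zero. A rigorous analysis
# would also treat the five surviving implants as censored observations.
shape, loc, scale = stats.weibull_min.fit(fracture_torque, floc=0)

p_fracture_35 = stats.weibull_min.cdf(35.0, shape, loc=loc, scale=scale)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} Ncm")
print(f"estimated probability of fracture at 35 Ncm: {p_fracture_35:.4f}")
```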
Bridge reliability assessment based on the PDF of long-term monitored extreme strains
NASA Astrophysics Data System (ADS)
Jiao, Meiju; Sun, Limin
2011-04-01
Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analysis of the randomness in loads and their effects on structures. A novel approach combining SHM data with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system is presented in this paper. In this study, the reliability of the steel girder of the cable-stayed bridge was expressed directly as a failure probability instead of the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of the failure probability was derived. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations based on the monitoring data was evaluated and verified. The Donghai Bridge was then taken as an example for application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the novel approach were discussed in comparison with FORM.
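With the monitored extreme-strain PDF and an assumed resistance distribution, the failure probability reduces to a one-dimensional integral, P_f = P(R <= S) = ∫ f_S(s) F_R(s) ds, under the independence assumption stated above. A minimal numerical sketch follows; the Gumbel and normal parameters are invented, not the Donghai Bridge values.

```python
from scipy import stats, integrate

# Assumed distributions (illustrative, not the Donghai Bridge data):
# S: daily extreme strain at a sensor location, from long-term monitoring
# R: strain capacity (resistance) of the steel girder
S = stats.gumbel_r(loc=300.0, scale=25.0)   # micro-strain
R = stats.norm(loc=800.0, scale=60.0)       # micro-strain

# P_f = P(R <= S) = integral of f_S(s) * F_R(s) ds, with R and S independent
lo, hi = S.ppf(1e-10), S.ppf(1.0 - 1e-10)
p_f, _ = integrate.quad(lambda s: S.pdf(s) * R.cdf(s), lo, hi)
print(f"estimated failure probability per day: {p_f:.2e}")
```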
Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree o...
Fault tolerant system with imperfect coverage, reboot and server vacation
NASA Astrophysics Data System (ADS)
Jain, Madhu; Meena, Rakesh Kumar
2017-06-01
This study is concerned with the performance modeling of a fault tolerant system consisting of operating units supported by a combination of warm and cold spares. The on-line as well as warm standby units are subject to failures and are sent for repair to a repair facility with a single repairman, which is itself prone to failure. If a failed unit is not detected, the system enters an unsafe state from which it is cleared by reboot and recovery actions. The server is allowed to go on vacation if there is no failed unit present in the system. A Markov model is developed to obtain the transient probabilities associated with the system states. The Runge-Kutta method is used to evaluate the system state probabilities and queueing measures. To explore the sensitivity and cost associated with the system, numerical simulation is conducted.
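A stripped-down version of such a Markov model, with the transient state probabilities integrated by a Runge-Kutta scheme, is sketched below. It keeps only the machine-repair core (two operating units, one warm spare, a single reliable repairman) and omits imperfect coverage, reboot, server failure and vacations; all rates are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stripped-down machine-repair model: 2 operating units + 1 warm standby,
# single reliable repairman; coverage, reboot and vacations omitted.
lam_o, lam_w, mu = 0.02, 0.005, 0.5    # per hour (hypothetical rates)
n_states = 4                           # state k = number of failed units, k = 0..3

Q = np.zeros((n_states, n_states))
for k in range(n_states - 1):
    operating = min(2, 3 - k)          # units currently on line
    standby = max(0, 3 - k - 2)        # warm spares left
    Q[k, k + 1] = operating * lam_o + standby * lam_w
for k in range(1, n_states):
    Q[k, k - 1] = mu                   # single repairman
np.fill_diagonal(Q, -Q.sum(axis=1))    # generator rows sum to zero

def forward_equations(t, p):           # Kolmogorov forward equations dp/dt = p Q
    return p @ Q

p0 = np.array([1.0, 0.0, 0.0, 0.0])    # all units good at t = 0
sol = solve_ivp(forward_equations, (0.0, 100.0), p0, method="RK45")

p_end = sol.y[:, -1]
print("state probabilities at t = 100 h:", np.round(p_end, 4))
print("probability all three units are failed:", round(float(p_end[-1]), 6))
```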
QKD-based quantum private query without a failure probability
NASA Astrophysics Data System (ADS)
Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan
2015-10-01
In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing single-photon signals of multiple optical pulses. It maintains the advantages of the QKD-based QPQ, i.e., it is easy to implement and loss tolerant. In addition, different from the situations in previous QKD-based QPQ protocols, in our protocol the number of items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring noise and attacks, the protocol would always succeed), but also benefits the privacy of the database (since the database will no longer reveal additional secrets to honest users). Furthermore, for the user's privacy, the proposed protocol is cheat sensitive, and for the security of the database, we obtain an upper bound for the leaked information of the database in theory.
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise, within the limits of double precision floating point arithmetic, to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
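For independent basic events that each feed a single gate, the gate probabilities such a compiler evaluates follow simple closed forms. A minimal sketch in Python (the FTC itself is implemented in FORTRAN and Pascal) is shown below; the tiny example tree and basic-event probabilities are invented.

```python
from math import comb, prod

def gate_and(ps):
    return prod(ps)

def gate_or(ps):
    return 1.0 - prod(1.0 - p for p in ps)

def gate_invert(ps):
    return 1.0 - ps[0]

def gate_xor(ps):
    p, q = ps                           # probability exactly one of two inputs occurs
    return p * (1.0 - q) + q * (1.0 - p)

def gate_m_of_n(ps, m):
    p, n = ps[0], len(ps)               # identical, independent inputs assumed here
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(m, n + 1))

# Tiny example: TOP = OR( AND(A, B), 2-of-3(C, C, C) ), with invented probabilities
pA, pB, pC = 1.0e-3, 2.0e-3, 5.0e-3
p_top = gate_or([gate_and([pA, pB]), gate_m_of_n([pC, pC, pC], 2)])
print(f"top event probability: {p_top:.3e}")
```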
Empty sella syndrome secondary to intrasellar cyst in adolescence.
Raiti, S; Albrink, M J; Maclaren, N K; Chadduck, W M; Gabriele, O F; Chou, S M
1976-09-01
A 15-year-old boy had growth failure and failure of sexual development. The probable onset was at age 10. Endocrine studies showed hypopituitarism with deficiency of growth hormone and follicle-stimulating hormone, an abnormal response to metyrapone, and deficiency of thyroid function. Luteinizing hormone level was in the low-normal range. Posterior pituitary function was normal. Roentgenogram showed a large sella with some destruction of the posterior clinoids. Transsphenoidal exploration was carried out. The sella was empty except for a whitish membrane; no pituitary tissue was seen. The sella was packed with muscle. Recovery was uneventful, and the patient was given replacement therapy. On histologic examination, the cyst wall showed low pseudostratified cuboidal epithelium and occasional squamous metaplasia. Hemosiderin-filled phagocytes and acinar structures were also seen. The diagnosis was probable rupture of an intrasellar epithelial cyst, leading to empty sella syndrome.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and slow crack growth (SCG, or fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
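The fast-fracture part of such a prediction, combining Weibull strength parameters with element stresses and volumes from a finite element analysis, can be sketched as follows. The Weibull modulus, characteristic strength and element data are made up, and the time-dependent SCG reliability calculation that CARES/Life also performs is omitted.

```python
import numpy as np

# Hypothetical two-parameter Weibull strength data for the ceramic
m = 10.0           # Weibull modulus
sigma_0 = 400.0    # characteristic strength for unit volume, MPa

# Element volumes (mm^3) and first principal stresses (MPa) from a finite
# element analysis of the component (made-up values)
volume = np.array([2.0, 1.5, 3.0, 0.8, 1.2])
sigma_1 = np.array([180.0, 220.0, 150.0, 260.0, 90.0])

# Volume-flaw, uniaxial Weibull model: only tensile stresses contribute
tensile = np.clip(sigma_1, 0.0, None)
risk_of_rupture = np.sum(volume * (tensile / sigma_0) ** m)
p_failure = 1.0 - np.exp(-risk_of_rupture)
print(f"fast-fracture probability of failure: {p_failure:.4f}")
```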
Puberty and the Education of Girls*
CAVANAGH, SHANNON E.; RIEGLE-CRUMB, CATHERINE; CROSNOE, ROBERT
2010-01-01
This study extends previous research on the social psychological implications of pubertal timing to education by applying a life course framework to data from the National Longitudinal Study of Adolescent Health and from the Adolescent Health and Academic Achievement Study. Early pubertal timing, which has previously been associated with major social psychological changes in girls' lives during middle school, predicted girls' grade point average and probability of course failure at the start of high school. Because of this initial failure during the high school transition, it also predicted their probability of dropping out of high school, and, among those who graduated, their grade point average at the end of high school. Such research demonstrates one way in which the immediate social psychological risk of early pubertal timing, measured as the age at menarche, translates into long-term disadvantage for girls, thereby opening up new avenues of research for social psychologists interested in youth development, health, and education. PMID:20216926
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or definitely leave the system with probability 1 - q. A system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system written for Laplace transforms is found using a linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be considered as a typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations
NASA Astrophysics Data System (ADS)
Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.
2014-02-01
The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
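The first ingredient described above, the PDF of the actual flaw size for an inspection that produced an indication, can be sketched by weighting an assumed prior flaw-size distribution with an assumed probability-of-detection curve and renormalizing. Both distributions below are illustrative, not the model-assisted POD or material data of the study.

```python
import numpy as np
from scipy import stats, integrate

# Assumed prior flaw-size distribution in the rotor material (lognormal, mm)
prior = stats.lognorm(s=0.8, scale=1.0)

# Assumed model-assisted probability-of-detection curve (log-logistic form)
def pod(a, a50=1.5, beta=4.0):
    return 1.0 / (1.0 + (a50 / np.maximum(a, 1e-12)) ** beta)

# PDF of the actual flaw size given that the inspection produced an indication:
#   f(a | detected) is proportional to POD(a) * f_prior(a)
a = np.linspace(1e-3, 20.0, 5000)
unnorm = pod(a) * prior.pdf(a)
posterior = unnorm / integrate.trapezoid(unnorm, a)

mean_flaw = integrate.trapezoid(a * posterior, a)
print(f"mean flaw size given an indication: {mean_flaw:.2f} mm "
      f"(prior mean {prior.mean():.2f} mm)")
```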
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise, within the limits of double precision floating point arithmetic, to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
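A common form of such estimation, assuming a constant (Poisson) failure rate, is the point estimate n/T with a chi-square confidence interval. The sketch below applies it to two hypothetical groups of components; the failure counts and exposure times are invented.

```python
from scipy import stats

def failure_rate_ci(n_failures, exposure_hours, conf=0.95):
    """Point estimate and two-sided chi-square confidence interval for a
    constant (Poisson) failure rate."""
    alpha = 1.0 - conf
    lower = (stats.chi2.ppf(alpha / 2.0, 2 * n_failures) / (2.0 * exposure_hours)
             if n_failures > 0 else 0.0)
    upper = stats.chi2.ppf(1.0 - alpha / 2.0, 2 * (n_failures + 1)) / (2.0 * exposure_hours)
    return n_failures / exposure_hours, lower, upper

# Hypothetical data for two groups of spacecraft components
for name, n, hours in (("group A", 7, 5.0e5), ("group B", 15, 6.0e5)):
    lam, lo, hi = failure_rate_ci(n, hours)
    print(f"{name}: {lam:.2e} failures/h (95% CI {lo:.2e} to {hi:.2e})")
```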
NASA Technical Reports Server (NTRS)
Delucia, R. A.; Salvino, J. T.
1981-01-01
This report presents statistical information relating to the number of gas turbine engine rotor failures which occurred in commercial aviation service use. The predominant failure involved blade fragments, 82.4 percent of which were contained. Although fewer rotor rim, disk, and seal failures occurred, 33.3%, 100% and 50% respectively were uncontained. Sixty-five percent of the 166 rotor failures occurred during the takeoff and climb stages of flight.
40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...
40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...
40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...
Nasal heterotopia versus pilocytic astrocytoma: A narrow border.
Ellouze, N; Born, J; Hoyoux, C; Michotte, A; Retz, C; Tebache, M; Piette, C
2015-08-01
Failure of anterior neuropore closure can lead to three main types of anomalies: nasal dermal sinus, encephalocele and nasal glioma or heterotopia. In this report, we describe a case of intracranial and extracranial glial heterotopia that probably resulted from a common failure of anterior neuropore development. We describe the prenatal radiological assessment based on ultrasound and MRI results, and consider their limitations for early fetal diagnosis. We also discuss the embryogenesis and the possible pathogenic mechanisms involved. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Meyer, Nele Kristin; Schwanghart, Wolfgang; Korup, Oliver
2014-05-01
Norway's road network is frequently affected by debris flows. Both damage repair and traffic interruption generate high economic losses and necessitate a rigorous assessment of where losses are expected to be high and where preventive measures should be focused. In recent studies, we have developed susceptibility and trigger probability maps that serve as input into a hazard calculation at the scale of first-order watersheds. Here we combine these results with graph theory to assess the impact of debris flows on the road network of southern Norway. Susceptibility and trigger probability are aggregated for individual road sections to form a reliability index that relates to the failure probability of a link that connects two network vertices, e.g., road junctions. We define link vulnerability as a function of traffic volume and additional link failure distance. Additional link failure distance is the extra length of the alternative path connecting the two associated link vertices in case the network link fails and is calculated by a shortest-path algorithm, as sketched in the example below. The product of the network reliability and vulnerability indices represents the risk index. High risk indices identify critical links for the Norwegian road network and are investigated in more detail. Scenarios demonstrating the impact of single or multiple debris flow events are run for the most important routes between seven large cities in southern Norway. First results show that the reliability of the road network is lowest in the central and north-western part of the study area. Road network vulnerability is highest in the mountainous regions in central southern Norway where the road density is low and in the vicinity of cities where the traffic volume is large. The scenarios indicate that city connections that have their shortest path via routes crossing the central part of the study area have the highest risk of route failure.
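The additional link failure distance can be computed with an off-the-shelf shortest-path routine. The sketch below uses networkx on a made-up five-link toy network; vulnerability would then be formed by weighting this detour length with traffic volume, and the risk index by combining it with the link reliability index.

```python
import networkx as nx

# Toy road network: nodes are junctions, edge weights are road lengths in km
# (made-up topology, not the Norwegian network)
G = nx.Graph()
edges = [("A", "B", 30), ("B", "C", 50), ("A", "D", 60),
         ("D", "C", 60), ("B", "D", 45)]
G.add_weighted_edges_from(edges)

def additional_failure_distance(G, u, v):
    """Extra travel distance between u and v if the direct link u-v fails."""
    direct = G[u][v]["weight"]
    H = G.copy()
    H.remove_edge(u, v)
    try:
        detour = nx.shortest_path_length(H, u, v, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")            # no alternative route: the link is a cut edge
    return detour - direct

for u, v, _ in edges:
    afd = additional_failure_distance(G, u, v)
    print(f"link {u}-{v}: additional failure distance = {afd} km")
```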
Cost-effectiveness of early compared to late inhaled nitric oxide therapy in near-term infants.
Armstrong, Edward P; Dhanda, Rahul
2010-12-01
The purpose of this study was to determine the cost-effectiveness of early versus late inhaled nitric oxide (INO) therapy in neonates with hypoxic respiratory failure initially managed on conventional mechanical ventilation. A decision analytic model was created to compare the use of early INO to delayed INO for neonates receiving mechanical ventilation due to hypoxic respiratory failure. The perspective of the model was that of a hospital. Patients who did not respond to either early or delayed INO were assumed to have been treated with extracorporeal membrane oxygenation (ECMO). The effectiveness measure was defined as a neonate discharged alive without requiring ECMO therapy. A Monte Carlo simulation of 10,000 cases was conducted using first and second order probabilistic analysis. Direct medical costs that differed between early versus delayed INO treatment were estimated until time to hospital discharge. The proportion of successfully treated patients and costs were determined from the probabilistic sensitivity analysis. The mean (± SD) effectiveness rate for early INO was 0.75 (± 0.08) and 0.61 (± 0.09) for delayed INO. The mean hospital cost for early INO was $21,462 (± $2695) and $27,226 (± $3532) for delayed INO. In 87% of scenarios, early INO dominated delayed INO by being both more effective and less costly. The acceptability curve comparing the two strategies demonstrated that early INO had over a 90% probability of being the most cost-effective treatment across a wide range of willingness to pay values. This analysis indicated that early INO therapy was cost-effective in neonates with hypoxic respiratory failure requiring mechanical ventilation compared to delayed INO by reducing the probability of developing severe hypoxic respiratory failure. There was a 90% or higher probability that early INO was more cost-effective than delayed INO across a wide range of willingness to pay values in this analysis.
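A second-order probabilistic comparison of this kind can be sketched directly from the summary statistics quoted above, treating the reported means and standard deviations as normal distributions. A real analysis would use beta distributions for probabilities and gamma distributions for costs and would respect the correlations in the underlying model, so the resulting percentages will only approximate the published 87% and >90% figures.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Normal approximations built from the reported means and SDs
p_success_early = rng.normal(0.75, 0.08, n)     # discharged alive without ECMO
p_success_late = rng.normal(0.61, 0.09, n)
cost_early = rng.normal(21_462.0, 2_695.0, n)   # USD
cost_late = rng.normal(27_226.0, 3_532.0, n)

d_effect = p_success_early - p_success_late
d_cost = cost_early - cost_late

p_dominant = np.mean((d_effect > 0) & (d_cost < 0))
print(f"P(early INO more effective and less costly): {p_dominant:.2f}")

# Acceptability: probability early INO has the higher net monetary benefit
for wtp in (0, 10_000, 50_000):
    net_benefit = wtp * d_effect - d_cost
    print(f"WTP = ${wtp:>6,}: P(early INO preferred) = {np.mean(net_benefit > 0):.2f}")
```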
Dihydroartemisinin-piperaquine for treating uncomplicated Plasmodium falciparum malaria
Zani, Babalwa; Gathu, Michael; Donegan, Sarah; Olliaro, Piero L; Sinclair, David
2014-01-01
Background The World Health Organization (WHO) recommends Artemisinin-based Combination Therapy (ACT) for treating uncomplicated Plasmodium falciparum malaria. This review aims to assist the decision-making of malaria control programmes by providing an overview of the relative effects of dihydroartemisinin-piperaquine (DHA-P) versus other recommended ACTs. Objectives To evaluate the effectiveness and safety of DHA-P compared to other ACTs for treating uncomplicated P. falciparum malaria in adults and children. Search methods We searched the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL) published in The Cochrane Library; MEDLINE; EMBASE; LILACS, and the metaRegister of Controlled Trials (mRCT) up to July 2013. Selection criteria Randomized controlled trials comparing a three-day course of DHA-P to a three-day course of an alternative WHO recommended ACT in uncomplicated P. falciparum malaria. Data collection and analysis Two authors independently assessed trials for eligibility and risk of bias, and extracted data. We analysed primary outcomes in line with the WHO 'Protocol for assessing and monitoring antimalarial drug efficacy’ and compared drugs using risk ratios (RR) and 95% confidence intervals (CI). Secondary outcomes were effects on gametocytes, haemoglobin, and adverse events. We assessed the quality of evidence using the GRADE approach. Main results We included 27 trials, enrolling 16,382 adults and children, and conducted between 2002 and 2010. Most trials excluded infants aged less than six months and pregnant women. DHA-P versus artemether-lumefantrine In Africa, over 28 days follow-up, DHA-P is superior to artemether-lumefantrine at preventing further parasitaemia (PCR-unadjusted treatment failure: RR 0.34, 95% CI 0.30 to 0.39, nine trials, 6200 participants, high quality evidence), and although PCR-adjusted treatment failure was below 5% for both ACTs, it was consistently lower with DHA-P (PCR-adjusted treatment failure: RR 0.42, 95% CI 0.29 to 0.62, nine trials, 5417 participants, high quality evidence). DHA-P has a longer prophylactic effect on new infections which may last for up to 63 days (PCR-unadjusted treatment failure: RR 0.71, 95% CI 0.65 to 0.78, two trials, 3200 participants, high quality evidence). In Asia and Oceania, no differences have been shown at day 28 (four trials, 1143 participants, moderate quality evidence), or day 63 (one trial, 323 participants, low quality evidence). Compared to artemether-lumefantrine, no difference was seen in prolonged QTc (low quality evidence), and no cardiac arrhythmias were reported. The frequency of other adverse events is probably similar with both combinations (moderate quality evidence). DHA-P versus artesunate plus mefloquine In Asia, over 28 days follow-up, DHA-P is as effective as artesunate plus mefloquine at preventing further parasitaemia (PCR-unadjusted treatment failure: eight trials, 3487 participants, high quality evidence). Once adjusted by PCR to exclude new infections, treatment failure at day 28 was below 5% for both ACTs in all eight trials, but lower with DHA-P in two trials (PCR-adjusted treatment failure: RR 0.41 95% CI 0.21 to 0.80, eight trials, 3482 participants, high quality evidence). 
Both combinations contain partner drugs with very long half-lives and no consistent benefit in preventing new infections has been seen over 63 days follow-up (PCR-unadjusted treatment failure: five trials, 2715 participants, moderate quality evidence). In the only trial from South America, there were fewer recurrent parasitaemias over 63 days with artesunate plus mefloquine (PCR-unadjusted treatment failure: RR 6.19, 95% CI 1.40 to 27.35, one trial, 445 participants, low quality evidence), but no differences were seen once adjusted for new infections (PCR-adjusted treatment failure: one trial, 435 participants, low quality evidence). DHA-P is associated with less nausea, vomiting, dizziness, sleeplessness, and palpitations compared to artesunate plus mefloquine (moderate quality evidence). DHA-P was associated with more frequent prolongation of the QTc interval (low quality evidence), but no cardiac arrhythmias were reported. Authors' conclusions In Africa, dihydroartemisinin-piperaquine reduces overall treatment failure compared to artemether-lumefantrine, although both drugs have PCR-adjusted failure rates of less than 5%. In Asia, dihydroartemisinin-piperaquine is as effective as artesunate plus mefloquine, and is better tolerated. PLAIN LANGUAGE SUMMARY Dihydroartemisinin-piperaquine for treating uncomplicated malaria This review summarises trials evaluating the effects of dihydroartemisinin-piperaquine (DHA-P) compared to other artemisinin-based combination therapies recommended by the World Health Organization. After searching for relevant trials up to July 2013, we included 27 randomized controlled trials, enrolling 16,382 adults and children and conducted between 2002 and 2010. What is uncomplicated malaria and how might dihydroartemisinin-piperaquine work? Uncomplicated malaria is the mild form of malaria which usually causes a fever, with or without headache, tiredness, muscle pains, abdominal pains, nausea, and vomiting. If left untreated, uncomplicated malaria can develop into severe malaria with kidney failure, breathing difficulties, fitting, unconsciousness, and eventually death. DHA-P is one of five artemisinin-based combination therapies the World Health Organization currently recommends to treat malaria. These combinations contain an artemisinin component (such as dihydroartemisinin) which works very quickly to clear the malaria parasite from the person's blood, and a longer acting drug (such as piperaquine) which clears the remaining parasites from the blood and may prevent new infections with malaria for several weeks. What the research says DHA-P versus artemether-lumefantrine In studies of people living in Africa, both DHA-P and artemether-lumefantrine are very effective at treating malaria (high quality evidence). However, DHA-P cures slightly more patients than artemether-lumefantrine, and it also prevents further malaria infections for longer after treatment (high quality evidence). DHA-P and artemether-lumefantrine probably have similar side effects (moderate quality evidence). DHA-P versus artesunate plus mefloquine In studies of people living in Asia, DHA-P is as effective as artesunate plus mefloquine at treating malaria (moderate quality evidence). Artesunate plus mefloquine probably causes more nausea, vomiting, dizziness, sleeplessness, and palpitations than DHA-P (moderate quality evidence).
Overall, in some people, DHA-P has been seen to cause short-term changes in electrocardiograph tracings of the heart rhythm (low quality evidence), but these small changes resolved within one week without serious consequences. PMID:24443033
Analysis of risk factors for cluster behavior of dental implant failures.
Chrcanovic, Bruno Ramos; Kisch, Jenö; Albrektsson, Tomas; Wennerberg, Ann
2017-08-01
Some studies have indicated that implant failures are commonly concentrated in a few patients. The aim was to identify and analyze cluster behavior of dental implant failures among subjects of a retrospective study. This retrospective study included only patients who received at least three implants. Patients presenting at least three implant failures were classified as presenting a cluster behavior. Univariate and multivariate logistic regression models and generalized estimating equations analysis evaluated the effect of explanatory variables on the cluster behavior. There were 1406 patients with three or more implants (8337 implants, 592 failures). Sixty-seven (4.77%) patients presented cluster behavior, accounting for 56.8% of all implant failures. The intake of antidepressants and bruxism were identified as potential negative factors exerting a statistically significant influence on cluster behavior at the patient level. The negative factors at the implant level were turned implants, short implants, poor bone quality, age of the patient, the intake of medicaments to reduce gastric acid production, smoking, and bruxism. A cluster pattern among patients with implant failure is highly probable. Predictors of implant failure could include a number of systemic and local factors, although a direct causal relationship cannot be ascertained. © 2017 Wiley Periodicals, Inc.
12 CFR 509.21 - Failure to appear.
Code of Federal Regulations, 2010 CFR
2010-01-01
... IN ADJUDICATORY PROCEEDINGS Uniform Rules of Practice and Procedure § 509.21 Failure to appear... administrative law judge shall file with the Director a recommended decision containing the findings and the...
Masquelier, Bernard; Droz, Cecile; Dary, Martin; Perronne, Christian; Ferré, Virginie; Spire, Bruno; Descamps, Diane; Raffi, François; Brun-Vézinet, Françoise; Chêne, Geneviève
2003-01-01
In 243 antiretroviral-naive human immunodeficiency virus-infected patients starting a first-line protease inhibitor (mainly nelfinavir)-containing therapy, the presence of the polymorphism R57K in the protease at the inception of therapy was independently associated with a higher rate of virological failure. PMID:14576131
A multivariate copula-based framework for dealing with hazard scenarios and failure probabilities
NASA Astrophysics Data System (ADS)
Salvadori, G.; Durante, F.; De Michele, C.; Bernardi, M.; Petrella, L.
2016-05-01
This paper is of methodological nature, and deals with the foundations of Risk Assessment. Several international guidelines have recently recommended to select appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and due comparisons are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on the Copula Theory, which turns out to be a fundamental theoretical apparatus for doing multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d≥2) are derived, and worthy analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
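As a concrete illustration of the copula-based scenario probabilities discussed above, the sketch below evaluates the "OR" and "AND" Hazard Scenario probabilities for two variables joined by a Gumbel-Hougaard copula (an Extreme Value and Archimedean family). The marginal levels and dependence parameter are arbitrary, chosen only to show how dependence changes the joint exceedance probability relative to independence.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls upper-tail dependence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

u = v = 0.99        # marginal non-exceedance levels of the two hazard variables
theta = 2.0         # assumed dependence strength

C = gumbel_copula(u, v, theta)
p_or = 1.0 - C                   # "OR" scenario: at least one variable exceeds its level
p_and = 1.0 - u - v + C          # "AND" scenario: both variables exceed their levels

print(f"OR-scenario probability : {p_or:.4f}")
print(f"AND-scenario probability: {p_and:.4f}")
print(f"AND under independence  : {(1.0 - u) * (1.0 - v):.6f}")
```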
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... Friday, except Federal holidays. The AD docket contains this proposed AD, the regulatory evaluation, any... three criteria address the failure types under evaluation: single failures, single failures in... evaluations included consideration of previous actions taken that may mitigate the need for further action...
NASA Astrophysics Data System (ADS)
Li, Xiaozhao; Qi, Chengzhi; Shao, Zhushan; Ma, Chao
2018-02-01
Natural brittle rock contains numerous randomly distributed microcracks. Crack initiation, growth, and coalescence play a predominant role in evaluating the strength and failure of brittle rocks. A new analytical method is proposed to predict the strength and failure of brittle rocks containing initial microcracks. The formulation of this method is based on an improved wing crack model and a suggested micro-macro relation. In this improved wing crack model, the crack angle is introduced as a variable, and an analytical stress-crack relation considering the crack angle effect is obtained. Coupling the proposed stress-crack relation with the suggested micro-macro relation describing the relation between crack growth and axial strain, a stress-strain constitutive relation is obtained to predict rock strength and failure. Considering different initial microcrack sizes, friction coefficients and confining pressures, the effects of crack angle on the tensile wedge force acting on the initial crack interface are studied, and the effects of crack angle on the stress-strain constitutive relation of rocks are also analyzed. The strength and crack initiation stress under different crack angles are discussed, and the most disadvantageous angle for triggering crack initiation and rock failure is found. The analytical results are similar to published study results, which supports the rationality of the proposed analytical method.
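The role of the crack angle can be illustrated by resolving the remote stresses onto an inclined, frictionally sliding microcrack and computing the resulting wedge (driving) force, in the spirit of classical wing-crack models. The sketch below is not the paper's formulation: the stresses, friction coefficient and crack length are invented, but the most disadvantageous angle it returns (about 30 degrees from the loading axis for mu = 0.6) matches the usual wing-crack result.

```python
import numpy as np

# Illustrative inputs (not the paper's values)
sigma1, sigma3 = 100.0, 10.0   # axial and confining stress, MPa (compression positive)
mu = 0.6                       # friction coefficient on the closed microcrack faces
a = 1.0e-3                     # initial half-length of the microcrack, m

# Crack-plane angle measured from the axial loading direction
alpha = np.radians(np.linspace(1.0, 89.0, 881))

# Normal and shear stresses resolved on the inclined crack plane
sigma_n = sigma1 * np.sin(alpha) ** 2 + sigma3 * np.cos(alpha) ** 2
tau = (sigma1 - sigma3) * np.sin(alpha) * np.cos(alpha)

# Effective sliding stress and the wedge (driving) force transmitted to the wings
tau_eff = np.clip(tau - mu * sigma_n, 0.0, None)
wedge_force = 2.0 * a * tau_eff * 1.0e6        # N per metre of thickness

worst = np.degrees(alpha[np.argmax(wedge_force)])
print(f"most disadvantageous crack angle: about {worst:.1f} deg from the loading axis")
print(f"maximum wedge force: about {wedge_force.max():.0f} N/m")
```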
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, compared to CARQ in a harsh underwater environment. PMID:27420061
On the probability of exceeding allowable leak rates through degraded steam generator tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cizelj, L.; Sorsek, I.; Riesch-Oppermann, H.
1997-02-01
This paper discusses some possible ways of predicting the behavior of the total leak rate through damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of the alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: estimate the probability that the sum of all individual leak rates through degraded tubes exceeds the predefined acceptable value. The probabilistic approach is of course aiming at a reliable and computationally tractable estimate of the failure probability. A closed form solution is given for the special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of the approximate methods with the closed form results. SORM in particular shows acceptable agreement. The second numerical example considers a realistic case of the NPP in Krsko, Slovenia.
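The closed-form special case mentioned above can be checked directly: if the individual leak rates are independent and exponentially distributed, their sum follows a Gamma (Erlang) distribution, so the exceedance probability is a Gamma tail. The sketch below compares that closed form with a Monte Carlo estimate for an invented tube count, mean leak rate and acceptance limit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_tubes = 40        # degraded tubes left in service (hypothetical)
mean_leak = 0.05    # mean individual leak rate, arbitrary units (hypothetical)
limit = 3.0         # acceptable total leak rate

# Closed form: the sum of n iid exponential leak rates is Gamma(n, scale=mean)
p_exceed_exact = stats.gamma.sf(limit, a=n_tubes, scale=mean_leak)

# Monte Carlo check
totals = rng.exponential(mean_leak, size=(200_000, n_tubes)).sum(axis=1)
p_exceed_mc = np.mean(totals > limit)

print(f"P(total leak > limit), closed form : {p_exceed_exact:.2e}")
print(f"P(total leak > limit), Monte Carlo : {p_exceed_mc:.2e}")
```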
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Tiger in the fault tree jungle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, P.
1976-01-01
There is yet little evidence of serious efforts to apply formal reliability analysis methods to evaluate, or even to identify, potential common-mode failures (CMF) of reactor safeguard systems. The prospects for event logic modeling in this regard are examined by the primitive device of reviewing actual CMF experience in terms of what the analyst might have perceived a priori. Further insights into the probability and risk aspects of CMFs are sought through consideration of three key likelihood factors: (1) the prior probability of a cause ever existing, (2) the opportunities for removing the cause, and (3) the probability that a CMF cause will be activated by conditions associated with a real system challenge. It was concluded that the principal needs for formal logical discipline in the endeavor to decrease CMF-related risks are to discover and to account for strong ''energetic'' dependency couplings that could arise in the major accidents usually classed as ''hypothetical.'' This application would help focus research, design and quality assurance efforts to cope with major CMF causes. But without extraordinary challenges to the reactor safeguard systems, there must continue to be virtually no statistical evidence pertinent to that class of failure dependencies.
NASA Astrophysics Data System (ADS)
Toroody, Ahmad Bahoo; Abaiee, Mohammad Mahdi; Gholamnia, Reza; Ketabdari, Mohammad Javad
2016-09-01
Owing to the increase in unprecedented accidents with new root causes in almost all operational areas, the importance of risk management has dramatically risen. Risk assessment, one of the most significant aspects of risk management, has a substantial impact on the system-safety level of organizations, industries, and operations. If the causes of all kinds of failure and the interactions between them are considered, effective risk assessment can be highly accurate. A combination of traditional risk assessment approaches and modern scientific probability methods can help in realizing better quantitative risk assessment methods. Most researchers face the problem of minimal field data with respect to the probability and frequency of each failure. Because of this limitation in the availability of epistemic knowledge, it is important to conduct epistemic estimations by applying the Bayesian theory for identifying plausible outcomes. In this paper, we propose an algorithm and demonstrate its application in a case study for a light-weight lifting operation in the Persian Gulf of Iran. First, we identify potential accident scenarios and present them in an event tree format. Next, excluding human error, we use the event tree to roughly estimate the prior probability of other hazard-promoting factors using a minimal amount of field data. We then use the Success Likelihood Index Method (SLIM) to calculate the probability of human error. On the basis of the proposed event tree, we use the Bayesian network of the provided scenarios to compensate for the lack of data. Finally, we determine the resulting probability of each event based on its evidence in the epistemic estimation format by building on two Bayesian network types: the probability of hazard promotion factors and the Bayesian theory. The study results indicate that despite the lack of available information on the operation of floating objects, a satisfactory result can be achieved using epistemic data.
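As a minimal illustration of the kind of epistemic (Bayesian) estimation described above, and not the authors' event tree/Bayesian network model, the sketch below updates a single hazard-promoting factor's failure probability from sparse field data; the Beta prior and the observed counts are assumed.

```python
from scipy import stats

# Minimal sketch of an epistemic (Bayesian) estimate of one hazard-promoting
# factor's failure probability from very sparse field data; the Beta prior and
# the observed counts are invented placeholders, not the study's values.
a0, b0 = 1.0, 19.0                     # weak prior: mean ~0.05 failure probability
failures_observed, demands = 1, 30     # sparse field data for the basic event

posterior = stats.beta(a0 + failures_observed, b0 + demands - failures_observed)
print("posterior mean:", posterior.mean())
print("90% credible interval:", posterior.ppf([0.05, 0.95]))
```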
Probabilistic assessment of landslide tsunami hazard for the northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Pampell-Manis, A.; Horrillo, J.; Shigihara, Y.; Parambath, L.
2016-01-01
The devastating consequences of recent tsunamis affecting Indonesia and Japan have prompted a scientific response to better assess unexpected tsunami hazards. Although much uncertainty exists regarding the recurrence of large-scale tsunami events in the Gulf of Mexico (GoM), geological evidence indicates that a tsunami is possible and would most likely come from a submarine landslide triggered by an earthquake. This study customizes for the GoM a first-order probabilistic landslide tsunami hazard assessment. Monte Carlo Simulation (MCS) is employed to determine landslide configurations based on distributions obtained from observational submarine mass failure (SMF) data. Our MCS approach incorporates a Cholesky decomposition method for correlated landslide size parameters to capture correlations seen in the data as well as uncertainty inherent in these events. Slope stability analyses are performed using landslide and sediment properties and regional seismic loading to determine landslide configurations which fail and produce a tsunami. The probability of each tsunamigenic failure is calculated based on the joint probability of slope failure and probability of the triggering earthquake. We are thus able to estimate sizes and return periods for probabilistic maximum credible landslide scenarios. We find that the Cholesky decomposition approach generates landslide parameter distributions that retain the trends seen in observational data, improving the statistical validity and relevancy of the MCS technique in the context of landslide tsunami hazard assessment. Estimated return periods suggest that probabilistic maximum credible SMF events in the north and northwest GoM have a recurrence of 5000-8000 years, in agreement with age dates of observed deposits.
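The Cholesky-based sampling of correlated landslide size parameters can be sketched as follows; the parameter set, lognormal marginals and correlation matrix are illustrative assumptions rather than the values fitted to the observational SMF data.

```python
import numpy as np

# Hypothetical lognormal landslide size parameters (length, width, thickness)
# with a placeholder correlation structure; values are illustrative only.
mu = np.array([np.log(5.0), np.log(2.0), np.log(0.05)])   # log-means (km, km, km)
sigma = np.array([0.8, 0.6, 0.7])                         # log-standard deviations
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.4],
                 [0.5, 0.4, 1.0]])

# Covariance of the log-parameters and its Cholesky factor.
cov = np.outer(sigma, sigma) * corr
L = np.linalg.cholesky(cov)

rng = np.random.default_rng(1)
z = rng.standard_normal((10_000, 3))
# Correlated lognormal samples: exp(mu + z @ L.T) reproduces the target correlations.
samples = np.exp(mu + z @ L.T)
print(samples.mean(axis=0))
```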
van Walraven, Carl; Austin, Peter C; Manuel, Douglas; Knoll, Greg; Jennings, Allison; Forster, Alan J
2010-12-01
Administrative databases commonly use codes to indicate diagnoses. These codes alone are often inadequate to accurately identify patients with particular conditions. In this study, we determined whether we could quantify the probability that a person has a particular disease-in this case renal failure-using other routinely collected information available in an administrative data set. This would allow the accurate identification of a disease cohort in an administrative database. We determined whether patients in a randomly selected 100,000 hospitalizations had kidney disease (defined as two or more sequential serum creatinines or the single admission creatinine indicating a calculated glomerular filtration rate less than 60 mL/min/1.73 m²). The independent association of patient- and hospitalization-level variables with renal failure was measured using a multivariate logistic regression model in a random 50% sample of the patients. The model was validated in the remaining patients. Twenty thousand seven hundred thirteen patients had kidney disease (20.7%). A diagnostic code of kidney disease was strongly associated with kidney disease (relative risk: 34.4), but the accuracy of the code was poor (sensitivity: 37.9%; specificity: 98.9%). Twenty-nine patient- and hospitalization-level variables entered the kidney disease model. This model had excellent discrimination (c-statistic: 90.1%) and accurately predicted the probability of true renal failure. The probability threshold that maximized sensitivity and specificity for the identification of true kidney disease was 21.3% (sensitivity: 80.0%; specificity: 82.2%). Multiple variables available in administrative databases can be combined to quantify the probability that a person has a particular disease. This process permits accurate identification of a disease cohort in an administrative database. These methods may be extended to other diagnoses or procedures and could both facilitate and clarify the use of administrative databases for research and quality improvement. Copyright © 2010 Elsevier Inc. All rights reserved.
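A minimal sketch of the general approach, assuming synthetic stand-in covariates rather than the study's administrative variables, fits a logistic regression on a derivation sample and evaluates its discrimination on a validation sample; a probability threshold can then be chosen to balance sensitivity and specificity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for administrative data: a diagnostic code flag plus
# other patient/hospitalization-level covariates (all simulated).
rng = np.random.default_rng(2)
n = 20_000
age = rng.normal(65, 15, n)
code_kidney = rng.binomial(1, 0.1, n)
n_meds = rng.poisson(5, n)
logit = -6 + 0.05 * age + 2.5 * code_kidney + 0.1 * n_meds
y = rng.binomial(1, 1 / (1 + np.exp(-logit)), n)
X = np.column_stack([age, code_kidney, n_meds])

# Derivation (50%) and validation (50%) split, mirroring the study design.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

p = model.predict_proba(X_va)[:, 1]
print("c-statistic:", round(roc_auc_score(y_va, p), 3))
```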
NASA Astrophysics Data System (ADS)
Roverato, M.; Capra, L.; Sulpizio, R.; Norini, G.
2011-10-01
Throughout its history, Colima Volcano has experienced numerous partial edifice collapses with associated emplacement of debris avalanche deposits of contrasting volume, morphology and texture. A detailed stratigraphic study in the south-eastern sector of the volcano allowed the recognition of two debris avalanche deposits, named San Marcos (> 28,000 cal yr BP, V = ~ 1.3 km 3) and Tonila (15,000-16,000 cal yr BP, V = ~ 1 km 3 ). This work sheds light on the pre-failure conditions of the volcano based primarily on a detailed textural study of debris avalanche deposits and their associated pyroclastic and volcaniclastic successions. Furthermore, we show how the climate at the time of the Tonila collapse influenced the failure mechanisms. The > 28,000 cal yr BP San Marcos collapse was promoted by edifice steep flanks and ongoing tectonic and volcanotectonic deformation, and was followed by a magmatic eruption that emplaced pyroclastic flow deposits. In contrast, the Tonila failure occurred just after the Last Glacial Maximum (22,000-18,000 cal BP) and, in addition to the typical debris avalanche textural characteristics (angular to sub-angular clasts, coarse matrix, jigsaw fit) it shows a hybrid facies characterized by debris avalanche blocks embedded in a finer, homogenous and partially cemented matrix, a texture more characteristic of debris flow deposits. The Tonila debris avalanche is directly overlain by a 7-m thick hydromagmatic pyroclastic succession. Massive debris flow deposits, often more than 10 m thick and containing large amounts of tree trunk logs, represent the top unit in the succession. Fluvial deposits also occur throughout all successions; these represent periods of highly localized stream reworking. All these lines of evidence point to the presence of water in the edifice prior to the Tonila failure, suggesting it may have been a weakening factor. The Tonila failure appears to represent an anomalous event related to the particular climatic conditions at the time of the collapse. The presence of extensive water at the onset of deglaciation modified the mobility of the debris avalanche, and led to the formation of a thick sequence of debris flows. The possibility that such a combination of events can occur, and that their probability is likely to increase during the rainy season, should be taken into consideration when evaluating hazards associated with future collapses at Colima volcano.
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
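One common way to encode such applicability uncertainty, offered here only as an illustrative sketch rather than the presentation's specific guidelines, is to center a lognormal prior on the generic point estimate and widen it with an assumed error factor.

```python
import numpy as np
from scipy import stats

# Illustrative construction of a lognormal prior for a failure rate from a
# generic point estimate, widened by an "error factor" meant to reflect data
# applicability (the numbers and the EF choice are assumptions).
generic_median = 1e-6          # failures per hour, from a generic database
error_factor = 10.0            # EF: ratio of the 95th percentile to the median

sigma = np.log(error_factor) / 1.645      # z_0.95 ~= 1.645
prior = stats.lognorm(s=sigma, scale=generic_median)

print("5th, 50th, 95th percentiles:", prior.ppf([0.05, 0.50, 0.95]))
print("prior mean:", prior.mean())
```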
Probabilistic finite elements for fracture and fatigue analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.
1989-01-01
The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure or performance function is stated as: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack-tip. Performance and accuracy of the method is demonstrated on a classical mode 1 fatigue problem.
Mantovani, F; Mastromarino, G; Fenice, O; Canclini, L; Patelli, E; Colombo, F; Vecchio, D; Austoni, E
1994-09-01
The recent clinical and experimental research innovations in Andrology make possible the following classification of impotence: "failure to initiate", "failure to store", "failure to fill". The last aspect, including veno-occlusive dysfunction, is continuously being re-evaluated by andrologic studies. The main diagnostic procedure for this complex, constantly evolving problem is cavernometry. Recently, and with full success, we have been using the direct radioisotopic penogram during visual sexual stimulation, at present as a preselection tool but probably, in the future, as a substitute for the more invasive traditional cavernometry. In spite of this methodological progress, the findings of cavernometry remain under discussion, since the anatomy and physiology of the intracavernous district are still evolving fields that, in many respects, require further histochemical, pharmacodynamic and neurophysiological investigation.
Yodogawa, Kenji; Ono, Norihiko; Seino, Yoshihiko
2012-01-01
A 56-year-old man was admitted because of palpitations and dyspnea. A 12-lead electrocardiogram showed irregular wide QRS complex tachycardia with a slur at the initial portion of the QRS complex. He had preexisting long-standing persistent atrial fibrillation, but early excitation syndrome had never been noted. Chest X-ray showed heart enlargement and pulmonary congestion. He was diagnosed with late onset of Wolff-Parkinson-White syndrome, and congestive heart failure was probably caused by rapid ventricular response of atrial fibrillation through the accessory pathway. Emergency catheter ablation for the accessory pathway was undertaken, and heart failure was dramatically improved.
Physicochemical characterization and failure analysis of military coating systems
NASA Astrophysics Data System (ADS)
Keene, Lionel Thomas
Modern military coating systems, as fielded by all branches of the U.S. military, generally consist of a diverse array of organic and inorganic components that can complicate their physicochemical analysis. These coating systems consist of VOC-solvent/waterborne automotive grade polyurethane matrix containing a variety of inorganic pigments and flattening agents. The research presented here was designed to overcome the practical difficulties regarding the study of such systems through the combined application of several cross-disciplinary techniques, including vibrational spectroscopy, electron microscopy, microtomy, ultra-fast laser ablation and optical interferometry. The goal of this research has been to determine the degree and spatial progression of weathering-induced alteration of military coating systems as a whole, as well as to determine the failure modes involved and to characterize the impact of these failures on the physical barrier performance of the coatings. Transmission-mode Fourier Transform Infrared (FTIR) spectroscopy has been applied to cross-sections of both baseline and artificially weathered samples to elucidate weathering-induced spatial gradients to the baseline chemistry of the coatings. A large discrepancy in physical durability (as indicated by the spatial progression of these gradients) has been found between older and newer generation coatings. Data will be shown implicating silica fillers (previously considered inert) as the probable cause for this behavioral divergence. A case study is presented wherein the application of the aforementioned FTIR technique fails to predict the durability of the coating system as a whole. The exploitation of the ultra-fast optical phenomenon of femtosecond (10⁻¹⁵ s) laser ablation is studied as a potential tool to facilitate spectroscopic depth profiling of composite materials. Finally, the interferometric technique of Phase Shifting was evaluated as a potential high-sensitivity technique applied to the problem of determining internal stress evolution in curing and aging coatings.
Accelerated battery-life testing - A concept
NASA Technical Reports Server (NTRS)
Mccallum, J.; Thomas, R. E.
1971-01-01
Test program, employing empirical, statistical and physical methods, determines service life and failure probabilities of electrochemical cells and batteries, and is applicable to testing mechanical, electrical, and chemical devices. Data obtained aids long-term performance prediction of battery or cell.
Fujii, Soichiro; Miura, Ikuo; Tanaka, Hideo
2015-06-01
A 78-year-old male, who had CKD and chronic heart failure, was referred to our hospital for evaluation of leukocytosis. His bone marrow contained 12% blast cells and chromosome analysis showed the Ph chromosome as well as other changes. The patient was diagnosed with the accelerated-phase CML because FISH and RT-PCR disclosed BCR/ABL fusion signals and minor BCR/ABL, respectively. Imatinib was administered, but the CML was resistant to this treatment. We gave him nilotinib employing a reduced and intermittent administration protocol because of the progression of anemia and heart failure. The patient achieved PCyR in 8 months, but, 12 months later, his WBC count increased and 83% of the cells were blasts. Because the probable diagnosis was the blast crisis of CML, we switched from nilotinib to dasatinib. However, leukocytosis worsened and he died of pneumonia. It was later revealed that he had a normal karyotype and both FISH and RT-PCR analysis of BCR/ABL were negative. His final diagnosis was Ph negative AML developing from Ph positive CML in PCyR. Since there were no dysplastic changes indicative of MDS, it was assumed that the AML was not secondary leukemia caused by the tyrosine kinase inhibitor but, rather, de novo AML.
Recent progresses in outcome-dependent sampling with failure time data.
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2017-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest happens and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
Simulating Initial and Progressive Failure of Open-Hole Composite Laminates under Tension
NASA Astrophysics Data System (ADS)
Guo, Zhangxin; Zhu, Hao; Li, Yongcun; Han, Xiaoping; Wang, Zhihua
2016-12-01
A finite element (FE) model is developed for the progressive failure analysis of fiber reinforced polymer laminates. The failure criterion for fiber and matrix failure is implemented in the FE code Abaqus using the user-defined material subroutine UMAT. The gradual degradation of the material properties is controlled by the individual fracture energies of fiber and matrix. The failure and damage in composite laminates containing a central hole subjected to uniaxial tension are simulated. The numerical results show that the damage model can be used to accurately predict the progressive failure behaviour both qualitatively and quantitatively.
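The paper's exact criterion is implemented in a Fortran UMAT; as a language-neutral sketch, a Hashin-type initiation check of the kind commonly used for fiber and matrix failure is shown below, with the criterion choice and the ply strengths treated as assumptions since they are not stated in the abstract.

```python
# Illustrative 2D Hashin-type initiation check for fiber/matrix tensile failure
# in a ply (the paper's specific criterion and material data are not given here,
# so both the criterion form and the strengths below are assumptions).
Xt, Yt, S = 2000.0, 60.0, 90.0   # fiber tensile, matrix tensile, shear strengths (MPa)

def hashin_tension(s11: float, s22: float, t12: float) -> dict:
    """Return failure indices (>= 1 means damage initiation) for tensile modes."""
    fiber = (max(s11, 0.0) / Xt) ** 2 + (t12 / S) ** 2
    matrix = (max(s22, 0.0) / Yt) ** 2 + (t12 / S) ** 2
    return {"fiber_tension": fiber, "matrix_tension": matrix}

print(hashin_tension(s11=1500.0, s22=40.0, t12=50.0))
# Once an index reaches 1, stiffness would be degraded according to the
# corresponding fracture energy, as in the paper's damage evolution law.
```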
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie.; Helton, Jon Craig
2012-10-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2.
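A simplified Monte Carlo sketch of the four PLOAS orderings, ignoring the time-dependent link properties and the epistemic uncertainty that CPLOAS_2 handles, can be written as follows; the Weibull failure-time models and their parameters are placeholders.

```python
import numpy as np

# Simplified sketch of the four PLOAS orderings for time-independent Weibull
# failure-time models; link counts and parameters are invented placeholders.
rng = np.random.default_rng(3)
n_wl, n_sl, n_trials = 2, 2, 200_000

wl_times = rng.weibull(2.0, size=(n_trials, n_wl)) * 1.0   # weak-link failure times
sl_times = rng.weibull(2.0, size=(n_trials, n_sl)) * 1.5   # strong-link failure times

ploas = {
    "all SLs before any WL":  np.mean(sl_times.max(axis=1) < wl_times.min(axis=1)),
    "any SL before any WL":   np.mean(sl_times.min(axis=1) < wl_times.min(axis=1)),
    "all SLs before all WLs": np.mean(sl_times.max(axis=1) < wl_times.max(axis=1)),
    "any SL before all WLs":  np.mean(sl_times.min(axis=1) < wl_times.max(axis=1)),
}
for name, value in ploas.items():
    print(f"{name}: {value:.4f}")
```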
Organ failure and tight glycemic control in the SPRINT study.
Chase, J Geoffrey; Pretty, Christopher G; Pfeifer, Leesa; Shaw, Geoffrey M; Preiser, Jean-Charles; Le Compte, Aaron J; Lin, Jessica; Hewett, Darren; Moorhead, Katherine T; Desaive, Thomas
2010-01-01
Intensive care unit mortality is strongly associated with organ failure rate and severity. The sequential organ failure assessment (SOFA) score is used to evaluate the impact of a successful tight glycemic control (TGC) intervention (SPRINT) on organ failure, morbidity, and thus mortality. A retrospective analysis of 371 patients (3,356 days) on SPRINT (August 2005 - April 2007) and 413 retrospective patients (3,211 days) from two years prior, matched by Acute Physiology and Chronic Health Evaluation (APACHE) III. SOFA is calculated daily for each patient. The effect of the SPRINT TGC intervention is assessed by comparing the percentage of patients with SOFA ≤5 each day and its trends over time and cohort/group. Organ-failure free days (all SOFA components ≤2) and number of organ failures (SOFA components >2) are also compared. Cumulative time in 4.0 to 7.0 mmol/L band (cTIB) was evaluated daily to link tightness and consistency of TGC (cTIB ≥0.5) to SOFA ≤5 using conditional and joint probabilities. Admission and maximum SOFA scores were similar (P = 0.20; P = 0.76), with similar time to maximum (median: one day; IQR: [1,3] days; P = 0.99). Median length of stay was similar (4.1 days SPRINT and 3.8 days Pre-SPRINT; P = 0.94). The percentage of patients with SOFA ≤5 is different over the first 14 days (P = 0.016), rising to approximately 75% for Pre-SPRINT and approximately 85% for SPRINT, with clear separation after two days. Organ-failure-free days were different (SPRINT = 41.6%; Pre-SPRINT = 36.5%; P < 0.0001) as were the percent of total possible organ failures (SPRINT = 16.0%; Pre-SPRINT = 19.0%; P < 0.0001). By Day 3 over 90% of SPRINT patients had cTIB ≥0.5 (37% Pre-SPRINT) reaching 100% by Day 7 (50% Pre-SPRINT). Conditional and joint probabilities indicate tighter, more consistent TGC under SPRINT (cTIB ≥0.5) increased the likelihood SOFA ≤5. SPRINT TGC resolved organ failure faster, and for more patients, from similar admission and maximum SOFA scores, than conventional control. These reductions mirror the reduced mortality with SPRINT. The cTIB ≥0.5 metric provides a first benchmark linking TGC quality to organ failure. These results support other physiological and clinical results indicating the role tight, consistent TGC can play in reducing organ failure, morbidity and mortality, and should be validated on data from randomised trials.
A short walk in quantum probability
NASA Astrophysics Data System (ADS)
Hudson, Robin
2018-04-01
This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.
Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements
NASA Technical Reports Server (NTRS)
Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.
1988-01-01
The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
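The cited technique can be sketched, under assumed (not study-specific) load and strength distributions, as a Monte Carlo comparison of the impact-induced load against the component strength capability.

```python
import numpy as np

# Illustrative Monte Carlo for a single SRB component: failure occurs when the
# water-impact-induced load exceeds the component's strength capability. The
# distributions and parameters are placeholders, not the study's actual data.
rng = np.random.default_rng(4)
n = 500_000

impact_velocity = rng.normal(loc=23.0, scale=3.0, size=n)       # ft/s at water impact
load = 0.5 * impact_velocity**2 * rng.normal(1.0, 0.05, n)       # simplified load model
strength = rng.lognormal(mean=np.log(300.0), sigma=0.1, size=n)  # component capability

p_fail = np.mean(load > strength)
print(f"Component failure probability per water impact: {p_fail:.3e}")
```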
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
Langberg (1982). One of the by-products of the estimation process is an estimate of the failure rate function: here, another issue is raised. It is evident...envisaged as the infinite product probability space that may be constructed in the usual way from the sequence of probability spaces corresponding to the...received 6-MP (a mercaptopurine used in the treatment of leukemia). The ordered remission times in weeks are: 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16
Statistically based material properties: A military handbook-17 perspective
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Vangel, Mark G.
1990-01-01
The statistical procedures and their importance in obtaining composite material property values in designing structures for aircraft and military combat systems are described. The property value is such that the strength exceeds this value with a prescribed probability with 95 percent confidence in the assertion. The survival probabilities are the 99th percentile and 90th percentile for the A and B basis values respectively. The basis values for strain to failure measurements are defined in a similar manner. The B value is the primary concern.
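For a normally distributed strength population, the B-basis value can be computed with the standard one-sided tolerance-limit factor, as in the sketch below; the coupon data are simulated placeholders rather than handbook values.

```python
import numpy as np
from scipy import stats

# Sketch of a normal-distribution B-basis value (90th-percentile survival with
# 95% confidence) using the one-sided tolerance-limit factor; the strength data
# below are simulated placeholders, not handbook values.
rng = np.random.default_rng(5)
strength = rng.normal(loc=450.0, scale=25.0, size=30)   # e.g. MPa, n = 30 coupons

n = len(strength)
p, conf = 0.90, 0.95                    # B-basis: 90% survival, 95% confidence
delta = stats.norm.ppf(p) * np.sqrt(n)  # noncentrality parameter
k_b = stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)

b_basis = strength.mean() - k_b * strength.std(ddof=1)
print(f"k_B = {k_b:.3f}, B-basis value = {b_basis:.1f} MPa")
# An A-basis value uses p = 0.99 in place of 0.90.
```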
Probabilistically Perfect Cloning of Two Pure States: Geometric Approach.
Yerokhin, V; Shehu, A; Feldman, E; Bagan, E; Bergou, J A
2016-05-20
We solve the long-standing problem of making n perfect clones from m copies of one of two known pure states with minimum failure probability in the general case where the known states have arbitrary a priori probabilities. The solution emerges from a geometric formulation of the problem. This formulation reveals that cloning converges to state discrimination followed by state preparation as the number of clones goes to infinity. The convergence exhibits a phenomenon analogous to a second-order symmetry-breaking phase transition.
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
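The two group-level calculations RELAV performs can be sketched as follows: a cumulative binomial for equal item probabilities and a simple build-up over items (in the spirit of the Barlow and Heidtmann algorithm) for unequal probabilities; the example groups are invented.

```python
from math import comb

def k_out_of_n_equal(k: int, n: int, p: float) -> float:
    """Success probability when at least k of n identical items (each with
    success probability p) must work: the cumulative binomial tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def k_out_of_n_unequal(k, probs):
    """At-least-k-of-n success probability for unequal item probabilities,
    built up one item at a time (dynamic programming over the number of
    working items, in the spirit of Barlow & Heidtmann's algorithm)."""
    dist = [1.0]                        # P(j items work) after 0 items, j = 0
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, pj in enumerate(dist):
            new[j] += pj * (1 - p)      # this item fails
            new[j + 1] += pj * p        # this item works
        dist = new
    return sum(dist[k:])

# Example: a 2-out-of-3 group with equal and with unequal item probabilities.
print(k_out_of_n_equal(2, 3, 0.9))               # 0.972
print(k_out_of_n_unequal(2, [0.9, 0.95, 0.8]))
```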
Schouten, Henrike J; Koek, Huiberdina L; Oudega, Ruud; van Delden, Johannes J M; Moons, Karel G M; Geersing, Geert-Jan
2015-02-01
We aimed to validate the Oudega diagnostic decision rule-which was developed and validated among younger aged primary care patients-to rule-out deep vein thrombosis (DVT) in frail older outpatients. In older patients (>60 years, either community dwelling or residing in nursing homes) with clinically suspected DVT, physicians recorded the score on the Oudega rule and d-dimer test. DVT was confirmed with a composite reference standard including ultrasonography examination and 3-month follow-up. The proportion of patients with a very low probability of DVT according to the Oudega rule (efficiency), and the proportion of patients with symptomatic venous thromboembolism during 3 months follow-up within this 'very low risk' group (failure rate) was calculated. DVT occurred in 164 (47%) of the 348 study participants (mean age 81 years, 85% residing in nursing homes). The probability of DVT was very low in 69 patients (Oudega score ≤3 points plus a normal d-dimer test; efficiency 20%) of whom four had non-fatal DVT (failure rate 5.8%; 2.3-14%). With a simple revised version of the Oudega rule for older suspected patients, 43 patients had a low risk of DVT (12% of the total population) of whom only one had DVT (failure rate 2.3%; 0.4-12%). In older suspected patients, application of the original Oudega rule to exclude DVT resulted in a higher failure rate as compared to previous studies. A revised and simplified Oudega strategy specifically developed for elderly suspected patients resulted in a lower failure rate though at the expense of a lower efficiency. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Leaking Containers: Success and Failure in Controlling the Mosquito Aedes aegypti in Brazil.
Löwy, Ilana
2017-04-01
In 1958, the Pan American Health Organization declared that Brazil had successfully eradicated the mosquito Aedes aegypti, responsible for the transmission of yellow fever, dengue fever, chikungunya, and Zika virus. Yet in 2016 the Brazilian minister of health described the situation of dengue fever as "catastrophic." Discussing the recent epidemic of Zika virus, which amplified the crisis produced by the persistence of dengue fever, Brazil's president declared in January 2016 that "we are in the process of losing the war against the mosquito Aedes aegypti." I discuss the reasons for the failure to contain Aedes in Brazil and the consequences of this failure. A longue durée perspective favors a view of the Zika epidemic that does not present it as a health crisis to be contained with a technical solution alone but as a pathology that has the persistence of deeply entrenched structural problems and vulnerabilities.
Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand
2018-05-09
This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) must be available when the FTA is drawn; when such failure data are scarce, expert judgment can be used as an alternative. The fuzzy analytical hierarchy process is used as a standard technique to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinion. In this way the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), an importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
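Once basic-event probabilities have been obtained (here taken as already defuzzified crisp values, which is an assumption), propagating them to the top event through AND/OR gates reduces to elementary Boolean-algebra formulas, as in the sketch below with an invented miniature tree.

```python
# Minimal sketch of propagating crisp basic-event probabilities up through
# AND/OR gates to the top event; the tree and the numbers are invented.

def and_gate(probs):
    """All inputs must occur (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """At least one input occurs (independence assumed)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Basic events, e.g. obtained from aggregated expert judgment (hypothetical).
be = {"pump_fails": 2e-3, "valve_stuck": 5e-4, "sensor_drift": 1e-3, "operator_miss": 5e-2}

loss_of_flow = or_gate([be["pump_fails"], be["valve_stuck"]])
undetected = and_gate([be["sensor_drift"], be["operator_miss"]])
top_event = and_gate([loss_of_flow, undetected])
print(f"P(top event) = {top_event:.2e}")
```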
Parameter estimation in Cox models with missing failure indicators and the OPPERA study.
Brownstein, Naomi C; Cai, Jianwen; Slade, Gary D; Bair, Eric
2015-12-30
In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the "gold standard" for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the "gold standard" examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the "gold standard" examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a "gold standard" examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.
Space Stirling Cryocooler Contamination Lessons Learned and Recommended Control Procedures
NASA Astrophysics Data System (ADS)
Glaister, D. S.; Price, K.; Gully, W.; Castles, S.; Reilly, J.
The most important characteristic of a space cryocooler is its reliability over a lifetime typically in excess of 7 years. While design improvements have reduced the probability of mechanical failure, the risk of internal contamination is still significant and has not been addressed in a consistent approach across the industry. A significant fraction of the endurance test and flight units have experienced some performance degradation related to internal contamination. The purpose of this paper is to describe and assess the contamination issues inside long life, space cryocoolers and to recommend procedures to minimize the probability of encountering contamination related failures and degradation. The paper covers the sources of contamination, the degradation and failure mechanisms, the theoretical and observed cryocooler sensitivity, and the recommended prevention procedures and their impact. We begin with a discussion of the contamination sources, both artificial and intrinsic. Next, the degradation and failure mechanisms are discussed in an attempt to arrive at a contaminant susceptibility, from which we can derive a contamination budget for the machine. This theoretical sensitivity is then compared with the observed sensitivity to illustrate the conservative nature of the assumed scenarios. A number of lessons learned on Raytheon, Ball, Air Force Research Laboratory, and NASA GSFC programs are shared to convey the practical aspects of the contamination problem. Then, the materials and processes required to meet the proposed budget are outlined. An attempt is made to present a survey of processes across industry.
New Approach For Prediction Groundwater Depletion
NASA Astrophysics Data System (ADS)
Moustafa, Mahmoud
2017-01-01
Current approaches to quantify groundwater depletion involve water balance and satellite gravity. However, the water balance technique includes uncertain estimation of parameters such as evapotranspiration and runoff, while the satellite method consumes time and effort. The work reported in this paper proposes using failure theory in a novel way to predict groundwater saturated thickness depletion. An important issue in the proposed failure theory is determining the failure point (depletion case). The proposed technique uses the depth of water, as the net result of recharge/discharge processes in the aquifer, to calculate the remaining saturated thickness resulting from the applied pumping rates in an area and thus evaluate the groundwater depletion. A two-parameter Weibull function and Bayesian analysis were used to model and analyze data collected from 1962 to 2009. The proposed methodology was tested in a nonrenewable aquifer with no recharge; consequently, the continuous decline in water depth has been the main criterion used to estimate the depletion. The value of the proposed approach is to predict the probable effect of the current applied pumping rates on the saturated thickness, based on the remaining saturated thickness data. The limitation of the suggested approach is that it assumes the applied management practices remain constant during the prediction period. The study predicted that after 300 years there is an 80% probability that the saturated aquifer would be depleted. Lifetime or failure theory can thus give a simple alternative way to predict the remaining saturated thickness depletion without time-consuming processes or sophisticated software.
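A minimal sketch of the lifetime-model idea, assuming a two-parameter Weibull with placeholder shape and scale rather than the study's fitted values, computes the probability of depletion within a prediction horizon.

```python
import numpy as np
from scipy import stats

# Illustrative two-parameter Weibull model of aquifer "lifetime" (years until
# the saturated thickness is effectively depleted); shape and scale are
# placeholder values, not the parameters fitted in the study.
shape, scale = 1.8, 220.0
lifetime = stats.weibull_min(c=shape, scale=scale)

t = 300.0   # prediction horizon in years
p_depleted = lifetime.cdf(t)
print(f"P(depleted within {t:.0f} years) = {p_depleted:.2f}")

# The hazard (instantaneous depletion rate) at the horizon, for comparison:
hazard = lifetime.pdf(t) / lifetime.sf(t)
print(f"hazard at t = {t:.0f} years: {hazard:.4f} per year")
```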
Analysis of rockbolt performance at the Waste Isolation Pilot Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrill, L.J.; Francke, C.T.; Saeb, S.
Rockbolt failures at the Waste Isolation Pilot Plant have been recorded since 1990 and are categorized in terms of mode of failure. The failures are evaluated in terms of the physical location of installation within the mine, local excavation geometry and stratigraphy, proximity to other excavations or shafts, and excavation age. The database of failures has revealed discrete areas of the mine containing relatively large numbers of failures. The results of metallurgical analyses and standard rockbolt load testing have generally been in agreement with the in situ evaluations.
Material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements
NASA Astrophysics Data System (ADS)
Mastio, Michael Joseph, Jr.
2005-11-01
Nearly seventy-five years ago, the single screw extruder was introduced as a means to produce metal products. Shortly after that, the extruder found its way into the plastics industry. Today much of the world's polymer industry utilizes extruders to produce items such as soda bottles, PVC piping, and toy figurines. Given the significant economical advantages of extruders over conventional batch flow systems, extruders have also migrated into the food industry. Food applications include the meat, pet food, and cereal industries to name just a few. Cereal manufacturers utilize extruders to produce various forms of Ready-to-Eat (RTE) cereals. These cereals are made from grains such as rice, oats, wheat, and corn. The food industry has been incorrectly viewed as an extruder application requiring only minimal energy control and performance capability. This misconception has resulted in very little research in the area of material wear and failure mode analysis of breakfast cereal extruders. Breakfast cereal extruder barrels and individual screw elements are subjected to the extreme pressures and temperatures required to shear and cook the cereal ingredients, resulting in excessive material wear and catastrophic failure of these components. Therefore, this project focuses on the material wear and failure mode analysis of breakfast cereal extruder barrels and screw elements, modeled as a Discrete Time Markov Chain (DTMC) process in which historical data is used to predict future failures. Such predictive analysis will yield cost savings opportunities by providing insight into extruder maintenance scheduling and interchangeability of screw elements. In this DTMC wear analysis, four states of wear are defined and a probability transition matrix is determined based upon 24,041 hours of operational data. This probability transition matrix is used to predict when an extruder component will move to the next state of wear and/or failure. This information can be used to determine maintenance schedules and screw element interchangeability.
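The DTMC wear model can be sketched as a four-state transition matrix propagated over inspection intervals; the transition probabilities below are invented for illustration, not the ones estimated from the 24,041 hours of operational data.

```python
import numpy as np

# Illustrative four-state wear DTMC (states 0-2 = increasing wear, 3 = failed);
# the transition probabilities are invented placeholders.
P = np.array([
    [0.95, 0.04, 0.01, 0.00],
    [0.00, 0.90, 0.08, 0.02],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],   # failure is absorbing until maintenance
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # a new (or freshly rebuilt) screw element
for week in range(1, 27):
    state = state @ P                     # one transition per inspection interval
    if week % 13 == 0:
        print(f"week {week:2d}: P(failed) = {state[3]:.3f}")
```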
Elastic Rock Heterogeneity Controls Brittle Rock Failure during Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Langenbruch, C.; Shapiro, S. A.
2014-12-01
For interpretation and inversion of microseismic data it is important to understand, which properties of the reservoir rock control the occurrence probability of brittle rock failure and associated seismicity during hydraulic stimulation. This is especially important, when inverting for key properties like permeability and fracture conductivity. Although it became accepted that seismic events are triggered by fluid flow and the resulting perturbation of the stress field in the reservoir rock, the magnitude of stress perturbations, capable of triggering failure in rocks, can be highly variable. The controlling physical mechanism of this variability is still under discussion. We compare the occurrence of microseismic events at the Cotton Valley gas field to elastic rock heterogeneity, obtained from measurements along the treatment wells. The heterogeneity is characterized by scale invariant fluctuations of elastic properties. We observe that the elastic heterogeneity of the rock formation controls the occurrence of brittle failure. In particular, we find that the density of events is increasing with the Brittleness Index (BI) of the rock, which is defined as a combination of Young's modulus and Poisson's ratio. We evaluate the physical meaning of the BI. By applying geomechanical investigations we characterize the influence of fluctuating elastic properties in rocks on the probability of brittle rock failure. Our analysis is based on the computation of stress fluctuations caused by elastic heterogeneity of rocks. We find that elastic rock heterogeneity causes stress fluctuations of significant magnitude. Moreover, the stress changes necessary to open and reactivate fractures in rocks are strongly related to fluctuations of elastic moduli. Our analysis gives a physical explanation to the observed relation between elastic heterogeneity of the rock formation and the occurrence of brittle failure during hydraulic reservoir stimulations. A crucial factor for understanding seismicity in unconventional reservoirs is the role of anisotropy of rocks. We evaluate an elastic VTI rock model corresponding to a shale gas reservoir in the Horn River Basin to understand the relation between stress, event occurrence and elastic heterogeneity in anisotropic rocks.
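The BI itself is typically a normalized combination of Young's modulus and Poisson's ratio; the sketch below uses one widely quoted normalization, with the exact definition and the min/max bounds treated as assumptions since they are not stated here.

```python
# One commonly used normalization of the Brittleness Index from Young's modulus E
# and Poisson's ratio nu (the exact definition used in the study is not stated
# here, so this particular form and the min/max bounds are assumptions).
def brittleness_index(E, nu, E_min=10.0, E_max=80.0, nu_min=0.15, nu_max=0.40):
    """Return BI in [0, 1]: stiffer, lower-Poisson rock is treated as more brittle."""
    e_norm = (E - E_min) / (E_max - E_min)
    nu_norm = (nu - nu_max) / (nu_min - nu_max)
    return 0.5 * (e_norm + nu_norm)

# Example: E in GPa, dimensionless Poisson's ratio.
print(brittleness_index(E=45.0, nu=0.22))
```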
Failure-probability driven dose painting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET-positive volume) outward. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose-response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%-91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.
Injury pattern as an indication of seat belt failure in ejected vehicle occupants.
Freeman, Michael D; Eriksson, Anders; Leith, Wendy
2014-09-01
Prior authors have suggested that when occupant ejection occurs in association with a seat belt failure, entanglement of the outboard upper extremity (OUE) with the retracting shoulder belt will invariably occur, leaving injury pattern evidence of belt use. In the present investigation, the authors assessed this theory using data accessed from the NASS-CDS for ejected front seat occupants of passenger vehicles. Logistic regression models were used to assess the associations between seat belt failure status and injuries. Injury types associated with seat belt failure were significant OUE and head injuries (OR = 3.87, [95% CI 1.2, 13.0] and 3.1, [95% CI 1.0, 9.7], respectively). The two injury types were found to be a predictor of seat belt use and subsequent failure only if combined with a high (≥0.8) precrash probability of belt use. The injury pattern associated with a seat belt failure-related ejection has limited use in the forensic investigation of crash-related ejections. © 2014 American Academy of Forensic Sciences.
Goldstein, Benjamin A; Thomas, Laine; Zaroff, Jonathan G; Nguyen, John; Menza, Rebecca; Khush, Kiran K
2016-07-01
Over the past two decades, there have been increasingly long waiting times for heart transplantation. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine the risk that conservative donor heart acceptance practices confer in terms of increasing the risk of failure among patients awaiting transplantation. We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan-Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time. Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the greatest risk of failure is during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% (1.01, 1.20) increase in the odds of failure within 1 year after transplantation. Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.
NASA Astrophysics Data System (ADS)
Cauffriez, Laurent
2017-01-01
This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Because the SIS is a system tested periodically, Bernoulli trials are a natural way to characterize its random failure process and thus to verify whether the PFD (Probability of Failing Dangerously) obtained experimentally agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between “outcomes with success: failure of SIS” and “outcomes with unsuccess: no failure of SIS” and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared to Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.
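The Bernoulli-trial view of periodic proof testing can be sketched as follows; the dangerous failure rate, test interval and lifecycle length are placeholder assumptions, and the low-demand PFD approximation λT/2 is shown only for comparison.

```python
import numpy as np

# Sketch of Bernoulli trials over periodic proof tests of a 1oo1 SIS: each test
# interval is one trial whose outcome is whether the SIS is found failed.
# Failure rate, test interval and lifecycle length are assumed placeholder values.
rng = np.random.default_rng(6)
lam = 1e-6            # dangerous failure rate per hour (assumed)
T = 8760.0            # proof-test interval in hours (one year)
n_tests = 15          # proof tests over an assumed 15-year lifecycle
n_lifecycles = 100_000

p = 1.0 - np.exp(-lam * T)   # probability one interval ends with a failed SIS
failures = rng.binomial(n_tests, p, size=n_lifecycles)

print("expected failures per lifecycle (n*p):", n_tests * p)
print("mean simulated failures per lifecycle:", failures.mean())
print("P(no failure over the whole lifecycle):", np.mean(failures == 0))
print("low-demand average PFD approximation lam*T/2:", lam * T / 2)
```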
Performance of Sequoyah Containment Anchorage System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanous, F.; Greimann, L.; Wassef, W.
1993-01-01
Deformation of a steel containment anchorage system during a severe accident may result in a leakage path at the containment boundaries. Current design criteria are based on either ductile or brittle failure modes of headed bolts and do not account for factors such as cracking of the containment basemat or deformation of the anchor bolt that may affect the behavior of the containment anchorage system. The purpose of this study was to investigate the performance of a typical ice condenser containment's anchorage system. This was accomplished by analyzing the Sequoyah containment anchorage system. Based on a strength of materials approach, and assuming that the anchor bolts resist the uplift caused by the internal pressure, one can estimate that failure of the anchor bolts would occur at a containment pressure of 79 psig. To verify these results and to calibrate the strength of materials equation, the Sequoyah containment anchorage system was analyzed with the ABAQUS program using a three-dimensional, finite-element model. The model included portions of the steel containment building, shield building, anchor bolt assembly, reinforced concrete mat and soil foundation material.
Comparison of Crack Initiation, Propagation and Coalescence Behavior of Concrete and Rock Materials
NASA Astrophysics Data System (ADS)
Zengin, Enes; Abiddin Erguler, Zeynal
2017-04-01
Many previous studies have been carried out to identify the crack initiation, propagation and coalescence behavior of different types of rocks. Most of these studies aimed to understand and predict probable instabilities in engineering structures such as mining galleries or tunnels. For this purpose, relatively small natural rock and synthetic rock-like models were prepared and the required laboratory tests were performed to obtain their strength parameters. Using the results provided by these models, researchers predicted the rock mass behavior under different conditions. However, in most of these studies the rock materials and models were assumed to contain no or very few discontinuities and structural flaws. It is well known that rock masses are naturally extremely complex with respect to their discontinuity conditions, and thus it is sometimes very difficult to understand and model their physical and mechanical behavior. In addition, some vuggy rock materials such as basalts and limestones contain voids and gaps with various geometric properties. Provided that the failure behavior of these types of rocks is controlled by crack initiation, propagation and coalescence starting from their natural voids and gaps, the effect of these voids and gaps on the failure behavior of rocks should be investigated. Intact rocks are generally preferred in numerical modelling because their relative homogeneity makes them easier to model. However, it is very hard to extract intact samples from vuggy rocks because of their complex pore sizes and distributions. In this study, the feasibility of using concrete samples to model and mimic the failure behavior of vuggy rocks was investigated. For this purpose, concrete samples were prepared from a mixture of 65% cement dust and 35% water, and their physical and mechanical properties were determined by laboratory experiments. The obtained physical and mechanical properties were used to construct numerical models, and then uniaxial compressive strength (UCS) tests were performed on these models using commercial software called Particle Flow Code (PFC2D). When the crack behavior of the concrete samples obtained from both laboratory tests and numerical models is compared with the results of previous studies, a significant similarity is found. As a result, given the observed similarity in crack behavior between concrete and rocks, it can be concluded that intact concrete samples can be used for modelling purposes to understand the effect of voids and gaps on the failure characteristics of vuggy rocks.