Recent progresses in outcome-dependent sampling with failure time data.
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2017-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is a failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data, including ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design, and interval sampling design. PMID:26759313
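A minimal sketch of the sampling scheme described above, assuming a cohort in which each subject carries a censoring indicator `delta` (1 = event observed); the field names and sample sizes are illustrative, not from the paper:

```python
import random

def ods_select(cohort, n_srs, n_supp, rng):
    """Outcome-dependent sample: a simple random subsample of the cohort,
    enriched with a supplemental sample drawn only from subjects whose
    event was observed (delta == 1)."""
    srs = rng.sample(cohort, n_srs)
    chosen = {s["id"] for s in srs}
    cases = [s for s in cohort if s["delta"] == 1 and s["id"] not in chosen]
    supp = rng.sample(cases, min(n_supp, len(cases)))
    return srs + supp

rng = random.Random(0)
cohort = [{"id": i, "delta": int(rng.random() < 0.2)} for i in range(1000)]
sample = ods_select(cohort, 100, 50, rng)
```

The expensive exposure variables would then be measured only for `sample`; any valid inference must account for the deliberately biased selection.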
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion that allow failure conditions to be calculated without complex equations or numerical techniques.
1984-10-26
test for independence; consistency of the product-limit estimator; dependent risks; ... the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, then what is the ... dependencies may be present, then what is the magnitude of the estimation error? The third specific aim will attempt to obtain bounds on the
Multiaxial Temperature- and Time-Dependent Failure Model
NASA Technical Reports Server (NTRS)
Richardson, David; McLennan, Michael; Anderson, Gregory; Macon, David; Batista-Rodriquez, Alicia
2003-01-01
A temperature- and time-dependent mathematical model predicts the conditions for failure of a material subjected to multiaxial stress. The model was initially applied to a filled epoxy below its glass-transition temperature, and is expected to be applicable to other materials, at least below their glass-transition temperatures. The model is justified simply by the fact that it closely approximates the experimentally observed failure behavior of this material. The multiaxiality of the model has been confirmed (see figure), and the model has been shown to be applicable at temperatures from -20 to 115 F (-29 to 46 C) and to predict tensile failures of constant-load and constant-load-rate specimens with failure times ranging from minutes to months.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
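The sampling-based approximation mentioned in the last sentence can be illustrated with a toy Monte Carlo estimate of a failure-time margin CDF; the Weibull link-failure distributions and the min/max system rules below are stand-in assumptions, not the report's actual models:

```python
import bisect
import random

def empirical_margin_cdf(n, rng):
    """Approximate by sampling the CDF of the failure-time margin
    (time at which the SL system fails) - (time at which the WL system fails).
    Stand-in assumptions: two WLs and two SLs with Weibull failure times;
    the WL system fails at its first link failure and the SL system at its
    last (weak links are meant to fail first by design)."""
    margins = sorted(
        max(rng.weibullvariate(3.0, 2.0) for _ in range(2))
        - min(rng.weibullvariate(1.0, 2.0) for _ in range(2))
        for _ in range(n)
    )
    # empirical CDF: fraction of sampled margins <= x
    return lambda x: bisect.bisect_right(margins, x) / n

cdf = empirical_margin_cdf(20000, random.Random(1))
```

A quadrature-based evaluation of the same CDF would serve as the independent verification path the presentation describes.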
Failure Criterion For Isotropic Time Dependent Materials Which Accounts for Multi-Axial Loading
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2003-01-01
The Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program has recently conducted testing to characterize the effects of multi-axial loading, temperature and time on the failure characteristics of TIGA321, EA913NA and EA946 (three filled epoxy adhesives). From the test data, a "Multi-Axial, Temperature, and Time Dependent" or MATT failure criterion was developed. It is shown that this criterion simplifies, for constant-load and constant-load-rate conditions, into a form that can be easily used for stress analysis. Failure of TIGA321 and EA913NA is characterized below their glass transition temperatures. Failure of EA946 is characterized for conditions that pass through its glass transition. The MATT failure criterion is shown to be accurate for a wide range of conditions for these adhesives.
Robustness and Vulnerability of Networks with Dynamical Dependency Groups.
Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi
2016-11-28
The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations have focused mainly on the failure mechanism of static dependency groups, without considering the time-dependency of interdependent nodes and the recovery mechanism found in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and the Barabási-Albert scale-free network are performed to validate our theoretical results.
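As a point of reference for the generating-function machinery invoked above, the classic self-consistency equation for the giant component of an Erdős-Rényi network after random node removal can be solved by fixed-point iteration; this is only the standard percolation backbone, not the paper's full dependency-group model:

```python
import math

def giant_component_fraction(c, p, iters=200):
    """Fixed-point iteration for the giant-component fraction S of an
    Erdos-Renyi network with mean degree c after random removal of a
    fraction 1 - p of its nodes: S = p * (1 - exp(-c * S)).
    The transition sits at p = 1/c."""
    s = p  # start from the largest admissible value
    for _ in range(iters):
        s = p * (1.0 - math.exp(-c * s))
    return s
```

For mean degree 4 the threshold is p = 0.25: below it the iteration collapses to zero, above it a finite giant component survives.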
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast-fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast-fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast-fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
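The SCG lifetime calculation above rests on the standard power-law slow-crack-growth relation for ceramics under constant stress; the parameter values in the usage below are arbitrary illustrations, not CARES/LIFE outputs:

```python
def scg_time_to_failure(sigma_applied, sigma_inert, b_scg, n_scg):
    """Classic slow-crack-growth lifetime under constant stress:
    t_f = B * sigma_inert**(N - 2) / sigma_applied**N, where sigma_inert
    is the fast-fracture (inert) strength and B, N are the SCG material
    parameters estimated, e.g., from dynamic fatigue data."""
    return b_scg * sigma_inert ** (n_scg - 2) / sigma_applied ** n_scg
```

With a typical N around 20, halving the applied stress multiplies the predicted life by 2**20, which is why small errors in the computed stress dominate lifetime predictions.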
Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)
2002-01-01
To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure for the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant load rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.
Semiparametric regression analysis of failure time data with dependent interval censoring.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-09-20
Interval-censored failure-time data arise when subjects are examined or observed periodically, so that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
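The shared-frailty mechanism can be illustrated with a toy simulation, assuming (purely for illustration) exponential failure times and a Poisson visit process whose rates are both multiplied by the same gamma frailty, so a frail subject both fails earlier and visits more often:

```python
import random

def simulate_subject(rng, base_hazard=0.1, base_visit_rate=1.0, shape=2.0):
    """One subject under a shared gamma frailty w (mean 1): the failure
    hazard is w * base_hazard and the examination (visit) process is
    Poisson with intensity w * base_visit_rate, so a large w induces both
    an earlier failure and more frequent visits."""
    w = rng.gammavariate(shape, 1.0 / shape)
    t_fail = rng.expovariate(w * base_hazard)
    n_visits, t = 0, 0.0
    while True:
        t += rng.expovariate(w * base_visit_rate)
        if t > t_fail:
            break
        n_visits += 1
    return w, t_fail, n_visits

rng = random.Random(42)
subjects = [simulate_subject(rng) for _ in range(4000)]
```

In real data the frailty w is latent; the simulation returns it only to make the induced dependence between failure time and visiting intensity visible.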
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2016-01-01
The stress rupture strength of silicon carbide fiber-reinforced silicon carbide composites with a boron nitride fiber coating decreases with time within the intermediate temperature range of 700 to 950 degrees Celsius. Various theories have been proposed to explain the cause of the time-dependent stress rupture strength. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of silicon carbide fiber-reinforced silicon carbide composites. This is achieved through the development of a numerically based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time-marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow growing flaws as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength is attributed almost entirely to the slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time-dependent behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M.H.; Coon, D.M.
Time-dependent failure at elevated temperatures currently governs the service life of oxynitride glass-joined silicon nitride. Creep, devitrification, stress-aided oxidation-controlled slow crack growth, and viscous cavitation-controlled failure are examined as possible controlling mechanisms. Creep deformation failure is observed above 1000°C. Fractographic evidence indicates cavity formation and growth below 1000°C. Auger electron spectroscopy verified that the oxidation rate of the joining glass is governed by the oxygen supply rate. Time-to-failure data are compared with those predicted using the Tsai and Raj, and Raj and Dang, viscous cavitation models. It is concluded that viscous relaxation and isolated cavity growth control the rate of failure in oxynitride glass-filled silicon nitride joints below 1000°C. Several possible methods are also proposed for increasing the service lives of these joints.
High Risk of Graft Failure in Emerging Adult Heart Transplant Recipients.
Foster, B J; Dahhou, M; Zhang, X; Dharnidharka, V; Ng, V; Conway, J
2015-12-01
Emerging adulthood (17-24 years) is a period of high risk for graft failure in kidney transplant. Whether a similar association exists in heart transplant recipients is unknown. We sought to estimate the relative hazards of graft failure at different current ages, compared with patients between 20 and 24 years old. We evaluated 11 473 patients recorded in the Scientific Registry of Transplant Recipients who received a first transplant at <40 years old (1988-2013) and had at least 6 months of graft function. Time-dependent Cox models were used to estimate the association between current age (time-dependent) and failure risk, adjusted for time since transplant and other potential confounders. Failure was defined as death following graft failure or retransplant; observation was censored at death with graft function. There were 2567 failures. Crude age-specific graft failure rates were highest in 21-24 year olds (4.2 per 100 person-years). Compared to individuals with the same time since transplant, 21-24 year olds had significantly higher failure rates than all other age periods except 17-20 years (HR 0.92 [95%CI 0.77, 1.09]) and 25-29 years (0.86 [0.73, 1.03]). Among young first heart transplant recipients, graft failure risks are highest in the period from 17 to 29 years of age. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.
2017-10-01
Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
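A common implementation of such forecasts is the textbook inverse-rate method, a special case of the power-law acceleration discussed above (shown here for exponent p = 1, not the authors' corrected estimator): the inverse event rate is extrapolated linearly to zero.

```python
def forecast_failure_time(times, rates):
    """Inverse-rate failure forecast: if the precursor event rate grows as
    k / (t_f - t) (power-law exponent p = 1), then 1/rate falls linearly
    to zero at t_f, so the zero crossing of a least-squares line through
    the points (t, 1/rate) forecasts the failure time."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt, mi = sum(times) / n, sum(inv) / n
    slope = sum((t - mt) * (y - mi) for t, y in zip(times, inv)) \
        / sum((t - mt) ** 2 for t in times)
    return -(mi - slope * mt) / slope  # zero crossing of the fitted line

# synthetic AE rates accelerating toward failure at t_f = 10
t_f, k = 10.0, 5.0
times = [0.5 * i for i in range(18)]
rates = [k / (t_f - t) for t in times]
forecast = forecast_failure_time(times, rates)
```

On noisy data, or when cracks cannot yet span the inter-flaw length and the rate trend is exponential rather than power-law, this extrapolation is exactly where the systematic forecast errors discussed above appear.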
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2017-04-01
Large mountain slopes in alpine environments undergo a complex long-term evolution from glacial to postglacial environments, through a transient period of paraglacial readjustment. During and after this transition, the interplay among rock strength, topographic relief, and morpho-climatic drivers varying in space and time can lead to the development of different types of slope instability, from sudden catastrophic failures to large, slow, long-lasting yet potentially catastrophic rockslides. Understanding the long-term evolution of large rock slopes requires accounting for the time-dependence of deglaciation unloading, permeability and fluid pressure distribution, displacements and failure mechanisms. In turn, this is related to a convincing description of rock mass damage processes and of their transition from a sub-critical (progressive failure) to a critical (catastrophic failure) character. Although mechanisms of damage occurrence in rocks have been extensively studied in the laboratory, the description of time-dependent damage under gravitational load and variable external actions remains difficult. In this perspective, starting from a time-dependent model conceived for laboratory rock deformation, we developed DaDyn-RS, a tool to simulate the long-term evolution of real, large rock slopes. DaDyn-RS is a 2D FEM model programmed in Matlab, which combines damage and time-to-failure laws to reproduce both diffused damage and strain localization while tracking long-term slope displacements from primary to tertiary creep stages. We implemented in the model the ability to account for rock mass heterogeneity and property upscaling, time-dependent deglaciation, as well as damage-dependent fluid pressure occurrence and stress corrosion. We first tested DaDyn-RS performance on synthetic case studies, to investigate the effect of the different model parameters on the mechanisms and timing of long-term slope behavior.
The model reproduces complex interactions between topography, deglaciation rate, mechanical properties and fluid pressure occurrence, resulting in different kinematics, damage patterns and timing of slope instabilities. We assessed the role of groundwater on slope damage and deformation mechanisms by introducing time-dependent pressure cycling within simulations. Then, we applied DaDyn-RS to real slopes located in the Italian Central Alps, affected by an active rockslide and a Deep Seated Gravitational Slope Deformation, respectively. From the Last Glacial Maximum to present conditions, our model reproduces, in an explicitly time-dependent framework, the progressive development of damage-induced permeability, strain localization and shear band differentiation at different times between the Lateglacial period and the Mid-Holocene climatic transition. Different mechanisms and timings characterize different styles of slope deformation, consistent with available dating constraints. DaDyn-RS is able to account for different long-term slope dynamics, from slow creep to the delayed transition to fast-moving rockslides.
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2015-01-01
The stress rupture strength of silicon carbide fiber-reinforced silicon carbide (SiC/SiC) composites with a boron nitride (BN) fiber coating decreases with time within the intermediate temperature range of 700-950 C. Various theories have been proposed to explain the cause of the time-dependent stress rupture strength. Some previous authors have suggested that the observed composite strength behavior is due to the inherent time-dependent strength of the fibers, which is caused by the slow growth of flaws within the fibers. Flaw growth is supposedly enabled by oxidation of free carbon at the grain boundaries. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of SiC/SiC composites. This is achieved through the development of a numerically based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time-marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow growing flaws as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength is attributed almost entirely to the slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time-dependent behavior.
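A heavily simplified version of the time-marching loop described above can be sketched as follows; the Weibull fiber statistics, the made-up strength-degradation law standing in for slow flaw growth, and all parameter values are illustrative assumptions, not the paper's model:

```python
import math

def rupture_time(sigma_app, sigma0, m, tau, n_scg, dt=1.0, t_max=1e6):
    """Time-marching global-load-sharing sketch. Within each time step,
    iterate between the fiber survival probability Ps = exp(-(T / s0)**m)
    (Weibull) and force equilibrium T = sigma_app / Ps; the characteristic
    fiber strength degrades as s0(t) = sigma0 * (1 + t / tau)**(-1 / n_scg)
    to mimic slow flaw growth. Rupture is declared at the first time step
    with no stable equilibrium (the fiber stress runs away)."""
    t = 0.0
    while t < t_max:
        s0 = sigma0 * (1.0 + t / tau) ** (-1.0 / n_scg)
        T = sigma_app
        for _ in range(500):
            if T > 10.0 * sigma_app:          # runaway: no equilibrium
                return t
            x = min((T / s0) ** m, 50.0)      # clamp to avoid overflow
            T_new = sigma_app * math.exp(x)
            if abs(T_new - T) < 1e-9:
                break
            T = T_new
        else:
            return t                          # no convergence: treat as rupture
        t += dt
    return t_max
```

Raising the applied stress toward the bundle strength collapses the rupture time toward zero, reproducing qualitatively the stress-rupture trend the paper analyzes.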
Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.
Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H
2015-09-21
Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, the dynamic processes of networks. We illustrate the network model with selected examples from biology and social science.
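The stated scaling (lifetime inversely dependent on the failure magnitude, logarithmically dependent on the threshold) already appears in a toy decay model; this sketch is only that toy, not the paper's network model:

```python
import math

def lifetime(p, f_c):
    """Toy decay model: each time step every active node independently
    suffers a permanent failure with probability p, so the expected
    active fraction after t steps is (1 - p)**t. The network dies when
    this fraction reaches the threshold f_c, giving
    T = ln(f_c) / ln(1 - p), approximately -ln(f_c) / p for small p:
    inversely proportional to the failure magnitude p and logarithmic
    in the threshold."""
    return math.log(f_c) / math.log(1.0 - p)
```

Doubling the per-step failure probability roughly halves the lifetime, while squaring the threshold merely doubles it.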
Investigations of Low Temperature Time Dependent Cracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Sluys, W A; Robitz, E S; Young, B A
2002-09-30
The objective of this project was to investigate metallurgical and mechanical phenomena associated with time-dependent cracking of cold bent carbon steel piping at temperatures between 327 C and 360 C. Boiler piping failures have demonstrated that understanding of the fundamental metallurgical and mechanical parameters controlling these failures is insufficient to eliminate them from the field. The results of the project consisted of the development of a testing methodology to reproduce low temperature time-dependent cracking in laboratory specimens. This methodology was used to evaluate the cracking resistance of candidate heats in order to identify the factors that enhance cracking sensitivity. The resultant data was integrated into currently available life prediction tools.
The Influence of a High Salt Diet on a Rat Model of Isoproterenol-Induced Heart Failure
Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4 weeks) isoproterenol (ISO) infusion exacerbated cardiomyopathy in Spontaneously Hypertensive Heart Failure (SHHF) rats. Others have shown...
A RAT MODEL OF HEART FAILURE INDUCED BY ISOPROTERENOL AND A HIGH SALT DIET
Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4wk) isoproterenol (ISO) infusion in Spontaneously Hypertensive Heart Failure (SHHF) rats caused cardiac injury with minimal hypertrophy. O...
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computational difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
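Interval censoring as described above arises mechanically from periodic examinations; a generic sketch (not the paper's estimator) of how an exact failure time is reduced to the bracketing pair a study actually records:

```python
def interval_censor(t_fail, exam_times):
    """Reduce an exact failure time to what a periodic-examination study
    records: the interval (L, R] between the last examination before the
    failure and the first one at or after it. R is None when the failure
    falls beyond the final examination (right censoring)."""
    left = 0.0
    for e in sorted(exam_times):
        if e < t_fail:
            left = e
        else:
            return (left, e)
    return (left, None)
```

The likelihood contributions in interval-censored regression are built from exactly these (L, R] pairs rather than from observed failure times.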
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. Surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated on a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability in excessive deck displacement, in the framework of time-dependent reliability analysis, was performed by developing Matlab codes on a personal computer. Results from the study indicated that the failure probability increases with the severity of the sea state representing a longer return period. Although the results agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they differ at lower values of the criterion, where that study reported that failure probability decreases as the severity of the sea state increases.
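Karhunen-Loeve synthesis of a random process can be illustrated with the one case whose eigenfunctions are elementary, standard Brownian motion on [0, 1]; the prolate-spheroidal basis used in the study plays the same role for the sea-surface elevation process:

```python
import math
import random

def brownian_kl(t_grid, n_modes, rng):
    """Karhunen-Loeve expansion of standard Brownian motion on [0, 1]:
    W(t) = sum_k sqrt(l_k) * xi_k * phi_k(t), with eigenvalues
    l_k = (2 / ((2k - 1) * pi))**2, eigenfunctions
    phi_k(t) = sqrt(2) * sin((k - 0.5) * pi * t), and xi_k ~ N(0, 1)."""
    xi = [rng.gauss(0.0, 1.0) for _ in range(n_modes)]
    path = []
    for t in t_grid:
        w = sum(
            (2.0 / ((2 * k - 1) * math.pi))            # sqrt(l_k)
            * xi[k - 1]
            * math.sqrt(2.0) * math.sin((k - 0.5) * math.pi * t)
            for k in range(1, n_modes + 1)
        )
        path.append(w)
    return path

rng = random.Random(7)
grid = [i / 20 for i in range(21)]
paths = [brownian_kl(grid, 100, rng) for _ in range(1000)]
```

Truncating the sum at a finite number of modes turns an infinite-dimensional random input into a finite vector of standard normals, which is what makes the downstream stochastic load propagation tractable.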
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, due to the transfer of corrosive fluid or gas and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and are able to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (a sub-system), where each sub-system is considered as a series system of the detected and newly generated defects within that sub-system.
Sensitivity analysis is also performed to determine which parameter(s) in the growth models the reliability of the studied pipeline is most sensitive to. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for predicting the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be accounted for in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when, after an inspection, the failure probability from any of the described failure modes exceeds a pre-defined probability threshold. Moreover, this study investigates the impact of repair threshold values and of the unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but lower failure costs, and that the repair cost is less significant than the inspection and failure costs.
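The growth-plus-arrival model described in this abstract can be sketched in a few lines: power-law depth growth, homogeneous Poisson generation of new defects, and a Monte Carlo estimate of the probability that any defect exceeds a critical fraction of the wall thickness. All parameter values below (growth constants, defect arrival rate, wall thickness) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def defect_depth(t, k, a, t0=0.0):
    """Power-law growth of maximum defect depth: d(t) = k * (t - t0)^a."""
    return k * np.maximum(t - t0, 0.0) ** a

def failure_prob(t, wall=10.0, k=0.8, a=1.1, rate=0.5, n=20000):
    """Monte Carlo probability that a segment has a 'small leak' by time t.

    New defects arrive as a homogeneous Poisson process (rate per year);
    the segment fails if any defect depth exceeds 80% of the wall
    thickness (mm).  Growth constants are lognormally dispersed to mimic
    parameter uncertainty.  All numbers are illustrative.
    """
    fails = 0
    for _ in range(n):
        m = rng.poisson(rate * t)                   # defects generated by time t
        if m == 0:
            continue
        births = rng.uniform(0.0, t, size=m)        # defect initiation times
        ks = rng.lognormal(np.log(k), 0.2, size=m)  # uncertain growth constants
        if np.any(defect_depth(t, ks, a, births) > 0.8 * wall):
            fails += 1
    return fails / n

p5, p20 = failure_prob(5.0), failure_prob(20.0)  # failure probability grows with time
```

Evaluating the probability on a grid of times gives exactly the kind of time-dependent failure curve the inspection-interval optimization operates on.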
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
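The doubling argument above is simple arithmetic and can be checked directly. The sketch below assumes a set of hypothetical correctable failure causes with rates lambda, lambda/2, lambda/4, ...; the specific rate values are illustrative, not from the paper.

```python
# Reliability-growth arithmetic: each successively rarer correctable cause
# takes twice as long to surface during testing.  Rates are illustrative.
lam = 0.01  # failures per hour for the most frequent correctable cause

def mtbf(rate):
    """Expected time before a cause with constant rate produces a failure."""
    return 1.0 / rate

causes = [lam / 2 ** j for j in range(4)]      # lambda, lambda/2, lambda/4, lambda/8
expected_times = [mtbf(r) for r in causes]     # 100, 200, 400, 800 hours

# Expected test time to surface each cause once, if they are found sequentially:
total_test_time = sum(expected_times)          # geometric growth: 1500 hours here

# Overall correctable failure rate before any fixes:
rate_before = sum(causes)
```

Each halving of the residual failure rate costs as much test time as all the previous halvings combined, which is the quantitative content of "heroic" improvement.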
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
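A minimal Monte Carlo sketch of the PLOAS quantity described above, assuming one WL and one SL with exponential precursor-occurrence CDFs and either constant or uniformly distributed delay times. All distributions and constants are illustrative assumptions, not the report's definitions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative precursor occurrence times (exponential CDFs).
wl_precursor = rng.exponential(scale=1.0, size=n)  # WL enters precursor state
sl_precursor = rng.exponential(scale=1.5, size=n)  # SL enters precursor state

# Actual link failure follows the precursor after a delay time.
# Case 1: constant delays (cases (ii)-(iii) in the abstract).
ploas_const = np.mean((sl_precursor + 0.1) < (wl_precursor + 0.2))

# Case 2: aleatory uncertainty in the delays (cases (iv)-(v)): uniform delays.
wl_fail = wl_precursor + rng.uniform(0.0, 0.4, size=n)
sl_fail = sl_precursor + rng.uniform(0.0, 0.2, size=n)
ploas_aleatory = np.mean(sl_fail < wl_fail)
```

PLOAS here is the fraction of realizations in which the SL fails before the WL has failed and rendered the system inoperable; the formal representations in the report replace this sampling with integrals over the corresponding CDFs.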
Common Cause Failures and Ultra Reliability
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
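The four PLOAS definitions enumerated above differ only in which order statistics of the WL and SL failure times are compared. A Monte Carlo sketch with two WLs and two SLs (Weibull failure times with assumed, illustrative parameters) makes the ordering between the four probabilities visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Illustrative failure times: WLs are designed to fail earlier than SLs.
wl = rng.weibull(2.0, size=(n, 2)) * 1.0   # two weak links
sl = rng.weibull(2.0, size=(n, 2)) * 2.0   # two strong links

p_i   = np.mean(sl.max(axis=1) < wl.min(axis=1))  # (i)   all SLs before any WL
p_ii  = np.mean(sl.min(axis=1) < wl.min(axis=1))  # (ii)  any SL before any WL
p_iii = np.mean(sl.max(axis=1) < wl.max(axis=1))  # (iii) all SLs before all WLs
p_iv  = np.mean(sl.min(axis=1) < wl.max(axis=1))  # (iv)  any SL before all WLs
```

Definition (i) is the most demanding event and (iv) the least, so p_i <= p_ii <= p_iv and p_i <= p_iii <= p_iv hold pathwise for any failure-time distributions.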
Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Kelly; A. Malkhasyan
2010-09-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
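A minimal version of the time-trend inference described here: a loglinear Poisson rate lambda(t) = exp(a + b*t) fitted by random-walk Metropolis sampling. The paper's methods use established open-source MCMC software; the event counts, priors, and proposal widths below are invented for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical yearly event counts with an apparent upward trend
# (one unit of exposure per year; not data from the paper).
counts = [0, 1, 0, 2, 1, 2, 3, 2, 4, 3]

def log_post(a, b):
    """Log-posterior for lambda(t) = exp(a + b*t) with flat priors."""
    lp = 0.0
    for t, k in enumerate(counts):
        log_lam = a + b * t
        lp += k * log_lam - math.exp(log_lam)  # Poisson log-likelihood (no k! term)
    return lp

# Random-walk Metropolis over (a, b).
a, b = 0.0, 0.0
lp = log_post(a, b)
draws = []
for i in range(20000):
    a_new, b_new = a + random.gauss(0.0, 0.3), b + random.gauss(0.0, 0.05)
    lp_new = log_post(a_new, b_new)
    if math.log(random.random()) < lp_new - lp:
        a, b, lp = a_new, b_new, lp_new
    if i >= 5000:                        # discard burn-in
        draws.append(b)

b_mean = sum(draws) / len(draws)         # posterior mean of the trend slope
```

A posterior for b concentrated above zero is the Bayesian analogue of detecting the time trend that a constant-parameter analysis would mask.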
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
[CHRONIC RENAL FAILURE AND PREGNANCY--A CASE REPORT].
Amaliev, G M; Uchikova, E; Malinova, M
2015-01-01
Pregnancy in women with chronic renal failure is a complex therapeutic problem requiring a multidisciplinary approach. It is associated with a higher risk of many perinatal complications. The most common abnormalities are related to progression of renal failure, development of preeclampsia, development of nephrotic syndrome, anemic syndrome, IUGR, and fetal death. The prognosis depends on the serum creatinine values prior to pregnancy, the degree of deterioration of renal function, the development of additional obstetric complications, and the specific etiological factors that led to the renal failure. Determining the optimal time for delivery depends on the condition of the mother, the condition of the fetus, and the rate of progression of renal failure; at the latest, the pregnancy should be terminated at 35 weeks. We present a case of a patient with chronic renal failure with a favorable perinatal outcome.
An unjustified benefit: immortal time bias in the analysis of time-dependent events.
Gleiss, Andreas; Oberbauer, Rainer; Heinze, Georg
2018-02-01
Immortal time bias is a problem arising from methodologically wrong analyses of time-dependent events in survival analyses. We illustrate the problem by analysis of a kidney transplantation study. Following patients from transplantation to death, groups defined by the occurrence or nonoccurrence of graft failure during follow-up seemingly had equal overall mortality. Such naive analysis assumes that patients were assigned to the two groups at time of transplantation, which actually are a consequence of occurrence of a time-dependent event later during follow-up. We introduce landmark analysis as the method of choice to avoid immortal time bias. Landmark analysis splits the follow-up time at a common, prespecified time point, the so-called landmark. Groups are then defined by time-dependent events having occurred before the landmark, and outcome events are only considered if occurring after the landmark. Landmark analysis can be easily implemented with common statistical software. In our kidney transplantation example, landmark analyses with landmarks set at 30 and 60 months clearly identified graft failure as a risk factor for overall mortality. We give further typical examples from transplantation research and discuss strengths and limitations of landmark analysis and other methods to address immortal time bias such as Cox regression with time-dependent covariables. © 2017 Steunstichting ESOT.
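The grouping rule of a landmark analysis can be stated in a few lines of code. The follow-up records below are invented for illustration (times in months); the rule is: drop patients whose follow-up ends before the landmark, classify the rest by whether the time-dependent event occurred before the landmark, and restart the clock at the landmark.

```python
# Hypothetical follow-up records: time of graft failure (None if none),
# end of follow-up, and vital status at that time.
patients = [
    {"graft_fail": 12,   "time": 40, "died": True},
    {"graft_fail": None, "time": 80, "died": False},
    {"graft_fail": 45,   "time": 70, "died": True},   # event after landmark
    {"graft_fail": 20,   "time": 25, "died": True},   # follow-up ends early
    {"graft_fail": None, "time": 60, "died": True},
]

LANDMARK = 30  # months

exposed, unexposed = [], []
for p in patients:
    if p["time"] <= LANDMARK:
        continue                       # not under observation at the landmark
    had_event = p["graft_fail"] is not None and p["graft_fail"] <= LANDMARK
    group = exposed if had_event else unexposed
    # Outcome time is measured from the landmark onward.
    group.append({"time": p["time"] - LANDMARK, "died": p["died"]})

# Patient 3's graft failure at month 45 does NOT count as exposure at this
# landmark, and patient 4 (follow-up ends at month 25) is excluded entirely,
# which is exactly what removes the immortal time bias.
```

The resulting groups can then be compared with any standard survival method; repeating the procedure at several landmarks (as with 30 and 60 months in the kidney example) checks the robustness of the conclusion.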
Analysis of EDZ Development of Columnar Jointed Rock Mass in the Baihetan Diversion Tunnel
NASA Astrophysics Data System (ADS)
Hao, Xian-Jie; Feng, Xia-Ting; Yang, Cheng-Xiang; Jiang, Quan; Li, Shao-Jun
2016-04-01
Due to the time dependency of crack propagation, columnar jointed rock masses exhibit marked time-dependent behaviour. In this study, in situ measurements, scanning electron microscopy (SEM), a back-analysis method, and numerical simulations are used to study the time-dependent development of the excavation damaged zone (EDZ) around underground diversion tunnels in a columnar jointed rock mass. In situ measurements of crack propagation and EDZ development show that their extent increased over time, even after the advancing face had passed. Similar to creep behaviour, the time-dependent EDZ development curve consists of three stages: a deceleration stage, a stabilization stage, and an acceleration stage. A corresponding constitutive model of columnar jointed rock mass considering time-dependent behaviour is proposed. The time-dependent degradation coefficients of the roughness coefficient and the residual friction angle in the Barton-Bandis strength criterion are taken into account. An intelligent back-analysis method is adopted to obtain the unknown time-dependent degradation coefficients for the proposed constitutive model. The numerical modelling results are in good agreement with the measured EDZ. Moreover, the failure pattern simulated by this time-dependent constitutive model is consistent with that observed by SEM and in situ observation, indicating that the model can accurately simulate both the failure pattern and the time-dependent EDZ development of columnar joints. The effects of the support system provided and of the in situ stress on the time-dependent coefficients are also studied. Finally, a long-term stability analysis of diversion tunnels excavated in columnar jointed rock masses is performed.
Time of non-invasive ventilation.
Nava, Stefano; Navalesi, Paolo; Conti, Giorgio
2006-03-01
Non-invasive ventilation (NIV) is a safe, versatile and effective technique that can avert side effects and complications associated with endotracheal intubation. The success of NIV relies on several factors, including the type and severity of acute respiratory failure, the underlying disease, the location of treatment, and the experience of the team. The time factor is also important. NIV is primarily used to avert the need for endotracheal intubation in patients with early-stage acute respiratory failure and post-extubation respiratory failure. It can also be used as an alternative to invasive ventilation at a more advanced stage of acute respiratory failure or to facilitate the process of weaning from mechanical ventilation. NIV has been used to prevent development of acute respiratory failure or post-extubation respiratory failure. The number of days of NIV and hours of daily use differ, depending on the severity and course of the acute respiratory failure and the timing of application. In this review article, we analyse, compare and discuss the results of studies in which NIV was applied at various times during the evolution of acute respiratory failure.
NASA Astrophysics Data System (ADS)
Zhao, Yang; Dong, Shuhong; Yu, Peishi; Zhao, Junhua
2018-06-01
The loading direction-dependent shear behavior of single-layer chiral graphene sheets at different temperatures is studied by molecular dynamics (MD) simulations. Our results show that the shear properties (such as shear stress-strain curves, buckling strains, and failure strains) of chiral graphene sheets strongly depend on the loading direction due to the structural asymmetry. The maximum values of both the critical buckling shear strain and the failure strain under positive shear deformation can be around 1.4 times higher than those under negative shear deformation. For a given chiral graphene sheet, both its failure strain and failure stress decrease with increasing temperature. In particular, the amplitude to wavelength ratio of wrinkles for different chiral graphene sheets under shear deformation using present MD simulations agrees well with that from the existing theory. These findings provide physical insights into the origins of the loading direction-dependent shear behavior of chiral graphene sheets and their potential applications in nanodevices.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
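The failure probability defined here (the chance that a risk process reaches a possibly time-dependent critical level within a finite interval) can be estimated by straightforward path simulation. The drifted Brownian risk process and linear critical level below are illustrative choices, not the dual models derived in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def failure_prob(horizon=10.0, steps=500, n=5000):
    """Estimate P(R(t) >= b(t) for some t in [0, horizon]).

    R(t): drifted Brownian motion, R(t) = 0.3*t + W(t)  (illustrative).
    b(t): time-dependent critical risk level, b(t) = 5 + 0.5*t.
    """
    dt = horizon / steps
    t = np.linspace(dt, horizon, steps)
    increments = 0.3 * dt + rng.normal(0.0, np.sqrt(dt), size=(n, steps))
    paths = np.cumsum(increments, axis=1)        # R(t) on the time grid
    level = 5.0 + 0.5 * t
    return np.mean((paths >= level).any(axis=1))

p = failure_prob()
```

Because the critical level here grows faster than the drift, failure is rare and driven by fluctuations alone; the explicit expressions in the article replace this brute-force estimate with exact formulas, which also give the joint law of the failure time and the overshoot.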
[Hazard function and life table: an introduction to the failure time analysis].
Matsushita, K; Inaba, H
1987-04-01
Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
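The correspondence noted here between the force of mortality and the hazard rate can be illustrated with the exponential case, where the hazard is constant and the survival function is the exponential of minus the cumulative hazard. The rate value is illustrative.

```python
import math

lam = 0.02   # constant force of mortality / hazard rate (per person-year)

def hazard(t):
    return lam                        # exponential lifetime: h(t) is constant

def survival(t, steps=10000):
    """S(t) = exp(-H(t)), with the cumulative hazard H integrated numerically."""
    dt = t / steps
    cum_hazard = sum(hazard(i * dt) * dt for i in range(steps))
    return math.exp(-cum_hazard)

s50 = survival(50.0)   # S(50) = exp(-0.02 * 50) = exp(-1)
```

The same numerical recipe works for any hazard function, which is why the life-table force of mortality, parametric models, and proportional hazards regression all fit in one framework.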
Cena, Tiziana; Musetti, Claudio; Quaglia, Marco; Magnani, Corrado; Stratta, Piero; Bagnardi, Vincenzo; Cantaluppi, Vincenzo
2016-10-01
The aim of this study was to evaluate the association between cancer occurrence and risk of graft failure in kidney transplant recipients. From November 1998 to November 2013, 672 adult patients received their first kidney transplant from a deceased donor and had a minimum follow-up of 6 months. During a median follow-up of 4.7 years (3523 patient-years), 47 patients developed a nonmelanoma skin cancer (NMSC) and 40 a noncutaneous malignancy (NCM). A total of 59 graft failures were observed. The failure rate was 6 per 100 patient-year (pt-yr) after NCM versus 1.5 per 100 pt-yr in patients without NCM. In a time-dependent multivariable model, the occurrence of NCM appeared to be associated with failure (HR = 3.27; 95% CI = 1.44-7.44). The effect of NCM on the cause-specific graft failure was different (P = 0.002) when considering events due to chronic rejection (HR = 0.55) versus other causes (HR = 15.59). The reduction of the immunosuppression after NCM was not associated with a greater risk of graft failure. In conclusion, our data suggest that post-transplant NCM may be a strong risk factor for graft failure, particularly for causes other than chronic rejection. © 2016 Steunstichting ESOT.
Age-Dependent Risk of Graft Failure in Young Kidney Transplant Recipients.
Kaboré, Rémi; Couchoud, Cécile; Macher, Marie-Alice; Salomon, Rémi; Ranchin, Bruno; Lahoche, Annie; Roussey-Kesler, Gwenaelle; Garaix, Florentine; Decramer, Stéphane; Pietrement, Christine; Lassalle, Mathilde; Baudouin, Véronique; Cochat, Pierre; Niaudet, Patrick; Joly, Pierre; Leffondré, Karen; Harambat, Jérôme
2017-06-01
The risk of graft failure in young kidney transplant recipients has been found to increase during adolescence and early adulthood. However, this question has not been addressed outside the United States so far. Our objective was to investigate whether the hazard of graft failure also increases during this age period in France irrespective of age at transplantation. Data of all first kidney transplantation performed before 30 years of age between 1993 and 2012 were extracted from the French kidney transplant database. The hazard of graft failure was estimated at each current age using a 2-stage modelling approach that accounted for both age at transplantation and time since transplantation. Hazard ratios comparing the risk of graft failure during adolescence or early adulthood to other periods were estimated from time-dependent Cox models. A total of 5983 renal transplant recipients were included. The risk of graft failure was found to increase around the age of 13 years until the age of 21 years, and decrease thereafter. Results from the Cox model indicated that the hazard of graft failure during the age period 13 to 23 years was almost twice as high as during the age period 0 to 12 years, and 25% higher than after 23 years. Among first kidney transplant recipients younger than 30 years in France, those currently in adolescence or early adulthood have the highest risk of graft failure.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
Beeler, N.M.; Lockner, D.A.
2003-01-01
We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.
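The threshold-failure (Coulomb-style) regime described for long-period oscillations can be reproduced with a toy model: monotonically increasing tectonic stress plus a small sinusoid, failure when a random stress threshold is first reached, and a check of whether failure phases cluster within the driving cycle. Amplitude, rate, and threshold range are invented for illustration.

```python
import math
import random

random.seed(4)

rate, amp, period = 1.0, 0.2, 1.0         # loading rate, oscillation amplitude, period
w = 2.0 * math.pi / period

def failure_phase(threshold, dt=1e-3):
    """Phase of the driving cycle at which stress first reaches the threshold."""
    t = 0.0
    while rate * t + amp * math.sin(w * t) < threshold:
        t += dt
    return (t % period) / period

phases = [failure_phase(random.uniform(5.0, 6.0)) for _ in range(200)]

# Mean resultant length: 0 for phases uniform over the cycle, 1 for perfect
# phase locking of failures to the oscillating stress.
C = math.hypot(sum(math.cos(2 * math.pi * p) for p in phases),
               sum(math.sin(2 * math.pi * p) for p in phases)) / len(phases)
```

A pure threshold model like this one predicts phase clustering at any driving frequency; the experimental finding that correlation disappears when the period drops below tn is precisely what cannot be captured without a finite nucleation duration.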
Failure of underground concrete structures subjected to blast loadings
NASA Technical Reports Server (NTRS)
Ross, C. A.; Nash, P. T.; Griner, G. R.
1979-01-01
The response and failure of reinforced concrete slabs with two free edges subjected to intermediate blast loadings are examined. The failure of the reinforced concrete structures is defined as a condition where actual separation or fracture of the reinforcing elements has occurred. Approximate theoretical methods using stationary and moving plastic hinge mechanisms with linearly varying and time-dependent loadings are developed. Equations developed to predict deflection and failure of reinforced concrete beams are presented and compared with the experimental results.
Lee, William; Tay, Andre; Walker, Bruce D; Kuchar, Dennis L; Hayward, Christopher S; Spratt, Phillip; Subbiah, Rajesh N
2016-12-01
Bradyarrhythmia following heart transplantation is common: approximately 7.5-24% of patients require permanent pacemaker (PPM) implantation. While overall mortality is similar to that of their non-paced counterparts, the effects of chronic right ventricular pacing (CRVP) in heart transplant patients have not been studied. We aim to examine the effects of CRVP on heart failure and mortality in heart transplant patients. Records of heart transplant recipients requiring PPM at St Vincent's Hospital, Sydney, Australia between January 1990 and January 2015 were examined. Patients without a right ventricular (RV) pacing lead or with a follow-up time of <1 year were excluded. Patients with pre-existing abnormal left ventricular function (<50%) were analysed separately. Patients were grouped by pacing dependence (100% pacing dependent vs. non-pacing dependent). The primary endpoint was clinical or echocardiographic heart failure (<35%) in the first 5 years post-PPM. Thirty-three of 709 heart transplant recipients were studied. Two patients had complete RV pacing dependence, and the remaining 31 patients had varying degrees of pacing requirement, with an underlying ventricular escape rhythm. The primary endpoint occurred significantly more often in the pacing-dependent group: 2 (100%) compared with 2 (6%) of the non-pacing-dependent group (P < 0.0001 by log-rank analysis, HR = 24.58). Non-pacing-dependent patients had reversible causes for heart failure, unrelated to pacing. In comparison, there was no other cause of heart failure in the pacing-dependent group. Permanent atrioventricular block is rare in the heart transplant population. We have demonstrated CRVP as a potential cause of accelerated graft failure in pacing-dependent heart transplant patients.
NASA Technical Reports Server (NTRS)
Jadaan, Osama
2001-01-01
Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant amplitude loads can be accounted for. In real applications applied loads are rarely that simple, but rather vary with time in more complex ways such as, for example, engine start up, shut down, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code, which has also been modified to have the ability of interfacing with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.
NASA Astrophysics Data System (ADS)
Sun, Huarui; Bajo, Miguel Montes; Uren, Michael J.; Kuball, Martin
2015-01-01
Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage "hot spots" at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7-0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.
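A Weibull shape parameter below one, as reported here (0.7-0.9), means a decreasing hazard for failure-site formation: early sites form disproportionately often. The sketch below samples such times by inversion and checks the hazard behavior; the scale value is an assumed placeholder, not a fitted quantity from the paper.

```python
import math
import random

random.seed(5)

shape, scale = 0.8, 1000.0   # shape from the reported 0.7-0.9 range; scale assumed

def sample_time():
    """Weibull time-to-formation via inverse-CDF sampling."""
    u = random.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def hazard(t):
    """Weibull hazard; decreasing in t whenever shape < 1."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

times = [sample_time() for _ in range(50000)]
frac_early = sum(t < scale for t in times) / len(times)
# For any Weibull shape, P(T < scale) = 1 - exp(-1), roughly 0.632.
```

Temperature acceleration of the kind reported would enter this picture by shrinking the scale parameter while leaving the shape (and hence the decreasing-hazard character) roughly unchanged.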
A relation to describe rate-dependent material failure.
Voight, B
1989-01-13
The simple relation d²Omega/dt² − A (dOmega/dt)^alpha = 0, where Omega is a measurable quantity such as strain and A and alpha are empirical constants, describes the behavior of materials in terminal stages of failure under conditions of approximately constant stress and temperature. Applicable to metals and alloys, ice, concrete, polymers, rock, and soil, the relation may be extended to conditions of variable and multiaxial stress and may be used to predict time to failure.
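For alpha > 1 the relation integrates to a finite-time singularity in the rate, which is the basis of its use for predicting time to failure. The sketch below checks the closed-form failure time against direct numerical integration; the values of A, alpha, and the initial rate are illustrative.

```python
# Voight's relation rewritten for the rate: d(rate)/dt = A * rate**alpha.
A, alpha = 2.0, 2.0
rate0 = 0.1                 # initial dOmega/dt (illustrative)

# Integrating the rate equation gives the remaining time to failure:
#   t_fail - t = rate**(1 - alpha) / (A * (alpha - 1)),  for alpha > 1.
t_fail = rate0 ** (1.0 - alpha) / (A * (alpha - 1.0))   # 5.0 for these values

# Forward-Euler integration of the rate equation up to 99% of t_fail:
rate, t, dt = rate0, 0.0, 1e-5
while t < 0.99 * t_fail:
    rate += A * rate ** alpha * dt
    t += dt
# For alpha = 2 the exact solution is rate(t) = rate0 / (1 - t/t_fail),
# so by now the rate should be close to 100x its initial value.
```

In practice the forecast runs the formula in reverse: observed acceleration of Omega-dot fixes A and alpha, and the remaining-time expression then estimates t_fail.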
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process of predicting the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: a. fixed; b. variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report, such as MTBF, do not follow a linear path, and they normally fall within the distribution curves which are discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time-dependent variable arisings and events with a high confidence level. This, in turn, translates into tremendous cost savings and improved availability all around.
Interruptions and Failure in Higher Education: Evidence from ISEG-UTL
ERIC Educational Resources Information Center
Chagas, Margarida; Fernandaes, Graca Leao
2011-01-01
Failure in higher education (HE) is the outcome of multiple time-dependent determinants. Interruptions in students' individual school trajectories are one of them, and that is why research on this topic has been attracting much attention these days. From an individual point of view, it is expected that interruptions in school trajectory, whatever…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences, such as nuclear reactor plants, often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
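The copula idea for dependent failure times can be sketched with a Gaussian copula over exponential marginals: correlated normals are mapped to uniforms, then inverted through each component's failure-time distribution. The copula family, rates, and correlation below are illustrative assumptions, not values from the paper (which uses R and WinBUGS):

```python
import numpy as np
from scipy.stats import norm, expon

def sample_dependent_failure_times(n, rho, rates, seed=0):
    """Draw n pairs of dependent failure times via a Gaussian copula.

    rho   : correlation of the underlying bivariate normal (the dependence knob)
    rates : (rate1, rate2) exponential failure rates of the two components
    """
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)  # correlated normals
    u = norm.cdf(z)                                        # uniforms carrying the dependence
    # invert each exponential marginal separately
    t = np.column_stack([expon.ppf(u[:, i], scale=1.0 / rates[i]) for i in range(2)])
    return t
```

Setting rho = 0 recovers independent components; rho near 1 makes both components tend to fail early or late together, which is the intercomponent dependence a common-cause model tries to capture.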
Verification of the Multi-Axial, Temperature and Time Dependent (MATT) Failure Criterion
NASA Technical Reports Server (NTRS)
Richardson, David E.; Macon, David J.
2005-01-01
An extensive test and analytical effort has been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to characterize the failure behavior of two epoxy adhesives (TIGA 321 and EA946). As part of this effort, a general failure model, the "Multi-Axial, Temperature, and Time Dependent" or MATT failure criterion, was developed. In the initial development of this failure criterion, tests were conducted to provide validation of the theory under a wide range of test conditions. The purpose of this paper is to present additional verification of the MATT failure criterion under new loading conditions for the adhesives TIGA 321 and EA946. In many cases, the loading conditions involve an extrapolation from the conditions under which the material models were originally developed. Testing was conducted using three loading conditions: multi-axial tension, torsional shear, and non-uniform tension in a bondline condition. Tests were conducted at constant and cyclic loading rates ranging over four orders of magnitude. Tests were conducted under environmental conditions of primary interest to the RSRM program. The temperature range was not extreme, but the loading ranges were extreme (varying by four orders of magnitude). It should be noted that the testing was conducted at temperatures below the glass transition temperature of the TIGA 321 adhesive. However, for EA946, the testing was conducted at temperatures that bracketed the glass transition temperature.
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur.
Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
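The Brownian passage-time distribution is the inverse Gaussian, so the hazard-rate properties listed above can be checked numerically. A sketch using SciPy's invgauss (the mapping from mean recurrence time and aperiodicity to SciPy's parameterization is noted in the docstring; the parameter values in the test are illustrative):

```python
from scipy.stats import invgauss

def bpt_hazard(t, mean, alpha):
    """Hazard rate h(t) = f(t) / S(t) of the Brownian passage-time model.

    mean  : mean recurrence interval
    alpha : aperiodicity (coefficient of variation)
    SciPy's invgauss(mu, scale) has mean mu*scale, so an inverse Gaussian
    with mean m and shape m/alpha**2 maps to mu=alpha**2, scale=m/alpha**2.
    """
    dist = invgauss(mu=alpha**2, scale=mean / alpha**2)
    return dist.pdf(t) / dist.sf(t)
```

Evaluating this near t = 0 illustrates property (1): the hazard, and hence the probability of immediate rerupture, vanishes just after an event, then grows toward its maximum near the mean recurrence time.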
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Huarui, E-mail: huarui.sun@bristol.ac.uk; Bajo, Miguel Montes; Uren, Michael J.
2015-01-26
Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage “hot spots” at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7–0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.
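A Weibull shape parameter below 1, as reported here (0.7–0.9), implies a hazard rate that decreases with time, i.e. failure-site formation is dominated by early, defect-driven events. A small sketch of the Weibull hazard function (the scale and evaluation times are illustrative):

```python
def weibull_hazard(t, shape, scale=1.0):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1).

    shape < 1 : hazard decreases with time (infant-mortality-like behavior)
    shape = 1 : constant hazard (exponential)
    shape > 1 : hazard increases with time (wear-out)
    """
    return (shape / scale) * (t / scale) ** (shape - 1.0)
```

With shape = 0.8 (inside the reported 0.7–0.9 range) the hazard at t = 2 is lower than at t = 1, whereas a wear-out shape like 1.5 would give the opposite ordering.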
An analysis of the value of spermicides in contraception.
1979-11-01
Development of the so-called modern methods of contraception has somewhat eclipsed interest in traditional methods. However, spermicides are still important for many couples and their use appears to be increasing. A brief history of the use of and research into spermicidal contraceptives is presented. The limitations of spermicides are the necessity for use at the time of intercourse and their high failure rate. Estimates of the failure rates of spermicides have ranged from 0.3 pregnancies per 100 woman-years of use to nearly 40, depending on the product used and the population tested. Just as their use depends on various social factors, so does their failure rate. Characteristics of the user determine failure rates. Motivation is important in lowering failure rates, as are education, the intracouple relationship, and previous experience with spermicides. Method failure is also caused by defects in the product, either in the active ingredient of the spermicide or in the base carrier. The main advantage of spermicidal contraception is its safety. Limited research is currently being conducted on spermicides. Areas for improvement in existing spermicides and areas for possible innovation are mentioned.
Barnes, Nicole S; White, Perrin C; Hutchison, Michele R
2012-11-01
There are no data in children with type 2 diabetes (T2D) regarding the durability of glycemic control with oral medication. Therefore, we assessed the likelihood of and time to failure of oral therapy in children and adolescents diagnosed with T2D. Charts of patients presenting to our large tertiary-care children's hospital between January 2000 and June 2007 with a new diagnosis of diabetes (n = 1625) were reviewed to identify those with T2D (n = 184). Subjects' initial therapy, hemoglobin A1c (HbA1c), body mass index, age, gender, and antibody status were documented, as well as subsequent therapies and HbA1c values, to determine whether baseline characteristics predicted future insulin dependence. Kaplan-Meier survival curves and Cox proportional hazards analysis demonstrated time to failure of oral therapy. Eighty-nine patients remained on insulin throughout the study. Baseline characteristics that determined future insulin dependence included being placed on insulin initially, initial HbA1c and race (whites less likely to be insulin dependent at study conclusion). Patients who failed oral therapy were more often reported to be non-compliant or unable to tolerate metformin than those who continued on oral therapy. The median time to failure of oral therapy (metformin monotherapy in 84/95) was not significantly different for patients initially treated with oral therapy (42 months) and insulin (35 months). Thus, children with T2D appear to fail oral therapy more quickly than what is reported in adults. It is not yet known if improving compliance with treatment might allow more children to remain on oral medications. © 2012 John Wiley & Sons A/S.
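The time-to-failure analysis described here rests on the Kaplan-Meier estimator, which can be sketched from scratch; the data values in the example below are invented for illustration (a real analysis would typically use a survival library and the study's patient records):

```python
import numpy as np

def kaplan_meier(times, failed):
    """Kaplan-Meier survival estimate for time to failure of a therapy.

    times  : follow-up time for each subject (e.g. months on oral therapy)
    failed : 1 if the event (therapy failure) occurred at that time, 0 if censored
    Returns the event times and the survival probability just after each.
    """
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=int)
    # sort by time; at tied times, process events before censorings
    order = np.lexsort((-failed, times))
    times, failed = times[order], failed[order]
    n = len(times)
    event_t, surv = [], []
    s = 1.0
    for i in range(n):
        if failed[i]:
            at_risk = n - i              # subjects still under observation
            s *= 1.0 - 1.0 / at_risk     # KM product-limit step
            event_t.append(times[i])
            surv.append(s)
    return np.array(event_t), np.array(surv)
```

The median time to failure quoted in the abstract corresponds to the first event time at which this survival curve drops to 0.5 or below.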
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time dependent approach and depicts more factors affecting ORU failure and repair than steady state average calculations. RENEW gives both average and time dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
Durability of filament-wound composite flywheel rotors
NASA Astrophysics Data System (ADS)
Koyanagi, Jun
2012-02-01
This paper predicts the durability of two types of flywheels: one assumed to fail in the radial direction and the other assumed to fail in the circumferential direction. The flywheel failing in the radial direction is a conventional filament-wound composite flywheel, and the one failing in the circumferential direction is a tailor-made type. The durability of the former is predicted by Micromechanics of Failure (MMF) (Ha et al. in J. Compos. Mater. 42:1873-1875, 2008), employing time-dependent matrix strength, and that of the latter is predicted by Simultaneous Fiber Failure (SFF) (Koyanagi et al. in J. Compos. Mater. 43:1901-1914, 2009), employing identical time-dependent matrix strength. The predicted durability of the latter is much greater than that of the former, depending on the interface strength. This study suggests that a relatively weak interface is necessary for high-durability composite flywheel fabrication.
Effect of system workload on operating system reliability - A study on IBM 3081
NASA Technical Reports Server (NTRS)
Iyer, R. K.; Rossetti, D. J.
1985-01-01
This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.
Spatio-temporal propagation of cascading overload failures in spatially embedded networks
NASA Astrophysics Data System (ADS)
Zhao, Jichang; Li, Daqing; Sanhedrai, Hillel; Cohen, Reuven; Havlin, Shlomo
2016-01-01
Different from the direct contact in epidemics spread, overload failures propagate through hidden functional dependencies. Many studies focused on the critical conditions and catastrophic consequences of cascading failures. However, to understand the network vulnerability and mitigate the cascading overload failures, the knowledge of how the failures propagate in time and space is essential but still missing. Here we study the spatio-temporal propagation behaviour of cascading overload failures analytically and numerically on spatially embedded networks. The cascading overload failures are found to spread radially from the centre of the initial failure with an approximately constant velocity. The propagation velocity decreases with increasing tolerance, and can be well predicted by our theoretical framework with one single correction for all the tolerance values. This propagation velocity is found similar in various model networks and real network structures. Our findings may help to predict the dynamics of cascading overload failures in realistic systems.
Spatio-temporal propagation of cascading overload failures in spatially embedded networks
Zhao, Jichang; Li, Daqing; Sanhedrai, Hillel; Cohen, Reuven; Havlin, Shlomo
2016-01-01
Different from the direct contact in epidemics spread, overload failures propagate through hidden functional dependencies. Many studies focused on the critical conditions and catastrophic consequences of cascading failures. However, to understand the network vulnerability and mitigate the cascading overload failures, the knowledge of how the failures propagate in time and space is essential but still missing. Here we study the spatio-temporal propagation behaviour of cascading overload failures analytically and numerically on spatially embedded networks. The cascading overload failures are found to spread radially from the centre of the initial failure with an approximately constant velocity. The propagation velocity decreases with increasing tolerance, and can be well predicted by our theoretical framework with one single correction for all the tolerance values. This propagation velocity is found similar in various model networks and real network structures. Our findings may help to predict the dynamics of cascading overload failures in realistic systems. PMID:26754065
Transient Reliability of Ceramic Structures For Heat Engine Applications
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama M.
2002-01-01
The objective of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability in ceramic components subjected to fluctuating thermomechanical loading was developed assuming SCG (slow crack growth) as the delayed mode of failure. It takes into account the effect of Weibull modulus and material parameters varying with time. It was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.
Stress and Reliability Analysis of a Metal-Ceramic Dental Crown
NASA Technical Reports Server (NTRS)
Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.
1996-01-01
Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had three load cases: a 300 Newton load applied perpendicular to one cusp, a 300 Newton load applied at 30 degrees from the perpendicular load case, directed toward the center, and a 600 Newton vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
Global Failure Modes in High Temperature Composite Structures
NASA Technical Reports Server (NTRS)
Knauss, W. G.
1998-01-01
Composite materials have been considered for many years as the major advance in the construction of energy-efficient aerospace structures. Notable advances have been made in understanding the special design considerations that set composites apart from the usual "isotropic" engineering materials such as the metals. As a result, a number of significant engineering designs have been accomplished. However, one shortcoming of the currently favored composites is their relatively unforgiving behavior with respect to failure (brittleness) under seemingly mild impact conditions, and large efforts are underway to rectify that situation, much along the lines of introducing thermoplastic matrix materials. Because of their relatively more pronounced (thermo)viscoelastic behavior, these materials respond with "toughness" in fracture situations. From the point of view of applications requiring material strength, this property is highly desirable. This feature impacts several important and distinct engineering problems which have been considered under this grant: (1) the effect of impact damage on the structural (buckling) stability of composite panels, (2) the effect of time dependence on the progression of buckling instabilities, and (3) the evolution of damage and fracture at generic thickness discontinuities in structures. The latter topic has serious implications for structural stability problems (buckling failure in reinforced shell structures) as well as failure progression in stringer-reinforced shell structures. This grant has dealt with these issues. Polymer "toughness" is usually associated with uncrosslinked or thermoplastic polymers.
But, by comparison with their thermoset counterparts, they tend to exhibit more pronounced time-dependent material behavior; also, that time dependence can occur at lower temperatures, which places restrictions on the high-temperature use of these "newer and tougher" materials that are not quite so serious with the thermoset matrix materials. From a structural point of view, the implications of this material behavior are potentially severe in that structural failure characteristics are no longer readily observed in the short-term qualification tests so characteristic of aerospace structures built from typical engineering metals.
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth orbiting laboratory has been in orbit since 1998. Some elements that have been launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments on ISS, it is essential for the valuable research on ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight.
In a hypothetical bathtub shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during infant-mortality period, stays nearly constant during the service life and increases at the end when the design service life ends and wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to its failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. 
An innovative approach is required to evaluate the entire system, without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago. There are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, by way of cherry-picking candidate components based on failure mode effects analysis, system level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When the specific failure mechanism being considered, and the underlying relationship of that mechanism to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for conducting life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses that allow direct correlation between the times to failure at a high test temperature and the temperatures to be expected in actual use.
As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
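The Arrhenius acceleration factor relating test and use temperatures can be sketched as follows; the activation energy and temperatures in the example are illustrative assumptions, not values from the ISS program:

```python
import math

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Arrhenius acceleration factor between use and test temperatures.

    ea_ev : activation energy in eV (mechanism-dependent; assumed here)
    AF = exp( (Ea / k) * (1/T_use - 1/T_test) ), with T in kelvin, so one
    hour at the hotter test temperature ages the part like AF hours in use.
    """
    k = 8.617e-5  # Boltzmann constant, eV/K
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / k) * (1.0 / t_use - 1.0 / t_test))
```

With an assumed 0.7 eV mechanism, testing at 85 °C instead of a 25 °C use condition yields an acceleration factor on the order of 100, which is why a comparatively short hot test can stand in for years of propellant exposure, provided no new failure mechanisms are introduced at the higher temperature.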
GaN HEMTs with p-GaN gate: field- and time-dependent degradation
NASA Astrophysics Data System (ADS)
Meneghesso, G.; Meneghini, M.; Rossetto, I.; Canato, E.; Bartholomeus, J.; De Santi, C.; Trivellin, N.; Zanoni, E.
2017-02-01
GaN-HEMTs with p-GaN gate have recently been demonstrated to be excellent normally-off devices for application in power conversion systems, thanks to the high and robust threshold voltage (VTH>1 V), the high breakdown voltage, and the low dynamic Ron increase. For this reason, studying the stability and reliability of these devices under high stress conditions is of high importance. This paper reports on our most recent results on the field- and time-dependent degradation of GaN-HEMTs with p-GaN gate submitted to stress with positive gate bias. Based on combined step-stress experiments, constant voltage stress and electroluminescence testing, we demonstrated that: (i) when submitted to high positive gate stress, the transistors may show a negative threshold voltage shift, which is ascribed to the injection of holes from the gate metal towards the p-GaN/AlGaN interface; (ii) in a step-stress experiment, the analyzed commercial devices fail at gate voltages higher than 9-10 V, due to the extremely high electric field over the p-GaN/AlGaN stack; (iii) constant voltage stress tests indicate that the failure is also time-dependent and Weibull distributed. The several processes that can explain this time-dependent failure are discussed in what follows.
A real-time diagnostic and performance monitor for UNIX. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dong, Hongchao
1992-01-01
There are now over one million UNIX sites and the pace at which new installations are added is steadily increasing. Along with this increase, comes a need to develop simple efficient, effective and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often un-identifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4's as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
Weng, Hong-Lei; Cai, Xiaobo; Yuan, Xiaodong; Liebe, Roman; Dooley, Steven; Li, Hai; Wang, Tai-Ling
2015-01-01
Massive hepatic necrosis is a key event underlying acute liver failure, a serious clinical syndrome with high mortality. Massive hepatic necrosis in acute liver failure has unique pathophysiological characteristics including extremely rapid parenchymal cell death and removal. On the other hand, massive necrosis rapidly induces the activation of liver progenitor cells, the so-called “second pathway of liver regeneration.” The final clinical outcome of acute liver failure depends on whether liver progenitor cell-mediated regeneration can efficiently restore parenchymal mass and function within a short time. This review summarizes the current knowledge regarding massive hepatic necrosis and liver progenitor cell-mediated regeneration in patients with acute liver failure, the two sides of one coin. PMID:26136687
NASA Astrophysics Data System (ADS)
Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.
2018-03-01
The article considers correcting engineering servicing regularity on the basis of actual dependability data from cars in operation. The purpose of the research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity with various mathematical models is considered, and the exponential model is shown to be the most appropriate for that purpose. The results can be used as a stand-alone method of correcting engineering servicing regularity under given operational conditions, as well as for improving the technical-economical and economical-stochastic methods. Thus, a method of correcting the engineering servicing regularity of transport-technological machines during operation was developed; its use will reduce the number of failures.
Prentice, Ross L; Zhao, Shanshan
2018-01-01
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
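For intuition about the discrete-time setting, the hazard at assessment time t_k is lambda_k = d_k / n_k (failures among those still at risk), and survival is the product S(t) = prod_{k <= t} (1 - lambda_k). The following life-table sketch illustrates that relation; it is a basic estimator, not the robust marginal-effect estimator proposed in the manuscript:

```python
def discrete_hazard(times, events):
    """Life-table estimate on a discrete time grid.
    times: assessment time of failure or censoring for each subject;
    events: 1 if failure was observed at that time, 0 if censored.
    Returns per-time hazard lambda_k = d_k / n_k and the survival
    function S(t) = prod_{k<=t} (1 - lambda_k)."""
    hazard, surv = {}, {}
    s = 1.0
    for t in sorted(set(times)):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        lam = deaths / at_risk
        s *= 1.0 - lam
        hazard[t], surv[t] = lam, s
    return hazard, surv
```

For example, with failure/censoring times [1, 1, 2, 2, 3] and event indicators [1, 0, 1, 1, 0], the hazard at time 1 is 1/5 and survival drops to 0.8, then to 0.8 * (1/3) after the two failures at time 2.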
On rate-state and Coulomb failure models
Gomberg, J.; Beeler, N.; Blanpied, M.
2000-01-01
We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. 
In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a non-Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
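Under the pure Coulomb assumptions described above (fault locked until a fixed stress threshold, instantaneous failure), the clock advance from a stress step of size step on background loading at constant rate is simply step / rate, independent of when the step is applied, a property the rate-state model does not generally share. A toy illustration with hypothetical numbers:

```python
def coulomb_failure_time(tau0, tau_f, rate, step=0.0, t_step=0.0):
    """Time at which stress tau0 + rate*t (+ step after t_step) first
    reaches the Coulomb threshold tau_f."""
    unperturbed = (tau_f - tau0) / rate
    if t_step >= unperturbed:
        return unperturbed            # fault fails before the step arrives
    if tau0 + rate * t_step + step >= tau_f:
        return t_step                 # the step itself triggers failure
    return (tau_f - tau0 - step) / rate

# The clock advance equals step / rate regardless of when the step arrives:
t0 = coulomb_failure_time(0.0, 1.0, 0.1)                       # no step
t_early = coulomb_failure_time(0.0, 1.0, 0.1, step=0.2, t_step=2.0)
t_late = coulomb_failure_time(0.0, 1.0, 0.1, step=0.2, t_step=6.0)
```

Here t_early and t_late coincide, so the clock advance t0 - t_early is independent of t_step, exactly the Coulomb characteristic the abstract contrasts with rate-state behavior.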
NASA Technical Reports Server (NTRS)
Waas, A.; Babcock, C., Jr.
1986-01-01
A series of experiments was carried out to determine the mechanism of failure in compressively loaded laminated plates with a circular cutout. Real-time holographic interferometry and photomicrography are used to observe the progression of failure. These observations, together with post-experiment plate sectioning and deplying for interior damage observation, provide useful information for modelling the failure process. Failure is found to initiate as a localised instability in the 0° layers at the hole surface. With increasing load, extensive delamination cracking is observed, and failure progresses through growth of these delaminations driven by delamination buckling. Upon reaching a critical state, catastrophic failure of the plate is observed. The levels of applied load and the rate at which these events occur depend on the plate stacking sequence.
Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry
2004-06-01
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
NASA Astrophysics Data System (ADS)
Basavalingappa, Adarsh
Copper interconnects are typically polycrystalline and follow a lognormal grain size distribution. Polycrystalline copper interconnect microstructures with a lognormal grain size distribution were obtained with a Voronoi tessellation approach. The interconnect structures thus obtained were used to study grain growth mechanisms, grain boundary scattering, scattering-dependent resistance of interconnects, stress evolution, vacancy migration, reliability lifetimes, and the impact of orientation-dependent anisotropy on these mechanisms. In this work, the microstructures were used to study the impact of microstructure and the elastic anisotropy of copper on thermal- and electromigration-induced failure. A test structure using bulk copper moduli values was modeled for comparison with test structures having a textured microstructure and elastic anisotropy. When the modeled test structure was subjected to thermal stress by ramping the temperature down from 400 °C to 100 °C, significant variations in normal stress and pressure were observed at the grain boundaries. This variation in normal and hydrostatic stresses at the grain boundaries was found to depend on the orientation, dimensions, surroundings, and location of the grains. It may introduce new weak points within the metal line where normal stresses can be very high depending on the orientation of the grains, leading to delamination and accumulation sites for vacancies. Further, the hydrostatic stress gradients act as a driving force for vacancy migration. The normal stresses can exceed grain-orientation-dependent critical threshold values and induce delamination at the copper/cap material interface, thereby leading to void nucleation and growth. Modeled test structures were subjected to a series of copper depositions at 250 °C followed by copper etch at 25 °C to obtain initial stress conditions.
Then the modeled test structures were subjected to 100,000 hours (about 11.4 years) of simulated thermal stress at an elevated temperature of 150 °C. Vacancy migration due to concentration gradients, thermal gradients, and mechanical stress gradients was considered under the applied thermal stress. As a result, relatively high concentrations of vacancies were observed in the test structure due to a driving force caused by the pressure gradients resulting from the elastic anisotropy of copper. The grain growth mechanism was not considered in these simulations. Two-grain analyses demonstrated that the stress gradients developed are most severe when (100) grains are adjacent to (111) grains, making these junctions the weak points for potential reliability failures. Ilan Blech discovered that electromigration occurs above a critical product of the current density and metal length, commonly referred to as the Blech condition. Electromigration stress simulations in this work were carried out by subjecting test structures to scaled current densities to overcome the Blech condition of (jL)crit for the small dimensions of the test structure and the low-temperature stress condition used. Vacancy migration under the electromigration stress conditions was considered along with the vacancy-migration-induced stress evolution. A simple void growth model was used, which assumes voids start to form when vacancies reach a critical level. An increase of vacancies in a localized region increases the resistance of the metal line. Considering a 10% increase in resistance as the failure criterion, the distributions of failure times were obtained for the given electromigration stress conditions. Bimodal/multimodal failure distributions were obtained as a result. The sigma values were slightly lower than those commonly observed in experiments.
The anisotropy of the elastic moduli of copper leads to the development of significantly different stress values, which depend on the orientation of the grains. This results in some grains having higher normal stress than others. This grain-orientation-dependent normal stress can reach the critical stress necessary to induce delamination at the copper/cap interface. The time taken to reach the critical stress was treated as the time to fail, and distributions of failure times were obtained for structures with different grain orientations in the microstructure for different critical stress values. The sigma values of the failure distributions thus obtained for different constant critical stress values had a strong dependence on the critical stress. It is therefore critical to use the appropriate critical stress value for the delamination of the copper/cap interface. The critical stress necessary to overcome the local adhesion of the copper/cap material interface depends on the grain orientation of the copper. Simulations were carried out using grain-orientation-dependent critical normal stress values as failure criteria. The sigma values thus obtained with the selected critical stress values were comparable to sigma values commonly observed in experiments.
Dual permeability FEM models for distributed fiber optic sensors development
NASA Astrophysics Data System (ADS)
Aguilar-López, Juan Pablo; Bogaard, Thom
2017-04-01
Fiber optic cables are commonly known for being robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connection and real-time information sharing. Yet a fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past to detect temperature changes in the inner core and foundation, which allows estimation of water infiltration during high-water events. The DOMINO research project aims to develop a fiber-optic-based dike monitoring system that directly senses and measures pore pressure changes inside the dike structure. For this purpose, questions such as sensor location, number of sensors, measuring frequency, and required accuracy must be answered during sensor development. These questions may be initially answered with a finite element model that estimates the effect of pore pressure changes at different locations along the cross section while providing a time-dependent estimate of the stability factor. The sensor aims to monitor two main failure mechanisms at the same time: the piping erosion failure mechanism and the macro-stability failure mechanism. Both mechanisms are modeled and assessed in detail with a finite-element-based dual-permeability Darcy-Richards numerical solution. In that manner, different sensing configurations can be assessed under different loading scenarios (e.g., high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are then evaluated with an entropy-based performance measure.
The added value of this modelling approach for sensor development is that it allows the piping erosion and macro-stability failure mechanisms to be modeled simultaneously in a time-dependent manner. In that way, the estimated pore pressures can be related to the monitored ones and to both failure mechanisms. Furthermore, the approach is intended to be used at a later stage for real-time monitoring of failure.
Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading
NASA Astrophysics Data System (ADS)
Schaefer, Joseph Daniel
Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the fastest-growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength- and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been hampered by inadequate knowledge of material failure behavior, owing to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation because of the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress at strain rates from quasi-static to 1000 s-1.
It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate-dependence was applicable for a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loadings cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.
Toward lean satellites reliability improvement using HORYU-IV project as case study
NASA Astrophysics Data System (ADS)
Faure, Pauline; Tanaka, Atomu; Cho, Mengu
2017-04-01
Lean satellite programs are programs in which the satellite development philosophy is driven by fast delivery and low cost. Though this concept offers the possibility to develop and fly risky missions without jeopardizing a space program, most of these satellites suffer infant mortality and fail to achieve minimum mission success. The high infant mortality rate of lean satellites indicates that testing prior to launch is insufficient. In this study, the authors monitored failures occurring during the development of the lean satellite HORYU-IV to identify the evolution of the cumulative number of failures against cumulative testing time. Moreover, the sub-systems driving the failures in the different development phases were identified. The results showed that half to two-thirds of the failures are discovered during the early stage of testing. Moreover, when the mean time before failure was calculated, it appeared that, for any development phase considered, a new failure appears on average every 20 h of testing. Simulations were also performed, showing that with an initial testing time of 50 h, reliability 1 month after launch can be improved by nearly 6 times compared to an initial testing time of 20 h. Through this work, the authors aim to provide a qualitative reference to help lean satellite developers manage resources so as to follow a fast-delivery, low-cost philosophy while ensuring sufficient reliability to achieve minimum mission success.
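The "one failure every ~20 h of testing" figure is a mean time before failure: cumulative testing hours divided by cumulative failures discovered. A minimal sketch, with hypothetical phase logs rather than the HORYU-IV data:

```python
def mean_time_before_failure(cumulative_hours, cumulative_failures):
    """MTBF over a development phase: total testing time divided by the
    number of failures discovered in that time."""
    if cumulative_failures == 0:
        return float("inf")           # no failures observed yet
    return cumulative_hours / cumulative_failures

# hypothetical phase log: (cumulative test hours, cumulative failures)
phases = [(60, 3), (140, 7), (200, 10)]
mtbfs = [mean_time_before_failure(h, f) for h, f in phases]
```

In this hypothetical log each phase yields an MTBF of 20 h, mirroring the abstract's observation that the rate of failure discovery stays roughly constant across development phases.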
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WL/SL configurations: (i) one WL, one SL; (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system; (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system; and (iv) multiple WLs, multiple SLs and multiple sublinks in each SL, with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail, and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and also on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.
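For configuration (ii), failure of any SL before any WL, PLOAS reduces to P(min of the SL failure times < min of the WL failure times), which simple random sampling estimates directly. The sketch below uses placeholder sampling functions, not the paper's temperature-driven failure models:

```python
import random

def ploas(n_wl, n_sl, sample_wl, sample_sl, trials=100_000, seed=0):
    """Monte Carlo (simple random sampling) estimate of PLOAS for
    configuration (ii): the safety system fails if the earliest SL
    failure precedes the earliest WL failure.
    sample_wl / sample_sl draw one component failure time given an rng."""
    rng = random.Random(seed)         # fixed seed for reproducibility
    failures = 0
    for _ in range(trials):
        first_wl = min(sample_wl(rng) for _ in range(n_wl))
        first_sl = min(sample_sl(rng) for _ in range(n_sl))
        if first_sl < first_wl:
            failures += 1
    return failures / trials
```

With identically distributed WL and SL failure times and one link of each kind, symmetry gives PLOAS = 0.5; the paper's quadrature procedures (trapezoidal rule, Simpson's rule) evaluate the same multidimensional integrals deterministically.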
NASA Astrophysics Data System (ADS)
Su, Po-Cheng; Hsu, Chun-Chi; Du, Sin-I.; Wang, Tahui
2017-12-01
Read operation induced disturbance in SET-state in a tungsten oxide resistive switching memory is investigated. We observe that the reduction of oxygen vacancy density during read-disturb follows power-law dependence on cumulative read-disturb time. Our study shows that the SET-state read-disturb immunity progressively degrades by orders of magnitude as SET/RESET cycle number increases. To explore the cause of the read-disturb degradation, we perform a constant voltage stress to emulate high-field stress effects in SET/RESET cycling. We find that the read-disturb failure time degradation is attributed to high-field stress-generated oxide traps. Since the stress-generated traps may substitute for some of oxygen vacancies in forming conductive percolation paths in a switching dielectric, a stressed cell has a reduced oxygen vacancy density in SET-state, which in turn results in a shorter read-disturb failure time. We develop an analytical read-disturb degradation model including both cycling induced oxide trap creation and read-disturb induced oxygen vacancy reduction. Our model can well reproduce the measured read-disturb failure time degradation in a cycled cell without using fitting parameters.
Beeler, N.M.; Tullis, T.E.; Kronenberg, A.K.; Reinen, L.A.
2007-01-01
Earthquake occurrence probabilities that account for stress transfer and time-dependent failure depend on the product of the effective normal stress and a lab-derived dimensionless coefficient a. This coefficient describes the instantaneous dependence of fault strength on deformation rate, and determines the duration of precursory slip. Although an instantaneous rate dependence is observed for fracture, friction, crack growth, and low temperature plasticity in laboratory experiments, the physical origin of this effect during earthquake faulting is obscure. We examine this rate dependence in laboratory experiments on different rock types using a normalization scheme modified from one proposed by Tullis and Weeks [1987]. We compare the instantaneous rate dependence in rock friction with rate dependence measurements from higher temperature dislocation glide experiments. The same normalization scheme is used to compare rate dependence in friction to rock fracture and to low-temperature crack growth tests. For particular weak phyllosilicate minerals, the instantaneous friction rate dependence is consistent with dislocation glide. In intact rock failure tests, for each rock type considered, the instantaneous rate dependence is the same size as for friction, suggesting a common physical origin. During subcritical crack growth in strong quartzofeldspathic and carbonate rock where glide is not possible, the instantaneous rate dependence measured during failure or creep tests at high stress has long been thought to be due to crack growth; however, direct comparison between crack growth and friction tests shows poor agreement. The crack growth rate dependence appears to be higher than the rate dependence of friction and fracture by a factor of two to three for all rock types considered. Copyright 2007 by the American Geophysical Union.
Stress/strain changes and triggered seismicity at The Geysers, California
Gomberg, J.; Davis, S.
1996-01-01
The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency or equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedence of some constant threshold but also the requirements for generating instabilities.
Kuo, Lindsay E; Kaufman, Elinore; Hoffman, Rebecca L; Pascual, Jose L; Martin, Niels D; Kelz, Rachel R; Holena, Daniel N
2017-03-01
Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center's ability to successfully "rescue" patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. All adjudications from a mortality review panel at an academic level I trauma center from 2005-2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47-3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30-66.71) judgment. Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. Copyright © 2016 Elsevier Inc. All rights reserved.
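As a conditional probability, the failure-to-rescue rate is deaths among patients who suffered a complication divided by the number of patients with a complication. A minimal computation with illustrative counts, not the study's cohort:

```python
def failure_to_rescue_rate(records):
    """records: iterable of (had_complication, died) boolean pairs.
    FTR rate = deaths after a complication / patients with a complication."""
    deaths_after_complication = [died for had, died in records if had]
    if not deaths_after_complication:
        return 0.0                    # no complications observed
    return sum(deaths_after_complication) / len(deaths_after_complication)

# illustrative cohort: 8 patients, 4 with a complication, 1 of whom died
cohort = [(True, True), (True, False), (True, False), (True, False),
          (False, False), (False, False), (False, False), (False, False)]
rate = failure_to_rescue_rate(cohort)
```

Note that deaths without a preceding registry-defined complication do not enter the numerator or denominator, which is one reason the metric's validity hinges on preventability, as the study argues.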

Zhu, Yujie; Hanafy, Mohamed A; Killingsworth, Cheryl R; Walcott, Gregory P; Young, Martin E; Pogwizd, Steven M
2014-01-01
Patients with chronic heart failure (CHF) exhibit a morning surge in ventricular arrhythmias, but the underlying cause remains unknown. The aim of this study was to determine if heart rate dynamics, autonomic input (assessed by heart rate variability (HRV)) and nonlinear dynamics, as well as their abnormal time-of-day-dependent oscillations, in a newly developed arrhythmogenic canine heart failure model are associated with a morning surge in ventricular arrhythmias. CHF was induced in dogs by aortic insufficiency & aortic constriction, and assessed by echocardiography. Holter monitoring was performed to study time-of-day-dependent variation in ventricular arrhythmias (PVCs, VT), traditional HRV measures, and nonlinear dynamics (including detrended fluctuation analysis α1 and α2 (DFAα1 & DFAα2), correlation dimension (CD), and Shannon entropy (SE)) at baseline, as well as 240 days (240 d) and 720 days (720 d) following CHF induction. LV fractional shortening was decreased at both 240 d and 720 d. Both PVCs and VT increased with CHF duration and showed a morning rise (2.5-fold & 1.8-fold increase at 6 AM-noon vs midnight-6 AM) during CHF. The morning rise in HR at baseline was significantly attenuated by 52% with development of CHF (at both 240 d & 720 d). The morning rise in the ratio of low frequency to high frequency (LF/HF) HRV at baseline was markedly attenuated with CHF. DFAα1, DFAα2, CD and SE all decreased with CHF by 31, 17, 34 and 7%, respectively. Time-of-day-dependent variations in LF/HF, CD, DFAα1 and SE, observed at baseline, were lost during CHF. Thus, in this new arrhythmogenic canine CHF model, attenuated morning HR rise, blunted autonomic oscillation, decreased cardiac chaos and complexity of heart rate, as well as aberrant time-of-day-dependent variations in many of these parameters, were associated with a morning surge of ventricular arrhythmias.
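Of the nonlinear measures listed, Shannon entropy is the simplest to sketch. The fragment below is a minimal illustration under an assumed binning scheme (fixed histogram edges over a plausible RR-interval range), not the study's exact computation.

```python
import numpy as np

def shannon_entropy(rr_intervals_ms, edges):
    """Shannon entropy (bits) of a histogram of RR intervals."""
    counts, _ = np.histogram(rr_intervals_ms, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

edges = np.linspace(300.0, 900.0, 25)   # fixed 25-ms-wide bins (assumed)
rng = np.random.default_rng(0)
regular = 600.0 + 5.0 * rng.standard_normal(2000)    # low-variability rhythm (ms)
variable = 600.0 + 60.0 * rng.standard_normal(2000)  # high-variability rhythm (ms)

# A more regular (less complex) heart rate concentrates in fewer bins and
# yields lower entropy -- the direction of the decrease reported with CHF.
```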
Dependent Lifelengths Induced by Dynamic Environments
1988-02-14
item has not failed at any time τ, our assessment of the failure rate will increase, since we expect that the dominant failure mechanism is governed ... of a dynamic environment on the system over a finite range [0, T') can be captured through a polynomial environmental factor function. We ... Vol. 7, pp. 295-306. Singpurwalla, N.D. (1988). Foundational issues in reliability and risk analysis. SIAM Review. To appear.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
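As a hedged companion to the estimation problem described (simplified to a single failure mode, no stress dependence, and complete data), the sketch below computes method-of-moments estimates for the smallest extreme-value (Gumbel minimum) distribution; such closed-form values are commonly used as starting points for the maximum likelihood iteration. Names are illustrative.

```python
import math
import numpy as np

EULER_GAMMA = 0.5772156649015329

def sev_moment_estimates(x):
    """Method-of-moments (location mu, scale beta) for the smallest
    extreme-value distribution: mean = mu - gamma*beta, var = pi^2 beta^2 / 6."""
    x = np.asarray(x, dtype=float)
    beta = x.std(ddof=1) * math.sqrt(6.0) / math.pi
    mu = x.mean() + EULER_GAMMA * beta
    return mu, beta

# If G ~ Gumbel-max(0, beta), then mu - G ~ Gumbel-min(mu, beta)
rng = np.random.default_rng(1)
mu_true, beta_true = 10.0, 2.0
sample = mu_true - rng.gumbel(loc=0.0, scale=beta_true, size=50_000)
mu_hat, beta_hat = sev_moment_estimates(sample)
```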
Lessons Learned from Dependency Usage in HERA: Implications for THERP-Related HRA Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Ronald L. Boring; Harold S. Blackman
Dependency occurs when the probability of success or failure on one action changes the probability of success or failure on a subsequent action. Dependency may serve as a modifier on the human error probabilities (HEPs) for successive actions in human reliability analysis (HRA) models. Discretion should be employed when determining whether or not a dependency calculation is warranted: dependency should not be assigned without strongly grounded reasons. Human reliability analysts may sometimes assign dependency in cases where it is unwarranted. This inappropriate assignment is attributed to a lack of clear guidance to encompass the range of scenarios human reliability analysts are addressing. Inappropriate assignment of dependency produces inappropriately elevated HEP values. Lessons learned about dependency usage in the Human Event Repository and Analysis (HERA) system may provide clarification and guidance for analysts using first-generation HRA methods. This paper presents the HERA approach to dependency assessment and discusses considerations for dependency usage in HRA, including the cognitive basis for dependency, direction for determining when dependency should be assessed, considerations for determining the dependency level, temporal issues to consider when assessing dependency (e.g., considering task sequence versus overall event sequence, and dependency over long periods of time), and diagnosis and action influences on dependency.
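For context, in THERP-related methods the five dependency levels map nominal HEPs to conditional HEPs through simple linear formulas (as given in the THERP handbook, NUREG/CR-1278). The sketch below is an illustration of those formulas, not HERA's implementation.

```python
# Conditional HEP on a subsequent action under the five THERP dependency
# levels. Sketch only; consult the method documentation before use in an HRA.
def conditional_hep(hep, level):
    formulas = {
        "zero":     lambda p: p,
        "low":      lambda p: (1 + 19 * p) / 20,
        "moderate": lambda p: (1 + 6 * p) / 7,
        "high":     lambda p: (1 + p) / 2,
        "complete": lambda p: 1.0,
    }
    return formulas[level](hep)

# Why unwarranted assignment is costly: with a nominal HEP of 1e-3, even
# "low" dependence inflates the conditional HEP roughly 50-fold (~0.051).
base = 1e-3
inflated = {lvl: conditional_hep(base, lvl)
            for lvl in ("zero", "low", "moderate", "high", "complete")}
```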
A quantile regression model for failure-time data with time-dependent covariates
Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov
2017-01-01
Since survival data occur over time, important covariates that we wish to consider often also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing the covariate effects to vary with quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297-331). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534
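The quantile-regression machinery referenced here rests on the check ("pinball") loss. The minimal, uncensored sketch below shows that the empirical quantile minimizes it over constants; this is background illustration only, not the paper's censoring-adjusted, instrumental-variable estimator.

```python
import numpy as np

def pinball_loss(y, y_pred, tau):
    """Check loss for quantile tau: residuals are weighted asymmetrically."""
    r = np.asarray(y) - np.asarray(y_pred)
    return float(np.mean(np.where(r >= 0, tau * r, (tau - 1) * r)))

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# The empirical tau-quantile minimizes the pinball loss over constant fits;
# for tau = 0.5 that constant is the median.
grid = np.linspace(0.0, 6.0, 601)
best = grid[np.argmin([pinball_loss(y, c, 0.5) for c in grid])]
```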
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
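The linear-combination result stated in this abstract can be written down directly. The numbers below are illustrative, not from the paper.

```python
# Expected loss given failure = sum over mutually exclusive failure modes of
# (conditional probability the mode initiated the failure) x (expected loss
# from that mode).
def expected_loss_given_failure(mode_probs, mode_losses):
    assert abs(sum(mode_probs) - 1.0) < 1e-9, "modes must be exhaustive"
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

# Illustrative numbers: three failure modes with very different costs
probs = [0.6, 0.3, 0.1]
losses = [1_000.0, 5_000.0, 50_000.0]
egl = expected_loss_given_failure(probs, losses)   # 600 + 1500 + 5000 = 7100.0

# Expected losses over an interval = expected number of failures x E[loss|failure]
expected_failures = 2.4
expected_interval_losses = expected_failures * egl
```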
BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY
2010-01-01
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627
NASA Technical Reports Server (NTRS)
Brinson, R. F.
1985-01-01
A method for lifetime or durability predictions for laminated fiber-reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature superposition principle for polymers. The method is better described as an analytical adaptation of time-stress superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time-dependent failure models are discussed and are related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between theory and experiment is shown using data from creep tests of about two months' duration.
Post - SM4 Flux Calibration of the STIS Echelle Modes
NASA Astrophysics Data System (ADS)
Bostroem, Azalee; Aloisi, A.; Bohlin, R. C.; Proffitt, C. R.; Osten, R. A.; Lennon, D.
2010-07-01
Like all STIS spectroscopic modes, STIS echelle modes show a wavelength dependent decline in detector sensitivity with time. The echelle sensitivity is further affected by a time-dependent shift in the blaze function. To better correct the effects of the echelle sensitivity loss and the blaze function changes, we derive new baselines for echelle sensitivities from post-HST Servicing Mission 4 observations of the standard star G191-B2B. We present how these baseline sensitivities compare to pre-failure trends.
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
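A toy version of the simulation approach argued for here, sampling non-exponential component lifetimes instead of assuming a constant failure rate, can be sketched as follows. The function name and parameters are illustrative, not the framework's API.

```python
import numpy as np

def simulate_series_system_mttf(n_components, shape, scale_hours,
                                n_runs=20_000, seed=0):
    """Monte Carlo MTTF of a series system (all components required).

    Weibull lifetimes; shape < 1 gives the decreasing ("infant mortality")
    hazard that a constant-rate exponential model cannot represent.
    """
    rng = np.random.default_rng(seed)
    lifetimes = scale_hours * rng.weibull(shape, size=(n_runs, n_components))
    return float(lifetimes.min(axis=1).mean())   # series: first failure kills it

mttf_exp = simulate_series_system_mttf(10, shape=1.0, scale_hours=100_000)
mttf_weib = simulate_series_system_mttf(10, shape=0.7, scale_hours=100_000)
# With the same scale, the infant-mortality model projects a shorter MTTF --
# the kind of discrepancy the abstract says closed-form models can miss.
```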
Long-term reliability study and failure analysis of quantum cascade lasers
NASA Astrophysics Data System (ADS)
Xie, Feng; Nguyen, Hong-Ky; Leblanc, Herve; Hughes, Larry; Wang, Jie; Miller, Dean J.; Lascola, Kevin
2017-02-01
Here we present lifetime test results of 4 groups of quantum cascade lasers (QCLs) under various aging conditions, including an accelerated life test. The total accumulated life time exceeds 1.5 million device·hours, which is the largest QCL reliability study ever reported. The longest single-device aging time was 46.5 thousand hours (without failure) in the room-temperature test. Four failures were found in a group of 19 devices subjected to the accelerated life test with a heat-sink temperature of 60 °C and a continuous-wave current of 1 A. Visual inspection of the laser facets of failed devices revealed a previously unreported phenomenon: a dark belt of an unknown substance appearing on the facets. Although initially assumed to be contamination from the environment, failure analysis revealed that the dark substance is a thermally induced oxide of InP in the buried-heterostructure semi-insulating layer. When the oxidized material starts to cover the core and blocks the light emission, it causes the failure of QCLs in the accelerated test. An activation energy of 1.2 eV is derived from the dependence of the failure rate on laser core temperature. With this activation energy, the mean time to failure of the quantum cascade lasers operating at a current density of 5 kA/cm2 and heat-sink temperature of 25 °C is expected to be 809 thousand hours.
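The extrapolation step uses the standard Arrhenius acceleration model. The sketch below applies it with the abstract's activation energy; note the paper derives Ea from laser core temperature, whereas the temperatures here are the quoted heat-sink values, so the resulting factor is illustrative only.

```python
import math

BOLTZMANN_EV = 8.617333262e-5   # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], T in kelvin."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(1.2, 25.0, 60.0)
# MTTF at use temperature = MTTF measured under stress x af; with Ea = 1.2 eV
# a 35 degC temperature reduction stretches lifetime by two orders of magnitude.
```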
Product Reliability Trends, Derating Considerations and Failure Mechanisms with Scaled CMOS
NASA Technical Reports Server (NTRS)
White, Mark; Vu, Duc; Nguyen, Duc; Ruiz, Ron; Chen, Yuan; Bernstein, Joseph B.
2006-01-01
As microelectronics is scaled into the deep sub-micron regime, space and aerospace users of advanced technology CMOS are reassessing how scaling effects impact long-term product reliability. The effects of electromigration (EM), time-dependent-dielectric-breakdown (TDDB) and hot carrier degradation (HCI and NBTI) wearout mechanisms on scaled technologies and product reliability are investigated, accelerated stress testing across several technology nodes is performed, and FA is conducted to confirm the failure mechanism(s).
Fournier, Marie-Cécile; Foucher, Yohann; Blanche, Paul; Buron, Fanny; Giral, Magali; Dantan, Etienne
2016-05-01
In renal transplantation, serum creatinine (SCr) is the main biomarker routinely measured to assess patient health, with chronic increases being strongly associated with long-term graft failure risk (death with a functioning graft or return to dialysis). Joint modeling may be useful to identify the specific role of risk factors in the chronic evolution of kidney transplant recipients: some can be related to the SCr evolution, finally leading to graft failure, whereas others can be associated with graft failure without any modification of SCr. Sample data for 2749 patients transplanted between 2000 and 2013 with a functioning kidney at 1-year post-transplantation were obtained from the DIVAT cohort. A shared random effect joint model for longitudinal SCr values and time to graft failure was fitted. We show that graft failure risk depended on both the current value and slope of the SCr. Patients who received a deceased-donor graft seemed to have a steeper SCr increase, as did patients with a history of diabetes, while no significant association of these two features with graft failure risk was found. Patients with a second graft were at higher risk of graft failure, independent of changes in SCr values. Anti-HLA immunization was associated with both processes simultaneously. Joint models for repeated and time-to-event data bring new opportunities to improve the epidemiological knowledge of chronic diseases. For instance, in renal transplantation, several features should receive additional attention, as we demonstrated their correlation with graft failure risk was independent of the SCr evolution.
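The "current value and slope" dependence can be sketched as a hazard that multiplies a baseline by the exponential of both the subject's modeled biomarker level and its derivative. Coefficient names and values below are illustrative, not the fitted DIVAT model.

```python
import math

def hazard(t, baseline_hazard, m, m_slope, gamma_value, gamma_slope):
    """h(t) = h0(t) * exp(gamma_value * m(t) + gamma_slope * m'(t))."""
    return baseline_hazard(t) * math.exp(
        gamma_value * m(t) + gamma_slope * m_slope(t))

# Subject-specific linear SCr trajectory from the mixed submodel: m(t) = b0 + b1*t
b0, b1 = 1.2, 0.15           # illustrative intercept (mg/dL) and annual slope
m = lambda t: b0 + b1 * t
m_slope = lambda t: b1

# A rising trajectory elevates the hazard above the baseline rate
h = hazard(5.0, lambda t: 0.01, m, m_slope, gamma_value=0.8, gamma_slope=2.0)
```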
Guest Editor's Introduction: Special section on dependable distributed systems
NASA Astrophysics Data System (ADS)
Fetzer, Christof
1999-09-01
We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money, not only with respect to the business lost during the outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall in the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware is becoming more reliable, hardware-related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer, since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task.
There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. 
Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. 
There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section, Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. References [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34 (2) 56-78 [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
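The liveness/safety tension in the group membership detection problem can be illustrated with a timeout-based crash-suspicion rule: a crashed process stops sending heartbeats and is eventually suspected, so (L) holds with certainty, while (S) holds only with a probability tuned by the timeout. The sketch and its names are illustrative, not the protocol of Raynal and Tronel.

```python
def suspects(last_heartbeat, now, timeout):
    """Return the processes whose last heartbeat is older than `timeout`."""
    return {p for p, t in last_heartbeat.items() if now - t > timeout}

heartbeats = {"p1": 10.0, "p2": 4.0, "p3": 9.5}   # seconds of last contact

# A short timeout makes a slow-but-alive process (p2) wrongly suspected,
# violating (S); there is no message-delay bound that rules this out in an
# asynchronous system.
assert suspects(heartbeats, now=10.0, timeout=3.0) == {"p2"}

# A longer timeout makes wrong suspicion less likely -- (S) violated only with
# small probability -- at the cost of slower detection of real crashes.
assert suspects(heartbeats, now=10.0, timeout=7.0) == set()
```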
Experimental and computed results investigating time-dependent failure in a borosilicate glass
NASA Astrophysics Data System (ADS)
Chocron, Sidney; Barnette, Darrel; Holmquist, Timothy; Anderson, Charles E.; Bigger, Rory; Moore, Thomas
2017-01-01
Symmetric plate-impact tests of borosilicate glass were performed from low (116 m/s) to higher (351 m/s) velocities. The tests were recorded with an ultra-high-speed camera to observe the shock and failure propagation. The velocity of the back of the target was also recorded with a PDV (Photon Doppler Velocimeter). The images show failure nucleation sites that trail the shock wave. Interestingly, even though the failure wave is clearly seen, the PDV never detected the expected recompression wave. The reason might be that at these low impact velocities the recompression wave is too small to be seen and is lost in the noise. This work also presents a new way to interpret the signals from the PDV. By letting part of the signal travel through the target and reflect off the impact side, it is possible to see the PDV signal decrease in intensity with time, probably due to the damage growth behind the shock wave.
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2018-01-01
Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.
Both high and low HbA1c predict incident heart failure in type 2 diabetes mellitus.
Parry, Helen M; Deshmukh, Harshal; Levin, Daniel; Van Zuydam, Natalie; Elder, Douglas H J; Morris, Andrew D; Struthers, Allan D; Palmer, Colin N A; Doney, Alex S F; Lang, Chim C
2015-03-01
Type 2 diabetes mellitus is an independent risk factor for heart failure development, but the relationship between incident heart failure and antecedent glycemia has not been evaluated. The Genetics of Diabetes Audit and Research in Tayside Study holds data for 8683 individuals with type 2 diabetes mellitus. Dispensed prescribing, hospital admission data, and echocardiography reports were linked to extract incident heart failure cases from December 1998 to August 2011. All available HbA1c measures until heart failure development or end of study were used to model HbA1c time-dependently. Individuals were observed from study enrolment until heart failure development or end of study. Proportional hazards regression was used to calculate the risk of heart failure development associated with specific HbA1c ranges, accounting for comorbidities associated with heart failure, including blood pressure, body mass index, and coronary artery disease. Seven hundred and one individuals with type 2 diabetes mellitus (8%) developed heart failure during follow-up (mean 5.5±2.8 years). Time-updated analysis with longitudinal HbA1c showed that both HbA1c <6% (hazard ratio = 1.60; 95% confidence interval, 1.38-1.86; P<0.0001) and HbA1c >10% (hazard ratio = 1.80; 95% confidence interval, 1.60-2.16; P<0.0001) were independently associated with the risk of heart failure. Both high and low HbA1c predicted heart failure development in our cohort, forming a U-shaped relationship. © 2015 American Heart Association, Inc.
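The reported U-shaped association can be sketched as a simple hazard-ratio lookup over the HbA1c bands the abstract gives (band cut-points 6% and 10% and hazard ratios 1.60 and 1.80 are from the abstract; treating the middle band as a flat reference of 1.0 is an illustrative simplification of the fitted time-updated model):

```python
def hba1c_hazard_ratio(hba1c_percent: float) -> float:
    # U-shaped HbA1c / heart-failure risk: hazard ratios from the abstract;
    # using the middle band as the reference (HR = 1.0) is an assumption.
    if hba1c_percent < 6.0:
        return 1.60   # reported HR for HbA1c < 6%
    if hba1c_percent > 10.0:
        return 1.80   # reported HR for HbA1c > 10%
    return 1.0        # reference band, 6-10%
```
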
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Lee, Kang N.; Miller, Robert A.
2002-01-01
Plasma-sprayed ZrO2-8wt%Y2O3 and mullite+BSAS/Si multilayer thermal and environmental barrier coating (TBC-EBC) systems on SiC/SiC ceramic matrix composite (CMC) substrates were thermally cyclic tested under high thermal gradients using a laser high-heat-flux rig in conjunction with furnace exposure in water-vapor environments. Coating sintering and interface damage were assessed by monitoring the real-time thermal conductivity changes during the laser heat-flux tests and by examining the microstructural changes after exposure. Sintering kinetics of the coating systems were also independently characterized using a dilatometer. It was found that the coating failure involved both the time-temperature dependent sintering and the cycle frequency dependent cyclic fatigue processes. The water vapor environments not only facilitated the initial coating conductivity increases due to enhanced sintering and interface reaction, but also promoted later conductivity reductions due to the accelerated coating cracking and delamination. The failure mechanisms of the coating systems are also discussed based on the cyclic test results and are correlated to the sintering and thermal stress behavior under the thermal gradient test conditions.
Maerz, Adam H.; Gould, Jeffrey R.; Enoka, Roger M.
2011-01-01
Presynaptic modulation of Ia afferents converging onto the motor neuron pool of the extensor carpi radialis (ECR) was compared during contractions (20% of maximal force) sustained to failure as subjects controlled either the angular position of the wrist while supporting an inertial load (position task) or exerted an equivalent force against a rigid restraint (force task). Test Hoffmann (H) reflexes were evoked in the ECR by stimulating the radial nerve above the elbow. Conditioned H reflexes were obtained by stimulating either the median nerve above the elbow or at the wrist (palmar branch) to assess presynaptic inhibition of homonymous (D1 inhibition) and heteronymous Ia afferents (heteronymous Ia facilitation), respectively. The position task was briefer than the force task (P = 0.001), although the maximal voluntary force and electromyograph for ECR declined similarly at failure for both tasks. Changes in the amplitude of the conditioned H reflex were positively correlated between the two conditioning methods (P = 0.02) and differed between the two tasks (P < 0.05). The amplitude of the conditioned H reflex during the position task first increased (129 ± 20.5% of the initial value, P < 0.001) before returning to its initial value (P = 0.22), whereas it increased progressively during the force task to reach 122 ± 17.4% of the initial value at failure (P < 0.001). Moreover, changes in conditioned H reflexes were associated with the time to task failure and force fluctuations. The results suggest a task- and time-dependent modulation of presynaptic inhibition of Ia afferents during fatiguing contractions. PMID:21543747
NASA Astrophysics Data System (ADS)
Meredith, Philip
2016-04-01
Earthquake ruptures and volcanic eruptions are the most dramatic manifestations of the dynamic failure of a critically stressed crust. However, these are actually very rare events in both space and time; and most of the crust spends most of its time in a highly stressed but subcritical state. Under upper crustal conditions most rocks accommodate applied stresses in a brittle manner through cracking, fracturing and faulting. Cracks can grow at all scales from the grain scale to the crustal scale, and under different stress regimes. Under tensile stresses, single, long cracks tend to grow at the expense of shorter ones; while under all-round compression, multiple microcracks tend to coalesce to form macroscopic fractures or faults. Deformation in the crust also occurs over a wide range of strain rates, from the very slow rates associated with tectonic loading up to the very fast rates occurring during earthquake rupture. It is now well-established that reactions between chemically-active pore fluids and the rock matrix can lead to time-dependent, subcritical crack propagation and failure in rocks. In turn, this can allow rocks to deform and fail over extended periods of time at stresses well below their short-term strength, and even at constant stress; a process known as brittle creep. Such cracking at constant stress eventually leads to accelerated deformation and critical, dynamic failure. However, in the period between sequential dynamic failure events, fractures can become subject to chemically-enhanced time-dependent strength recovery processes such as healing or the growth of mineral veins. We show that such strengthening can be much faster than previously suggested and can occur over geologically very short time-spans. These observations of ultra-slow cracking and ultra-fast healing have profound implications for the evolution and dynamics of the Earth's crust.
To obtain a complete understanding of crustal dynamics we require a detailed knowledge of all these time-dependent mechanisms. Such knowledge should be based on micromechanics, but also provide an adequate description at the macroscopic or crustal scale. One way of moving towards this is to establish a relationship between the internal, microstructural state of the rock and the macroscopically observable external quantities. Here, we present a number of examples of attempts to reconcile these ideas through external measurements of stress and strain evolution during deformation with simultaneous measurements of the evolution of key internal variables such as elastic wave speeds, acoustic emission output, porosity and permeability. Overall, the combined data are able to explain both the complexity of stress-strain relations during constant strain rate loading and the shape of creep curves during constant stress loading, thus providing a unifying framework to describe the time-dependent mechanical behaviour of crustal rocks.
Time-Dependent Damage Investigation of Rock Mass in an In Situ Experimental Tunnel
Jiang, Quan; Cui, Jie; Chen, Jing
2012-01-01
In underground tunnels and caverns, time-dependent deformation or failure of the rock mass, such as extending cracks and gradual rock falls, is a costly irritant and a major safety concern when the time-dependent damage of the surrounding rock is serious. To understand the damage evolution of rock mass in underground engineering, an in situ experimental test was carried out in a large underground tunnel measuring 28.5 m in width, 21 m in height and 352 m in length. The time-dependent damage of the rock mass was tracked by successive ultrasonic wave tests after excavation. The test results showed that the time-dependent damage of the rock mass could last a long time, nearly 30 days. Regression analysis of damage factors defined from wave velocity yielded a time-dependent damage evolution equation for the rock mass, which followed a logarithmic form. A damage viscoelastic-plastic model was developed to describe the time-dependent deterioration of the rock mass observed in the field test, namely the convergence of time-dependent damage, the deterioration of elastic moduli and the logarithmic form of the damage factor. Furthermore, remedial measures for the damaged surrounding rock are discussed based on the measured results and the concept of damage compensation, which provides new clues for underground engineering design.
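As a rough sketch, a wave-velocity damage factor and a logarithmic evolution law of the kind described above might look like the following (the abstract does not give the exact definitions or fitted coefficients, so the forms and constants here are assumptions):

```python
import math

def damage_factor(v_damaged: float, v_intact: float) -> float:
    # A common ultrasonic damage definition, D = 1 - (v_d / v_0)^2,
    # inferring damage from the drop in P-wave velocity; the paper's
    # exact definition may differ.
    return 1.0 - (v_damaged / v_intact) ** 2

def damage_evolution(t_days: float, a: float = 0.05, d0: float = 0.02) -> float:
    # Hypothetical logarithmic evolution law D(t) = d0 + a * ln(1 + t),
    # illustrating the reported "logarithmic form"; a and d0 are placeholders.
    return d0 + a * math.log(1.0 + t_days)
```

With these placeholder constants, damage grows quickly after excavation and flattens out over a few weeks, consistent with the reported stabilization after roughly 30 days.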
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in Stein et al. (1997) and Hardebeck (2004). We have recast these in a framework based on a simple, generalized rate-change formulation and applied it to the two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) a single fault, for which the notion of failure rate corresponds to successive failures of different members of a population of fault patches. The latter application requires specification of a probability distribution (probability density function, PDF) that describes the population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus, conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
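A minimal sketch of the conditional (time-dependent) failure probability for a single fault, using a lognormal recurrence-time PDF, one standard choice for the distribution discussed above (the parameterisation via mean recurrence time and coefficient of variation is illustrative, not taken from the paper):

```python
import math

def lognormal_cdf(t: float, mu: float, sigma: float) -> float:
    # CDF of a lognormal distribution with log-mean mu and log-sd sigma.
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_failure_prob(t_elapsed: float, dt: float,
                             mean_recurrence: float, cov: float) -> float:
    # P(failure in (t, t+dt] | no failure by t) = (F(t+dt) - F(t)) / (1 - F(t)),
    # with F the lognormal recurrence-time CDF; mu and sigma are derived from
    # the mean recurrence time and its coefficient of variation.
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean_recurrence) - 0.5 * sigma ** 2
    F = lambda x: lognormal_cdf(x, mu, sigma)
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))
```

A static stress step can then be folded in, for example by shifting the effective elapsed time, which is one way approaches of the kind reviewed here incorporate stress perturbations.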
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life-table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
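The core of such a program can be sketched in a few lines: Newton-Raphson maximum likelihood for a log-linear hazard with right censoring (a minimal sketch of the approach, not the program's actual FORTRAN source):

```python
import numpy as np

def fit_loglinear_hazard(X, t, d, iters=25):
    # Newton-Raphson MLE for a log-linear hazard lambda_i = exp(X_i @ beta),
    # with exposure times t and event indicators d (1 = failure, 0 = censored).
    # Log-likelihood: sum(d_i * (X_i @ beta) - t_i * exp(X_i @ beta)).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)
        grad = X.T @ (d - t * lam)               # score vector
        hess = -(X * (t * lam)[:, None]).T @ X   # Hessian of the log-likelihood
        beta -= np.linalg.solve(hess, grad)      # Newton step
    return beta
```

With a single intercept column, the fit reproduces the closed-form estimate log(total events / total exposure), a useful sanity check.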
Time-dependent brittle deformation (creep) at Mt. Etna volcano
NASA Astrophysics Data System (ADS)
Heap, M. J.; Baud, P.; Meredith, P. G.; Vinciguerra, S.; Bell, A. F.; Main, I. G.
2009-04-01
Mt. Etna is the largest and most active volcano in Europe. Time-dependent weakening mechanisms, leading to slow fracturing, have been shown to act during the build-up to flank eruptions at Mt. Etna volcano. Due to the high permeability of its volcanic rocks, the volcanic edifice hosts one of the biggest hydrogeologic reservoirs of Sicily (Ogniben, 1966). The presence of a fluid phase in cracks within rock has been shown to dramatically affect both mechanical and chemical interactions. Chemically, it promotes time-dependent brittle deformation through mechanisms such as stress corrosion cracking, which allows rocks to deform at stresses far below their short-term failure strength. Such crack growth is highly non-linear and accelerates towards dynamic failure over extended periods of time, even under constant applied stress; a phenomenon known as ‘brittle creep'. Here we report results from a study of time-dependent brittle creep in water-saturated samples of Etna basalt (EB) under triaxial stress conditions (confining pressure of 50 MPa and pore fluid pressure of 20 MPa). Samples of EB were loaded at a constant strain rate of 10^-5 s^-1 to a pre-determined percentage of the short-term strength and left to deform under constant stress until failure. Crack damage evolution was monitored throughout each experiment by measuring the independent damage proxies of axial strain, pore volume change and output of acoustic emission (AE) energy, at creep strain rates ranging over four orders of magnitude. Our data not only demonstrate that basalt creeps in the brittle regime but also that the applied differential stress exerts a crucial influence on both time-to-failure and creep strain rate in EB. Furthermore, stress corrosion is considered to be responsible for the acceleratory cracking and seismicity prior to volcanic eruptions and is invoked as an important mechanism in forecasting models.
Stress-stepping creep experiments were then performed to allow the influence of the effective confining stress to be studied in detail. Experiments were performed under effective stress conditions of 10, 30 and 50 MPa (whilst maintaining a constant pore fluid pressure of 20 MPa). In addition to the purely mechanical influence of water, governed by the effective stress, which shifts the creep strain rate curves to lower strain rates at higher effective stresses, our results also demonstrate that the chemically-driven process of stress corrosion cracking appears to be inhibited at higher effective stress. This results in an increase in the gradient of the creep strain rate curves with increasing effective stress. We suggest that the most likely cause of this change is a decrease in water mobility due to a reduction in crack aperture and an increase in water viscosity at higher pressure. Finally, we show that a theoretical model based on mean-field damage mechanics creep laws is able to reproduce the experimental strain-time relations. Our results indicate that local changes in the stress field and fluid circulation can have a profound impact on the time-to-failure properties of the basaltic volcanic pile.
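The strong stress dependence described above (creep strain rates spanning four orders of magnitude) is often summarised with a power-law creep law; a hedged sketch follows (the exponent and reference values are placeholders for illustration, not fitted Etna basalt parameters):

```python
def creep_strain_rate(sigma_d: float, sigma_ref: float,
                      rate_ref: float, n: float = 30.0) -> float:
    # Power-law brittle creep sketch: rate = rate_ref * (sigma_d / sigma_ref)^n.
    # A large exponent n captures the observation that small changes in
    # differential stress shift the creep rate by orders of magnitude;
    # n = 30 here is purely illustrative.
    return rate_ref * (sigma_d / sigma_ref) ** n
```
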
Time-dependent brittle deformation at Mt. Etna volcano
NASA Astrophysics Data System (ADS)
Baud, Patrick; Heap, Michael; Meredith, Philip; Vinciguerra, Sergio; Bell, Andrew; Main, Ian
2010-05-01
Time-dependent weakening mechanisms, leading to slow fracturing, are likely to act during the build up to flank eruptions at Mt. Etna volcano and are potentially a primary control on pre-eruptive patterns of seismicity and deformation. Due to the high permeability of its volcanic rocks, the volcanic edifice hosts a large water reservoir (Ogniben, 1966). The presence of a fluid phase in cracks within rock has been shown to dramatically affect both mechanical and chemical interactions. Chemically, it promotes time-dependent brittle deformation through mechanisms such as stress corrosion cracking, which allows rocks to deform at stresses far below their short-term failure strength. Such crack growth is highly non-linear and accelerates towards dynamic failure over extended periods of time, even under constant applied stress; a phenomenon known as ‘brittle creep'. Here we report results from a study of time-dependent brittle creep in water-saturated samples of Etna basalt (EB) under triaxial stress conditions (confining pressure of 50 MPa and pore fluid pressure of 20 MPa). Samples of EB were loaded at a constant strain rate of 10^-5 s^-1 to a pre-determined percentage of the short-term strength and left to deform under constant stress until failure. Crack damage evolution was monitored throughout each experiment by measuring the independent damage proxies of axial strain, pore volume change and output of acoustic emission (AE) energy, at creep strain rates ranging over four orders of magnitude. Our data not only demonstrate that basalt creeps in the brittle regime but also that the applied differential stress exerts a crucial influence on both time-to-failure and creep strain rate in EB. Furthermore, stress corrosion is considered to be responsible for the acceleratory cracking and seismicity prior to volcanic eruptions and is invoked as an important mechanism in forecasting models.
Stress-stepping creep experiments were then performed to allow the influence of the effective confining stress to be studied in detail. Experiments were performed under effective stress conditions of 10, 30 and 50 MPa (whilst maintaining a constant pore fluid pressure of 20 MPa). In addition to the purely mechanical influence of water, governed by the effective stress, which shifts the creep strain rate curves to lower strain rates at higher effective stresses, our results also demonstrate that the chemically-driven process of stress corrosion cracking appears to be inhibited at higher effective stress. This results in an increase in the gradient of the creep strain rate curves with increasing effective stress. We suggest that the most likely cause of this change is a decrease in water mobility due to a reduction in crack aperture and an increase in water viscosity at higher pressure. Finally, we show that a theoretical model based on mean-field damage mechanics creep laws is able to reproduce the experimental strain-time relations and inverse seismicity plots using our experimental AE data. Our results indicate that local changes in the stress field and fluid circulation can have a profound impact on the time-to-failure properties of the basaltic volcanic pile.
Efficient Meshfree Large Deformation Simulation of Rainfall Induced Soil Slope Failure
NASA Astrophysics Data System (ADS)
Wang, Dongdong; Li, Ling
2010-05-01
An efficient Lagrangian Galerkin meshfree framework is presented for large-deformation simulation of rainfall-induced soil slope failure. Detailed coupled soil-rainfall seepage equations are given for the proposed formulation. This nonlinear meshfree formulation features the Lagrangian stabilized conforming nodal integration method, in which the low cost of the nodal integration approach is retained while numerical stability is maintained. The initiation and evolution of progressive failure in the soil slope are modeled by the coupled constitutive equations of isotropic damage and Drucker-Prager pressure-dependent plasticity. The gradient smoothing in the stabilized conforming integration also serves as a non-local regularization of material instability, and consequently the present method is capable of effectively capturing shear-band failure. The efficacy of the present method is demonstrated by simulating the rainfall-induced failure of two typical soil slopes.
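The pressure-dependent plasticity referred to above can be illustrated with a Drucker-Prager yield check (sign conventions and the friction/cohesion parameters vary between references; this form and its constants are only a sketch, not the paper's implementation):

```python
def drucker_prager_yield(p: float, q: float, alpha: float, k: float) -> float:
    # Drucker-Prager yield function f = q + alpha * p - k, with p the mean
    # stress (compression positive) and q the von Mises equivalent stress;
    # f >= 0 indicates yielding. Conventions differ between texts.
    return q + alpha * p - k
```
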
48 CFR 2852.233-70 - Protests filed directly with the Department of Justice.
Code of Federal Regulations, 2010 CFR
2010-10-01
... time as the scheduling conference, depending on availability of the necessary parties. (f) Oral... economic interest would be affected by the award of a contract or by the failure to award a contract. (b) A...
Canonical failure modes of real-time control systems: insights from cognitive theory
NASA Astrophysics Data System (ADS)
Wallace, Rodrick
2016-04-01
Newly developed statistical models of necessary conditions from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automata and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase-transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to the inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.
A state-based approach to trend recognition and failure prediction for the Space Station Freedom
NASA Technical Reports Server (NTRS)
Nelson, Kyle S.; Hadden, George D.
1992-01-01
A state-based reasoning approach to trend recognition and failure prediction for the Attitude Determination and Control System (ADCS) of the Space Station Freedom (SSF) is described. The problem domain is characterized by features (e.g., trends and impending failures) that develop over a variety of time spans, anywhere from several minutes to several years. Our state-based reasoning approach, coupled with intelligent data screening, allows features to be tracked as they develop in a time-dependent manner. That is, each state machine has the ability to encode a time frame for the feature it detects. As features are detected, they are recorded and can be used as input to other state machines, creating a hierarchical feature recognition scheme. Furthermore, each machine can operate independently of the others, allowing simultaneous tracking of features. State-based reasoning was implemented in the trend recognition and the prognostic modules of a prototype Space Station Freedom Maintenance and Diagnostic System (SSFMDS) developed at Honeywell's Systems and Research Center.
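The feature-tracking idea can be illustrated by one minimal state machine that fires when a rising trend persists over its time window (an illustrative sketch, not the SSFMDS implementation; in the hierarchical scheme described, detected features would feed higher-level machines):

```python
class TrendMachine:
    # Detects a rising trend sustained over a given time window.
    def __init__(self, window: float):
        self.window = window    # time span the trend must persist
        self.start = None       # time the candidate trend began
        self.detected = False   # set True when the feature fires

    def step(self, t: float, rising: bool) -> None:
        # Advance the machine with one screened data point.
        if rising:
            if self.start is None:
                self.start = t
            if t - self.start >= self.window:
                self.detected = True   # feature event for downstream machines
        else:
            self.start = None          # trend interrupted; reset
```
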
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2007-01-01
Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
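The load-replacement step can be sketched as a time-dependent load function that holds the static connection load and then removes it at the failure time (the short ramp length is an assumption for illustration; an instantaneous drop can excite spurious high-frequency response in a transient solver):

```python
def connection_load(t: float, static_load: float, t_fail: float,
                    ramp: float = 1e-3) -> float:
    # Static connection load up to t_fail, then ramped linearly to zero
    # over a short interval to emulate severing the connection.
    if t < t_fail:
        return static_load
    if t < t_fail + ramp:
        return static_load * (1.0 - (t - t_fail) / ramp)
    return 0.0
```
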
Accelerated fatigue durability of a high performance composite
NASA Technical Reports Server (NTRS)
Rotem, A.
1982-01-01
The fatigue behavior of multidirectional graphite-epoxy laminates was analyzed theoretically and experimentally in an effort to establish an accelerated testing methodology. Analysis of the failure mechanism in fatigue of the laminates led to the determination of the failure mode governing fracture. The nonlinear, cyclic-dependent shear modulus was used to calculate the changing stress field in the laminate during fatigue loading. Fatigue tests were performed at three different temperatures: 25 C, 74 C, and 114 C. The S-N curves were predicted based on the artificial static strength at a reference temperature and the fatigue functions associated with it. The prediction of an S-N curve at other temperatures was performed using shifting factors determined for the specific failure mode. For multidirectional laminates, different S-N curves at different temperatures could be predicted using these shifting factors. Different S-N curves at different temperatures occur only when the fatigue failure mode is matrix dominated. It was found that whenever the fatigue failure mode is fiber dominated, temperature, over the range investigated, had no influence on the fatigue life. These results permit the prediction of long-time, low-temperature fatigue behavior from data obtained in short-time, high-temperature testing, for laminates governed by a matrix failure mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, S.R.; Ban, V.S.; Gasparian, G.
1988-05-01
The authors measured the mean time to failure (MTTF) for a statistically significant population of planar In0.53Ga0.47As/InP heterostructure p-i-n photodetectors at several elevated temperatures. The probability of failure is fit to a log-normal distribution, with the result that the width of the failure distribution is sigma = 0.55 ± 0.2, roughly independent of temperature. From the temperature dependence of the MTTF data, they find that the failure mechanism is thermally activated, with an activation energy of 1.5 ± 0.2 eV measured in the temperature range 170-250°C. This extrapolates to an MTTF of less than 0.1 failure in 10^9 h (< 0.1 FIT) at 70°C, indicating that such devices are useful for systems requiring extremely highly reliable components, even if operated at elevated temperatures for significant time periods. To the authors' knowledge, this activation energy is the highest value reported for In0.53Ga0.47As/InP photodetectors, and is significantly higher than the energies of ~0.85 eV often ascribed to these devices.
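The extrapolation step follows from the Arrhenius form of a thermally activated failure mechanism; a sketch using the reported 1.5 eV activation energy (the reference MTTF passed in is illustrative, not a value from the paper):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_extrapolate(mttf_ref_h: float, t_ref_c: float,
                          t_target_c: float, ea_ev: float = 1.5) -> float:
    # MTTF(T) = A * exp(Ea / (kB * T)); the prefactor A cancels when
    # extrapolating from a reference temperature to a target temperature.
    t_ref_k = t_ref_c + 273.15
    t_target_k = t_target_c + 273.15
    return mttf_ref_h * math.exp((ea_ev / K_B) * (1.0 / t_target_k - 1.0 / t_ref_k))
```

With Ea = 1.5 eV, cooling from 250°C to 70°C lengthens the MTTF by a factor of a few times 10^7, which is why modest accelerated-test lifetimes extrapolate to sub-0.1-FIT failure rates.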
Corder, Roger; Warburton, Richard C; Khan, Noorafza Q; Brown, Ruth E; Wood, Elizabeth G; Lees, Delphine M
2004-11-01
Reduced endothelium-dependent vasodilator responses with increased synthesis of ET-1 (endothelin-1) are characteristic of endothelial dysfunction in heart failure and are predictive of mortality. Identification of treatments that correct these abnormalities may have particular benefit for patients who become refractory to current regimens. Hawthorn preparations have a long history in the treatment of heart failure. Therefore we tested their inhibitory effects on ET-1 synthesis by cultured endothelial cells. These actions were compared with those of GSE (grape seed extract), as the vasoactive components of both these herbal remedies are mainly oligomeric flavan-3-ols called procyanidins. This showed that extracts of hawthorn and grape seed were equipotent as inhibitors of ET-1 synthesis. GSE also produced a potent endothelium-dependent vasodilator response on preparations of isolated aorta. Suppression of ET-1 synthesis at the same time as induction of endothelium-dependent vasodilation is a similar response to that triggered by laminar shear stress. Based on these results and previous findings, we hypothesize that through their pharmacological properties procyanidins stimulate a pseudo laminar shear stress response in endothelial cells, which helps restore endothelial function and underlies the benefit from treatment with hawthorn extract in heart failure.
The Spectrum of Renal Allograft Failure
Chand, Sourabh; Atkinson, David; Collins, Clare; Briggs, David; Ball, Simon; Sharif, Adnan; Skordilis, Kassiani; Vydianath, Bindu; Neil, Desley; Borrows, Richard
2016-01-01
Background Causes of "true" late kidney allograft failure remain unclear as study selection bias and limited follow-up risk incomplete representation of the spectrum. Methods We evaluated all unselected graft failures from 2008–2014 (n = 171; 0–36 years post-transplantation) by contemporary classification of indication biopsies "proximate" to failure, DSA assessment, and clinical and biochemical data. Results The spectrum of graft failure changed markedly depending on the timing of allograft failure. Failures within the first year were most commonly attributed to technical failure and acute rejection (with T-cell mediated rejection [TCMR] dominating antibody-mediated rejection [ABMR]). Failures beyond a year were increasingly dominated by ABMR and 'interstitial fibrosis with tubular atrophy' without rejection, infection or recurrent disease ("IFTA"). Cases of IFTA associated with inflammation in non-scarred areas (compared with no inflammation or inflammation solely within scarred regions) were more commonly associated with episodes of prior rejection, late rejection and nonadherence, pointing to an alloimmune aetiology. Nonadherence and late rejection were common in ABMR and TCMR, particularly Acute Active ABMR. Acute Active ABMR and nonadherence were associated with younger age, faster functional decline, and less hyalinosis on biopsy. Chronic and Chronic Active ABMR were more commonly associated with Class II DSA. C1q-binding DSA, detected in 33% of ABMR episodes, were associated with shorter time to graft failure. Most non-biopsied patients were DSA-negative (16/21; 76.1%). Finally, twelve losses to recurrent disease were seen (16%). Conclusion These data from an unselected population identify IFTA alongside ABMR as a very important cause of true late graft failure, with nonadherence-associated TCMR as a phenomenon in some patients.
It highlights clinical and immunological characteristics of ABMR subgroups, and should inform clinical practice and individualised patient care. PMID:27649571
Lanfear, David E; Levy, Wayne C; Stehlik, Josef; Estep, Jerry D; Rogers, Joseph G; Shah, Keyur B; Boyle, Andrew J; Chuang, Joyce; Farrar, David J; Starling, Randall C
2017-05-01
Timing of left ventricular assist device (LVAD) implantation in advanced heart failure patients not on inotropes is unclear. Relevant prediction models exist (SHFM [Seattle Heart Failure Model] and HMRS [HeartMate II Risk Score]), but use in this group is not established. ROADMAP (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients) is a prospective, multicenter, nonrandomized study of 200 advanced heart failure patients not on inotropes who met indications for LVAD implantation, comparing the effectiveness of HeartMate II support versus optimal medical management. We compared SHFM-predicted versus observed survival (overall survival and LVAD-free survival) in the optimal medical management arm (n=103) and HMRS-predicted versus observed survival in all LVAD patients (n=111) using Cox modeling, receiver-operator characteristic (ROC) curves, and calibration plots. In the optimal medical management cohort, the SHFM was a significant predictor of survival (hazard ratio=2.98; P <0.001; ROC area under the curve=0.71; P <0.001) but not LVAD-free survival (hazard ratio=1.41; P =0.097; ROC area under the curve=0.56; P =0.314). SHFM showed adequate calibration for survival but overestimated LVAD-free survival. In the LVAD cohort, the HMRS had marginal discrimination at 3 (Cox P =0.23; ROC area under the curve=0.71; P =0.026) and 12 months (Cox P =0.036; ROC area under the curve=0.62; P =0.122), but calibration was poor, underestimating survival across time and risk subgroups. In non-inotrope-dependent advanced heart failure patients receiving optimal medical management, the SHFM was predictive of overall survival but underestimated the risk of clinical worsening and LVAD implantation. Among LVAD patients, the HMRS had marginal discrimination and underestimated survival post-LVAD implantation. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01452802. 
© 2017 American Heart Association, Inc.
Substance P in Heart Failure: The Good and the Bad
Dehlin, Heather M.; Levick, Scott P.
2015-01-01
The tachykinin substance P is found primarily in sensory nerves. In the heart, substance P-containing nerve fibers are often found surrounding coronary vessels, making them ideally situated to sense changes in the myocardial environment. Recent studies in rodents have identified substance P as having dual roles in the heart, depending on disease etiology and/or timing. Thus far, these studies indicate that substance P may be protective acutely following ischemia-reperfusion, but damaging long-term in non-ischemia-induced remodeling and heart failure. Sensory nerves may be at the apex of the cascade of events leading to heart failure; therefore, they make a promising potential therapeutic target that warrants increased investigation. PMID:24286592
Pavlova, Viola; Grimm, Volker; Dietz, Rune; Sonne, Christian; Vorkamp, Katrin; Rigét, Frank F; Letcher, Robert J; Gustavson, Kim; Desforges, Jean-Pierre; Nabe-Nielsen, Jacob
2016-01-01
Polychlorinated biphenyls (PCBs) can cause endocrine disruption, cancer, immunosuppression, or reproductive failure in animals. We used an individual-based model to explore whether and how PCB-associated reproductive failure could affect the dynamics of a hypothetical polar bear (Ursus maritimus) population exposed to PCBs to the same degree as the East Greenland subpopulation. Dose-response data from experimental studies on a surrogate species, the mink (Mustela vison), were used in the absence of similar data for polar bears. Two alternative types of reproductive failure in relation to maternal sum-PCB concentrations were considered: increased abortion rate and increased cub mortality. We found that the quantitative impact of PCB-induced reproductive failure on population growth rate depended largely on the actual type of reproductive failure involved. Critical potencies of the dose-response relationship for decreasing the population growth rate were established for both modeled types of reproductive failure. Comparing the model predictions of the age-dependent trend of sum-PCB concentrations in females with actual field measurements from East Greenland indicated that it was unlikely that PCB exposure caused a high incidence of abortions in the subpopulation. However, on the basis of this analysis, it could not be excluded that PCB exposure contributes to higher cub mortality. Our results highlight the necessity for further research on the possible influence of PCBs on polar bear reproduction and its physiological pathway. This includes determining the exact cause of reproductive failure, i.e., in utero exposure versus lactational exposure of offspring; the timing of offspring death; and establishing the most relevant reference metrics for the dose-response relationship.
1988-05-31
[Garbled report extract: the recoverable text considers non-negative random component lifetimes T1, ..., Tp with system life Y = τ(T1, ..., Tp) and an associated failure pattern; the remainder of the passage (a DD Form 1473 fragment naming Moeschberger as author of a final report covering 1982-1987) is unrecoverable.]
NASA Astrophysics Data System (ADS)
Lakowicz, Joseph R.; Szmacinski, Henryk; Johnson, Michael L.
1990-05-01
We examined the time-dependent donor decays of 2-aminopurine (2-APU) in the presence of increasing amounts of the acceptor 2-aminobenzophenine (2-ABP). As the concentration of 2-ABP increases, the frequency responses diverge from those predicted by Förster theory. The data were found to be consistent with modified Förster equations, but at this time we do not claim that these modified expressions provide a correct molecular description of this donor-acceptor system. To the best of our knowledge, this is the first paper that reports a failure of the Förster theory for randomly distributed donors and acceptors.
Transient Reliability Analysis Capability Developed for CARES/Life
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2001-01-01
The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material structures (e.g., ceramic, intermetallic, and graphite) in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, under slow-crack-growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple, varying with time in more complex ways: engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology is generalized to account for material property variation (in strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, thermal shock.
In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.
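The probabilistic framework that CARES/Life builds on starts from Weibull fast-fracture statistics. As a minimal, self-contained sketch (a textbook two-parameter Weibull relation, not the CARES/Life implementation, which also integrates over component volume and load history), the failure probability of a uniformly stressed ceramic element can be written as:

```python
import math

def weibull_pf(sigma, sigma0, m):
    """Two-parameter Weibull probability of fast fracture for a uniformly
    stressed ceramic element: Pf = 1 - exp(-(sigma/sigma0)^m), where
    sigma0 is the characteristic strength and m is the Weibull modulus."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

# At the characteristic strength, Pf equals 1 - 1/e regardless of m.
print(weibull_pf(300.0, 300.0, 10.0))  # ~0.632
```

A higher Weibull modulus m means less strength scatter: below sigma0 the failure probability drops off more sharply as m grows.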
Score tests for independence in semiparametric competing risks models.
Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul
2009-12-01
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
Köberich, Stefan; Lohrmann, Christa; Mittag, Oskar; Dassen, Theo
2015-06-01
To evaluate the effects of a nurse-led, hospital-based, heart failure-specific education session with a three-month telephone follow-up on self-care behaviour, care dependency and quality of life for patients with chronic heart failure. Patient education for patients with heart failure can promote heart failure-specific self-care, reduce mortality, morbidity and rehospitalisation rates, and enhance quality of life, especially if heart failure education is embedded in a multidisciplinary approach. Evidence of the effect of nurse-led education on self-care, quality of life and care dependency, in addition to standard medical treatment, is lacking in Germany. Nonblinded, prospective, single-centre, randomised controlled trial. Sixty-four patients were allocated either to the intervention group or to the control group. Patients in the intervention group received education about heart failure self-care with a consecutive telephone follow-up over three months in addition to standard medical treatment. Patients in the control group received standard medical treatment only. Data from 110 patients (58 in the intervention group and 52 in the control group), with a mean age of 62 years and a mean left ventricular ejection fraction of 28·2%, could be analysed. Self-care education had a significant influence on overall heart failure self-care but not on quality of life or care dependency. A single education session with a consecutive telephone follow-up is able to improve overall self-care behaviours but not quality of life. Care dependency was not influenced by the education session. The easy-to-implement, short educational intervention has a positive effect on self-care behaviour for patients with heart failure. However, there was no effect on quality of life or care dependency. To improve quality of life and to influence care dependency, different measures have to be applied. © 2015 John Wiley & Sons Ltd.
Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Giordan, Daniele
2014-05-01
Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides depend mainly on the spatial scale analyzed (regional vs. local), the temporal range of the forecast (long- vs. short-term), and the triggering factor and landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In recent decades, different procedures have been proposed to estimate ToF using simplified empirical and/or graphical methods applied to time series of deformation data. Fukuzono (1985) proposed a failure forecast method based on large-scale laboratory experiments aimed at observing the kinematic evolution of a landslide induced by rain. This approach, known also as the inverse-velocity method, considers the evolution over time of the inverse value of the surface velocity (v) as an indicator of the ToF, assuming that failure approaches as 1/v tends to zero. Here we present an innovative method aimed at forecasting landslide failure using near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows, and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergencies, also when predefined displacement and/or velocity thresholds are exceeded.
In addition, our statistical approach to defining confidence intervals and forecast reliability can also be applied to different failure forecast methods. We applied the approach presented herein for the first time in near real time during the emergency scenario arising from the reactivation of the La Saxe rockslide, a large mass movement threatening the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient way to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
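The inverse-velocity idea can be sketched numerically. The snippet below is a minimal illustration under simplifying assumptions of our own (a single linear fit over one observation window), not the authors' operational code: fit a least-squares line to 1/v versus t and extrapolate its zero crossing as the ToF estimate.

```python
def inverse_velocity_tof(times, velocities):
    """Estimate the time of failure (ToF) by fitting a least-squares line
    to 1/v versus t and extrapolating it to 1/v = 0 (Fukuzono's
    inverse-velocity method): failure is forecast where the line crosses zero."""
    inv_v = [1.0 / v for v in velocities]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(inv_v) / n
    sxx = sum((t - mean_t) ** 2 for t in times)
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, inv_v))
    slope = sxy / sxx
    if slope >= 0:
        raise ValueError("1/v is not decreasing; no failure forecast")
    intercept = mean_y - slope * mean_t
    return -intercept / slope

# Synthetic accelerating slope with true failure time t = 100:
times = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
velocities = [1.0 / (100 - t) for t in times]  # so 1/v = 100 - t exactly
print(inverse_velocity_tof(times, velocities))  # 100.0
```

Repeating the fit over several sliding windows, as the abstract describes, yields a spread of ToF estimates from which a confidence interval can be formed.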
Time-dependent response of filamentary composite spherical pressure vessels
NASA Technical Reports Server (NTRS)
Dozier, J. D.
1983-01-01
A filamentary composite spherical pressure vessel is modeled as a pseudoisotropic (or transversely isotropic) composite shell, with the effects of the liner and fill tubes omitted. Equations of elasticity, macromechanical and micromechanical formulations, and laminate properties are derived for the application of an internally pressured spherical composite vessel. Viscoelastic properties for the composite matrix are used to characterize time-dependent behavior. Using the maximum strain theory of failure, burst pressure and critical strain equations are formulated, solved in the Laplace domain with an associated elastic solution, and inverted back into the time domain using the method of collocation. Viscoelastic properties of HBFR-55 resin are experimentally determined and a Kevlar/HBFR-55 system is evaluated with a FORTRAN program. The computed reduction in burst pressure with respect to time indicates that the analysis employed may be used to predict the time-dependent response of a filamentary composite spherical pressure vessel.
Advanced Self-Calibrating, Self-Repairing Data Acquisition System
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)
2002-01-01
An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented using a "spare parts/tool box" system. The available number of spare components depends primarily on each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.
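As a rough illustration of how MTBF analysis can drive the choice of spare counts (a standard reliability calculation sketched under a constant-failure-rate assumption, not the patented design itself): with a constant failure rate, the number of failures over a mission is Poisson-distributed with mean t/MTBF, and the spare count can be the smallest value that covers a target confidence.

```python
import math

def spares_needed(mtbf_hours, mission_hours, confidence=0.99):
    """Smallest spare count s such that P(failures <= s) >= confidence,
    assuming a constant failure rate, so that the number of failures over
    the mission is Poisson-distributed with mean mission_hours / mtbf_hours."""
    lam = mission_hours / mtbf_hours        # expected failures over the mission
    s, cdf, term = 0, 0.0, math.exp(-lam)   # term = Poisson pmf at s
    while True:
        cdf += term
        if cdf >= confidence:
            return s
        s += 1
        term *= lam / s                     # pmf recurrence: p(s) = p(s-1)*lam/s

# A component with 50,000 h MTBF on a 100,000 h mission (lam = 2):
print(spares_needed(mtbf_hours=50_000, mission_hours=100_000))  # 6
```

The numbers here are hypothetical; the point is only that a higher expected failure count or a stricter confidence target both push the required spare count up.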
Mechanical loading of bovine pericardium accelerates enzymatic degradation.
Ellsmere, J C; Khanna, R A; Lee, J M
1999-06-01
Bioprosthetic heart valves fail as the result of two simultaneous processes: structural deterioration and calcification. Leaflet deterioration and perforation have been correlated with regions of highest stress in the tissue. The failures have long been assumed to be due to simple mechanical fatigue of the collagen fibre architecture; however, we have hypothesized that local stresses, and particularly dynamic stresses, accelerate local proteolysis, leading to tissue failure. This study addresses that hypothesis. Using a novel, custom-built microtensile culture system, strips of bovine pericardium were subjected to static and dynamic loads while being exposed to solutions of microbial collagenase or trypsin (a non-specific proteolytic enzyme). The time to extend to 30% strain (defined here as time to failure) was recorded. After failure, the percentage of collagen solubilized was calculated based on the amount of hydroxyproline present in solution. All data were analyzed by analysis of variance (ANOVA). In collagenase, exposure to static load significantly decreased the time to failure (P < 0.002) due to an increased mean rate of collagen solubilization. Importantly, specimens exposed to collagenase and dynamic load failed faster than those exposed to collagenase under the same average static load (P = 0.02). In trypsin, by contrast, static load never led to failure and produced only minimal degradation. Under dynamic load, however, specimens exposed to collagenase, trypsin, and even Tris/CaCl2 buffer solution all failed. Only samples exposed to Hanks' physiological solution did not fail. Failure of the specimens exposed to trypsin and Tris/CaCl2 suggests that the non-collagenous components and the calcium-dependent proteolytic enzymes present in pericardial tissue may play roles in the pathogenesis of bioprosthetic heart valve degeneration.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle, or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault depends significantly on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also of interest are simpler models which use shorter input sequences without sacrificing accuracy; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
NASA Astrophysics Data System (ADS)
Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick
2017-09-01
An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. 
We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.
Why Do Medial Unicompartmental Knee Arthroplasties Fail Today?
van der List, Jelle P; Zuiderbaan, Hendrik A; Pearle, Andrew D
2016-05-01
Failure rates are higher in medial unicompartmental knee arthroplasty (UKA) than in total knee arthroplasty. To improve these failure rates, it is important to understand why medial UKAs fail. Because individual studies lack the power to show failure modes, a systematic review was performed to assess medial UKA failure modes. Furthermore, we compared cohort studies with registry-based studies, early with midterm and late failures, and fixed-bearing with mobile-bearing implants. The databases of PubMed, EMBASE, and Cochrane and annual registries were searched for medial UKA failures. Studies were included when they reported >25 failures or when they reported early (<5 years), midterm (5-10 years), or late failures (>10 years). Thirty-seven cohort studies (4 level II studies and 33 level III studies) and 2 registry-based studies were included. A total of 3967 overall failures, 388 time-dependent failures, and 1305 implant design failures were identified. Aseptic loosening (36%) and osteoarthritis (OA) progression (20%) were the most common failure modes. Aseptic loosening (26%) was the most common early failure mode, whereas OA progression was more commonly seen in midterm and late failures (38% and 40%, respectively). Polyethylene wear (12%) and instability (12%) were more common in fixed-bearing implants, whereas pain (14%) and bearing dislocation (11%) were more common in mobile-bearing implants. This level III systematic review identified aseptic loosening and OA progression as the major failure modes. Aseptic loosening was the main failure mode in early years and in mobile-bearing implants, whereas OA progression caused most failures in late years and in fixed-bearing implants. Copyright © 2016 Elsevier Inc. All rights reserved.
The influence of microstructure on the probability of early failure in aluminum-based interconnects
NASA Astrophysics Data System (ADS)
Dwyer, V. M.
2004-09-01
For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
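The theory-of-runs argument can be made concrete with a deliberately simplified Monte Carlo toy model (our own illustration, not the paper's analytical treatment): treat each grain segment along the line as polygranular with some probability, and count a line as an early-failure candidate if it contains a run of consecutive polygranular grains at or above a critical length.

```python
import random

def p_early_failure(n_grains, p_poly, crit_run, trials=20_000, seed=1):
    """Monte Carlo estimate of the probability that a line of n_grains
    contains at least one run of >= crit_run consecutive polygranular
    grains (a 'mortal' cluster in the theory-of-runs picture).
    p_poly is the probability that an individual grain segment is
    polygranular rather than spanning the linewidth (bamboo)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        run = 0
        for _ in range(n_grains):
            if rng.random() < p_poly:
                run += 1
                if run >= crit_run:   # a cluster of critical length exists
                    hits += 1
                    break
            else:
                run = 0               # a bamboo grain breaks the cluster
    return hits / trials

# Longer lines and longer-tailed grain statistics raise the chance of a
# mortal cluster; all parameter values here are purely illustrative.
print(p_early_failure(n_grains=200, p_poly=0.3, crit_run=5))
```

In the paper's setting the run length is a physical cluster length compared against Lcrit, and p_poly derives from the lognormal grain-size parameters (d50, sigma_d); the sketch only shows why the probability is so sensitive to those statistics.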
Frequency-Magnitude relationships for Underwater Landslides of the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Urgeles, R.; Gràcia, E.; Lo Iacono, C.; Sànchez-Serra, C.; Løvholt, F.
2017-12-01
An updated version of the submarine landslide database of the Mediterranean Sea contains 955 MTDs and 2608 failure scars, showing that submarine landslides are ubiquitous features along Mediterranean continental margins. Their distribution reveals that major deltaic wedges host the largest submarine landslides, while seismically active margins are characterized by relatively small failures. In all regions, landslide size distributions display power-law scaling for landslides > 1 km3. We find consistent differences in the exponent of the power law depending on the geodynamic setting. Active margins present steep slopes of the frequency-magnitude relationship, whereas passive margins tend to display gentler slopes. This pattern likely reflects the common view that tectonically active margins have numerous but small failures, while passive margins have larger but fewer failures. Available age information suggests that failures exceeding 1000 km3 are infrequent and may recur every 40 kyr. Smaller failures that can still cause significant damage may be relatively frequent, with failures > 1 km3 likely recurring every 40 years. The database highlights that our knowledge of submarine landslide activity over time is limited to a few tens of thousands of years. Available data suggest that submarine landslides may occur preferentially during lowstand periods, but no firm conclusion can be drawn in this respect, as only 149 landslides (out of 955 included in the database) have relatively accurate age determinations. The timing and regional changes in the frequency-magnitude distribution suggest that sedimentation patterns and pore pressure development have had a major role in triggering slope failures and control the sediment flux from mass wasting to the deep basin.
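Power-law exponents like those compared above are commonly estimated by maximum likelihood rather than by regression on a log-log histogram. A minimal sketch (our illustration on synthetic volumes, not the authors' fitting procedure) for a continuous power-law tail:

```python
import math
import random

def powerlaw_exponent(volumes, vmin=1.0):
    """Hill (maximum-likelihood) estimate of alpha for a continuous
    power-law tail p(v) ~ v^(-alpha) for v >= vmin: only values at or
    above the cutoff vmin enter the estimate."""
    tail = [v for v in volumes if v >= vmin]
    return 1.0 + len(tail) / sum(math.log(v / vmin) for v in tail)

# Check on synthetic data drawn from p(v) ~ v^-2 with vmin = 1
# (inverse-CDF sampling: v = 1/u for u uniform on (0, 1)).
rng = random.Random(0)
sample = [1.0 / rng.random() for _ in range(100_000)]
print(powerlaw_exponent(sample))  # close to 2.0
```

With real landslide volumes the choice of vmin matters (here the abstract's > 1 km3 scaling break is a natural cutoff), and the scatter of the estimate shrinks roughly as one over the square root of the tail sample size.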
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie.; Helton, Jon Craig
2012-10-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2 that implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2.
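The four PLOAS orderings listed above can be sketched with a small Monte Carlo routine. This is an illustrative toy in Python, not CPLOAS_2 (a Fortran 90 code that evaluates these probabilities from time-dependent link property distributions): draw failure times for every link and check the requested ordering.

```python
import random

def ploas(sl_samplers, wl_samplers, mode="all_before_any",
          trials=50_000, seed=2):
    """Monte Carlo sketch of probability of loss of assured safety.

    sl_samplers / wl_samplers: lists of callables, each taking a
    random.Random and returning one random failure time for a strong
    link (SL) or weak link (WL).  Modes mirror the four representations:
      'all_before_any': all SLs fail before any WL fails
      'any_before_any': some SL fails before any WL fails
      'all_before_all': all SLs fail before all WLs have failed
      'any_before_all': some SL fails before all WLs have failed
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sl = [f(rng) for f in sl_samplers]
        wl = [f(rng) for f in wl_samplers]
        hits += {
            "all_before_any": max(sl) < min(wl),
            "any_before_any": min(sl) < min(wl),
            "all_before_all": max(sl) < max(wl),
            "any_before_all": min(sl) < max(wl),
        }[mode]
    return hits / trials

# Sanity check: one SL and one WL with identical exponential failure-time
# distributions; by symmetry the probability should be near 0.5.
exp1 = lambda rng: rng.expovariate(1.0)
print(ploas([exp1], [exp1], mode="any_before_any"))
```

Sampling the distribution parameters themselves in an outer loop would mimic the report's separation of aleatory uncertainty (the draws above) from epistemic uncertainty.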
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems, which are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
Crazing in Polymeric and Composite Systems
1988-04-30
[Garbled report extract (DD Form 1473 fragment): author HSIAO, C. C.; cites "Characterization of Random Microstructural Systems", Proceedings, International Conference on Structure, Solid Mechanics and Engineering Design in Civil Engineering. Recoverable abstract text: the study of the failure of composite systems under stress is important both theoretically and practically; this program aims to develop time-dependent models of such failure.]
NASA Astrophysics Data System (ADS)
Jayawardena, Adikaramge Asiri
The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and thermal resistance were identified as good indicators of contact failure in LED packages. This study investigated the long-term changes in series electrical resistance and thermal resistance of LED packages at three different current and junction-temperature stress conditions. Experimental results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for the series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance depends on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open-circuit failure of the contact metal layer. Short-circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increase in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimating the LED package failure time due to metal diffusion. The model is based on the experimental results and the theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship was developed between the rate of thermal resistance change and the case temperature and temperature gradient within the LED package.
The results showed that dislocation creep is responsible for creep induced plastic deformation in the die-attach solder. The temperatures inside the LED package reached the melting point of die-attach solder due to delamination just before catastrophic open circuit failure. A combined model that could estimate life of LED packages based on catastrophic failure of thermal and electrical contacts is presented for the first time. This model can be used to make a-priori or real-time estimation of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings from this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.
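For context, the standard empirical model for electromigration-limited lifetime of the kind such dissertations build on is Black's equation. The sketch below implements that textbook relation with illustrative parameter values; it is not the dissertation's own combined electrical-thermal model.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(a_const, j, ea_ev, temp_k, n=2.0):
    """Black's equation for median time to electromigration failure:
    MTTF = A * j^(-n) * exp(Ea / (k_B * T)), where j is current density,
    Ea the activation energy (eV), T absolute temperature (K), and n the
    current-density exponent (often near 2)."""
    return a_const * j ** (-n) * math.exp(ea_ev / (K_B_EV * temp_k))

# With n = 2, doubling the current density cuts the median life to 1/4;
# raising the temperature shortens it exponentially. A, j, Ea, T below
# are placeholder values chosen only to show the scaling.
ratio = black_mttf(1.0, 2e6, 0.7, 400.0) / black_mttf(1.0, 1e6, 0.7, 400.0)
print(ratio)  # 0.25
```

This scaling is why the abstract's observation that resistance-change rate depends on temperature and current density is exactly what an electromigration-driven mechanism predicts.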
How to Advance TPC Benchmarks with Dependability Aspects
NASA Astrophysics Data System (ADS)
Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco
Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact on both the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performance. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should nowadays be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.
Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F
2013-11-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, which can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
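CARES/Life-style predictions couple Weibull strength statistics with slow-crack-growth (dynamic fatigue) parameters. A minimal sketch of that idea, with invented values for the Weibull modulus m, characteristic strength, and crack-growth exponent N (not the Empress 2 data above), is:

```python
import math

def pf_static_fatigue(t, sigma_a, m=10.0, sigma0=400.0, N=20.0, B=1e8):
    """Failure probability after time t under constant stress sigma_a (MPa).
    Combines a Weibull inert-strength distribution (modulus m, scale sigma0)
    with the slow-crack-growth lifetime t_f = B * sigma_i**(N-2) * sigma_a**-N.
    All parameter values are invented for illustration, not the Empress 2 data."""
    # Inert strength of the specimen that fails at exactly time t
    sigma_i = (t * sigma_a ** N / B) ** (1.0 / (N - 2.0))
    return 1.0 - math.exp(-((sigma_i / sigma0) ** m))
```

Because sigma_i grows with t, Pf rises monotonically with load duration, mirroring the 1-, 5-, and 10-year trend reported above.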
Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel–Upshaw, Josephine
2013-01-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Results Predicted fracture probabilities (Pf) for centrally-loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion CARES/Life results support the proposed crown design and load orientation hypotheses. Significance The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an optimal approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. PMID:24060349
Toxic plants: Effects on reproduction and fetal and embryonic development in livestock
USDA-ARS?s Scientific Manuscript database
Reproductive success is dependent on a large number of carefully orchestrated biological events that must occur in a specifically timed sequence. The interference with one or more of these sequences or events may result in total reproductive failure or a more subtle reduction in reproductive potent...
Clinical Pharmacodynamics: Principles of Drug Response and Alterations in Kidney Disease.
Keller, Frieder; Hann, Alexander
2018-05-16
Pharmacokinetics and pharmacodynamics follow the logic of cause and consequence. Receptor-mediated and reversible effects can be distinguished from direct and irreversible effects. Reversible effects are capacity-limited and saturable whereas irreversible effects are limited only by the number of viable targets. In the case of receptor-mediated and reversible effects a threshold and a ceiling concentration can be defined. Antimicrobial drugs with concentration-dependent action are distinguished from drugs with time-dependent action. Concentration-dependent effects are associated with a high ceiling concentration and the target is the high peak. Time-dependent effects are associated with a high threshold concentration and the target is the high trough. During kidney dysfunction, alterations of drug response are usually attributed to pharmacokinetic but rarely to pharmacodynamic changes. Dose adjustment calculations, therefore, tacitly presume that pharmacodynamic parameters remain unchanged while only pharmacokinetic parameters are altered in kidney failure. Kidney dysfunction influences the pharmacokinetic parameters of at least 50% of all essential drugs. Clinicians usually consider pharmacokinetics when kidney disease is found, but pharmacodynamics is as important. Alterations of pharmacodynamic parameters are conceivable but only rarely reported in kidney failure. Sometimes surprising dosing adjustments are needed when pharmacodynamic concepts are brought into the decision process of which dose to choose. Pharmacokinetics and pharmacodynamics should both be considered when any dosing regimen is determined. Copyright © 2018 by the American Society of Nephrology.
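The two antimicrobial targets described above can be illustrated with a one-compartment sketch. The dose, volume of distribution, half-life, and MIC below are hypothetical, not drawn from the article; the point is only how Cmax/MIC (concentration-dependent drugs) and %T>MIC (time-dependent drugs) are computed from the same concentration curve:

```python
import math

def conc(t, dose=500.0, V=30.0, t_half=4.0):
    """One-compartment IV bolus concentration (mg/L) at time t (h);
    dose, volume of distribution, and half-life are hypothetical."""
    k = math.log(2) / t_half
    return (dose / V) * math.exp(-k * t)

def pkpd_targets(mic=4.0, tau=12.0, dt=0.01):
    """Cmax/MIC (the concentration-dependent target) and %T>MIC
    (the time-dependent target) over one dosing interval tau (h)."""
    times = [i * dt for i in range(int(tau / dt) + 1)]
    cmax = conc(0.0)
    t_above = sum(dt for t in times if conc(t) > mic)
    return cmax / mic, 100.0 * t_above / tau

cmax_mic, pct_t_above = pkpd_targets()
```

In kidney failure, a longer half-life stretches %T>MIC while leaving Cmax/MIC unchanged, which is one reason dose adjustment differs between the two drug classes.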
Some limitations of frequency as a component of risk: an expository note.
Cox, Louis Anthony
2009-02-01
Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable, and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models, can avoid these limitations.
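The note's central claim can be made concrete with a minimal numerical example (values invented here): a deterministic failure at exactly 4 years has a higher reciprocal-MTTF "frequency" than an exponential process with a 5-year mean, yet over a 3-year design life only the lower-frequency process can fail at all.

```python
import math

mttf_a, mttf_b = 4.0, 5.0                    # years
freq_a, freq_b = 1 / mttf_a, 1 / mttf_b      # reciprocal-MTTF "frequencies"

design_life = 3.0                            # interval of interest, years
p_a = 0.0                                    # A fails at exactly 4 y: never within 3 y
p_b = 1 - math.exp(-design_life / mttf_b)    # exponential time to failure, ~0.45

# A has the higher frequency yet the lower failure probability over the design life
assert freq_a > freq_b and p_a < p_b
```

The ranking by "frequency" and the ranking by probability of occurrence over the interval of interest simply disagree, which is exactly the limitation the note describes.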
New constraints on mechanisms of remotely triggered seismicity at Long Valley Caldera
Brodsky, E.E.; Prejean, S.G.
2005-01-01
Regional-scale triggering of local earthquakes in the crust by seismic waves from distant main shocks has now been robustly documented for over a decade. Some of the most thoroughly recorded examples of repeated triggering of a single site from multiple, large earthquakes are measured in geothermal fields of the western United States like Long Valley Caldera. As one of the few natural cases where the causality of an earthquake sequence is apparent, triggering provides fundamental constraints on the failure processes in earthquakes. We show here that the observed triggering by seismic waves is inconsistent with any mechanism that depends on cumulative shaking as measured by integrated energy density. We also present evidence for a frequency-dependent triggering threshold. On the basis of the seismic records of 12 regional and teleseismic events recorded at Long Valley Caldera, long-period waves (>30 s) are more effective at generating local seismicity than short-period waves of comparable amplitude. If the properties of the system are stationary over time, the failure threshold for long-period waves is ~0.05 cm/s vertical shaking. Assuming a phase velocity of 3.5 km/s and an elastic modulus of 3.5 × 10^10 Pa, the threshold in terms of stress is 5 kPa. The frequency dependence is due in part to the attenuation of the surface waves with depth. Fluid flow through a porous medium can produce the rest of the observed frequency dependence of the threshold. If the threshold is not stationary with time, pore pressures that are >99.5% of lithostatic and vary over time by a factor of 4 could explain the observations with no frequency dependence of the triggering threshold. Copyright 2005 by the American Geophysical Union.
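The quoted 5 kPa follows from the plane-wave approximation, stress ≈ modulus × particle velocity / phase velocity, applied to the numbers in the abstract:

```python
mu = 3.5e10     # elastic modulus, Pa
c = 3500.0      # phase velocity, m/s
v = 0.05e-2     # threshold ground velocity: 0.05 cm/s in m/s

stress = mu * v / c   # ~5000 Pa, i.e. the 5 kPa quoted above
```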
Effects of surface removal on rolling-element fatigue
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.
1987-01-01
The Lundberg-Palmgren equation was modified to show the effect on rolling-element fatigue life of removing, by grinding, a portion of the stressed volume of the raceways of a rolling-element bearing. Results of this analysis show that depending on the amount of material removed, and depending on the initial running time of the bearing when material removal occurs, the 10-percent life of the reground bearings ranges from 74 to 100 percent of the 10-percent life of a brand-new bearing. Three bearing types were selected for testing. A total of 250 bearings were reground. Of these, 30 bearings of each type were endurance tested to 1600 hr. No bearing failures related to material removal occurred. Two bearing failures occurred due to defective rolling elements and were typical of those which may occur in new bearings.
Ethical dilemmas in psychiatric evaluations in patients with fulminant liver failure.
Appel, Jacob; Vaidya, Swapna
2014-04-01
Fulminant hepatic failure (FHF) is one of the more dramatic and challenging syndromes in clinical medicine. Time constraints and the scarcity of organs complicate the evaluation process in the case of patients presenting with FHF, raising ethical questions related to fairness and justice. The challenges are compounded by an absence of standardized guidelines. Acetaminophen overdose, often occurring in patients with histories of psychiatric illness and substance dependence, has emerged as the most common cause of FHF. The weak correlations between psychosocial factors and nonadherence, as per some studies, suggest that adherence may be influenced by systematic factors. Most research suggests that applying rigid ethical parameters in these patients, rather than allowing for case-dependent flexibility, can be problematic. The decision to transplant in patients with FHF has to be made in a very narrow window of time. The time-constrained process is fraught with uncertainties and limitations, given the absence of patient interview, fluctuating medical eligibility, and limited data. Although standardized scales exist, their benefit in such settings appears limited. Predicting compliance with posttransplant medical regimens is difficult to assess and raises the question of prospective studies to monitor compliance.
The influence of temperature on brittle creep in sandstones
NASA Astrophysics Data System (ADS)
Heap, M. J.; Baud, P.; Meredith, P. G.; Vinciguerra, S.
2009-04-01
The characterization of time-dependent brittle rock deformation is fundamental to understanding the long-term evolution and dynamics of the Earth's upper crust. The presence of water promotes time-dependent deformation through environment-dependent stress corrosion cracking that allows rocks to deform at stresses far below their short-term failure stress. Here we report results from an experimental study of the influence of elevated temperature on time-dependent brittle creep in water-saturated samples of Darley Dale (initial porosity of 13%), Bentheim (23%) and Crab Orchard (4%) sandstones. We present results from both conventional creep experiments (or 'static fatigue' tests) and stress-stepping creep experiments performed at 20°C and 75°C and an effective confining pressure of 30 MPa (50 MPa confining pressure and a 20 MPa pore fluid pressure). The evolution of crack damage was monitored throughout each experiment by measuring three proxies for damage: (1) axial strain, (2) pore volume change, and (3) the output of acoustic emission (AE) energy. Conventional creep experiments demonstrated that, for any given applied differential stress, the time-to-failure is dramatically reduced and the creep strain rate is significantly increased by application of an elevated temperature. Stress-stepping creep experiments allowed us to investigate the influence of temperature in detail. Results from these experiments show that the creep strain rate for Darley Dale and Bentheim sandstones increases by approximately 3 orders of magnitude, and for Crab Orchard sandstone by approximately 2 orders of magnitude, as temperature is increased from 20°C to 75°C at a fixed effective differential stress. We discuss these results in the context of the different mineralogical and microstructural properties of the three rock types and the micro-mechanical and chemical processes operating on them.
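A back-of-the-envelope Arrhenius estimate (not part of the study itself) converts the reported ~3-order-of-magnitude rate increase between 20°C and 75°C into an apparent activation energy for the thermally activated creep process:

```python
import math

R = 8.314                        # gas constant, J/(mol K)
T1, T2 = 293.15, 348.15          # 20 C and 75 C in kelvin
rate_ratio = 1e3                 # ~3 orders of magnitude (Darley Dale, Bentheim)

# rate ∝ exp(-Q / (R T))  =>  Q = R ln(ratio) / (1/T1 - 1/T2)
Q = R * math.log(rate_ratio) / (1 / T1 - 1 / T2)   # ~1.1e5 J/mol
```

The resulting value, roughly 107 kJ/mol, is only an order-of-magnitude illustration of how strongly temperature enters stress-corrosion-controlled creep.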
Subcritical crack growth and other time- and environment-dependent behavior in crustal rocks
NASA Technical Reports Server (NTRS)
Swanson, P. L.
1984-01-01
Stable crack growth strongly influences both the fracture strength of brittle rocks and some of the phenomena precursory to catastrophic failure. Quantification of the time and environment dependence of fracture propagation is attempted with the use of a fracture mechanics technique. Some of the difficulties encountered when applying techniques originally developed for simple synthetic materials to complex materials like rocks are examined. A picture of subcritical fracture propagation is developed that embraces the essential ingredients of the microstructure, a microcrack process zone, and the different roles that the environment plays. To do this, the results of (1) fracture mechanics experiments on five rock types, (2) optical and scanning electron microscopy, (3) studies of microstructural aspects of fracture in ceramics, and (4) exploratory tests examining the time-dependent response of rock to the application of water are examined.
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly accurate geo-location data for the static/mobile objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals, and knowledge of its geometry, i.e., the locations and distances of the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave-like scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is their lack of handling of various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate process failures and are typically transient and mobile in the sense that they affect different messages to/from different processes over time.
Mechanism of electromigration failure in Damascene processed copper interconnects
NASA Astrophysics Data System (ADS)
Michael, Nancy Lyn
2002-11-01
A major unresolved issue in Cu interconnect reliability is the interface role in the failure mechanism of real structures. The present study investigates failure in single-level damascene Cu interconnects with variations in interface condition, passivation and barrier, and linewidth. In the first phase, accelerated electromigration testing of 0.25 μm Cu interconnects capped with SiN or SiCN shows that lifetime and failure mode vary with capping layer. The first mode, seen primarily in SiN samples, is characterized by gradual resistance increase and extensive interface damage, believed to result from failure led by interface electromigration. The competing failure mode, found in SiCN capped samples, is characterized by abrupt resistance increase and localized voiding. The second phase fixes SiCN as the capping material and varies barrier material and line width. The three barrier materials, Ta, TaN, and Ta/TaN, produce similar lifetime statistics and failure is abrupt. Line width, however, does have a strong influence on failure time. The line width/grain size ratio ranged from 0.53 to 2.2 but does not correlate with mean time to failure (MTF). The strong dependence on interface fraction, combined with the conclusion from phase one that interface electromigration is not rate controlling, suggests another mechanism related to the interface is a controlling factor. The possibility that contamination and defects at the interface are key to this failure mode was investigated using electro-thermal fatigue (ETF). In ETF, where lines are simultaneously subjected to thermal cycling and constant current, damage caused by thermal stress is accelerated. Tests reveal that in 80 nm lines, transient failure occurs at times far below MTF in electromigration tests at higher temperatures. Failure found in ETF is clearly a result of damage growth due to thermal/mechanical stress rather than electromigration. 
At the stress levels created by the moderate ETF test conditions, the only place voids are likely to nucleate and grow is at pre-existing defects and impurities. In narrower lines, where smaller voids can cause catastrophic damage, defects have a greater effect on MTF. Results from this investigation suggest that impurities and defects in the Cu and at the interface, must be carefully controlled to make reliable narrow Cu interconnects.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
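The clock-like core of the stress-threshold idea can be sketched very simply; the stress values and loading rate below are hypothetical, not the Bay Area estimates, and the real model's viscoelastic stress evolution is far richer than a constant rate:

```python
def time_to_failure(cfs_now, cfs_threshold, stressing_rate):
    """Years until the Coulomb failure stress (MPa) reaches the threshold
    at a constant stressing rate (MPa/yr); a coseismic stress step from a
    nearby rupture advances or delays this clock. Inputs are hypothetical."""
    return max(cfs_threshold - cfs_now, 0.0) / stressing_rate

# 0.1 MPa below threshold, loaded at 0.01 MPa/yr: ~10 yr to rupture
t = time_to_failure(cfs_now=2.9, cfs_threshold=3.0, stressing_rate=0.01)
```

Sampling the threshold, periodicities, and slip distributions over their uncertainty ranges, as the paper's four-dimensional parameter space does, then turns such deterministic clocks into rupture probabilities.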
Pervasive Restart In MOOSE-based Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Gaston; Cody Permann; David Andrs
Multiphysics applications are inherently complicated. Solving for multiple, interacting physical phenomena involves the solution of multiple equations, and each equation has its own data dependencies. Feeding the correct data to these equations at exactly the right time requires extensive effort in software design. In an ideal world, multiphysics applications always run to completion and produce correct answers. Unfortunately, in reality, there can be many reasons why a simulation might fail: power outage, system failure, exceeding a runtime allotment on a supercomputer, failure of the solver to converge, etc. A failure after many hours spent computing can be a significant setback for a project. Therefore, the ability to “continue” a solve from the point of failure, rather than starting again from scratch, is an essential component of any high-quality simulation tool. This process of “continuation” is commonly termed “restart” in the computational community. While the concept of restarting an application sounds ideal, the aforementioned complexities and data dependencies present in multiphysics applications make its implementation decidedly non-trivial. A running multiphysics calculation accumulates an enormous amount of “state”: current time, solution history, material properties, status of mechanical contact, etc. This “state” data comes in many different forms, including scalar, tensor, vector, and arbitrary, application-specific data types. To be able to restart an application, you must be able to both store and retrieve this data, effectively recreating the state of the application before the failure. When utilizing the Multiphysics Object Oriented Simulation Environment (MOOSE) framework developed at Idaho National Laboratory, this state data is stored both internally within the framework itself (such as solution vectors and the current time) and within the applications that use the framework. 
In order to implement restart in MOOSE-based applications, the total state of the system (both within the framework and without) must be stored and retrieved. To this end, the MOOSE team has implemented a “pervasive” restart capability which allows any object within MOOSE (or within a MOOSE-based application) to be declared as “state” data, and handles the storage and retrieval of said data.
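MOOSE's restart machinery is C++ and framework-specific, but the declare/store/retrieve pattern described above can be sketched generically. The following is a hypothetical illustration of that pattern, not the MOOSE API:

```python
import os
import pickle
import tempfile

class Checkpointable:
    """Minimal declare/store/retrieve pattern for restartable state."""

    def __init__(self):
        self._state = {}                     # name -> state data of any type

    def declare(self, name, value):
        self._state[name] = value            # register data as restartable "state"
        return value

    def checkpoint(self, path):
        with open(path, "wb") as f:          # serialize the full declared state
            pickle.dump(self._state, f)

    def restore(self, path):
        with open(path, "rb") as f:          # recreate the pre-failure state
            self._state = pickle.load(f)

path = os.path.join(tempfile.gettempdir(), "ckpt_demo.bin")
sim = Checkpointable()
sim.declare("time", 0.125)
sim.declare("solution", [1.0, 2.0, 3.0])
sim.checkpoint(path)

fresh = Checkpointable()                     # e.g., a new process after a power outage
fresh.restore(path)                          # continue from the point of failure
```

The hard part in a real multiphysics code is exactly what this toy elides: making every application-specific type (contact status, material history, etc.) serializable through one uniform interface.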
Evaluation of 1.5-T Cell Flash Memory Total Ionizing Dose Response
NASA Astrophysics Data System (ADS)
Clark, Lawrence T.; Holbert, Keith E.; Adams, James W.; Navale, Harshad; Anderson, Blake C.
2015-12-01
Flash memory is an essential part of systems used in harsh environments, in both terrestrial and aerospace applications subject to TID. This paper presents studies of COTS flash memory TID hardness. While there is substantial literature on flash memory TID response, this work focuses for the first time on 1.5-transistor-per-cell flash memory. The experimental results show hardness varying from about 100 krad(Si) to over 250 krad(Si) depending on the usage model. We explore the circuit and device aspects of the results, based on the extensive reliability literature for this flash memory type. Failure modes indicate both device damage and circuit marginalities. Sector erase failure limits the tolerable dose, but read-only operation allows TID exceeding 200 krad(Si). The failures are analyzed by type.
On Failure in Polycrystalline and Amorphous Brittle Materials
NASA Astrophysics Data System (ADS)
Bourne, N. K.
2009-12-01
The behaviour of brittle materials depends upon discrete deformation mechanisms operating during the loading process. The critical mechanisms determining the behaviour of armour ceramics have not been isolated using traditional ballistics. It has recently become possible to measure strength histories in materials under shock. The data gained for the failed strength of the armour are shown to relate directly to the penetration measured into tiles. Further, the material can be loaded and recovered for post-mortem examination. Failure is by micro-fracture that is a function of the defects, and then by cracking activated by plasticity mechanisms within the grains and failure at grain boundaries in the amorphous intergranular phase. Thus it is the shock-induced plastic yielding of grains at the impact face that determines the later-time penetration through the tile.
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
Bidirectional Cardio-Respiratory Interactions in Heart Failure.
Radovanović, Nikola N; Pavlović, Siniša U; Milašinović, Goran; Kirćanski, Bratislav; Platiša, Mirjana M
2018-01-01
We investigated cardio-respiratory coupling in patients with heart failure by quantification of bidirectional interactions between cardiac (RR intervals) and respiratory signals with complementary measures of time series analysis. Heart failure patients were divided into three groups of twenty, age- and gender-matched, subjects: with sinus rhythm (HF-Sin), with sinus rhythm and ventricular extrasystoles (HF-VES), and with permanent atrial fibrillation (HF-AF). We included patients with an indication for implantation of an implantable cardioverter defibrillator or cardiac resynchronization therapy device. ECG and respiratory signals were simultaneously acquired during 20 min in the supine position at spontaneous breathing frequency in 20 healthy control subjects and in patients before device implantation. We used coherence, Granger causality and cross-sample entropy analysis as complementary measures of bidirectional interactions between RR intervals and respiratory rhythm. In heart failure patients with arrhythmias (HF-VES and HF-AF) there is no coherence between signals (p < 0.01), while in HF-Sin it is reduced (p < 0.05) compared with control subjects. In all heart failure groups causality between signals is diminished, but with significantly stronger causality of the RR signal on the respiratory signal in HF-VES. Cross-sample entropy analysis revealed the strongest synchrony between respiratory and RR signals in the HF-VES group. Besides respiratory sinus arrhythmia, there is another type of cardio-respiratory interaction based on synchrony between cardiac and respiratory rhythm. Both of them are altered in heart failure patients. Respiratory sinus arrhythmia is reduced in HF-Sin patients and vanishes in heart failure patients with arrhythmias. In contrast, in the HF-Sin and HF-VES groups, synchrony increased, probably as a consequence of dominant neural compensatory mechanisms. 
The coupling of cardiac and respiratory rhythm in heart failure patients varies depending on the presence of atrial/ventricular arrhythmias and can be revealed by complementary methods of time series analysis.
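Of the three complementary measures, cross-sample entropy is the least standard. A compact implementation of its usual definition (template length m and tolerance r are generic choices here, not the study's settings) is:

```python
import math

def cross_sample_entropy(u, v, m=2, r=0.2):
    """Cross-SampEn(m, r) = -ln(A/B): B and A are the mean rates at which
    length-m and length-(m+1) templates of u match templates of v within
    Chebyshev tolerance r (r given in the units of the data)."""
    n = min(len(u), len(v))

    def match_rate(length):
        # Same template count (n - m) for both lengths, so A <= B holds
        count, total = 0, 0
        for i in range(n - m):
            for j in range(n - m):
                total += 1
                if max(abs(u[i + k] - v[j + k]) for k in range(length)) <= r:
                    count += 1
        return count / total

    B = match_rate(m)
    A = match_rate(m + 1)
    return float("inf") if A == 0.0 else -math.log(A / B)

# Identical smooth signals are maximally synchronous: low cross-sample entropy
u = [math.sin(0.1 * i) for i in range(200)]
xse = cross_sample_entropy(u, u)
```

Lower values indicate greater synchrony between the two rhythms, which is the sense in which the HF-VES group showed the strongest RR-respiration synchrony.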
Basic Principles of Sea and Swell. A Programmed Unit of Instruction.
ERIC Educational Resources Information Center
Maine Maritime Academy, Castine.
Whether in carrier flight operations, resupply at sea, antisubmarine warfare, amphibious landings, sea search and rescue, or ship routing, sea conditions, at the place and time the operation is being conducted, become vitally important. The success or failure of any operation being conducted in an ocean environment is greatly dependent upon the…
The Failure of Feminist Epistemology
ERIC Educational Resources Information Center
Shelton, Jim D.
2006-01-01
Mankind has generally done its best to pursue the truth, since the beginning of time. Given the unlikely tenets of their ideology, though, today's feminists see the need to distort this pursuit. Therefore, radicals in that camp argue that the sex of the thinker is significant to the idea, that truth depends on its social construction, or that…
ERIC Educational Resources Information Center
Block, Joel
1978-01-01
Failure- and misconduct-prone black and Hispanic high school students were given five weekly sessions of rational-emotive education. Comparisons were made with alternate treatment and on-treatment controls. The rational-emotive groups showed greatest improvement on all dependent variables over an extended period of time. (Author/MFD)
Post-Servicing Mission 4 Flux Calibration of the STIS Echelle Modes
NASA Astrophysics Data System (ADS)
Azalee Bostroem, K.; Aloisi, A.; Proffitt, C.; Osten, R.; Bohlin, R.
2011-01-01
STIS echelle modes show a wavelength-dependent decline in sensitivity with time. While this trend is observed in all STIS spectroscopic modes, the echelle sensitivity is further affected by a time-dependent shift in the blaze function. To improve the echelle flux calibration, new baselines for the echelle sensitivities are derived from post-Servicing Mission 4 (SM4) observations of the Hubble Space Telescope standard star G191-B2B. We present how these baseline sensitivities compare to pre-failure trends; in particular, cases where the new results differ from expectations are highlighted, and anomalous results found in E140H monitoring observations are discussed.
Analysis of Accelerometer Data from a Woven Inflatable Creep Burst Test
NASA Technical Reports Server (NTRS)
James, George H.; Grygier, Michael; Selig, Molly M.
2015-01-01
Accelerometers were used to monitor an inflatable test article during a creep test to failure. The test article experienced impulse events that were classified based on the response of the sensors and their time-dependent manifestation. These impulse events required specialized techniques to process the structural dynamics data, and certain phenomena were identified as worthy of additional study. An assessment of one phenomenon (a frequency near 1000 Hz) showed a time-dependent frequency and an amplitude that increased significantly near the end of the test. These observations are expected to inform future understanding and use of inflatable space structures.
NASA Astrophysics Data System (ADS)
Domínguez, Carlos; García, Rafael A.; Aroca, Marcelo; Carrero, Alicia
2012-02-01
A thorough study of the physical conditions in the Pennsylvania Edge Notch Tensile (PENT) test, such as stress and temperature, is carried out in order to reduce the long test times observed in new bimodal and multimodal polyethylene grades (third and fourth generation grades) which may withstand hundreds or thousands of hours at the standard conditions (80 °C, 2.4 MPa). The results show how, on increasing the temperature up to 90 °C and the applied stress up to 2.8 MPa, the failure time may be reduced by a factor of 6. Special attention should be paid to the n and Q parameters of the Brown and Lu equation, because variations in those parameters could dramatically change the result of the comparison between different systems depending on the temperature and stress value.
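The stress and temperature dependence of failure time described above is commonly modeled with a Brown–Lu type relation, t_f = C · σ^(−n) · exp(Q/RT). The sketch below shows how an acceleration factor between the standard and elevated PENT conditions follows from that relation; the values n = 4 and Q = 100 kJ/mol are illustrative assumptions, not fitted parameters from the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def acceleration_factor(stress1, temp1_c, stress2, temp2_c, n, q):
    """Ratio of failure times t1/t2 for a Brown-Lu type law
    t_f = C * stress**(-n) * exp(q / (R * T))."""
    t1, t2 = temp1_c + 273.15, temp2_c + 273.15
    stress_term = (stress2 / stress1) ** n                     # higher stress shortens t_f
    thermal_term = math.exp((q / R) * (1.0 / t1 - 1.0 / t2))   # higher T shortens t_f
    return stress_term * thermal_term

# Standard PENT conditions (80 C, 2.4 MPa) vs accelerated (90 C, 2.8 MPa),
# with assumed n = 4 and Q = 100 kJ/mol (illustrative values only):
af = acceleration_factor(2.4, 80.0, 2.8, 90.0, n=4, q=100e3)
print(f"failure time shortened by a factor of {af:.1f}")
```

With these assumed parameters the factor comes out near 5, in the same range as the factor of 6 reported; the actual n and Q must be fitted for each polyethylene grade, which is exactly why the abstract warns that variations in those parameters can change comparisons between systems.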
Principles underlying the Fourth Power Nature of Structured Shock Waves
NASA Astrophysics Data System (ADS)
Grady, Dennis
2017-06-01
Steady structured shock waves in materials including metals, glasses, compounds and solid mixtures, when represented through plots of Hugoniot stress against a measure of the strain rate through which the Hugoniot state is achieved, have consistently demonstrated a fourth-power dependence. A perhaps deeper observation is that the product of the energy dissipated through the transition to the Hugoniot state and the time duration of the Hugoniot state event exhibits invariance independent of the Hugoniot amplitude. Invariance of the energy-time product and the fourth-power trend are to first order equivalent. Further, constancy of this energy-time product is observed in other dynamic critical state failure events including spall fracture, dynamic compaction and adiabatic shear failure. The presentation pursues the necessary background exposing the foregoing shock physics observations and explores possible statistical physics principles that may underlie the collective dynamic observations.
Time-dependent Brittle Deformation in Etna Basalt
NASA Astrophysics Data System (ADS)
Heap, M. J.; Baud, P.; Meredith, P. G.; Vinciguerra, S.; Bell, A. F.; Main, I. G.
2008-12-01
Mt Etna is the largest and most active volcano in Europe. Due to the high permeability of its volcanic rocks, the volcanic edifice hosts one of the biggest hydrogeologic reservoirs of Sicily (Ogniben, 1966). Pre-eruptive patterns of flank eruptions, closely monitored by means of ground deformation and seismicity, revealed the slow development of fracture systems at different altitudes, marked by repeated bursts of seismicity and accelerating/decelerating deformation patterns acting over the scale of months to days. The presence of a fluid phase in cracks within rock has been shown to dramatically affect both mechanical and chemical interactions. Chemically, it promotes time-dependent brittle deformation through such mechanisms as stress corrosion cracking that allows rocks to deform at stresses far below their short-term failure strength. Such crack growth is highly non-linear and accelerates towards dynamic failure over extended periods of time, even under constant applied stress; a phenomenon known as 'brittle creep'. Stress corrosion is considered to be responsible for the acceleratory cracking and seismicity prior to volcanic eruptions and is invoked as an important mechanism in forecasting models. Here we report results from a study of time-dependent brittle creep in water-saturated samples of Etna basalt (EB) under triaxial stress conditions (confining pressure of 50 MPa and pore fluid pressure of 20 MPa). Samples of EB were loaded at a constant strain rate of 10⁻⁵ s⁻¹ to a pre-determined percentage of the short-term strength and left to deform under constant stress until failure. Crack damage evolution was monitored throughout each experiment by measuring the independent damage proxies of axial strain, pore volume change and output of acoustic emission (AE) energy, during brittle creep at creep strain rates ranging over four orders of magnitude.
Our data demonstrate that the applied differential stress exerts a crucial influence on both time-to-failure and creep strain rate in EB. Stress-stepping creep experiments were then performed to allow the influence of the effective confining stress to be studied in detail. Experiments were performed under effective stress conditions of 10, 30 and 50 MPa (whilst maintaining a constant pore fluid pressure of 20 MPa). In addition to the purely mechanical influence of water, governed by the effective stress, which shifts the creep strain rate curves to lower strain rates at higher effective stresses, our results also demonstrate that the chemically-driven process of stress corrosion cracking appears to be inhibited at higher effective stress. This results in an increase in the gradient of the creep strain rate curves with increasing effective stress. We suggest that the most likely cause of this change is a decrease in water mobility due to a reduction in crack aperture and an increase in water viscosity at higher pressure. Finally, we show that a theoretical model based on mean-field damage mechanics creep laws is able to reproduce the experimental strain-time relations. Our results indicate that the local changes in the stress field and fluid circulation can have a profound impact on the time-to-failure properties of the basaltic volcanic pile.
Tensile properties of latex paint films with TiO2 pigment
NASA Astrophysics Data System (ADS)
Hagan, Eric W. S.; Charalambides, Maria N.; Young, Christina T.; Learner, Thomas J. S.; Hackney, Stephen
2009-05-01
The tensile properties of latex paint films containing TiO2 pigment were studied with respect to temperature, strain-rate and moisture content. The purpose of performing these experiments was to assist museums in defining safe conditions for modern paintings held in collections. The glass transition temperature of latex paint binders is in close proximity to ambient temperature, resulting in high strain-rate dependence in typical exposure environments. Time dependence of modulus and failure strain is discussed in the context of time-temperature superposition, which was used to extend the experimental time scale. Nonlinear viscoelastic material models are also presented, which incorporate a Prony series with the Ogden or Neo-Hookean hyperelastic function for different TiO2 concentrations.
Schackman, Bruce R; Ribaudo, Heather J; Krambrink, Amy; Hughes, Valery; Kuritzkes, Daniel R; Gulick, Roy M
2007-12-15
Blacks had higher rates of virologic failure than whites on efavirenz-containing regimens in the AIDS Clinical Trials Group (ACTG) A5095 study; preliminary analyses also suggested an association with adherence. We rigorously examined associations over time among race, virologic failure, 4 self-reported adherence metrics, and quality of life (QOL). ACTG A5095 was a double-blind placebo-controlled study of treatment-naive HIV-positive patients randomized to zidovudine/lamivudine/abacavir versus zidovudine/lamivudine plus efavirenz versus zidovudine/lamivudine/abacavir plus efavirenz. Virologic failure was defined as confirmed HIV-1 RNA ≥200 copies/mL at ≥16 weeks on study. The zidovudine/lamivudine/abacavir arm was discontinued early because of virologic inferiority. We examined virologic failure differences for efavirenz-containing arms according to missing 0 (adherent) versus at least 1 dose (nonadherent) during the past 4 days, alternative self-reported adherence metrics, and QOL. Analyses used Fisher exact tests, log-rank tests, and Cox proportional hazards models. The study population included white (n = 299), black (n = 260), and Hispanic (n = 156) patients with ≥1 adherence evaluation. Virologic failure was associated with week 12 nonadherence during the past 4 days for blacks (53% nonadherent failed vs. 25% adherent; P < 0.001) but not for whites (20% nonadherent failed vs. 20% adherent; P = 0.91). After adjustment for baseline covariates and treatment, there was a significant interaction between race and week 12 adherence (P = 0.02). In time-dependent Cox models using self-reports over time to reflect recent adherence, there was a significantly higher failure risk for nonadherent subjects (hazard ratio [HR] = 2.07; P < 0.001). Significant race-adherence interactions were seen in additional models of adherence: missing at least 1 medication dose ever (P = 0.04), past month (P < 0.01), or past weekend (P = 0.05).
Lower QOL was significantly associated with virologic failure (P < 0.001); there was no evidence of an interaction between QOL and race (P = 0.39) or adherence (P = 0.51) in predicting virologic failure. There was a greater effect of nonadherence on virologic failure in blacks given efavirenz-containing regimens than in whites. Self-reported adherence and QOL are independent predictors of virologic failure.
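Time-dependent Cox models of the kind used in this analysis ingest data in "counting-process" format: one row per (start, stop] interval over which a covariate (here, the most recent adherence self-report) is held constant, with the event flag attached only to the final interval. A minimal sketch of that restructuring, with hypothetical field layout and example values:

```python
def to_counting_process(measure_weeks, values, end_week, event):
    """Expand one subject's repeated covariate measurements into
    (start, stop, covariate, event) rows; the event indicator is
    attached only to the subject's last interval."""
    rows = []
    for i, (week, value) in enumerate(zip(measure_weeks, values)):
        is_last = (i + 1 == len(measure_weeks))
        stop = end_week if is_last else measure_weeks[i + 1]
        rows.append((week, stop, value, event if is_last else 0))
    return rows

# Hypothetical subject: adherence self-reports at weeks 0 (adherent = 1)
# and 12 (nonadherent = 0), virologic failure confirmed at week 24:
rows = to_counting_process([0, 12], [1, 0], end_week=24, event=1)
print(rows)  # [(0, 12, 1, 0), (12, 24, 0, 1)]
```

Each row is treated as a separate at-risk interval by the partial likelihood, which is what lets "recent adherence" change value over follow-up rather than being fixed at baseline.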
Progression in children with intestinal failure at a referral hospital in Medellín, Colombia.
Contreras-Ramírez, M M; Giraldo-Villa, A; Henao-Roldan, C; Martínez-Volkmar, M I; Valencia-Quintero, A F; Montoya-Delgado, D C; Ruiz-Navas, P; García-Loboguerrero, F
2016-01-01
Patients with intestinal failure are unable to maintain adequate nutrition and hydration due to a reduction in the functional area of the intestine. Different strategies have the potential to benefit these patients by promoting intestinal autonomy, enhancing quality of life, and increasing survival. To describe the clinical characteristics of children with intestinal failure and disease progression in terms of intestinal autonomy and survival. A retrospective study was conducted, evaluating 33 pediatric patients with intestinal failure that were hospitalized within the time frame of December 2005 and December 2013 at a tertiary care referral center. Patient characteristics were described upon hospital admission, estimating the probability of achieving intestinal autonomy and calculating the survival rate. Patient median age upon hospital admission was 2 months (interquartile range [IQR]: 1-4 months) and 54.5% of the patients were boys. Intestinal autonomy was achieved in 69.7% of the cases with a median time of 148 days (IQR: 63 - 431 days), which decreased to 63 days in patients with a spared ileocecal valve. Survival was 91% during a median follow-up of 281 days (IQR: 161 - 772 days). Medical management of patients with intestinal failure is complex. Nutritional support and continuous monitoring are of the utmost importance and long-term morbidity and mortality depends on the early recognition and management of the associated complications. Copyright © 2016. Published by Masson Doyma México S.A.
Cascading Failures as Continuous Phase-Space Transitions
Yang, Yang; Motter, Adilson E.
2017-12-14
In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in a closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.
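For contrast with the continuous phase-space model described above, the quasi-steady-state view of a cascade can be sketched as a simple threshold process: when a line fails, its flow is shed onto the survivors, which may push further lines past capacity. The uniform-redistribution rule below is a toy illustration of that iteration, not the paper's power-flow model.

```python
def cascade(flows, caps, trigger):
    """Iteratively fail overloaded lines: each round, the flow carried by
    newly failed lines is redistributed uniformly over surviving lines."""
    flows = list(flows)
    failed = {trigger}
    pending = [trigger]
    while pending:
        shed = sum(flows[i] for i in pending)
        for i in pending:
            flows[i] = 0.0
        survivors = [i for i in range(len(flows)) if i not in failed]
        if not survivors:
            break  # total blackout: nothing left to carry the load
        for i in survivors:
            flows[i] += shed / len(survivors)   # uniform redistribution
        pending = [i for i in survivors if flows[i] > caps[i]]
        failed |= set(pending)
    return failed

# One tripped line drags down the whole 4-line toy grid:
print(cascade([4, 4, 4, 4], caps=[5, 5, 5, 10], trigger=0))  # {0, 1, 2, 3}
```

Even this crude rule reproduces the qualitative point of the abstract: whether a single trip stays local or propagates depends on how much headroom (capacity minus flow) the surviving lines have.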
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems; the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
Sakashita, Kazumi; Matthews, Wallace J; Yamamoto, Loren G
2013-06-01
Children and youth with special health care needs (CYSHCN) are complex and often dependent on electrical devices (technoelectric dependent) for life support/maintenance. Because they are reliant on electricity and electricity failure is common, the purpose of this study was to survey their preparedness for electricity failure. Parents and caregivers of technoelectric CYSHCN were asked to complete a preparedness questionnaire. We collected a convenience sample of 50 patients. These 50 patients utilized a total of 166 electrical devices. A home ventilator, an oxygen concentrator, or a feeding pump was identified as the most important device in 35 of the 50 patients, yet only 19 of the 35 patients could confirm that this device had a battery backup. Also, 22 of the 50 patients had a prolonged power failure preparedness plan. Technoelectric-dependent CYSHCN are poorly prepared for electrical power failure.
Clarke, S G; Phillips, A T M; Bull, A M J; Cobb, J P
2012-06-01
The impact of anatomical variation and surgical error on excessive wear and loosening of the acetabular component of large diameter metal-on-metal hip arthroplasties was measured using a multi-factorial analysis through 112 different simulations. Each surgical scenario was subject to eight different daily loading activities using finite element analysis. Excessive wear appears to be predominantly dependent on cup orientation, with inclination error having a higher influence than version error, according to the study findings. Acetabular cup loosening, as inferred from initial implant stability, appears to depend predominantly on factors concerning the area of cup-bone contact, specifically the level of cup seating achieved and the individual patient's anatomy. The extent of press fit obtained at time of surgery did not appear to influence either mechanism of failure in this study. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pandemic influenza and critical infrastructure dependencies: possible impact on hospitals.
Itzwerth, Ralf L; Macintyre, C Raina; Shah, Smita; Plant, Aileen J
2006-11-20
Hospitals will be particularly challenged when pandemic influenza spreads. Within the health sector in general, existing pandemic plans focus on health interventions to control outbreaks. The critical relationship between the health sector and other sectors is not well understood and addressed. Hospitals depend on critical infrastructure external to the organisation itself. Existing plans do not adequately consider the complexity and interdependency of systems upon which hospitals rely. The failure of one such system can trigger a failure of another, causing cascading breakdowns. Health is only one of the many systems that struggle at maximum capacity during "normal" times, as current business models operate with no or minimal "excess" staff and have become irreducible operations. This makes interconnected systems highly vulnerable to acute disruptions, such as a pandemic. Companies use continuity plans and highly regulated business continuity management to overcome process interruptions. This methodology can be applied to hospitals to minimise the impact of a pandemic.
Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P.
2014-01-01
Motivated by the need from our on-going environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) under the proportional hazards model framework. The asymptotic properties of the proposed estimator were derived. Simulation studies were conducted to evaluate the small-sample performance of our proposed method. Our analyses show that the proposed estimator and design is more efficient than the current default approach and other competing approaches. Applying the proposed approach with the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419
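The two-part sampling scheme described here, a random subcohort plus supplemental selection of failure subjects, can be sketched generically as follows. This is a schematic illustration of selectively enriching the sample with failures, not the MoBa design or the EMSELE estimator itself; the field names are hypothetical.

```python
import random

def draw_ods_sample(cohort, n_subcohort, n_supplement, seed=0):
    """cohort: list of (subject_id, event_indicator), event = 1 for observed
    failures. Returns a simple random subcohort plus a supplemental sample
    drawn only from failures not already in the subcohort."""
    rng = random.Random(seed)
    subcohort = rng.sample(cohort, n_subcohort)
    chosen = {sid for sid, _ in subcohort}
    extra_failures = [(sid, ev) for sid, ev in cohort
                      if ev == 1 and sid not in chosen]
    supplement = rng.sample(extra_failures,
                            min(n_supplement, len(extra_failures)))
    return subcohort, supplement

# Hypothetical cohort of 1000 subjects with 200 observed failures; the
# expensive exposure is measured on only 100 + 50 selected subjects:
cohort = [(i, 1 if i < 200 else 0) for i in range(1000)]
sub, supp = draw_ods_sample(cohort, n_subcohort=100, n_supplement=50)
print(len(sub), len(supp), all(ev == 1 for _, ev in supp))
```

Because selection probabilities depend on the outcome, a naive analysis of the pooled sample would be biased; estimators such as the EMSELE of this paper exist precisely to account for this biased sampling under the proportional hazards model.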
Snow fracture: From micro-cracking to global failure
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2017-04-01
Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. Therefore, we studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking. The AE allow studying the ongoing progressive failure in a non-destructive way. We performed fully load-controlled failure experiments on snow samples presenting a weak layer and recorded the generated AE. The size and frequency of the generated AE increased before failure, revealing an acceleration of the damage process with increased size and frequency of damage and/or microscopic cracks. The AE energy was power-law distributed and the exponent (b-value) decreased approaching failure. The waiting time followed an exponential distribution with increasing exponential coefficient λ before failure. The decrease of the b-value and the increase of λ correspond to a change in the event distribution statistics, indicating a transition from homogeneously distributed uncorrelated damage producing mostly small AE to localized damage, which causes larger correlated events and leads to brittle failure. We observed brittle failure for the fast experiment and a more ductile behavior for the slow experiments. This rate dependence was also reflected in the AE signature. In the slow experiments the b-value and λ were almost constant, and the energy rate increase was moderate, indicating that the damage process was in a stable state, suggesting that the damage and healing processes were balanced. On a shorter time scale, however, the AE parameters varied, indicating that the damage process was not steady but consisted of a sum of small bursts.
We assume that the bursts may have been generated by cascades of correlated micro-cracks caused by localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small scale damage and, therefore, the total failure of the sample.
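The power-law exponent of the AE energy distribution (the b-value analogue tracked above) is commonly estimated by maximum likelihood rather than by fitting a binned histogram; this is the Hill/Aki estimator. A sketch on synthetic power-law energies with a known exponent, purely to illustrate the estimator:

```python
import math
import random

def ml_power_law_exponent(energies, e_min):
    """Hill/Aki maximum-likelihood estimate of alpha for a power-law
    density p(E) ~ E**-(1 + alpha), restricted to E >= e_min."""
    logs = [math.log(e / e_min) for e in energies if e >= e_min]
    return len(logs) / sum(logs)

# Synthetic AE energies with known exponent alpha = 1.5, generated by
# inverse-transform sampling of a Pareto distribution:
rng = random.Random(0)
alpha_true, e_min = 1.5, 1.0
energies = [e_min * rng.random() ** (-1.0 / alpha_true) for _ in range(5000)]
print(round(ml_power_law_exponent(energies, e_min), 2))  # close to 1.5
```

In monitoring applications the estimate is recomputed in a sliding window of recent events, so a systematic drop in the estimated exponent before failure (as reported here) can serve as a precursory signal.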
Alani, Amir M.; Faramarzi, Asaad
2015-01-01
In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes. PMID:26068092
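A crude Monte Carlo analogue of this kind of analysis (not the paper's SFEM, which propagates the random variables through a finite element model rather than a closed-form limit state) shows how a time-dependent corrosion model turns into a rising probability of failure. Every distribution and the corrosion law below are assumptions for illustration only:

```python
import random

def prob_failure(t_years, n_trials=20000, seed=1):
    """Fraction of sampled pipes whose corroded wall can no longer carry
    the sampled load at age t (illustrative limit state only)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        wall0 = rng.gauss(50.0, 3.0)            # initial wall thickness, mm
        rate = abs(rng.gauss(0.8, 0.2))         # corrosion rate coefficient
        load = abs(rng.gauss(14.0, 3.0))        # demand on the section
        wall_t = wall0 - rate * t_years ** 0.8  # power-law corrosion growth
        capacity = 0.5 * wall_t                 # capacity ~ residual wall
        if capacity < load:
            failures += 1
    return failures / n_trials

print(prob_failure(10), prob_failure(40))  # probability of failure rises with age
```

The "remaining safe life" is then the age at which this curve crosses an acceptable risk threshold; the SFEM plays the same role but evaluates the capacity side with a stress analysis instead of the proportionality assumed here.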
Challenges in Resolution for IC Failure Analysis
NASA Astrophysics Data System (ADS)
Martinez, Nick
1999-10-01
Resolution is becoming more and more of a challenge in the world of Failure Analysis in integrated circuits. This is a result of the ongoing size reduction in microelectronics. Determining the cause of a failure depends upon being able to find the responsible defect. The time it takes to locate a given defect is extremely important so that proper corrective actions can be taken. The limits of current microscopy tools are being pushed. With sub-micron feature sizes and even smaller killing defects, optical microscopes are becoming obsolete. With scanning electron microscopy (SEM), the resolution is high but the voltage involved can make these small defects transparent due to the large mean-free path of incident electrons. In this presentation, I will give an overview of the use of inspection methods in Failure Analysis and show example studies of my work as an intern student at Texas Instruments. 1. Work at Texas Instruments, Stafford, TX, was supported by TI. 2. Work at Texas Tech University was supported by NSF Grant DMR9705498.
White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret
2016-01-01
Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552
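The schedule-driven part of the detection delays estimated above follows from simple arithmetic: with tests every k months, a failure that becomes microbiologically detectable in month m is first caught at the next scheduled test. The sketch below covers only this scheduling delay; the additional delay from the lower sensitivity of smear relative to culture is not modeled.

```python
import math

def detection_month(failure_month, interval_months):
    """First scheduled test (at months k, 2k, 3k, ...) falling at or
    after the month in which the failure becomes detectable."""
    return math.ceil(failure_month / interval_months) * interval_months

# A failure becoming detectable in month 3 of the final treatment year:
print(detection_month(3, 1))  # caught at month 3 by monthly testing
print(detection_month(3, 2))  # caught at month 4 by bimonthly testing
```

Averaged over failure times, halving the testing frequency adds roughly half an interval of expected delay, which compounds with the sensitivity penalty of smear to produce the multi-month delays reported in the pooled analysis.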
Prediction of failure pressure and leak rate of stress corrosion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.; Kasza, K.; Park, J. Y.
2002-06-24
An "equivalent rectangular crack" approach was employed to predict rupture pressures and leak rates through laboratory generated stress corrosion cracks and steam generator tubes removed from the McGuire Nuclear Station. Specimen flaws were sized by post-test fractography in addition to a pre-test advanced eddy current technique. The predicted and observed test data on rupture and leak rate are compared. In general, the test failure pressures and leak rates are closer to those predicted on the basis of fractography than on nondestructive evaluation (NDE). However, the predictions based on NDE results are encouraging, particularly because they have the potential to determine a more detailed geometry of ligamented cracks, from which failure pressure and leak rate can be more accurately predicted. One test specimen displayed a time-dependent increase of leak rate under constant pressure.
Hattori, Yusuke; Ishibashi, Kohei; Noda, Takashi; Okamura, Hideo; Kanzaki, Hideaki; Anzai, Toshihisa; Yasuda, Satoshi; Kusano, Kengo
2017-09-01
We describe the case of a 37-year-old woman who presented with complete right bundle branch block and right axis deviation. She was admitted to our hospital due to severe heart failure and was dependent on inotropic agents. Cardiac resynchronization therapy was initiated but did not improve her condition. After the optimization of the pacing timing, we performed earlier right ventricular pacing, which led to an improvement of her heart failure. Earlier right ventricular pacing should be considered in patients with complete right bundle branch block and right axis deviation when cardiac resynchronization therapy is not effective.
Structural health monitoring of wind turbine blades : SE 265 Final Project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barkley, W. C.; Jacobs, Laura D.; Rutherford, A. C.
2006-03-23
ACME Wind Turbine Corporation has contacted our dynamic analysis firm regarding structural health monitoring of their wind turbine blades. ACME has had several failures in previous years. Examples are shown in Figure 1. These failures have resulted in economic loss for the company due to down time of the turbines (lost revenue) and repair costs. Blade failures can occur in several modes, which may depend on the type of construction and load history. Cracking and delamination are some typical modes of blade failure. ACME warranties its turbines and wishes to decrease the number of blade failures they have to repair and replace. The company wishes to implement a real time structural health monitoring system in order to better understand when blade replacement is necessary. Because of warranty costs incurred to date, ACME is interested in either changing the warranty period for the blades in question or predicting imminent failure before it occurs. ACME's current practice is to increase the number of physical inspections when blades are approaching the end of their fatigue lives. Implementation of an in situ monitoring system would eliminate or greatly reduce the need for such physical inspections. Another benefit of such a monitoring system is that the life of any given component could be extended since real conditions would be monitored. The SHM system designed for ACME must be able to operate while the wind turbine is in service. This means that wireless communication options will likely be implemented. Because blade failures occur due to cyclic stresses in the blade material, the sensing system will focus on monitoring strain at various points.
NASA Astrophysics Data System (ADS)
Reid, Mark; Iverson, Richard; Brien, Dianne; Iverson, Neal; LaHusen, Richard; Logan, Matthew
2017-04-01
Shallow landslides and ensuing debris flows are a common hazard worldwide, yet forecasting their initiation at a specific site is challenging. These challenges arise, in part, from diverse near-surface hydrologic pathways under different wetting conditions, 3D failure geometries, and the effects of suction in partially saturated soils. Simplistic hydrologic models typically used for regional hazard assessment disregard these complexities. As an alternative to field studies where the effects of these governing factors can be difficult to isolate, we used the USGS debris-flow flume to conduct controlled, field-scale landslide initiation experiments. Using overhead sprinklers or groundwater injectors on the flume bed, we triggered failures under three different wetting conditions: groundwater inflow from below, prolonged moderate-intensity precipitation, and bursts of high-intensity precipitation. Failures occurred in 6 m3 (0.65-m thick and 2-m wide) prisms of loamy sand on a 31° slope; these field-scale failures enabled realistic incorporation of nonlinear scale-dependent effects such as soil suction. During the experiments, we monitored soil deformation, variably saturated pore pressures, and moisture changes using ~50 sensors sampling at 20 Hz. From ancillary laboratory tests, we determined shear strength, saturated hydraulic conductivities, and unsaturated moisture retention characteristics. The three different wetting conditions noted above led to different hydrologic pathways and influenced instrumental responses and failure timing. During groundwater injection, pore-water pressures increased from the bed of the flume upwards into the sediment, whereas prolonged moderate infiltration wet the sediment from the ground surface downward. In both cases, pore pressures acting on the impending failure surface slowly rose until abrupt failure.
In contrast, a burst of intense sprinkling caused rapid failure without precursory development of widespread positive pore pressures. Using coupled 2D variably saturated groundwater flow modeling and 3D limit-equilibrium analyses, we simulated the observed hydrologic behaviors and the time evolution of changes in factors of safety. Our measured parameters successfully reproduced pore pressure observations without calibration. We also quantified the mechanical effects of 3D geometry and unsaturated soil suction on stability. Although suction effects appreciably increased the stability of drier sediment, they were dampened (to <10% increase) in wetted sediment. 3D geometry effects from the lateral margins consistently increased factors of safety by >20% in wet or dry sediment. Importantly, both 3D and suction effects enabled more accurate simulation of failure times. Without these effects, failure timing and/or back-calculated shear strengths would be markedly incorrect. Our results indicate that simplistic models could not consistently predict the timing of slope failure given diverse hydrologic pathways. Moreover, high frequency monitoring (with sampling periods < ~60 s) would be required to measure and interpret the effects of rapid hydrologic triggers, such as intense rain bursts.
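The stability calculations summarized above rest on limit-equilibrium factor-of-safety analysis. A minimal 1D sketch (the classic infinite-slope approximation, not the authors' 3D model; all parameter values are illustrative) shows how pore pressure and suction enter the factor of safety:

```python
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, depth, slope_deg, u):
    """Factor of safety for an infinite slope, a 1D simplification of
    the 3D limit-equilibrium analyses described above.

    c_eff    : effective cohesion (kPa)
    phi_deg  : effective friction angle (degrees)
    gamma    : soil unit weight (kN/m^3)
    depth    : vertical depth to the slip surface (m)
    slope_deg: slope angle (degrees)
    u        : pore-water pressure on the slip surface (kPa);
               negative values represent suction, which adds strength
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2      # total normal stress
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)  # driving stress
    resisting = c_eff + (normal_stress - u) * math.tan(phi)  # Mohr-Coulomb strength
    return resisting / shear_stress
```

With a geometry like the experiments above (31° slope, 0.65 m depth), rising pore pressure drives FS downward toward 1, while suction (negative u) raises it, mirroring the stabilizing effect the experiments attribute to drier sediment.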
Level of Automation and Failure Frequency Effects on Simulated Lunar Lander Performance
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Ramirez, Margarita
2014-01-01
A human-in-the-loop experiment was conducted at the NASA Ames Research Center Vertical Motion Simulator, where instrument-rated pilots completed a simulated terminal descent phase of a lunar landing. Ten pilots participated in a 2 x 2 mixed design experiment, with level of automation as the within-subjects factor and failure frequency as the between-subjects factor. The two evaluated levels of automation were high (fully automated landing) and low (manually controlled landing). During test trials, participants were exposed to either a high number of failures (75% failure frequency) or a low number of failures (25% failure frequency). In order to investigate the pilots' sensitivity to changes in level of automation and failure frequency, the dependent measure selected for this experiment was accuracy of failure diagnosis, from which D Prime and Decision Criterion were derived. For each of the dependent measures, no significant difference was found for level of automation, and no significant interaction was detected between level of automation and failure frequency. A significant effect was identified for failure frequency, suggesting that failure frequency has a significant effect on pilots' sensitivity to failure detection and diagnosis. Participants were more likely to correctly identify and diagnose failures if they experienced the higher failure frequency, regardless of level of automation.
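The D Prime and Decision Criterion measures mentioned above come from signal detection theory. A short sketch with hypothetical counts (the log-linear correction is a common convention and an assumption here, not necessarily what the study used):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and decision criterion (c) from a
    failure-diagnosis confusion table via standard signal detection
    theory. Adding 0.5 to each cell (log-linear correction) guards
    against infinite z-scores when a rate would be 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)          # separation of distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion
```

A symmetric table (equal misses and false alarms) yields a criterion near zero, i.e., no bias toward reporting or withholding a failure diagnosis.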
Tong, W; Kiyokawa, H; Soos, T J; Park, M S; Soares, V C; Manova, K; Pollard, J W; Koff, A
1998-09-01
The involvement of cyclin-dependent kinase inhibitors in differentiation remains unclear: are the roles of cyclin-dependent kinase inhibitors restricted to cell cycle arrest; or also required for completion of the differentiation program; or both? Here, we report that differentiation of luteal cells can be uncoupled from growth arrest in p27-deficient mice. In these mice, female-specific infertility correlates with a failure of embryos to implant at embryonic day 4.5. We show by ovarian transplant and hormone reconstitution experiments that failure to regulate luteal cell estradiol is one physiological mechanism for infertility in these mice. This failure is not due to a failure of p27-deficient granulosa cells to differentiate after hormonal stimulation; P450scc, a marker for luteal progesterone biosynthesis, is expressed and granulosa cell-specific cyclin D2 expression is reduced. However, unlike their wild-type counterparts, p27-deficient luteal cells continue to proliferate for up to 3.5 days after hormonal stimulation. By day 5.5, however, these cells withdraw from the cell cycle, suggesting that p27 plays a role in the early events regulating withdrawal of cells from the cell cycle. We have further shown that in the absence of this timely withdrawal, estradiol regulation is perturbed, explaining in part how fertility is compromised at the level of implantation. These data support the interpretation of our previous observations on oligodendrocyte differentiation about a role for p27 in establishing the nonproliferative state, which in some cases (oligodendrocytes) is required for differentiation, whereas in other cases it is required for the proper functioning of a differentiated cell (luteal cell).
Puvanesarajah, Varun; Amin, Raj; Qureshi, Rabia; Shafiq, Babar; Stein, Ben; Hassanzadeh, Hamid; Yarboro, Seth
2018-06-01
Proximal femur fractures are one of the most common fractures observed in dialysis-dependent patients. Given the large comorbidity burden present in this patient population, more information is needed regarding post-operative outcomes. The goal of this study was to assess morbidity and mortality following operative fixation of femoral neck fractures in the dialysis-dependent elderly. The full set of Medicare data from 2005 to 2014 was retrospectively analyzed. Elderly patients with femoral neck fractures were selected. Patients were stratified based on dialysis dependence. Post-operative morbidity and mortality outcomes were compared between the two populations. Adjusted odds were calculated to determine the effect of dialysis dependence on outcomes. A total of 320,629 patients met the inclusion criteria. Of dialysis-dependent patients, 1504 underwent internal fixation and 2662 underwent arthroplasty. For both surgical cohorts, dialysis dependence was found to be associated with at least 1.9 times greater odds of mortality within 1 and 2 years post-operatively. Blood transfusions within 90 days and infections within 2 years were significantly increased in the dialysis-dependent study cohort. Dialysis dependence alone did not contribute to increased mechanical failure or major medical complications. Regardless of the surgery performed, dialysis dependence is a significant risk factor for major post-surgical morbidity and mortality after operative treatment of femoral neck fractures in this population. Increased mechanical failure in the internal fixation group was not observed. The increased risk associated with caring for this population should be understood when considering surgical intervention and counseling patients.
Woehrle, Holger; Cowie, Martin R; Eulenburg, Christine; Suling, Anna; Angermann, Christiane; d'Ortho, Marie-Pia; Erdmann, Erland; Levy, Patrick; Simonds, Anita K; Somers, Virend K; Zannad, Faiez; Teschler, Helmut; Wegscheider, Karl
2017-08-01
This on-treatment analysis was conducted to facilitate understanding of mechanisms underlying the increased risk of all-cause and cardiovascular mortality in heart failure patients with reduced ejection fraction and predominant central sleep apnoea randomised to adaptive servo ventilation versus the control group in the SERVE-HF trial. Time-dependent on-treatment analyses were conducted (unadjusted and adjusted for predictive covariates). A comprehensive, time-dependent model was developed to correct for asymmetric selection effects (to minimise bias). The comprehensive model showed increased cardiovascular death hazard ratios during adaptive servo ventilation usage periods, slightly lower than those in the SERVE-HF intention-to-treat analysis. Self-selection bias was evident. Patients randomised to adaptive servo ventilation who crossed over to the control group were at higher risk of cardiovascular death than controls, while control patients with crossover to adaptive servo ventilation showed a trend towards lower risk of cardiovascular death than patients randomised to adaptive servo ventilation. Cardiovascular risk did not increase as nightly adaptive servo ventilation usage increased. On-treatment analysis showed similar results to the SERVE-HF intention-to-treat analysis, with an increased risk of cardiovascular death in heart failure with reduced ejection fraction patients with predominant central sleep apnoea treated with adaptive servo ventilation. Bias is inevitable and needs to be taken into account in any kind of on-treatment analysis in positive airway pressure studies. Copyright ©ERS 2017.
A Nonlinear Viscoelastic Model for Ceramics at High Temperatures
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.
2002-01-01
High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
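The random-failure-strain idea in this abstract can be sketched outside of ABAQUS. The following Monte Carlo toy model (illustrative parameters, not silicon nitride data) samples lognormal failure strains for a set of elements, lets strain grow linearly in time, and reports the time at which a critical fraction of elements has been deleted:

```python
import math
import random

def creep_rupture_time(n_elements, strain_rate, median_strain, sigma,
                       critical_fraction=0.5, seed=None):
    """Toy version of the element-deletion scheme described above: each
    element gets a failure strain drawn from a lognormal distribution,
    strain grows as strain_rate * t, and rupture is declared once a
    critical fraction of elements has exceeded its threshold.
    """
    rng = random.Random(seed)
    mu = math.log(median_strain)  # lognormal median = exp(mu)
    thresholds = sorted(rng.lognormvariate(mu, sigma)
                        for _ in range(n_elements))
    k = int(critical_fraction * n_elements)
    # The k-th smallest threshold is reached at time threshold / strain_rate.
    return thresholds[k] / strain_rate
```

Running this across seeds reproduces the run-to-run scatter in creep rupture lives that motivates the statistical treatment in the abstract.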
Microseismic Signature of Magma Failure: Testing Failure Forecast in Heterogeneous Material
NASA Astrophysics Data System (ADS)
Vasseur, J.; Lavallee, Y.; Hess, K.; Wassermann, J. M.; Dingwell, D. B.
2012-12-01
Volcanoes exhibit a range of seismic precursors prior to eruptions. This range of signals derives from different processes, which, if quantified, may tell us when and how a volcano will erupt: effusively or explosively. This quantification can be performed in the laboratory. Here we investigated the signals associated with the deformation and failure of single-phase silicate liquids compared to multi-phase magmas containing pores and crystals as heterogeneities. For the past decades, magmas have been simplified as viscoelastic fluids with grossly predictable failure, following an analysis of the stress and strain rate conditions in volcanic conduits. Yet it is clear that the way magmas fail is not unique, and evidence increasingly illustrates the role of heterogeneities in the process of magmatic fragmentation. In such multi-phase magmas, failure cannot be predicted using current rheological laws. Microseismicity, as detected in the laboratory by analogous acoustic emissions (AE), can be used to monitor fracture initiation and propagation, and thus provides invaluable information to characterise the process of brittle failure underlying explosive eruptions. Triaxial press experiments on different synthesised and natural glass samples have been performed to investigate the acoustic signature of failure. We observed that the failure of single-phase liquids occurs without much strain and is preceded by the constant nucleation, propagation and coalescence of cracks, as demonstrated by the monitored AE. In contrast, the failure of multi-phase magmas depends on the applied stress and is strain dependent. The path dependence of magma failure is nonetheless accompanied by a supra-exponential acceleration in released AEs. Analysis of the released AEs following the material Failure Forecast Method (FFM) suggests that the predictability of failure is enhanced by the presence of heterogeneities in magmas. We discuss our observations in terms of volcanic scenarios.
Rational temporal predictions can underlie apparent failures to delay gratification.
McGuire, Joseph T; Kable, Joseph W
2013-04-01
An important category of seemingly maladaptive decisions involves failure to postpone gratification. A person pursuing a desirable long-run outcome may abandon it in favor of a short-run alternative that has been available all along. Here we present a theoretical framework in which this seemingly irrational behavior emerges from stable preferences and veridical judgments. Our account recognizes that decision makers generally face uncertainty regarding the time at which future outcomes will materialize. When timing is uncertain, the value of persistence depends crucially on the nature of a decision maker's prior temporal beliefs. Certain forms of temporal beliefs imply that a delay's predicted remaining length increases as a function of time already waited. In this type of situation, the rational, utility-maximizing strategy is to persist for a limited amount of time and then give up. We show empirically that people's explicit predictions of remaining delay lengths indeed increase as a function of elapsed time in several relevant domains, implying that temporal judgments offer a rational basis for limiting persistence. We then develop our framework into a simple working model and show how it accounts for individual differences in a laboratory task (the well-known "marshmallow test"). We conclude that delay-of-gratification failure, generally viewed as a manifestation of limited self-control capacity, can instead arise as an adaptive response to the perceived statistics of one's environment.
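The key quantitative claim above, that predicted remaining delay can grow with time already waited, is easy to illustrate. Under a heavy-tailed (Pareto) prior, the conditional expected remaining delay increases linearly with elapsed time, whereas under a memoryless exponential prior it is constant. The functions below are a sketch of that contrast with illustrative closed forms, not the paper's fitted model:

```python
def expected_remaining_pareto(elapsed, alpha):
    """E[T - t | T > t] for a Pareto delay prior with shape alpha > 1
    (valid once elapsed exceeds the scale parameter): the longer you
    have waited, the longer you should expect to keep waiting."""
    return elapsed / (alpha - 1.0)

def expected_remaining_exponential(elapsed, mean_delay):
    """Memoryless case: elapsed waiting time carries no information,
    so limited persistence gains nothing."""
    return mean_delay
```

In the Pareto case, the rational strategy of persisting for a limited time and then giving up falls out directly: the predicted cost of continuing grows as the wait drags on.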
Optimizing network connectivity for mobile health technologies in sub-Saharan Africa.
Siedner, Mark J; Lankowski, Alexander; Musinga, Derrick; Jackson, Jonathon; Muzoora, Conrad; Hunt, Peter W; Martin, Jeffrey N; Bangsberg, David R; Haberer, Jessica E
2012-01-01
Mobile health (mHealth) technologies hold incredible promise to improve healthcare delivery in resource-limited settings. Network reliability across large catchment areas can be a major challenge. We performed an analysis of network failure frequency as part of a study of real-time adherence monitoring in rural Uganda. We hypothesized that the addition of short messaging service (SMS) to the standard cellular network modality (GPRS) would reduce network disruptions and improve transmission of data. Participants were enrolled in a study of real-time adherence monitoring in southwest Uganda. In June 2011, we began using Wisepill devices that transmit data each time the pill bottle is opened. We defined network failures as medication interruptions of >48 hours duration that were transmitted when network connectivity was re-established. During the course of the study, we upgraded devices from GPRS to GPRS+SMS compatibility. We compared network failure rates between GPRS and GPRS+SMS periods and created geospatial maps to graphically demonstrate patterns of connectivity. One hundred fifty-seven participants met inclusion criteria of seven days of GPRS and seven days of GPRS+SMS observation time. Seventy-three percent were female, median age was 40 years (IQR 33-46), 39% reported >1-hour travel time to clinic and 17% had home electricity. One hundred one had GPS coordinates recorded and were included in the geospatial maps. The median number of network failures per person-month was 1.5 (IQR 1.0-2.2) for the GPRS modality and 0.3 (IQR 0-0.9) for GPRS+SMS (mean difference 1.2, 95%CI 1.0-1.3, p-value<0.0001). Improvements in network connectivity were notable throughout the region. Study costs increased by approximately $1USD per person-month. Addition of SMS to standard GPRS cellular network connectivity can significantly reduce network connection failures for mobile health applications in remote areas. 
Projects depending on mobile health data in resource-limited settings should consider this upgrade to optimize mHealth applications.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
2015-07-15
Rainflow algorithms are popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
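Rainflow counting itself can be sketched in a few lines. Below is a three-point implementation in the style of ASTM E1049 operating on a load (or junction-temperature) history; it is illustrative only and omits the equivalent-mean-temperature correction of Nagode et al. that the paper applies:

```python
def rainflow_cycles(series):
    """Return a list of (range, mean, count) tuples, where count is
    1.0 for a full cycle and 0.5 for a half cycle."""
    # Reduce the history to its reversals (turning points).
    rev = [series[0]]
    for x in series[1:]:
        if len(rev) >= 2 and (rev[-1] - rev[-2]) * (x - rev[-1]) > 0:
            rev[-1] = x          # still moving in the same direction
        elif x != rev[-1]:
            rev.append(x)
    cycles = []
    stack = []
    for r in rev:
        stack.append(r)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            if len(stack) == 3:
                # Range contains the starting point: count a half cycle.
                cycles.append((y_rng, (stack[0] + stack[1]) / 2, 0.5))
                stack.pop(0)
            else:
                # Interior range closed: count a full cycle.
                cycles.append((y_rng, (stack[-2] + stack[-3]) / 2, 1.0))
                last = stack.pop()
                stack.pop(); stack.pop()
                stack.append(last)
    # Remaining residue counts as half cycles.
    for i in range(len(stack) - 1):
        cycles.append((abs(stack[i + 1] - stack[i]),
                       (stack[i + 1] + stack[i]) / 2, 0.5))
    return cycles
```

The extracted ranges and means would then feed a lifetime model (e.g., a Coffin-Manson-type relation) to accumulate damage per Miner's rule.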
Chung, C-S; Lee, T-H; Chiu, C-T; Chen, Y
2017-12-01
Intestinal failure, characterized by the inability to maintain nutrition via normal intestinal function, comprises a group of disorders with many different causes. If parenteral nutrition dependency develops, which is associated with higher mortality and complications, intestinal transplantation is considered. However, the graft failure rate is not low, and acute cellular rejection is one of the most important reasons for graft failure. As a result, early identification of rejection and timely modification of anti-rejection medications have been considered to be associated with better graft and patient survival rates. The diagnostic gold standard for rejection is mainly based on histology, but pathology results may be delayed by hours. Some researchers have investigated the association of endoscopic images with graft rejection to provide timely diagnosis. In this study, we present the first case report with characteristic features under magnifying endoscopy with a narrow-band imaging system to predict epithelial regeneration and improvement of graft rejection in a patient with small-bowel transplantation. Copyright © 2017 Elsevier Inc. All rights reserved.
Rumination and Rebound from Failure as a Function of Gender and Time on Task
Whiteman, Ronald C.; Mangels, Jennifer A.
2016-01-01
Rumination is a trait response to blocked goals that can have positive or negative outcomes for goal resolution depending on where attention is focused. Whereas “moody brooding” on affective states may be maladaptive, especially for females, “reflective pondering” on concrete strategies for problem solving may be more adaptive. In the context of a challenging general knowledge test, we examined how Brooding and Reflection rumination styles predicted students’ subjective and event-related responses (ERPs) to negative feedback, as well as use of this feedback to rebound from failure on a later surprise retest. For females only, Brooding predicted unpleasant feelings after failure as the task progressed. It also predicted enhanced attention to errors through both bottom-up and top-down processes, as indexed by increased early (400–600 ms) and later (600–1000 ms) late positive potentials (LPP), respectively. Reflection, despite increasing females’ initial attention to negative feedback (i.e., early LPP), as well as both genders’ recurring negative thoughts, did not result in sustained top-down attention (i.e., late LPP) or enhanced negative feelings toward errors. Reflection also facilitated rebound from failure in both genders, although Brooding did not hinder it. Implications of these gender and time-related rumination effects for learning in challenging academic situations are discussed. PMID:26901231
Time-dependent limited penetrable visibility graph analysis of nonstationary time series
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Cai, Qing; Yang, Yu-Xuan; Dang, Wei-Dong
2017-06-01
Recent years have witnessed the development of visibility graph theory, which allows us to analyze a time series from the perspective of complex networks. In this paper, we develop a novel time-dependent limited penetrable visibility graph (TDLPVG). Two examples using nonstationary time series from RR intervals and gas-liquid flows are provided to demonstrate the effectiveness of our approach. The results of the first example suggest that our TDLPVG method allows characterizing the time-varying behaviors and classifying the heart states of healthy subjects and patients with congestive heart failure or atrial fibrillation from RR interval time series. For the second example, we infer TDLPVGs from gas-liquid flow signals and, interestingly, find that the deviation of node degree of TDLPVGs effectively uncovers the time-varying dynamical flow behaviors of gas-liquid slug and bubble flow patterns. All these results render our TDLPVG method particularly powerful for characterizing the time-varying features underlying realistic complex systems from time series.
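The graph construction underlying the method can be sketched directly. The following builds the edge set of a limited penetrable visibility graph from a series (limit=0 recovers the ordinary natural visibility graph); the time-dependent windowing that makes it a TDLPVG is not reproduced here:

```python
def lpvg_edges(series, limit=0):
    """Nodes are time indices; indices a and b are linked when at most
    `limit` intermediate points block the straight visibility line
    between (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            # Count intermediate points on or above the sight line.
            blocked = sum(
                1 for c in range(a + 1, b)
                if series[c] >= series[b]
                + (series[a] - series[b]) * (b - c) / (b - a)
            )
            if blocked <= limit:
                edges.add((a, b))
    return edges
```

Network measures such as the node-degree deviation used in the abstract can then be computed from the returned edge set.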
A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (All units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). 
The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential law relationship to the applied electrical field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
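The two-parameter Weibull reliability and its stress acceleration can be written out compactly. In this sketch, the power-law voltage dependence follows the catastrophic-failure description above, and an Arrhenius temperature term with an assumed activation energy is added as a common industry convention (the specific value is illustrative, not one given in the presentation):

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)^beta): probability a unit survives past time t,
    with characteristic life eta and shape parameter beta."""
    return math.exp(-((t / eta) ** beta))

def accelerated_eta(eta_ref, v, v_ref, n, temp_k, temp_ref_k, ea_ev=1.1):
    """Scale the Weibull characteristic life from reference conditions
    (v_ref, temp_ref_k) to operating conditions, combining a power-law
    voltage dependence (exponent n) with an Arrhenius temperature term.
    The activation energy ea_ev is an assumed illustrative value."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    voltage_factor = (v_ref / v) ** n
    temp_factor = math.exp((ea_ev / k_b) * (1.0 / temp_k - 1.0 / temp_ref_k))
    return eta_ref * voltage_factor * temp_factor
```

Raising either the applied voltage or the temperature shortens the characteristic life, shifting the entire R(t) curve leftward.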
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process to predict the values of the maintenance time-dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis such as life cycle cost spares calculation, etc. A minor deviation in the values of the maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability, and maintenance support costs. It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.
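The sensitivity of logistics results to MTBF deviations is easy to demonstrate with a standard initial-spares model (a Poisson demand assumption, hypothetical numbers, not ISS data or the report's actual method):

```python
import math

def spares_required(mtbf_hours, operating_hours, protection=0.95):
    """Smallest spares stock s such that the probability of needing at
    most s replacements over the operating period meets the protection
    level, with failures modeled as a Poisson process."""
    lam = operating_hours / mtbf_hours  # expected number of failures
    term = math.exp(-lam)               # Poisson pmf at 0
    cdf = term
    s = 0
    while cdf < protection:
        s += 1
        term *= lam / s                 # pmf recurrence P(s) = P(s-1)*lam/s
        cdf += term
    return s
```

For example, over 1000 operating hours at 95% protection, an item with a 10,000-hour MTBF needs 1 spare while one with a 1,000-hour MTBF needs 3; this nonlinearity is why MTBF prediction accuracy propagates strongly into spares and life cycle cost.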
Urinary sodium excretion and kidney failure in non-diabetic chronic kidney disease
Fan, Li; Tighiouart, Hocine; Levey, Andrew S.; Beck, Gerald J.; Sarnak, Mark J.
2014-01-01
Current guidelines recommend under 2g/day sodium intake in chronic kidney disease, but there are few studies relating sodium intake to long-term outcomes. Here we evaluated the association of mean baseline 24-hour urinary sodium excretion with kidney failure and a composite outcome of kidney failure or all-cause mortality using Cox regression in 840 participants enrolled in the Modification of Diet in Renal Disease Study. Mean 24-hour urinary sodium excretion was 3.46 g/day. Kidney failure developed in 617 and the composite outcome was reached in 723. In the primary analyses there was no association between 24-hour urine sodium and kidney failure [HR 0.99 (95% CI 0.91–1.08)] nor with the composite outcome [HR 1.01 (95% CI 0.93–1.09)], each per 1g/day higher urine sodium. In exploratory analyses there was a significant interaction of baseline proteinuria and sodium excretion with kidney failure. Using a 2-slope model, when urine sodium was under 3g/day, higher urine sodium was associated with increased risk of kidney failure in those with baseline proteinuria under 1g/day, and lower risk of kidney failure in those with baseline proteinuria of 1g/day or more. There was no association between urine sodium and kidney failure when urine sodium was 3g/day or more. Results were consistent using first baseline and time-dependent urine sodium. Thus, we noted no association of urine sodium with kidney failure. Results of the exploratory analyses need to be verified in additional studies and the mechanism explored. PMID:24646858
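The 2-slope model used in the exploratory analysis is a linear spline with a single knot at 3 g/day. A sketch of the basis construction follows, checked here with ordinary least squares on synthetic data; in the study itself these columns would enter a Cox regression as covariates:

```python
import numpy as np

def two_slope_design(x, knot=3.0):
    """Design matrix for a 2-slope model: intercept, slope below the
    knot, slope above the knot. The min/max basis makes each slope
    coefficient directly interpretable."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([
        np.ones_like(x),
        np.minimum(x, knot),          # active below the knot
        np.maximum(x - knot, 0.0),    # active above the knot
    ])

# Illustrative fit on noiseless synthetic data with known slopes
# (2 below the knot, -1 above it):
x = np.linspace(0.5, 6.0, 50)
y = 1.0 + 2.0 * np.minimum(x, 3.0) - 1.0 * np.maximum(x - 3.0, 0.0)
coef, *_ = np.linalg.lstsq(two_slope_design(x), y, rcond=None)
```

The fitted coefficients recover the generating intercept and both slopes exactly, confirming the basis encodes a kink at the knot.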
Fault Injection Techniques and Tools
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.
1997-01-01
Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
Software dependability in the Tandem GUARDIAN system
NASA Technical Reports Server (NTRS)
Lee, Inhwan; Iyer, Ravishankar K.
1995-01-01
Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
Reliability analysis of airship remote sensing system
NASA Astrophysics Data System (ADS)
Qin, Jun
1998-08-01
The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a complex mixed system. Its sensor platform is a remote-controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of ARSS. First, the system model was simplified from a multi-stage system to a two-state system on the basis of the results of failure mode and effect analysis and failure mode, effect, and criticality analysis. The failure tree was created after analyzing all factors and their interrelations. This failure tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flying meteorology and climate subsystem. Through failure tree analysis and the classification of basic events, the weak links were discovered. The results of test runs showed no difference in comparison with the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. The system's reliability was raised from 89 percent to 92 percent with the reformation of the man-machine interactive interface and the addition of the secondary better-groupie and the secondary remote control equipment.
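A series-system reading of the reliability figures above can be sketched as follows. The four branch reliabilities are hypothetical, chosen only so their product lands near the quoted 89 percent starting point; the abstract does not report per-subsystem values.

```python
from math import prod

# Hedged sketch: if a mission requires every failure-tree branch (engine,
# remote control, airship construction, meteorology/climate) to function,
# system reliability is the product of branch reliabilities.
# All branch values below are hypothetical.
subsystems = {
    "engine": 0.97,
    "remote_control": 0.96,
    "airship_construction": 0.98,
    "meteorology_climate": 0.975,
}

system_reliability = prod(subsystems.values())
print(round(system_reliability, 3))  # close to the quoted 89 percent
```

Raising any single branch (e.g. adding backup remote-control equipment) raises the product, which is the series-system rationale for targeting the weakest links first.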
Earthquake triggering by transient and static deformations
Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.
1998-01-01
Observational evidence for both static and transient near-field and far-field triggered seismicity is explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study, a triggered earthquake is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and, in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the earthquake cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history.
Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress at the initiation of failure, whereas static loads that are applied sufficiently late raise it. Rate-and-state friction predictions differ markedly from those based on Coulomb failure stress changes (ΔCFS), in which Δt equals the amplitude of the static stress change divided by the background stressing rate. The ΔCFS model assumes a stress failure threshold, while the rate-and-state equations require a slip failure threshold. The complete rate-and-state equations predict larger Δt than the ΔCFS model does for static stress steps at small t0, and smaller Δt than the ΔCFS model for stress steps at large t0. The ΔCFS model predicts nonzero Δt only for transient loads that raise the stress to failure stress levels during the transient. In contrast, the rate-and-state model predicts nonzero Δt for smaller loads, and triggered failure may occur well after the transient is finished. We consider heuristically the effects of triggering on a population of faults, as these effects might be evident in seismicity data. Triggering is manifest as an initial increase in seismicity rate that may be followed by a quiescence or by a return to the background rate. Available seismicity data are insufficient to discriminate whether triggered earthquakes are "new" or clock advanced. However, if triggering indeed results from advancing the failure time of inevitable earthquakes, then our modeling suggests that a quiescence always follows transient triggering and that the duration of increased seismicity also cannot exceed the duration of a triggering transient load. Quiescence follows static triggering only if the population of available faults is finite.
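The Coulomb-failure-stress threshold model that the rate-and-state results above are contrasted with reduces to a one-line rule: the clock advance equals the static stress step divided by the background stressing rate, independent of when in the cycle the step is applied. A minimal sketch, with illustrative (not study-specific) numbers:

```python
# Hedged sketch of the stress-threshold clock-advance rule: under this model
# a static stress step simply moves the failure time forward by the step
# amplitude divided by the tectonic stressing rate. Values are illustrative.

def clock_advance_static(stress_step_mpa, stressing_rate_mpa_per_yr):
    """Clock advance (years) under the stress-threshold failure model."""
    return stress_step_mpa / stressing_rate_mpa_per_yr

# a 0.1 MPa static step under a 0.01 MPa/yr tectonic loading rate
print(round(clock_advance_static(0.1, 0.01), 6), "years")  # about 10 years
```

The rate-and-state predictions in the abstract deviate from this rule in both directions, depending on where in the earthquake cycle the step falls.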
DOE Office of Scientific and Technical Information (OSTI.GOV)
Real-time component information is incorporated using equipment condition assessment (ECA) through the development of enhanced risk monitors (ERM) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. We incorporate time-dependent failure probabilities from prognostic health management (PHM) systems to dynamically update the risk metric of interest. This information is used to augment data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and demands placed on components that support mission requirements.
Enhanced stability of steep channel beds to mass failure and debris flow initiation
NASA Astrophysics Data System (ADS)
Prancevic, J.; Lamb, M. P.; Ayoub, F.; Venditti, J. G.
2015-12-01
Debris flows dominate bedrock erosion and sediment transport in very steep mountain channels, and are often initiated from failure of channel-bed alluvium during storms. While several theoretical models exist to predict mass failures, few have been tested because observations of in-channel bed failures are extremely limited. To fill this gap in our understanding, we performed laboratory flume experiments to identify the conditions necessary to initiate bed failures in non-cohesive sediment of different sizes (D = 0.7 mm to 15 mm) on steep channel-bed slopes (S = 0.45 to 0.93) and in the presence of water flow. In beds composed of sand, failures occurred under sub-saturated conditions on steep bed slopes (S > 0.5) and under super-saturated conditions at lower slopes. In beds of gravel, however, failures occurred only under super-saturated conditions at all tested slopes, even those approaching the dry angle of repose. Consistent with theoretical models, mass failures under super-saturated conditions initiated along a failure plane approximately one grain-diameter below the bed surface, whereas the failure plane was located near the base of the bed under sub-saturated conditions. However, all experimental beds were more stable than predicted by 1-D infinite-slope stability models. In partially saturated sand, enhanced stability appears to result from suction stress. Enhanced stability in gravel may result from turbulent energy losses in pores or increased granular friction for failures that are shallow with respect to grain size. These grain-size dependent effects are not currently included in stability models for non-cohesive sediment, and they may help to better explain the timing and location of debris flow occurrence.
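The 1-D infinite-slope baseline that the experimental beds exceeded can be sketched in its simplest (dry, non-cohesive) form: failure is predicted when the factor of safety, the ratio of frictional resistance to driving stress, drops below 1. The angles below are illustrative, not values from the experiments.

```python
import math

# Hedged sketch of the dry, non-cohesive 1-D infinite-slope criterion:
# FS = tan(phi) / tan(theta), with phi the sediment friction angle and
# theta the bed slope; FS < 1 predicts mass failure. Illustrative values only.

def factor_of_safety_dry(friction_angle_deg, bed_slope_deg):
    return math.tan(math.radians(friction_angle_deg)) / math.tan(math.radians(bed_slope_deg))

# a bed with a 40 degree friction angle on a 30 degree slope: predicted stable (FS > 1)
print(factor_of_safety_dry(40.0, 30.0) > 1.0)
```

The abstract's point is that real beds exceeded such predictions: suction stress (partially saturated sand) and pore-scale energy losses or extra granular friction (gravel) add stabilizing terms this baseline omits.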
The evolution of concepts for soil erosion modelling
NASA Astrophysics Data System (ADS)
Kirkby, Mike
2013-04-01
From the earliest models for soil erosion, based on power laws relating sediment discharge or yield to slope length and gradient, the development of the Universal Soil Loss Equation was a natural step, although one that has long continued to hinder the development of better perceptual models for erosion processes. Key stumbling blocks have been:
1. The failure to go through runoff generation as a key intermediary
2. The failure to separate hydrological and strength parameters of the soil
3. The failure to treat sediment transport along a slope as a routing problem
4. The failure to analyse the nature of the dependence on vegetation
Key advances have been in these directions (among others):
1. Improved understanding of the hydrological processes (e.g. infiltration and runoff, sediment entrainment), leading to KINEROS, LISEM, WEPP, PESERA
2. Recognition of selective sediment transport (e.g. transport- or supply-limited removal, grain travel distances), leading e.g. to MAHLERAN
3. Development of models adapted to particular time/space scales
Some major remaining problems:
1. Failure to integrate geomorphological and agronomic approaches
2. Tillage erosion: is erosion loss of sediment or lowering of centre of mass?
3. Dynamic change during an event, as rills etc. form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sallaberry, Cedric Jean-Marie; Helton, Jon C.
2015-05-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). This report describes the Fortran 90 program CPLOAS_2, which implements the following representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent: (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS can be included in the calculations performed by CPLOAS_2. Keywords: Aleatory uncertainty, CPLOAS_2, Epistemic uncertainty, Probability of loss of assured safety, Strong link, Uncertainty analysis, Weak link
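The four PLOAS orderings listed above can be estimated by straightforward Monte Carlo sampling of link failure times. The sketch below is not CPLOAS_2; exponential failure-time distributions and all rate values are assumptions for illustration only.

```python
import random

# Hedged Monte Carlo sketch of the four PLOAS orderings for hypothetical
# weak-link (WL) and strong-link (SL) failure times. Exponential failure
# times are an illustrative assumption, not the CPLOAS_2 formulation.

def ploas(n_wl, n_sl, wl_rate, sl_rate, trials=100_000, seed=1):
    rng = random.Random(seed)
    counts = {"all_sl_before_any_wl": 0, "any_sl_before_any_wl": 0,
              "all_sl_before_all_wl": 0, "any_sl_before_all_wl": 0}
    for _ in range(trials):
        wl = [rng.expovariate(wl_rate) for _ in range(n_wl)]
        sl = [rng.expovariate(sl_rate) for _ in range(n_sl)]
        counts["all_sl_before_any_wl"] += max(sl) < min(wl)  # (i)
        counts["any_sl_before_any_wl"] += min(sl) < min(wl)  # (ii)
        counts["all_sl_before_all_wl"] += max(sl) < max(wl)  # (iii)
        counts["any_sl_before_all_wl"] += min(sl) < max(wl)  # (iv)
    return {k: v / trials for k, v in counts.items()}

# With one WL, one SL, and equal rates, every ordering reduces to
# P(SL fails first) = 0.5 by symmetry:
print(ploas(1, 1, 1.0, 1.0)["any_sl_before_any_wl"])
```

Per trial, ordering (i) implies (ii) implies (iv), so the estimated probabilities are monotone in that sequence whatever the distributions, which is a useful sanity check on any implementation.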
Song, Z Q; Ni, Y; Peng, L M; Liang, H Y; He, L H
2016-03-31
Bioinspired discontinuous nanolaminate design becomes an efficient way to mitigate the strength-ductility tradeoff in brittle materials via arresting the crack at the interface followed by controllable interface failure. The analytical solution and numerical simulation based on the nonlinear shear-lag model indicates that propagation of the interface failure can be unstable or stable when the interfacial shear stress between laminae is uniform or highly localized, respectively. A dimensionless key parameter defined by the ratio of two characteristic lengths governs the transition between the two interface-failure modes, which can explain the non-monotonic size-dependent mechanical properties observed in various laminate composites.
Brittle Creep of Tournemire Shale: Orientation, Temperature and Pressure Dependences
NASA Astrophysics Data System (ADS)
Geng, Zhi; Bonnelye, Audrey; Dick, Pierre; David, Christian; Chen, Mian; Schubnel, Alexandre
2017-04-01
Time- and temperature-dependent rock deformation has both scientific and socio-economic implications for natural hazards, the oil and gas industry, and nuclear waste disposal. During the past decades, most studies on brittle creep have focused on igneous rocks and porous sedimentary rocks. To our knowledge, only a few studies have been carried out on the brittle creep behavior of shale. Here, we conducted a series of creep experiments on shale specimens coming from the French Institute for Nuclear Safety (IRSN) underground research laboratory located in Tournemire, France. Conventional tri-axial experiments were carried out under two different temperatures (26 °C, 75 °C) and confining pressures (10 MPa, 80 MPa), for three orientations (σ1 along, perpendicular, and at 45° to bedding). Following the methodology developed by Heap et al. [2008], differential stress was first increased to ~60% of the short-term peak strength (10^-7/s, Bonnelye et al. 2016), and then in steps of 5 to 10 MPa every 24 hours until brittle failure was achieved. In these long-term experiments (approximately 10 days), stress and strains were recorded continuously, while ultrasonic acoustic velocities were recorded every 1-15 minutes, enabling us to monitor the evolution of elastic wave speed anisotropy. The temporal evolution of anisotropy was illustrated by inverting acoustic velocities to Thomsen parameters. Finally, samples were investigated post-mortem using scanning electron microscopy. Our results seem to contradict the traditional understanding of loading-rate-dependent brittle failure. Indeed, the brittle creep failure stress of our Tournemire shale samples was systematically observed to be ~50% higher than the short-term peak strength, with larger final axial strain accumulated. At higher temperatures, the creep failure strength of our samples was slightly reduced, and deformation was characterized by faster 'steady-state' creep axial strain rates at each step and larger final axial strain accumulated.
At each creep step, ultrasonic wave velocities first decreased and then increased gradually. The magnitude of elastic wave velocity variations showed an important orientation and temperature dependence. Velocities measured perpendicular to bedding showed the largest variation, which was enhanced at higher temperature and higher pressure. Complete elastic anisotropy reversal was even observed for the sample deformed perpendicular to bedding, with a reduced amount of axial strain needed to reach anisotropy reversal at higher temperature. Our data were indicative of competition between crack growth, sealing/healing, and possibly mineral rotation or anisotropic compaction during creep. SEM investigation confirmed evidence of time-dependent pressure solution and crack sealing/healing. Our research not only has practical engineering consequences but, more importantly, can provide valuable insights into the underlying mechanisms of creep in complex media like shale. In particular, our study highlights that the short-term peak strength has little meaning in shale material, which can over-consolidate significantly by 'plastic' flow. In addition, we showed that elastic anisotropy can switch and even reverse over relatively short time periods (<10 days) and for relatively small amounts of plastic deformation (<5%).
Aftershock triggering by complete Coulomb stress changes
Kilb, Debi; Gomberg, J.; Bodin, P.
2002-01-01
We examine the correlation between seismicity rate change following the 1992, M7.3, Landers, California, earthquake and characteristics of the complete Coulomb failure stress (CFS) changes (ΔCFS(t)) that this earthquake generated. At close distances the time-varying "dynamic" portion of the stress change depends on how the rupture develops temporally and spatially and arises from radiated seismic waves and from permanent coseismic fault displacement. The permanent "static" portion (ΔCFS) depends only on the final coseismic displacement. ΔCFS diminishes much more rapidly with distance than the transient, dynamic stress changes. A common interpretation of the strong correlation between ΔCFS and aftershocks is that load changes can advance or delay failure. Stress changes may also promote failure by physically altering properties of the fault or its environs. Because it is transient, ΔCFS(t) can alter the failure rate only by the latter means. We calculate both ΔCFS and the maximum positive value of ΔCFS(t) (peak ΔCFS(t)) using a reflectivity program. Input parameters are constrained by modeling Landers displacement seismograms. We quantify the correlation between maps of seismicity rate changes and maps of modeled ΔCFS and peak ΔCFS(t) and find agreement for both models. However, rupture directivity, which does not affect ΔCFS, creates larger peak ΔCFS(t) values northwest of the main shock. This asymmetry is also observed in seismicity rate changes but not in ΔCFS. This result implies that dynamic stress changes are as effective as static stress changes in triggering aftershocks and may trigger earthquakes long after the waves have passed.
Predicting Rapid Relapse Following Treatment for Chemical Dependence: A Matched-Subjects Design.
ERIC Educational Resources Information Center
Svanum, Soren; McAdoo, William George
1989-01-01
Persons who underwent residential treatment for chemical dependency were identified as three-month treatment failures (N=52) or successes (N=52). Subjects were matched on Minnesota Multiphasic Personality Inventory (MMPI) scores. Found posttreatment depression, anxiety, and sleep problems strongly related to failure among psychiatric MMPI group;…
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
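The semi-Markov idea described above can be illustrated with a toy simulation: sojourn times are drawn from state-dependent distributions, and the time to reach an absorbing "failed" state is accumulated. The three states, transition probabilities, and sojourn distributions below are hypothetical, not the four-element model of the study.

```python
import random

# Hedged sketch of a semi-Markov time-to-failure simulation. State-dependent
# sojourn distributions (here exponential, with different means) are the
# feature distinguishing this from a plain Markov chain. All parameters are
# hypothetical.

def time_to_failure(rng):
    state, t = "nominal", 0.0
    while state != "failed":
        if state == "nominal":
            t += rng.expovariate(1.0)   # mean sojourn 1.0 in the nominal state
            state = "degraded" if rng.random() < 0.3 else "nominal"
        else:  # degraded
            t += rng.expovariate(2.0)   # shorter mean sojourn (0.5) when degraded
            state = "failed" if rng.random() < 0.5 else "nominal"
    return t

rng = random.Random(0)
mean_ttf = sum(time_to_failure(rng) for _ in range(10_000)) / 10_000
# the analytic mean time to failure for these rates is 23/3, about 7.67
print(round(mean_ttf, 2))
```

Attaching such simulated time distributions to event-tree branches is one way to realize the time-varying end states the study argues for.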
Narrowing the scope of failure prediction using targeted fault load injection
NASA Astrophysics Data System (ADS)
Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.
2018-05-01
As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
Size-Dependent Realized Fecundity in Two Lepidopteran Capital Breeders.
Rhainds, Marc
2015-08-01
Body size is correlated with potential fecundity in capital breeders, but size-dependent functions of realized fecundity may be impacted by reproductive losses due to mating failure or oviposition time limitations (number of eggs remaining in the abdomen of females at death). Post-mortem assessment of adults collected in the field after natural death represents a sound approach to quantify how body size affects realized fecundity. This approach is used here for two Lepidoptera for which replicated field data are available, the spruce budworm Choristoneura fumiferana Clemens (Tortricidae) and bagworm Metisa plana Walker (Psychidae). Dead female budworms were collected on drop trays placed beneath tree canopies at four locations. Most females had mated during their lifetime (presence of a spermatophore in spermatheca), and body size did not influence mating failure. Oviposition time limitation was the major factor restricting realized fecundity of females, and its incidence was independent of body size at three of the four locations. Both realized and potential fecundity of female budworms increased linearly with body size. Female bagworms are neotenous and reproduce within a bag; hence, parameters related to realized fecundity are unusually tractable. For each of five consecutive generations of bagworms, mating probability increased with body size, so that virgin-dead females were predominantly small, least fecund individuals. The implication of size-dependent reproductive losses are compared for the two organisms in terms of life history theory and population dynamics, with an emphasis on how differential female motility affects the evolutionary and ecological consequences of size-dependent realized fecundity. © Crown copyright 2015.
Successes and failures of sixty years of vector control in French Guiana: what is the next step?
Epelboin, Yanouk; Chaney, Sarah C; Guidez, Amandine; Habchi-Hanriot, Nausicaa; Talaga, Stanislas; Wang, Lanjiao; Dusfour, Isabelle
2018-03-12
Since the 1940s, French Guiana has implemented vector control to contain or eliminate malaria, yellow fever, and, recently, dengue, chikungunya, and Zika. Over time, strategies have evolved depending on the location, efficacy of the methods, development of insecticide resistance, and advances in vector control techniques. This review summarises the history of vector control in French Guiana by reporting the records found in the private archives of the Institute Pasteur in French Guiana and those accessible in libraries worldwide. This publication highlights successes and failures in vector control and identifies the constraints and expectations for vector control in this French overseas territory in the Americas.
Creep rupture of polymer-matrix composites
NASA Technical Reports Server (NTRS)
Brinson, H. F.; Morris, D. H.; Griffith, W. I.
1981-01-01
The time-dependent creep-rupture process in graphite-epoxy laminates is examined as a function of temperature and stress level. Moisture effects are not considered. An accelerated characterization method of composite-laminate viscoelastic modulus and strength properties is reviewed. It is shown that lamina-modulus master curves can be obtained using a minimum of normally performed quality-control-type testing. Lamina-strength master curves, obtained by assuming a constant-strain-failure criterion, are presented along with experimental data, and reasonably good agreement is shown to exist between the two. Various phenomenological delayed failure models are reviewed and two (the modified rate equation and the Larson-Miller parameter method) are compared to creep-rupture data with poor results.
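The Larson-Miller parameter method named above correlates creep-rupture life across temperatures through LMP = T(C + log10(t_r)). A minimal sketch follows; C = 20 is a common textbook assumption, not a value taken from this paper, and the temperatures and lifetimes are illustrative.

```python
import math

# Hedged sketch of the Larson-Miller parameter: LMP = T * (C + log10(t_r)),
# with T the absolute temperature (K), t_r the rupture time (hours), and C a
# material constant (C = 20 assumed here for illustration).

def larson_miller(temp_k, rupture_hours, c=20.0):
    return temp_k * (c + math.log10(rupture_hours))

def rupture_time_hours(temp_k, lmp, c=20.0):
    """Invert the parameter: predicted rupture time at another temperature."""
    return 10.0 ** (lmp / temp_k - c)

# equal-LMP trade-off: a 100 h rupture life measured at 800 K maps to a much
# longer predicted life at the lower temperature of 750 K
lmp = larson_miller(800.0, 100.0)
print(round(rupture_time_hours(750.0, lmp)))  # on the order of 3000 hours
```

This time-temperature trade-off is what makes the method attractive for accelerated characterization, even though, as the abstract notes, its agreement with the graphite-epoxy creep-rupture data was poor.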
Reliability and life prediction of ceramic composite structures at elevated temperatures
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1994-01-01
Methods are highlighted that ascertain the structural reliability of components fabricated of composites with ceramic matrices reinforced with ceramic fibers or whiskers and subject to quasi-static load conditions at elevated temperatures. Each method focuses on a particular composite microstructure: whisker-toughened ceramics, laminated ceramic matrix composites, and fabric reinforced ceramic matrix composites. In addition, since elevated service temperatures usually involve time-dependent effects, a section dealing with reliability degradation as a function of load history has been included. A recurring theme throughout this chapter is that even though component failure is controlled by a sequence of many microfailure events, failure of ceramic composites will be modeled using macrovariables.
Ng'andu, N H
1997-03-30
In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
Thorn, Stephanie R; Giesy, Sarah L; Myers, Martin G; Boisclair, Yves R
2010-08-01
Mice lacking leptin (ob/ob) or its full-length receptor (db/db) are obese and reproductively incompetent. Fertility, pregnancy, and lactation are restored, respectively, in ob/ob mice treated with leptin through mating, d 6.5 post coitum, and pregnancy. Therefore, leptin signaling is needed for lactation, but the timing of its action and the affected mammary process remain unknown. To address this issue, we used s/s mice lacking only leptin-dependent signal transducer and activator of transcription (STAT)3 signaling. These mice share many features with db/db mice, including obesity, but differ by retaining sufficient activity of the hypothalamic-pituitary-ovarian axis to support reproduction. The s/s mammary epithelium was normal at 3 wk of age but failed to expand through the mammary fat pad (MFP) during the subsequent pubertal period. Ductal growth failure was not corrected by estrogen therapy and did not relate to inadequate IGF-I production by the MFP or to the need for epithelial or stromal leptin-STAT3 signaling. Ductal growth failure coincided with adipocyte hypertrophy and increased MFP production of leptin, TNFalpha, and IL6. These cytokines, however, were unable to inhibit the proliferation of a collection of mouse mammary epithelial cell lines. In conclusion, the very first step of postnatal mammary development fails in s/s mice despite sufficient estrogen and IGF-I and a hypothalamic-pituitary-ovarian axis capable of supporting reproduction. This failure is not caused by mammary loss of leptin-dependent STAT3 signaling or by the development of inflammation. These data imply the existence of an unknown mechanism whereby leptin-dependent STAT3 signaling and obesity alter mammary ductal development.
Radiation dependence of inverter propagation delay from timing sampler measurements
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Blaes, B. R.; Lin, Y.-S.
1989-01-01
A timing sampler consisting of 14 four-stage inverter-pair chains with different load capacitances was fabricated in 1.6-micron n-well CMOS and irradiated with cobalt-60 at 10 rad(Si)/s. For this CMOS process the measured results indicate that the rising delay increases by about 2.2 ns/Mrad(Si) and the falling delay increase is very small, i.e., less than 300 ps/Mrad(Si). The amount of radiation-induced delay depends on the size of the load capacitance. The maximum value observed for this effect was 5.65 ns/pF-Mrad(Si). Using a sensitivity analysis, the sensitivity of the rising delay to radiation can be explained by a simple timing model and the radiation sensitivity of dc MOSFET parameters. This same approach could not explain the insensitivity of the falling delay to radiation. This may be due to a failure of the timing model and/or trapping effects.
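The measured sensitivities quoted above suggest a simple linear timing model for the radiation-induced rising-delay increase. The additive combination of a baseline dose term and a load-capacitance-dependent term is an assumption for illustration; the two coefficients are the values reported in the abstract.

```python
# Hedged sketch of a linear radiation-delay model. The additive form is an
# assumption; the coefficients are the measured sensitivities quoted above
# (2.2 ns/Mrad(Si) baseline, 5.65 ns/pF-Mrad(Si) maximum load dependence).

RISING_SENS_NS_PER_MRAD = 2.2      # rising-delay increase per Mrad(Si)
LOAD_SENS_NS_PER_PF_MRAD = 5.65    # maximum load-dependent sensitivity

def rising_delay_increase_ns(dose_mrad, load_pf):
    """Radiation-induced rising-delay increase (ns) for a given dose and load."""
    return dose_mrad * (RISING_SENS_NS_PER_MRAD + LOAD_SENS_NS_PER_PF_MRAD * load_pf)

# illustrative point: 0.5 Mrad(Si) total dose on a 0.1 pF load
print(f"{rising_delay_increase_ns(0.5, 0.1):.2f} ns")
```

As the abstract notes, an analogous model for the falling delay did not explain the measurements, so this form should be read as applying to the rising edge only.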
NASA Astrophysics Data System (ADS)
Cao, Ri-hong; Cao, Ping; Lin, Hang; Pu, Cheng-zhi; Ou, Ke
2016-03-01
Joints and fissures with similar orientation or characteristics are common in natural rocks; the inclination and density of the fissures affect the mechanical properties and failure mechanism of the rock mass. However, the strength, crack coalescence pattern, and failure mode of rock specimens containing multi-fissures have not been studied comprehensively. In this paper, combining similar material testing and discrete element numerical method (PFC2D), the peak strength and failure characteristics of rock-like materials with multi-fissures are explored. Rock-like specimens were made of cement and sand and pre-existing fissures created by inserting steel shims into cement mortar paste and removing them during curing. The peak strength of multi-fissure specimens depends on the fissure angle α (which is measured counterclockwise from horizontal) and fissure number ( N f). Under uniaxial compressional loading, the peak strength increased with increasing α. The material strength was lowest for α = 25°, and highest for α = 90°. The influence of N f on the peak strength depended on α. For α = 25° and 45°, N f had a strong effect on the peak strength, while for higher α values, especially for the 90° sample, there were no obvious changes in peak strength with different N f. Under uniaxial compression, the coalescence modes between the fissures can be classified into three categories: S-mode, T-mode, and M-mode. Moreover, the failure mode can be classified into four categories: mixed failure, shear failure, stepped path failure, and intact failure. The failure mode of the specimen depends on α and N f. The peak strength and failure modes in the numerically simulated and experimental results are in good agreement.
Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy
2013-01-01
Mazzotti and Adams (2004) estimated that rapid deep slip during typically two-week-long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30-100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory-observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre-existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, we find that the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and increases by only 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model, for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.
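The relative-versus-absolute probability point above is simple arithmetic worth making explicit: a 30-100x rate enhancement during slip episodes still yields a tiny absolute weekly probability. The background value below is an assumption chosen so the 30x case reproduces the quoted ~0.03% per week.

```python
# Hedged arithmetic sketch: absolute weekly probability during a slip episode
# = enhancement factor x background weekly probability. The background value
# is hypothetical, picked so 30x matches the quoted ~0.03% per week.

background_weekly = 1e-5  # assumed background probability per week
for enhancement in (30, 100):
    during_slip = enhancement * background_weekly
    print(f"{enhancement}x enhancement -> {during_slip:.4%} per week")
```

Even at the top of the quoted range, the absolute chance in any given episode stays well below 1%, which is the basis for the "likely but still improbable" conclusion.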
New early warning system for gravity-driven ruptures based on codetection of acoustic signal
NASA Astrophysics Data System (ADS)
Faillettaz, J.
2016-12-01
Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging due to the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the characteristics of acoustic emission attenuation is proposed. This new method capitalizes on the codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Event codetection is considered a surrogate for large event size, with more frequent codetected events (i.e., detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model based on a fiber bundle model, considering signal attenuation and hypothetical arrays of sensors, confirms the early-warning potential of the codetection principle. Results suggest that although the statistical properties of attenuated signal amplitudes could lead to misleading results, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at the laboratory scale. The applicability of such a simple and inexpensive early warning system is now being investigated at a larger scale (hillslope). First results of such a pilot field experiment are presented and analysed.
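The codetection argument above can be sketched numerically. In this toy version, events of different sizes occur along a line of sensors; amplitudes decay exponentially with distance, and an event is "codetected" when two or more sensors see it above threshold. All values (sensor spacing, attenuation rate, threshold, event catalogue) are hypothetical illustration choices, not the paper's fiber-bundle setup:

```python
import math

# Toy codetection sketch: codetected events should be the large ones.
sensors = [0.0, 5.0, 10.0]   # sensor positions (arbitrary units)
threshold = 0.2              # detection threshold on received amplitude
atten = 0.5                  # exponential attenuation rate with distance

def n_detections(pos, size):
    """Number of sensors whose received (attenuated) amplitude exceeds the threshold."""
    return sum(size * math.exp(-atten * abs(pos - s)) > threshold for s in sensors)

# deterministic catalogue: every combination of 21 positions and 4 event sizes
events = [(p / 2, s / 10) for p in range(21) for s in (1, 3, 10, 30)]

codetected = [size for pos, size in events if n_detections(pos, size) >= 2]
single = [size for pos, size in events if n_detections(pos, size) == 1]

def mean(xs):
    return sum(xs) / len(xs)

# codetected events are markedly larger on average than single-sensor ones
print(f"codetected: n={len(codetected)}, mean size={mean(codetected):.2f}")
print(f"single:     n={len(single)}, mean size={mean(single):.2f}")
```

Because attenuation filters small events out of distant sensors, only large events survive at two or more sensors, which is why codetection frequency can serve as a proxy for event size.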
Moisture-Induced TBC Spallation on Turbine Blade Samples
NASA Technical Reports Server (NTRS)
Smialek, James
2011-01-01
Delayed failure of TBCs is a widely observed laboratory phenomenon, although many of the early observations went unreported. The "weekend effect" or "DeskTop Spallation" (DTS) is characterized by initial survival of a TBC after accelerated laboratory thermal cycling, then failure upon exposure to ambient humidity or water. Once initiated, failure can occur quite dramatically in less than a second. To this end, the water drop test and digital video recordings have become useful techniques in studies at NASA (Smialek, Zhu, Cuy), DECHEMA (Rudolphi, Renusch, Schuetze), and CNRS Toulouse/SNECMA (Deneux, Cadoret, Hervier, Monceau). In the present study, results for a commercial turbine blade with a standard EB-PVD 7YSZ TBC top coat and a Pt-aluminide diffusion bond coat are reported. Cut sections were intermittently oxidized at 1100, 1150, and 1200 °C and monitored by weight change and visual appearance. Failures were distributed widely over a 5-100 hr time range, depending on temperature. At some opportune times, failure was captured by video recording, documenting the appearance and speed of the moisture-induced spallation process. Failure interfaces exhibited alumina scale grains, decorated with Ta-rich oxide particles, and alumina inclusions as islands and streamers. The phenomenon is thus rooted in moisture-induced delayed spallation (MIDS) of the alumina scale formed on the bond coat. In that regard, many studies show the susceptibility of alumina scales to moisture, as long as high strain energy and a partially exposed interface exist. The latter conditions result from severe cyclic oxidation, which produces a highly stressed and partially damaged scale. In one model, it has been proposed that moisture reacts with aluminum in the bond coat to release hydrogen atoms that embrittle the interface. A negative synergistic effect with interfacial sulfur is also invoked.
Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone
2018-04-01
Background: Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and risk of failure. Objective: Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov chain Monte Carlo framework. For this purpose, we monitored prostate-specific antigen (PSA) in prostate cancer patients treated with external beam radiation therapy. In the presence of rising PSA after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the PSA concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and the PSA trajectory on the risk of failure. Methods: We used a two-stage model within a Bayesian framework to assess the role of the PSA profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled PSA using a hierarchical piecewise-linear trajectory with a random changepoint. Residual PSA variability was expressed as a function of PSA concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the PSA nadir and its timing and the PSA slopes before and after the nadir, as provided by the longitudinal process. Results: We showed positive associations between an increased PSA nadir, an earlier changepoint, and a steeper post-nadir slope with an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the PSA trajectory was not accounted for in the survival model. Conclusion: Our modeling strategy was particularly flexible and accounted for multiple complex features of longitudinal and survival data, including the presence of a random changepoint and a time-dependent covariate.
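The piecewise-linear trajectory with a random changepoint described above can be sketched in a few lines; the patient-level values below are hypothetical illustrations, not fitted estimates from the study:

```python
# Sketch of a piecewise-linear PSA profile with a changepoint: the marker
# declines toward a nadir at the changepoint, then rises afterwards.

def psa_trajectory(t, nadir, t_change, slope_pre, slope_post):
    """Piecewise-linear PSA profile with a changepoint at t_change (months)."""
    if t <= t_change:
        return nadir + slope_pre * (t - t_change)    # pre-nadir decline
    return nadir + slope_post * (t - t_change)       # post-nadir rise

# hypothetical patient: nadir 0.5 ng/mL at 12 months,
# pre-nadir slope -0.3 per month, post-nadir slope +0.05 per month
profile = [psa_trajectory(t, 0.5, 12.0, -0.3, 0.05) for t in range(0, 25, 6)]
print([round(v, 2) for v in profile])
```

In the two-stage approach, subject-specific quantities such as the nadir, the changepoint timing, and the two slopes are estimated in the longitudinal stage and then enter the survival model as covariates.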
5-year operation experience with the 1.8 K refrigeration units of the LHC cryogenic system
NASA Astrophysics Data System (ADS)
Ferlin, G.; Tavian, L.; Claudet, S.; Pezzetti, M.
2015-12-01
Since 2009, the Large Hadron Collider (LHC) has been in operation at CERN. The LHC superconducting magnets, distributed over eight 3.3-km-long sectors, are cooled at 1.9 K in pressurized superfluid helium. The nominal operating temperature of 1.9 K is produced by eight 1.8-K refrigeration units based on centrifugal cold compressors (3 or 4 stages, depending on the vendor) combined with warm volumetric screw compressors with sub-atmospheric suction. After about 5 years of continuous operation, we present the results concerning the availability of these refrigeration units for the final user and the impact of the design choice on the recovery time after a system trip. We also present the individual results for each rotating machine in terms of failure origin and Mean Time Between Failures (MTBF), as well as the consolidations and upgrades applied to these refrigeration units.
Analysis of thermomechanical fatigue of unidirectional titanium metal matrix composites
NASA Technical Reports Server (NTRS)
Mirdamadi, M.; Johnson, W. S.; Bahei-El-din, Y. A.; Castelli, M. G.
1991-01-01
Thermomechanical fatigue (TMF) data were generated for a Ti-15V-3Cr-3Al-3Sn (Ti-15-3) material reinforced with SCS-6 silicon carbide fibers for both in-phase and out-of-phase thermomechanical cycling. Significant differences in failure mechanisms and fatigue life were noted between in-phase and out-of-phase testing. The purpose of the research is to apply a micromechanical model to the analysis of the data. The analysis predicts the stresses in the fiber and the matrix during the thermal and mechanical cycling by calculating both the thermal and mechanical stresses and their rate-dependent behavior. The rate-dependent behavior of the matrix was characterized and used to calculate the constituent stresses in the composite. The predicted 0-degree fiber stress range was used to explain the composite failure. It was found that for a given condition (temperature, loading frequency, and time at temperature), the 0-degree fiber stress range may control the fatigue life of the unidirectional composite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.
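A minimal sketch of the sampling-based CDF approximation in item (iv): sample the aleatory link property, map each draw to a failure time under an assumed load history, and sort to obtain an empirical CDF. The Gaussian failure-temperature distribution and the linear heating ramp below are hypothetical stand-ins, not the report's link models:

```python
import bisect
import random

# Monte Carlo approximation of the CDF of link failure time under
# aleatory uncertainty in the link's failure temperature.
random.seed(1)
RATE = 10.0                          # assumed heating rate, degrees per unit time

def failure_time(fail_temp):
    # assumed load history load(t) = RATE * t, so failure occurs at fail_temp / RATE
    return fail_temp / RATE

temps = [random.gauss(600.0, 25.0) for _ in range(20_000)]
times = sorted(failure_time(T) for T in temps)

def cdf_failure_time(t):
    """Empirical CDF of link failure time."""
    return bisect.bisect_right(times, t) / len(times)

# the median failure time should sit near 600 / 10 = 60
print(f"F(60) = {cdf_failure_time(60.0):.3f}")
```

The same sorted-sample construction extends to any link property recorded at the failure instant; the integral-based derivations in items (iii) and (v) serve as the analytic check on such estimates.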
THE FAILURE OF STRUCTURAL METALS SUBJECTED TO STRAIN-CYCLING CONDITIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, R.W.; Douglas, D.A.
1958-11-01
Data showing the isothermal strain-cycling capacity of three metals, Inconel, Hastelloy B, and beryllium, are presented. It is noted that at frequencies of 0.5 cycles per minute the data satisfied an equation of the form N^alpha epsilon_p = K, where N is the number of cycles to failure, epsilon_p is the plastic strain per cycle, and alpha and K are constants whose values depend on the structure and test conditions. Data on Inconel are given to establish the effect of grain size, specimen geometry, temperature, and frequency. It is found that at temperatures above 1300 F, grain size and frequency exert a pronounced effect on the rupture life. Fine-grained metal survives more cycles before failure than coarse-grained material. Long-time cycles shorten the number of cycles to failure when the strain per cycle is low. Thermal strain-cycling data for Inconel are compared to strain-cycling data at the same mean temperature. Good correlation is found to exist between the two types of data. (auth)
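The strain-cycling relation above (a Coffin-Manson-type law) can be illustrated in a few lines; alpha and K below are hypothetical illustration values, not the report's fitted constants for Inconel, Hastelloy B, or beryllium:

```python
# Strain-cycling relation from the abstract:  N**alpha * eps_p = K,
# where N is cycles to failure and eps_p is plastic strain per cycle.
# alpha and K are hypothetical illustration values.

alpha, K = 0.5, 0.4

def cycles_to_failure(eps_p):
    """Solve N**alpha * eps_p = K for N."""
    return (K / eps_p) ** (1.0 / alpha)

for eps_p in (0.01, 0.02, 0.04):
    print(f"eps_p = {eps_p:.2f}  ->  N = {cycles_to_failure(eps_p):.0f}")
```

With alpha = 0.5, halving the plastic strain per cycle quadruples the predicted life, which shows how strongly the exponent controls the strain-life trade-off.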
Integrated Design Software Predicts the Creep Life of Monolithic Ceramic Components
NASA Technical Reports Server (NTRS)
1996-01-01
Significant improvements in propulsion and power generation for the next century will require revolutionary advances in high-temperature materials and structural design. Advanced ceramics are candidate materials for these elevated-temperature applications. As design protocols emerge for these material systems, designers must be aware of several innate features, including the tendency of ceramics to lose their ability to carry sustained load. Usually, time-dependent failure in ceramics occurs because of two different, delayed-failure mechanisms: slow crack growth and creep rupture. Slow crack growth initiates at a preexisting flaw and continues until a critical crack length is reached, causing catastrophic failure. Creep rupture, on the other hand, occurs because of bulk damage in the material: void nucleation and coalescence that eventually lead to macrocracks, which then propagate to failure. Successful application of advanced ceramics depends on proper characterization of material behavior and the use of an appropriate design methodology. The life of a ceramic component can be predicted with the NASA Lewis Research Center's Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design programs. CARES/CREEP determines the expected life of a component under creep conditions, and CARES/LIFE predicts the component life due to fast fracture and subcritical crack growth. The previously developed CARES/LIFE program has been used in numerous industrial and Government applications.
Tsakiridis, Kosmas; Visouli, Aikaterini N.; Machairiotis, Nikolaos; Christofis, Christos; Stylianaki, Aikaterini; Katsikogiannis, Nikolaos; Mpakas, Andreas; Courcoutsakis, Nicolaos; Zarogoulidis, Konstantinos
2012-01-01
New symptom onset of respiratory distress without other cause, and new hemi-diaphragmatic elevation on chest radiography postcardiotomy, are usually adequate for the diagnosis of phrenic nerve paresis. The symptom severity varies (from an asymptomatic state to severe respiratory failure) depending on the degree of the lesion (paresis vs. paralysis), the laterality (unilateral or bilateral), the age, and the co-morbidity (respiratory, cardiac disease, morbid obesity, etc.). Surgical treatment (hemi-diaphragmatic plication) is indicated only in the presence of symptoms. The established surgical treatment is plication of the affected hemidiaphragm, which is generally considered safe and effective. Several techniques and approaches are employed for diaphragmatic plication (thoracotomy, video-assisted thoracoscopic surgery, video-assisted mini-thoracotomy, laparoscopic surgery). The timing of surgery depends on the severity and the progression of symptoms. In infants and young children with postcardiotomy phrenic nerve paresis the clinical status is usually severe (failure to wean from mechanical ventilation), and early plication is indicated. Adults with postcardiotomy phrenic nerve paresis usually suffer from chronic dyspnoea, and, in the absence of respiratory distress, conservative treatment is recommended for 6 months to 2 years, since improvement is often observed. Nevertheless, earlier surgical treatment may be indicated in non-resolving respiratory failure. We present early (25th day postcardiotomy) right hemi-diaphragm plication, through a video-assisted mini-thoracotomy, in a high-risk patient with postcardiotomy phrenic nerve paresis and respiratory distress. Early surgery with minimal surgical trauma, short operative time, minimal blood loss and postoperative pain, led to fast rehabilitation and avoidance of prolonged hospitalization complications. The relevant literature is discussed. PMID:23304442
Investigation of precipitate refinement in Mg alloys by an analytical composite failure model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabei, Ali; Li, Dongsheng; Lavender, Curt A.
2015-10-01
An analytical model is developed to simulate precipitate refinement in second-phase-strengthened magnesium alloys. The model is based on determination of the stress fields inside elliptical precipitates embedded in a rate-dependent inelastic matrix. The stress fields are utilized to determine the failure mode that governs the refinement behavior. Using an AZ31 Mg alloy as an example, the effects of the applied load, aspect ratio, and orientation of the particle on the macroscopic failure of a single α-Mg17Al12 precipitate are studied. Additionally, a temperature-dependent version of the corresponding constitutive law is used to incorporate the effects of temperature. In plane-strain compression, an extensional failure mode always fragments the precipitates. The critical strain rate at which the precipitates start to fail strongly depends on the orientation of the precipitate with respect to the loading direction. The results show that the higher the aspect ratio, the more easily the precipitate fractures. Precipitate shape is another factor influencing the failure response. In contrast to elliptical precipitates with high aspect ratios, spherical precipitates are strongly resistant to sectioning. In pure shear loading, in addition to the extensional mode of precipitate failure, a shearing mode may be activated depending on the orientation and aspect ratio of the precipitate. The effect of temperature in relation to strain rate was also verified for the plane-strain compression and pure shear loading cases.
1985-11-26
etc.). Major decisions involving reliability studies, based on competing risk methodology, have been made in the past and will continue to be made... censoring mechanism. In such instances, the methodology for estimating relevant reliability probabilities has received considerable attention (cf. David... proposal for a discussion of the general methodology.
Analysis of Time Dependent Electric Field Degradation in AlGaN/GaN HEMTs (POSTPRINT)
2014-10-01
identifying and understanding the failure mechanisms that limit the safe operating area of GaN HEMTs. Index Terms: aluminum gallium nitride, gallium nitride, HEMTs, semiconductor device reliability, transistors.
Techniques for Improving Pilot Recovery from System Failures
NASA Technical Reports Server (NTRS)
Pritchett, Amy R.
2001-01-01
This project examined the application of intelligent cockpit systems to aid air transport pilots in reacting to in-flight system failures and in planning and then following a safe four-dimensional trajectory to the runway threshold during emergencies. Two studies were conducted. The first examined pilot performance with a prototype awareness/alerting system in reacting to on-board system failures. In a full-motion, high-fidelity simulator, Army helicopter pilots were asked to fly a mission during which, without warning or briefing, 14 different failures were triggered at random times. Results suggest that the amount of information pilots require from such diagnostic systems is strongly dependent on their training: for failures they are commonly trained to react to with a procedural response, they needed only an indication of which failure had occurred, while for 'un-trained' failures, they benefited from more intelligent and informative systems. Pilots were also found to over-rely on the system in conditions where it provided false or misleading information. In the second study, a proof-of-concept system was designed to help pilots replan their flights in emergency situations for quick, safe trajectory generation. This system is described in this report, including: the use of embedded fast-time simulation to predict the trajectory defined by a series of discrete actions; the models of aircraft and pilot dynamics required by the system; and the pilot interface. Then, results of a flight simulator evaluation with airline pilots are detailed. In 6 of 72 simulator runs, pilots were not able to establish a stable flight path on localizer and glideslope, suggesting a need for cockpit aids. However, results also suggest that, to be operationally feasible, such an aid must be capable of suggesting safe trajectories to the pilot; an aid that only verified plans entered by the pilot was found to have significantly detrimental effects on performance and pilot workload. Results also highlight that the trajectories suggested by the aid must capture the context of the emergency; for example, in some emergencies pilots were willing to violate flight envelope limits to reduce time in flight, while in other emergencies the opposite was found.
High Speed Dynamics in Brittle Materials
NASA Astrophysics Data System (ADS)
Hiermaier, Stefan
2015-06-01
Brittle materials under high-speed and shock loading provide a continuous challenge in experimental physics, analysis, and numerical modelling, and consequently for engineering design. The dependence of damage and fracture processes on material-inherent length and time scales, and the influence of defects, rate-dependent material properties, and inertia effects on different scales, make their understanding a true multi-scale problem. In addition, it is not uncommon for materials to show a transition from ductile to brittle behavior when the loading rate is increased. A particular case is spallation, a brittle tensile failure induced by the interaction of stress waves leading to a sudden change from compressive to tensile loading states, which can be invoked in various materials. This contribution highlights typical phenomena occurring when brittle materials are exposed to high loading rates in applications such as blast and impact on protective structures, or meteorite impact on geological materials. A short review of the experimental methods used for dynamic characterization of brittle materials is given. A close interaction of experimental analysis and numerical simulation has turned out to be very helpful in analyzing experimental results. For this purpose, adequate numerical methods are required. Cohesive zone models are one possible method for the analysis of brittle failure as long as some degree of tension is present. Their recent successful application in meso-mechanical simulations of concrete in Hopkinson-type spallation tests provides new insight into the dynamic failure process. Failure under compressive loading is a particular challenge for numerical simulations, as it involves crushing of material which in turn influences stress states in other parts of a structure. On a continuum scale, it can be modeled using more or less complex plasticity models combined with failure surfaces, as will be demonstrated for ceramics.
Models which take microstructural cracking directly into account may provide a more physics-based approach for compressive failure in the future.
Hou, Kun-Mean; Zhang, Zhan
2017-01-01
Cyber Physical Systems (CPSs) need to interact with the changeable environment under various interferences. To provide continuous and high quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee the consistent observation of event orders for causal reasoning, this work first proposes a relative time-based method to improve the composability and compositionality of the timing property of events. Based on the relative time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (subsystems of CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and we compare the reliability with the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against the complexity of decision and random failure. This paper also shows a way to simplify the evaluation for dynamic system by improving the composability and compositionality of the subsystem. PMID:29120357
Molecular Dynamics Modeling of PPTA Crystals in Aramid Fibers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercer, Brian Scott
2016-05-19
In this work, molecular dynamics modeling is used to study the mechanical properties of PPTA crystallites, which are the fundamental microstructural building blocks of polymer aramid fibers such as Kevlar. Particular focus is given to constant strain rate axial loading simulations of PPTA crystallites, which is motivated by the rate-dependent mechanical properties observed in some experiments with aramid fibers. In order to accommodate the covalent bond rupture that occurs in loading a crystallite to failure, the reactive bond-order force field ReaxFF is employed to conduct the simulations. Two major topics are addressed. The first is the general behavior of PPTA crystallites under strain rate loading. Constant strain rate loading simulations of crystalline PPTA reveal that the crystal failure strain increases with increasing strain rate, while the modulus is not affected by the strain rate. Increasing temperature lowers both the modulus and the failure strain. The simulations also identify the C-N bond connecting the aromatic rings as the weakest primary bond along the backbone of the PPTA chain. The effect of chain-end defects on PPTA micromechanics is explored, and it is found that the presence of a chain-end defect transfers load to the adjacent chains in the hydrogen-bonded sheet in which the defect resides, but does not influence the behavior of any other chains in the crystal. Chain-end defects are found to lower the strength of the crystal when clustered together, inducing bond failure via stress concentrations arising from the load transfer to bonds in adjacent chains near the defect site. The second topic addressed is the nature of primary and secondary bond failure in crystalline PPTA. Failure of both types of bonds is found to be stochastic in nature and driven by thermal fluctuations of the bonds within the crystal.
A model is proposed which uses reliability theory to model bonds under constant strain rate loading as components with time-dependent failure rate functions. The model is shown to work well for predicting the onset of primary backbone bond failure, as well as the onset of secondary bond failure via chain slippage for the case of isolated non-interacting chain-end defects.
An interesting case of rifampicin-dependent/-enhanced multidrug-resistant tuberculosis.
Zhong, M; Zhang, X; Wang, Y; Zhang, C; Chen, G; Hu, P; Li, M; Zhu, B; Zhang, W; Zhang, Y
2010-01-01
We report a case of rifampicin (RMP)-dependent/-enhanced multidrug-resistant tuberculosis (MDR-TB) from a patient who had been treated with the World Health Organization optional thrice-weekly regimen, and document the clinical and bacteriological features. RMP-enhanced tubercle bacilli that grew poorly without RMP but grew better in its presence were isolated from the patient with treatment failure. The bacteria grown without RMP consisted of mixed morphologies of short rod-shaped acid-fast bacteria and poorly stained coccoid bacteria, but stained normally as acid-fast rods when grown in the presence of RMP. The isolated RMP-enhanced bacteria harbored the common S531L mutation and a novel mutation, F584S, in the rpoB gene. Treatment containing RMP, or replacement of RMP with the more powerful rifapentine, paradoxically aggravated the disease, but its removal led to successful cure of the patient. This study highlights the potential dangers of continued treatment of MDR-TB with rifamycins, which can occur due to delayed or absent drug susceptibility results, and calls for timely detection of RMP-dependent/-enhanced bacteria in treatment failure patients by including RMP in culture media and removing RMP from the treatment regimen upon detection.
Prerenal azotemia in congestive heart failure.
Macedo, Etienne; Mehta, Ravindra
2010-01-01
Prerenal failure is used to designate a reversible form of acute renal dysfunction. However, the terminology encompasses different conditions that vary considerably. The Acute Kidney Injury Network group has recently standardized the acute kidney injury (AKI) definition and classification system; however, these criteria do not provide specific diagnostic criteria to classify prerenal conditions. The differences in the pathophysiology and manifestations of prerenal failure suggest that our current approach needs to be re-evaluated. Several mechanisms are recognized as contributing to the development of a prerenal state associated with cardiac failure. Because of the broad differences in patients' reserve capacity and functional status, prerenal states may be triggered at different time points during the course of the disease. The prerenal state needs to be classified depending on the underlying capacity for compensation, the nature and timing of the insult, and the adaptation to chronic comorbidities. Current diagnosis of prerenal conditions is relatively insensitive and would benefit from additional research to define and classify the condition. Identification of high-risk states and high-risk processes, together with the use of new biomarkers for AKI, will provide new tools to distinguish between prerenal and established AKI. Achieving a consensus definition of the prerenal syndrome will allow physicians to describe treatments and interventions as well as conduct and compare epidemiological studies in order to better describe the implications of this syndrome. Copyright 2010 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashiwa, Bryan Andrew; Hull, Lawrence Mark
Highlights of recent phenomenological studies of metal failure are given. Failure leading to spallation and fragmentation is typically of interest. The current ‘best model’ includes the following: a full-history stress in tension; nucleation initiating dynamic relaxation toward a tensile yield function; failure dependent on strain, strain rate, and temperature; a mean-preserving ‘macrodefect’ introduced when failure occurs in tension; and multifield theoretical refinements.
A bivariate model for analyzing recurrent multi-type automobile failures
NASA Astrophysics Data System (ADS)
Sunethra, A. A.; Sooriyarachchi, M. R.
2017-09-01
The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of failure serve as response variables. However, these two response variables are highly correlated with each other, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting of time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared-parameter models and random-effects models for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by user-programming the appropriate likelihood functions.
The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and the bivariate model was found to perform better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.
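The shared-parameter formulation described above (fitted in the abstract via SAS PROC NLMIXED) can be sketched in Python. The sketch below uses a hypothetical parameterization, not the authors' actual model: a Weibull time-to-failure density and a multinomial failure-type model are joined through a shared per-automobile random effect `b`; the frailty-style scale modulation and the mode-specific loadings `gammas` are illustrative assumptions.

```python
import math

def weibull_logpdf(t, shape, scale):
    # log-density of Weibull(shape, scale) at t > 0
    return (math.log(shape / scale)
            + (shape - 1) * math.log(t / scale)
            - (t / scale) ** shape)

def joint_loglik(failures, shape, scale, type_logits, gammas, b):
    """Joint log-likelihood of one automobile's recurrences, conditional on
    its shared random effect b (hypothetical parameterization).

    failures    : list of (time, type_index) recurrences
    type_logits : baseline multinomial logits, one per failure mode
    gammas      : mode-specific loadings linking b to the type model
    """
    ll = 0.0
    for t, k in failures:
        # time part: Weibull whose scale is modulated by the shared effect
        ll += weibull_logpdf(t, shape, scale * math.exp(-b))
        # type part: multinomial logit with mode-specific shifts gamma_k * b
        shifted = [lg + g * b for lg, g in zip(type_logits, gammas)]
        norm = math.log(sum(math.exp(s) for s in shifted))
        ll += shifted[k] - norm
    return ll
```

In a full fit, `b` would be integrated out numerically (e.g., by adaptive Gaussian quadrature, as PROC NLMIXED does) and the marginal likelihood maximized over all automobiles.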
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically suited to handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are resolved.
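For context, the inactivity time underlying such models is standardly defined as follows (a textbook definition, not taken verbatim from the paper): for a lifetime $X$ with cdf $F$, the inactivity time of a unit inspected at time $t$ and found failed is $X_{(t)} = t - X \mid X \le t$, with mean inactivity time

```latex
\mathrm{MIT}(t)
  \;=\; \mathbb{E}\left[\, t - X \;\middle|\; X \le t \,\right]
  \;=\; \frac{\displaystyle\int_{0}^{t} F(x)\,\mathrm{d}x}{F(t)},
  \qquad t > 0,\; F(t) > 0 .
```

The 'average inactivity time model' of the abstract then averages such quantities over a random mixing variable, which is what captures the heterogeneity induced by inactive items.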
Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin
2017-08-04
In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
1982-06-01
Stochastic Availability of a Repairable System with an Age and Maintenance Dependent Failure Rate, by Jack-Kang Chan. Polytechnic Institute, June 1982, Report No. Poly EE/CS 82-004. Contents include: 1.1 Concepts of System Availability; 1.2 Maintenance and Failure Rate; 1.3 Summary; Chapter 2, System Model; 2.1 A Repairable System with Maintenance
Chetta, Matthew D; Aliu, Oluseyi; Zhong, Lin; Sears, Erika D; Waljee, Jennifer F; Chung, Kevin C; Momoh, Adeyiza O
2017-04-01
Implant-based reconstruction rates have risen among irradiation-treated breast cancer patients in the United States. This study aims to assess the morbidity associated with various breast reconstruction techniques in irradiated patients. From the MarketScan Commercial Claims and Encounters database, the authors selected breast cancer patients who had undergone mastectomy, irradiation, and breast reconstruction from 2009 to 2012. Demographic and clinical treatment data, including data on the timing of irradiation relative to breast reconstruction were recorded. Complications and failures after implant and autologous reconstruction were also recorded. A multivariable logistic regression model was developed with postoperative complications as the dependent variable and patient demographic and clinical variables as independent variables. Four thousand seven hundred eighty-one irradiated patients who met the inclusion criteria were selected. A majority of the patients [n = 3846 (80 percent)] underwent reconstruction with implants. Overall complication rates were 45.3 percent and 30.8 percent for patients with implant and autologous reconstruction, respectively. Failure of reconstruction occurred in 29.4 percent of patients with implant reconstruction compared with 4.3 percent of patients with autologous reconstruction. In multivariable logistic regression, irradiated patients with implant reconstruction had two times the odds of having any complication and 11 times the odds of failure relative to patients with autologous reconstruction. Implant-based breast reconstruction in the irradiated patient, although popular, is associated with significant morbidity. Failures of reconstruction with implants in these patients approach 30 percent in the short term, suggesting a need for careful shared decision-making, with full disclosure of the potential morbidity. Therapeutic, III.
Li, Mengmeng; Rao, Man; Chen, Kai; Zhou, Jianye; Song, Jiangping
2017-07-15
Real-time quantitative reverse transcriptase-PCR (qRT-PCR) is a feasible tool for determining gene expression profiles, but the accuracy and reliability of the results depend on the stable expression of the selected housekeeping genes across samples. To date, research on stable housekeeping genes in human heart failure samples is rare. Moreover, the effect of heart failure on the expression of housekeeping genes in the right and left ventricles is yet to be studied. We therefore aim to provide stable housekeeping genes for both ventricles in heart failure and normal heart samples. In this study, we selected seven commonly used housekeeping genes as candidates. Using qRT-PCR, the expression levels of ACTB, RAB7A, GAPDH, REEP5, RPL5, PSMB4 and VCP in eight heart failure and four normal heart samples were assessed. The stability of the candidate housekeeping genes was evaluated with the geNorm and NormFinder software tools. GAPDH showed the least variation across all heart samples. Results also indicated differences in gene expression between the left and right ventricles in heart failure. GAPDH had the highest expression stability in both heart failure and normal heart samples. We also propose using different sets of housekeeping genes for the left and right ventricles. The combination of RPL5, GAPDH and PSMB4 is suitable for the right ventricle, and the combination of GAPDH, REEP5 and RAB7A is suitable for the left ventricle. Copyright © 2017 Elsevier B.V. All rights reserved.
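The geNorm stability measure used to rank such candidates can be sketched as follows. This is a minimal reading of the published geNorm M-value (the average pairwise variation of log-ratios across samples); the data layout is an assumption of the sketch:

```python
from statistics import stdev
from math import log2

def genorm_m(expr):
    """geNorm-style stability measure M for each candidate gene.

    expr: dict mapping gene name -> list of relative expression values,
          one per sample (same sample order for every gene).
    A gene whose expression ratio to every other gene is constant across
    samples gets a low M; lower M = more stable.
    """
    genes = list(expr)
    m = {}
    for j in genes:
        pair_sds = []
        for k in genes:
            if k == j:
                continue
            # standard deviation of the log2 expression ratios j/k
            ratios = [log2(a / b) for a, b in zip(expr[j], expr[k])]
            pair_sds.append(stdev(ratios))
        m[j] = sum(pair_sds) / len(pair_sds)
    return m
```

geNorm proper also iteratively excludes the least stable gene and recomputes M; the single pass above shows the core quantity only.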
Optimization and resilience of complex supply-demand networks
NASA Astrophysics Data System (ADS)
Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng
2015-06-01
Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in load requirements for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized by the cascading failure size, i.e., the number of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of the cascading failure, but has little effect on the final cascade size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required.
These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
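A toy version of the cascading-failure mechanism described above can be sketched in Python. The routing rule (each user sends one unit of demand along a BFS shortest path to the nearest supplier) and the capacity rule (a tolerance margin over each link's initial load) are simplifying assumptions of this sketch, not the paper's exact model:

```python
from collections import deque

def bfs_route(adj, suppliers, users):
    """Route one unit of demand per user along a BFS shortest path to the
    nearest supplier; return the resulting load per undirected link."""
    load = {}
    for u in users:
        prev = {u: None}
        queue = deque([u])
        hit = None
        while queue and hit is None:
            x = queue.popleft()
            for y in adj.get(x, ()):
                if y not in prev:
                    prev[y] = x
                    if y in suppliers:
                        hit = y
                        break
                    queue.append(y)
        if hit is None:
            continue  # this user has no surviving route
        node = hit
        while prev[node] is not None:
            e = frozenset((node, prev[node]))
            load[e] = load.get(e, 0) + 1
            node = prev[node]
    return load

def cascade(edges, suppliers, users, tolerance, trigger):
    """Remove `trigger`, then iteratively fail any link whose rerouted load
    exceeds (1 + tolerance) * its initial load. Returns the surviving links
    and the set of users left with no route to any supplier."""
    def make_adj(es):
        adj = {}
        for a, b in es:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        return adj

    # capacities from initial loads; initially unloaded links get capacity 0
    cap = {frozenset(e): 0 for e in edges}
    for e, l in bfs_route(make_adj(edges), suppliers, users).items():
        cap[e] = (1 + tolerance) * l
    alive = {frozenset(e) for e in edges} - {frozenset(trigger)}
    while True:
        load = bfs_route(make_adj(tuple(e) for e in alive), suppliers, users)
        over = {e for e, l in load.items() if l > cap[e]}
        if not over:
            break
        alive -= over
    adj = make_adj(tuple(e) for e in alive)
    failed_users = {u for u in users if not bfs_route(adj, suppliers, [u])}
    return alive, failed_users
```

A user "fails" when no route to any supplier survives; the cascade size in the abstract's sense is `len(failed_users)`.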
Seto, Emily; Leonard, Kevin J; Cafazzo, Joseph A; Barnsley, Jan; Masino, Caterina; Ross, Heather J
2012-02-10
Previous trials of heart failure telemonitoring systems have produced inconsistent findings, largely due to diverse interventions and study designs. The objectives of this study are (1) to provide in-depth insight into the effects of telemonitoring on self-care and clinical management, and (2) to determine the features that enable successful heart failure telemonitoring. Semi-structured interviews were conducted with 22 heart failure patients attending a heart function clinic who had used a mobile phone-based telemonitoring system for 6 months. The telemonitoring system required the patients to take daily weight and blood pressure readings, weekly single-lead ECGs, and to answer daily symptom questions on a mobile phone. Instructions were sent to the patient's mobile phone based on their physiological values. Alerts were also sent to a cardiologist's mobile phone, as required. All clinicians involved in the study were also interviewed post-trial (N = 5). The interviews were recorded, transcribed, and then analyzed using a conventional content analysis approach. The telemonitoring system improved patient self-care by instructing the patients in real-time how to appropriately modify their lifestyle behaviors. Patients felt more aware of their heart failure condition, less anxiety, and more empowered. Many were willing to partially fund the use of the system. The clinicians were able to manage their patients' heart failure conditions more effectively, because they had physiological data reported to them frequently to help in their decision-making (eg, for medication titration) and were alerted at the earliest sign of decompensation. Essential characteristics of the telemonitoring system that contributed to improved heart failure management included immediate self-care and clinical feedback (ie, teachable moments), the ease and speed of use, and the tangible benefits that patients and clinicians perceived from telemonitoring.
Some clinical concerns included ongoing costs of the telemonitoring system and increased clinical workload. A few patients did not want to be watched long-term while some were concerned they might become dependent on the system. The success of a telemonitoring system is highly dependent on its features and design. The essential system characteristics identified in this study should be considered when developing telemonitoring solutions.
The Stigma of Failure in Organizations
2008-01-15
Suggests why the stigma of failure may not always apply in public-sector organizations, and why the development of entrepreneurship within organizations may be path-dependent. Subject terms: corporate entrepreneurship; stigma of failure.
NASA Astrophysics Data System (ADS)
Xu, Yuan; Dai, Feng
2018-03-01
A novel method is developed for characterizing the mechanical response and failure mechanism of brittle rocks under dynamic compression-shear loading: an inclined cylinder specimen tested in a modified split Hopkinson pressure bar (SHPB) system. With the specimen axis inclined to the loading direction of the SHPB, a shear component can be introduced into the specimen. Both static and dynamic experiments are conducted on sandstone specimens. With careful pulse shaping, dynamic equilibrium of the inclined specimens can be satisfied, and thus the quasi-static data reduction is employed. The normal and shear stress-strain relationships of the specimens are subsequently established. The progressive failure process of the specimen, illustrated via high-speed photographs, manifests a mixed failure mode accommodating both shear-dominated failure and localized tensile damage. The elastic and shear moduli exhibit certain loading-path dependence under quasi-static loading but loading-path insensitivity under high loading rates. Loading rate dependence is clearly demonstrated through the failure characteristics, involving fragmentation, compression and shear strength, and failure surfaces based on the Drucker-Prager criterion. Our proposed method is convenient and reliable for studying the dynamic response and failure mechanism of rocks under combined compression-shear loading.
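As a simplified illustration of the data reduction for an inclined specimen, the SHPB axial force can be resolved into nominal normal and shear components acting on the specimen. Dividing both components by the nominal cross-sectional area is an assumption of this sketch, not necessarily the authors' exact reduction:

```python
import math

def resolve_inclined(force, area, theta_deg):
    """Resolve an axial SHPB force into nominal normal and shear stresses on
    a cylinder specimen whose axis is inclined at theta_deg to the loading
    direction. Simplified quasi-static reduction: the force components along
    and across the specimen axis, each divided by the cross-sectional area.
    """
    th = math.radians(theta_deg)
    sigma_n = force * math.cos(th) / area  # normal (compressive) component
    tau = force * math.sin(th) / area      # shear component
    return sigma_n, tau
```

At zero inclination the test degenerates to conventional uniaxial SHPB compression (no shear), and at 45 degrees the nominal normal and shear components are equal.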
DOE Office of Scientific and Technical Information (OSTI.GOV)
Intravaia, F.; Behunin, R. O.; Henkel, C.
Here, we discuss the failure of the Markov approximation in the description of atom-surface fluctuation-induced interactions, both in equilibrium (Casimir-Polder forces) and out of equilibrium (quantum friction). Using general theoretical arguments, we show that the Markov approximation can lead to erroneous predictions of such phenomena with regard to both strength and functional dependencies on system parameters. Particularly, we show that the long-time power-law tails of two-time dipole correlations and their corresponding low-frequency behavior, neglected in the Markovian limit, affect the prediction of the force. These findings highlight the importance of non-Markovian effects in dispersion interactions.
1994-01-01
[Garbled scanned-text fragment; the recoverable phrases refer to mixed-field dosimetry, transformation data sets, fractions of hypersensitive cells, and in vivo and in vitro studies of radiation carcinogenesis.]
NASA Astrophysics Data System (ADS)
Brideau, Marc-André; Yan, Ming; Stead, Doug
2009-01-01
Rock slope failures are frequently controlled by a complex combination of discontinuities that facilitate kinematic release. These discontinuities are often associated with discrete folds, faults, and shear zones, and/or related tectonic damage. The authors, through detailed case studies, illustrate the importance of considering the influence of tectonic structures not only on three-dimensional kinematic release but also on the reduction of rock mass properties due to induced damage. The case studies selected reflect a wide range of rock mass conditions. In addition to active rock slope failures, they include two major historic failures: the Hope Slide, which occurred in British Columbia in 1965, and the Randa rockslides, which occurred in Switzerland in 1991. Detailed engineering geological mapping combined with rock testing, GIS data analysis and, for selected cases, numerical modelling has shown that specific rock slope failure mechanisms may be conveniently related to rock mass classifications such as the Geological Strength Index (GSI). The importance of brittle intact rock fracture in association with pre-existing rock mass damage is emphasized through a consideration of the processes involved in the progressive, time-dependent development not only of through-going failure surfaces but also of lateral and rear-release mechanisms. Preliminary modelling data are presented to illustrate the importance of intact rock fracture and step-path failure mechanisms, and the results are discussed with reference to selected field observations. The authors emphasize the importance of considering all forms of pre-existing rock mass damage when assessing potential or operative failure mechanisms. It is suggested that a rock slope rock mass damage assessment can provide an improved understanding of the potential failure mode, the likely hazard presented, and appropriate methods of both analysis and remedial treatment.
Winther, Sine V.; Tuomainen, Tomi; Borup, Rehannah; Tavi, Pasi; Antoons, Gudrun; Thomsen, Morten B.
2016-01-01
The heart-failure relevant Potassium Channel Interacting Protein 2 (KChIP2) augments CaV1.2 and KV4.3. KChIP3 represses CaV1.2 transcription in cardiomyocytes via interaction with regulatory DNA elements. Hence, we tested for the nuclear presence of KChIP2 and whether KChIP2 translocates into the nucleus in a Ca2+-dependent manner. Cardiac biopsies from human heart-failure patients and healthy donor controls showed that nuclear KChIP2 abundance was significantly increased in heart failure; however, this was secondary to a large variation in total KChIP2 content. Administration of ouabain did not increase KChIP2 content in nuclear protein fractions in anesthetized mice. KChIP2 was expressed in cell lines, and Ca2+ ionophores were applied in a concentration- and time-dependent manner. The cell lines had KChIP2-immunoreactive protein in the nucleus in the absence of treatments to modulate intracellular Ca2+ concentration. Neither increasing nor decreasing intracellular Ca2+ concentrations caused translocation of KChIP2. Microarray analysis did not identify relief of transcriptional repression in murine KChIP2−/− heart samples. We conclude that although there is a baseline presence of KChIP2 in the nucleus both in vivo and in vitro, KChIP2 does not directly regulate transcriptional activity. Moreover, the nuclear transport of KChIP2 is not dependent on Ca2+. Thus, KChIP2 does not function as a conventional transcription factor in the heart. PMID:27349185
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 20 titles deal with a variety of topics, including the following: the relationships between reading achievement and such factors as dependency, attitude toward reading, mastery of word attack skills, reaction time on selected…
Experimental Study of Solder/Copper Interface Failure Under Varying Strain Rates
2011-03-01
Factors affecting solder joint reliability: Gu et al. [1] determined that during the life cycle of electronic assemblies, approximately 55 percent of failures are related to vibration and shock, with the remaining percentage associated with changes in humidity. Research conducted by Ross et al. [2] adds that creep strain is the most important time-dependent factor affecting the reliability of solder joints in electronic equipment.
NASA Astrophysics Data System (ADS)
Che-Aron, Z.; Abdalla, A. H.; Abdullah, K.; Hassan, W. H.
2013-12-01
In recent years, Cognitive Radio (CR) technology has attracted significant study and research. A Cognitive Radio Ad Hoc Network (CRAHN) is an emerging self-organized, multi-hop, wireless network which allows unlicensed users to opportunistically access available licensed spectrum bands for data communication in an intelligent and cautious manner. However, in CRAHNs, many failures can easily occur during data transmission, caused by PU (Primary User) activity, topology change, node fault, or link degradation. In this paper, an attempt has been made to evaluate the performance of the Multi-Radio Link-Quality Source Routing (MR-LQSR) protocol in CRAHNs under different path failure rates. In the MR-LQSR protocol, the Weighted Cumulative Expected Transmission Time (WCETT) is used as the routing metric. The simulations are carried out using the NS-2 simulator. The protocol performance is evaluated with respect to metrics such as average throughput, packet loss, average end-to-end delay, and average jitter. From the simulation results, it is observed that the number of path failures depends on the number of PUs and the mobility rate of SUs (Secondary Users). Moreover, the protocol performance is greatly affected when the path failure rate is high, leading to major service outages.
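The WCETT metric used by MR-LQSR combines the cumulative Expected Transmission Time (ETT) of a path with the load on its most-used channel, trading end-to-end delay against channel diversity. A minimal sketch (the hop/channel encoding below is an assumption of the sketch):

```python
def wcett(hops, beta=0.5):
    """Weighted Cumulative Expected Transmission Time of a path.

    hops: list of (ett, channel) pairs, one per hop, where ett is the hop's
          Expected Transmission Time and channel identifies its radio channel.
    beta: tunable weight in [0, 1] between total path delay and the
          bottleneck-channel term X_j (sum of ETTs of hops on channel j).
    """
    total = sum(ett for ett, _ in hops)
    per_channel = {}
    for ett, ch in hops:
        per_channel[ch] = per_channel.get(ch, 0.0) + ett
    return (1 - beta) * total + beta * max(per_channel.values())
```

Paths that spread hops across channels get a smaller bottleneck term and are therefore preferred over same-channel paths of equal total ETT.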
NASA Astrophysics Data System (ADS)
Sexton, E.; Thomas, A.; Delbridge, B. G.
2017-12-01
Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities on aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of these aftershock sequences. In strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers in strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults which have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El-Mayor Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence (ETAS) model. In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \sum_{i: t_i < t} K e^{α(M_i − M_c)} / (t − t_i + c)^p, where λ_0 is the background rate, t_i and M_i are the times and magnitudes of prior events, M_c is the reference magnitude, and c and p are modified-Omori decay parameters.
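A temporal ETAS conditional intensity of the standard form can be evaluated directly; the parameter names below are conventional ETAS notation, not values taken from the abstract:

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m_ref):
    """Temporal ETAS conditional intensity lambda(t).

    events: list of (t_i, m_i) past occurrence times and magnitudes.
    Each past event contributes a modified-Omori term (decay parameters
    c, p) scaled exponentially by its magnitude above the reference m_ref;
    mu is the background rate.
    """
    lam = mu
    for t_i, m_i in events:
        if t_i < t:
            lam += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return lam
```

The intensity is highest immediately after a large event and decays toward the background rate, which is the time dependence the abstract compares across regions of enhanced and reduced stress.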
NASA Astrophysics Data System (ADS)
Du, Kun; Tao, Ming; Li, Xi-bing; Zhou, Jian
2016-09-01
Slabbing/spalling and rockburst are unconventional types of failure of hard rocks under conditions of unloading and various dynamic loads in environments with high and complex initial stresses. In this study, the failure behaviors of different rock types (granite, red sandstone, and cement mortar) were investigated using a novel testing system coupled to true-triaxial static loads and local dynamic disturbances. An acoustic emission system and a high-speed camera were used to record the real-time fracturing processes. The true-triaxial unloading test results indicate that slabbing occurred in the granite and sandstone, whereas the cement mortar underwent shear failure. Under local dynamically disturbed loading, none of the specimens displayed obvious fracturing at low-amplitude local dynamic loading; however, the degree of rock failure increased as the local dynamic loading amplitude increased. The cement mortar displayed no failure during testing, showing a considerable load-carrying capacity after testing. The sandstone underwent a relatively stable fracturing process, whereas violent rockbursts occurred in the granite specimen. The fracturing process does not appear to depend on the direction of local dynamic loading, and the acoustic emission count rate during rock fragmentation shows that similar crack evolution occurred under the two test scenarios (true-triaxial unloading and local dynamically disturbed loading).
Nonlinear temperature dependent failure analysis of finite width composite laminates
NASA Technical Reports Server (NTRS)
Nagarkar, A. P.; Herakovich, C. T.
1979-01-01
A quasi-three-dimensional, nonlinear elastic finite element stress analysis of finite-width composite laminates, including curing stresses, is presented. Cross-ply, angle-ply, and two quasi-isotropic graphite/epoxy laminates are studied. Curing stresses are calculated using temperature-dependent elastic properties that are input as percent-retention curves, and stresses due to mechanical loading in the form of an axial strain are calculated using tangent moduli obtained from Ramberg-Osgood parameters. It is shown that curing stresses and stresses due to tensile loading are significant as edge effects in all types of laminates studied. The tensor polynomial failure criterion is used to predict the initiation of failure. The mode of failure is predicted by examining the individual stress contributions to the tensor polynomial.
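For plane stress, the tensor polynomial (Tsai-Wu) criterion mentioned above reduces to a quadratic failure index; examining the individual terms is exactly how the mode of failure is diagnosed. The interaction coefficient F12 below uses a common default, which is an assumption of this sketch:

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; >= 1.0 indicates failure onset.

    s1, s2, t12 : lamina stresses (fiber, transverse, in-plane shear)
    Strengths are positive magnitudes: Xt/Xc longitudinal tension and
    compression, Yt/Yc transverse, S in-plane shear.
    """
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2 * F12 * s1 * s2)
```

By construction the index is exactly 1 at each uniaxial strength (e.g., s1 = Xt with the other stresses zero), and the dominant term identifies the failure mode.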
Bristow, Michael R; Kao, David P; Breathett, Khadijah K; Altman, Natasha L; Gorcsan, John; Gill, Edward A; Lowes, Brian D; Gilbert, Edward M; Quaife, Robert A; Mann, Douglas L
2017-11-01
Diagnosis, prognosis, treatment, and development of new therapies for diseases or syndromes depend on a reliable means of identifying phenotypes associated with distinct predictive probabilities for these various objectives. Left ventricular ejection fraction (LVEF) provides the current basis for combined functional and structural phenotyping in heart failure by classifying patients as those with heart failure with reduced ejection fraction (HFrEF) and those with heart failure with preserved ejection fraction (HFpEF). Recently the utility of LVEF as the major phenotypic determinant of heart failure has been challenged based on its load dependency and measurement variability. We review the history of the development and adoption of LVEF as a critical measurement of LV function and structure and demonstrate that, in chronic heart failure, load dependency is not an important practical issue, and we provide hemodynamic and molecular biomarker evidence that LVEF is superior or equal to more unwieldy methods of identifying phenotypes of ventricular remodeling. We conclude that, because it reliably measures both left ventricular function and structure, LVEF remains the best current method of assessing pathologic remodeling in heart failure in both individual clinical and multicenter group settings. Because of the present and future importance of left ventricular phenotyping in heart failure, LVEF should be measured by using the most accurate technology and methodologic refinements available, and improved characterization methods should continue to be sought. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Time-Dependent Behavior of High-Strength Kevlar and Vectran Webbing
NASA Technical Reports Server (NTRS)
Jones, Thomas C.; Doggett, William R.
2014-01-01
High-strength Kevlar and Vectran webbings are currently being used by both NASA and industry as the primary load-bearing structure in inflatable space habitation modules. The time-dependent behavior of high-strength webbing architectures is a vital area of research that is providing critical material data to guide a more robust design process for this class of structures. This paper details the results of a series of time-dependent tests on 1-inch-wide webbing, including an initial set of comparative tests between specimens that underwent real-time and accelerated creep at 65 and 70% of their ultimate tensile strength. Variability in the ultimate tensile strength of the webbings is investigated and compared with variability in the creep-life response. Additional testing studied the effects of load and displacement rate, specimen length, and the time-dependent effects of preconditioning the webbings. The creep test facilities, instrumentation, and test procedures are also detailed. The accelerated creep tests display consistently longer times to failure than their real-time counterparts; however, several factors were identified that may contribute to the observed disparity. Test setup and instrumentation, grip type, loading scheme, thermal environment, and accelerated-test postprocessing, along with material variability, are among these factors. Their effects are discussed, and future work is detailed for the exploration and elimination of some of these factors in order to achieve a higher-fidelity comparison.
NASA Astrophysics Data System (ADS)
Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.
2017-08-01
The nonlinear deformation and fracture of RTM6 epoxy resin are characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain rate and temperature. The pressure dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum-principal-stress-driven brittle failure. The constitutive model and a stress-triaxiality-dependent, effective-plastic-strain-based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite materials. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.
A model for the progressive failure of laminated composite structural components
NASA Technical Reports Server (NTRS)
Allen, D. H.; Lo, D. C.
1991-01-01
Laminated continuous-fiber polymeric composites are capable of sustaining substantial load-induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years, the authors have been developing one solution approach to this problem. In this approach, the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path-dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper, the model will be utilized to demonstrate the ability to predict the load-path dependence of the damage and stresses in plates subjected to fatigue loading.
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, the correlation curves of Doksum et al., and the local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
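The two-stage, partially parametric procedure described above (nonparametric margins first, then a profiled dependence parameter) can be sketched for uncensored data with a Clayton copula. The rank-based pseudo-observations, the grid search, and every parameter value here are illustrative assumptions, not the dissertation's implementation; with censoring, the margins would come from the Kaplan-Meier estimate instead.

```python
import math
import random

def clayton_logpdf(u, v, theta):
    # Log-density of the Clayton copula at (u, v), for theta > 0.
    return (math.log(1.0 + theta)
            - (theta + 1.0) * (math.log(u) + math.log(v))
            - (2.0 + 1.0 / theta) * math.log(u ** -theta + v ** -theta - 1.0))

def pseudo_obs(x):
    # Stage 1 stand-in for the margins: rank-based pseudo-observations.
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1.0)
    return u

def two_stage_clayton(x, y):
    # Stage 2: with the margins fixed, profile the dependence parameter
    # over a coarse grid (illustrative; a real fit would use an optimizer).
    u, v = pseudo_obs(x), pseudo_obs(y)
    grid = [0.1 * k for k in range(1, 101)]
    return max(grid, key=lambda t: sum(clayton_logpdf(a, b, t)
                                       for a, b in zip(u, v)))

# Simulate bivariate failure times from a Clayton copula (theta = 2) with
# unit-exponential margins, then recover theta by the two-stage fit.
random.seed(0)
theta_true, xs, ys = 2.0, [], []
for _ in range(500):
    u, w = random.random(), random.random()
    v = ((w ** (-theta_true / (1 + theta_true)) - 1)
         * u ** -theta_true + 1) ** (-1 / theta_true)
    xs.append(-math.log(1 - u))
    ys.append(-math.log(1 - v))
theta_hat = two_stage_clayton(xs, ys)
```

Because stage 2 conditions on the estimated margins, its standard errors must account for the stage-1 estimation error, which is exactly the asymptotic analysis the abstract refers to.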
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
NASA Technical Reports Server (NTRS)
Rotem, Assa
1990-01-01
Laminated composite materials tend to fail differently under tensile or compressive load. Under tension, the material accumulates cracks and fiber fractures, while under compression, the material delaminates and buckles. Tensile-compressive fatigue may cause either of these failure modes, depending on the specific damage occurring in the laminate. This damage depends on the stress ratio of the fatigue loading. Analysis of the fatigue behavior of the composite laminate under tension-tension, compression-compression, and tension-compression has led to the development of a fatigue-envelope presentation of the failure behavior. This envelope indicates the specific failure mode for any stress ratio and number of loading cycles. The construction of the fatigue envelope is based on the applied stress-cycles to failure (S-N) curves of both tensile-tensile and compressive-compressive fatigue. Test results are presented to verify the theoretical analysis.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
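One plausible reading of the quantification step is that the residual risk discounts each failure-probability increment by the chance the mitigation has acted in time; the discretization and the two toy probability curves below are invented for illustration and are not specified by the patent.

```python
def risk_reduction(p_fail, p_mitigate, times):
    # Residual risk: probability mass of the failure mode reaching its
    # limit at time t, discounted by the chance the mitigation acts by t.
    raw = sum(p_fail(t) for t in times)
    residual = sum(p_fail(t) * (1.0 - p_mitigate(t)) for t in times)
    return (raw - residual) / raw  # fraction of risk the mitigation removes

# Toy curves: flat per-interval failure probability, and a mitigation that
# succeeds half the time regardless of when the limit is reached.
reduction = risk_reduction(lambda t: 0.01, lambda t: 0.5, range(1, 101))
```

With a constant 50% mitigation probability, exactly half of the raw risk is removed, which is a useful sanity check on the bookkeeping.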
NASA Technical Reports Server (NTRS)
Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
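The damage-accumulation scheme in the abstract (discretize the load history into short steps, sum the normalized damage per step, and declare failure at unity) can be sketched as follows. The power-law rupture life and all constants are stand-ins, not the modified Monkman-Grant fit CARES/CREEP uses.

```python
def creep_life(stress, A=1e-12, n=5.0):
    # Hypothetical power-law rupture life t_f = 1 / (A * stress**n);
    # CARES/CREEP would use a modified Monkman-Grant fit to creep data.
    return 1.0 / (A * stress ** n)

def rupture_time(stress_history, dt):
    # Hold the stress constant within each short time step, accumulate
    # normalized damage dt / t_f, and fail when the sum reaches unity.
    damage = 0.0
    for step, stress in enumerate(stress_history, start=1):
        damage += dt / creep_life(stress)
        if damage >= 1.0:
            return step * dt  # predicted creep rupture life
    return None  # component survives the analyzed history

# Constant stress of 200 (arbitrary units) over 20 steps of 0.5 time units.
life = rupture_time([200.0] * 20, 0.5)
```

In the finite element setting, the same sum runs at every integration point with the relaxing stress field, and the component life is set by the first point to reach unit damage.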
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
Deep geothermal: The ‘Moon Landing’ mission in the unconventional energy and minerals space
Regenauer-Lieb, Klaus; Bunger, Andrew; Chua, Hui Tong; ...
2015-01-30
Deep geothermal from the hot crystalline basement has remained an unsolved frontier for the geothermal industry for the past 30 years. This poses the challenge of developing a new unconventional geomechanics approach to stimulate such reservoirs. While a number of new unconventional brittle techniques are still available to improve stimulation on short time scales, the astonishing richness of failure modes on longer time scales in hot rocks has so far been overlooked. These failure modes represent a series of microscopic processes: brittle microfracturing prevails at low temperatures and fairly high deviatoric stresses, while upon increasing temperature and decreasing applied stress, or longer time scales, the failure modes switch to transgranular and intergranular creep fractures. Accordingly, fluids play an active role and create their own pathways through facilitating shear localization by a process of time-dependent dissolution and precipitation creep, rather than being a passive constituent that simply follows brittle fractures generated inside a shear zone by other localization mechanisms. We lay out a new paradigm for reservoir stimulation by reactivating pre-existing faults at reservoir scale in an aseismic, ductile manner. A side effect of the new "soft" stimulation method is that, owing to the design specification of a macroscopic ductile response, the proposed method offers the potential of safer control over the stimulation process compared with conventional stimulation protocols such as those currently employed in shale gas reservoirs.
Reliable Broadcast under Cascading Failures in Interdependent Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Lee, Sangkeun; Chinthavali, Supriya
Reliable broadcast is an essential tool to disseminate information among a set of nodes in the presence of failures. We present a novel study of reliable broadcast in interdependent networks, in which the failures in one network may cascade to another network. In particular, we focus on the interdependency between the communication network and the power grid, where the power grid depends on signals from the communication network for control and the communication network depends on the grid for power. In this paper, we build a resilient solution to handle crash failures in the communication network that may cause cascading failures and may even partition the network. In order to guarantee that all the correct nodes deliver the messages, we use soft links, which are inactive backup links to non-neighboring nodes that are only active when failures occur. At the core of our work is a fully distributed algorithm for the nodes to predict and collect the information of cascading failures so that soft links can be maintained to correct nodes prior to the failures. In the presence of failures, soft links are activated to guarantee message delivery, and new soft links are built accordingly for long-term robustness. Our evaluation results show that the algorithm achieves a low packet drop rate and handles cascading failures with little overhead.
Besser, Avi; Priel, Beatriz
2011-01-01
This study evaluated the intervening role of meaning-making processes in emotional responses to negative life events based on Blatt's (1974, 2004) formulations concerning the role of personality predispositions in depression. In a pre/post within-subject study design, a community sample of 233 participants reacted to imaginary scenarios of interpersonal rejection and achievement failure. Meaning-making processes relating to threats to self-definition and interpersonal relatedness were examined following the exposure to the scenarios. The results indicated that the personality predisposition of Dependency, but not Self-Criticism, predicted higher levels of negative affect following the interpersonal rejection event, independent of baseline levels of negative affect. This effect was mediated by higher levels of negative meaning-making processes related to the effect of the interpersonal rejection scenario on Dependent individuals' senses of interpersonal relatedness and self-worth. In addition, both Self-Criticism and Dependency predicted higher levels of negative affect following the achievement failure event, independent of baseline levels of negative affect. Finally, the effect of Self-Criticism was mediated by higher levels of negative meaning-making processes related to the effect of the achievement failure scenario on self-critical individuals' senses of self-definition.
Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.
This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from industry compilations of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance, based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional risk metrics with alternate risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on the safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.
Mission Data System Java Edition Version 7
NASA Technical Reports Server (NTRS)
Reinholtz, William K.; Wagner, David A.
2013-01-01
The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
NASA Astrophysics Data System (ADS)
Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.
2012-05-01
One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset-printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2, and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations for different illuminants.
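Color shifts of this kind are conventionally quantified as CIELAB color differences; a minimal sketch of the CIE76 formula follows (the study may well use a newer metric such as CIEDE2000 or a dedicated color inconstancy index, and the patch coordinates below are invented):

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB (L*, a*, b*).
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical coordinates of one printed patch under CIE D50 vs CIE A.
shift = delta_e_76((50.0, 10.0, 20.0), (53.0, 14.0, 20.0))
```

A difference of roughly one unit is near the just-noticeable threshold, so per-patch values like this, aggregated over many ink combinations, give the kind of illuminant-dependent shift map the study describes.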
Right ventricular sarcoidosis: is it time for updated diagnostic criteria?
Vakil, Kairav; Minami, Elina; Fishbein, Daniel P
2014-04-01
A 55-year-old woman with a history of complete heart block, atrial flutter, and progressive right ventricular failure was referred to our tertiary care center to be evaluated for cardiac transplantation. The patient's clinical course included worsening right ventricular dysfunction for 3 years before the current evaluation. Our clinical findings raised concerns about arrhythmogenic right ventricular cardiomyopathy. Noninvasive imaging, including a positron emission tomographic scan, did not reveal obvious myocardial pathologic conditions. Given the end-stage nature of the patient's right ventricular failure and her dependence on inotropic agents, she underwent urgent listing and subsequent heart transplantation. Pathologic examination of the explanted heart revealed isolated right ventricular sarcoidosis with replacement fibrosis. Biopsy samples of the cardiac allograft 6 months after transplantation showed no recurrence of sarcoidosis. This atypical presentation of isolated cardiac sarcoidosis posed a considerable diagnostic challenge. In addition to discussing the patient's case, we review the relevant medical literature and discuss the need for updated differential diagnostic criteria for end-stage right ventricular failure that mimics arrhythmogenic right ventricular cardiomyopathy.
Sensitivity study on durability variables of marine concrete structures
NASA Astrophysics Data System (ADS)
Zhou, Xin'gang; Li, Kefei
2013-06-01
In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was carried out. Using Fick's second law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The analysis shows that the design variables differ in their impact on concrete durability: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor were higher than the others, so a relatively small error in either induces a large error in concrete durability design and life prediction. Using probabilistic sensitivity analysis (PSA), the influence of the mean value and variance of the durability design variables on the durability failure probability was studied. The results provide quantitative measures of the importance of the variables in concrete durability design and life prediction. It was concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have the most influence on the reliability of marine concrete structural durability, so in durability design and life prediction of marine concrete structures it is very important to reduce the measurement and statistical errors of these variables.
Risk of Sprint Fidelis defibrillator lead failure is highly dependent on age.
Girerd, Nicolas; Nonin, Emilie; Pinot, Julien; Morel, Elodie; Flys, Carine; Scridon, Alina; Chevalier, Philippe
2011-01-01
In 2007, Medtronic Sprint Fidelis defibrillator leads were taken off the market due to a high rate of lead failure. Current data do not allow for risk stratification of patients with regard to lead failure. We sought to determine predictors of Sprint Fidelis lead failure. Between 2004 and 2007, 269 Sprint Fidelis leads were implanted in 258 patients in our centre. Variables associated with lead failure were assessed by the Kaplan-Meier method and a Cox survival model. During a median follow-up of 2.80 years (maximum 5.32), we observed 33 (12.3%) Sprint Fidelis lead failures (5-year survival, 65.6% ± 7.5%). In univariate analysis, age was the only predictor of lead failure (hazard ratio [HR] for 1-year increase 0.97; 95% confidence interval [CI] 0.95-0.99; p=0.009). Patients aged <62.5 years (the median) had a significantly increased risk of lead failure compared with patients aged >62.5 years (HR 2.80; CI 1.30-6.02; p=0.009). Survival without Sprint Fidelis lead failure was 55.6% ± 10.4% in patients aged <62.5 years (24/134 leads) vs 78.6% ± 8.8% in patients aged >62.5 years (9/135 leads). The annual incidence of lead failure in patients aged <62.5 years was 11.6% ± 4.9% during the fourth year after implantation and 22.9% ± 13.2% during the fifth year. Overall, we found a higher rate of Sprint Fidelis lead dysfunction than previously described. Lead failure was much more frequent in younger patients. Our results emphasize the need for close follow-up of younger patients with Sprint Fidelis leads and suggest that, in these patients, the implantation of a new implantable cardioverter defibrillator lead at the time of generator replacement might be reasonable. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
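A minimal sketch of the Kaplan-Meier product-limit estimate underlying the lead-survival figures above; the follow-up data are made up for illustration (event indicator 1 = lead failure, 0 = censored at last follow-up):

```python
def kaplan_meier(times, events):
    # Product-limit estimate: at each distinct failure time t, multiply the
    # running survival by (1 - deaths / number at risk just before t).
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t, deaths, leaving = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= leaving
    return curve

# Five hypothetical leads: failures at years 1, 2, 3; censoring at 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

Censored leads leave the risk set without driving the survival curve down, which is why the estimate stays unbiased when follow-up lengths differ, as they do here with implants spread over 2004 to 2007.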
Pollitz, F.F.; Sacks, I.S.
2002-01-01
The M 7.3 June 28, 1992 Landers and M 7.1 October 16, 1999 Hector Mine earthquakes, California, both right lateral strike-slip events on NNW-trending subvertical faults, occurred in close proximity in space and time in a region where recurrence times for surface-rupturing earthquakes are thousands of years. This suggests a causal role for the Landers earthquake in triggering the Hector Mine earthquake. Previous modeling of the static stress change associated with the Landers earthquake shows that the area of peak Hector Mine slip lies where the Coulomb failure stress promoting right-lateral strike-slip failure was high, but the nucleation point of the Hector Mine rupture was neutrally to weakly promoted, depending on the assumed coefficient of friction. Possible explanations that could account for the 7-year delay between the two ruptures include background tectonic stressing, dissipation of fluid pressure gradients, rate- and state-dependent friction effects, and post-Landers viscoelastic relaxation of the lower crust and upper mantle. By employing a viscoelastic model calibrated by geodetic data collected during the time period between the Landers and Hector Mine events, we calculate that postseismic relaxation produced a transient increase in Coulomb failure stress of about 0.7 bars on the impending Hector Mine rupture surface. The increase is greatest over the broad surface that includes the 1999 nucleation point and the site of peak slip further north. Since stress changes of magnitude greater than or equal to 0.1 bar are associated with documented causal fault interactions elsewhere, viscoelastic relaxation likely contributed to the triggering of the Hector Mine earthquake. This interpretation relies on the assumption that the faults occupying the central Mojave Desert (i.e., both the Landers and Hector Mine rupturing faults) were critically stressed just prior to the Landers earthquake.
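The Coulomb failure stress change central to this analysis combines the shear and normal stress changes resolved on the receiver fault. A minimal sketch follows; the 0.5-bar inputs are invented to illustrate a ~0.7-bar transient of the size the paper reports, and are not the paper's computed stress components:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    # dCFS = d_tau + mu' * d_sigma_n: d_tau is the shear stress change
    # resolved in the slip direction, d_sigma_n the normal stress change
    # (positive = unclamping), and mu' an effective friction coefficient.
    return d_tau + mu_eff * d_sigma_n

# Invented stress changes in bars; positive dCFS promotes failure.
dcfs = coulomb_stress_change(0.5, 0.5)
```

In the viscoelastic calculation, both inputs grow with time as the lower crust and upper mantle relax, which is how a static-stress framework produces the delayed, transient loading invoked to explain the 7-year gap.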
What Reliability Engineers Should Know about Space Radiation Effects
NASA Technical Reports Server (NTRS)
DiBari, Rebecca
2013-01-01
Space radiation presents unique failure modes and considerations for reliability engineers of space systems. Radiation effects are not a one-size-fits-all field. The threat conditions that must be addressed for a given mission depend on the mission orbital profile, the technologies of parts used in critical functions, and on application considerations such as supply voltages, temperature, duty cycle, and redundancy. In general, the threats that must be addressed are of two types: the cumulative degradation mechanisms of total ionizing dose (TID) and displacement damage (DD), and the prompt responses of components to ionizing particles (protons and heavy ions) falling under the heading of single-event effects. The degradation mechanisms generally behave like wear-out mechanisms on any active components in a system. Total Ionizing Dose (TID) and Displacement Damage: (1) TID affects all active devices over time. Devices can fail either because of parametric shifts that prevent the device from fulfilling its application or due to device failures where the device stops functioning altogether. Since this failure mode varies from part to part and lot to lot, lot qualification testing with sufficient statistics is vital. Displacement damage failures are caused by the displacement of semiconductor atoms from their lattice positions. As with TID, failures can be either parametric or catastrophic, although parametric degradation is more common for displacement damage. Lot testing is critical not just to assure proper device functionality throughout the mission; it can also suggest remediation strategies when a device fails. This paper will look at these effects on a variety of devices in a variety of applications. (2) On the NEAR mission, a functional failure was traced to a PIN diode failure caused by TID-induced high leakage currents.
NEAR was able to recover from the failure by reversing the current of a nearby thermoelectric cooler (turning the TEC into a heater). The elevated temperature caused the PIN diode to anneal and the device to recover. It was through lot qualification testing that NEAR knew the diode would recover when annealed. Single-Event Effects (SEE): (1) In contrast to TID and displacement damage, single-event effects resemble random failures. SEE modes can range from changes in device logic (single-event upset, or SEU) and temporary disturbances (single-event transients, or SET) to catastrophic effects such as the destructive SEE modes: single-event latchup (SEL), single-event gate rupture (SEGR), and single-event burnout (SEB). (2) The consequences of nondestructive SEE modes such as SEU and SET depend critically on their application and may range from trivial nuisance errors to catastrophic loss of mission. It is critical not just to ensure that potentially susceptible devices are well characterized for their susceptibility, but also to work with design engineers to understand the implications of each error mode. For destructive SEE, the predominant risk mitigation strategy is to avoid susceptible parts or, if that is not possible, to avoid conditions under which the part may be susceptible. Destructive SEE mechanisms are often not well understood, and testing is slow and expensive, making rate prediction very challenging. (3) Because the consequences of radiation failure and degradation modes depend so critically on the application as well as the component technology, it is essential that radiation, component, design, and system engineers work together, preferably starting early in the program, to ensure critical applications are addressed in time to optimize the probability of mission success.
Redundancy relations and robust failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.
1984-01-01
All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
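The redundancy relations in question are parity relations: linear combinations of measurements that vanish in the fault-free case whatever the (unknown) state. A minimal static sketch with three hypothetical sensors reading the same scalar, which is an assumption chosen for illustration rather than an example from the paper:

```python
def parity_residual(w, y):
    # Residual r = w . y; w is chosen in the left null space of the
    # measurement matrix, so r is insensitive to the true state and
    # deviates from zero only under sensor faults or model error.
    return sum(wi * yi for wi, yi in zip(w, y))

# Three sensors measuring one scalar x: C = [1, 1, 1]^T, so both
# w1 = [1, -1, 0] and w2 = [0, 1, -1] satisfy w^T C = 0.
x, bias = 5.0, 0.3                      # bias = fault on sensor 2
y = [x, x + bias, x]
r1 = parity_residual([1, -1, 0], y)     # implicates sensors 1 and 2
r2 = parity_residual([0, 1, -1], y)     # implicates sensors 2 and 3
```

Both residuals firing at the same magnitude isolates the fault to the shared sensor; the paper's contribution is choosing the (possibly dynamic) w so that such residuals stay small under model uncertainty yet remain sensitive to faults.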
Dobre, Mirela; Yang, Wei; Pan, Qiang; Appel, Lawrence; Bellovich, Keith; Chen, Jing; Feldman, Harold; Fischer, Michael J; Ham, L L; Hostetter, Thomas; Jaar, Bernard G; Kallem, Radhakrishna R; Rosas, Sylvia E; Scialla, Julia J; Wolf, Myles; Rahman, Mahboob
2015-04-20
Serum bicarbonate varies over time in chronic kidney disease (CKD) patients, and this variability may portend poor cardiovascular outcomes. The aim of this study was to conduct a time-updated longitudinal analysis to evaluate the association of serum bicarbonate with long-term clinical outcomes: heart failure, atherosclerotic events, renal events (halving of estimated glomerular filtration rate [eGFR] or end-stage renal disease), and mortality. Serum bicarbonate was measured annually in 3586 participants with CKD enrolled in the Chronic Renal Insufficiency Cohort (CRIC) study. Marginal structural models were created to allow for integration of all available bicarbonate measurements and proper adjustment for time-dependent confounding. During the 6-year follow-up, 512 participants developed congestive heart failure (26/1000 person-years) and 749 developed renal events (37/1000 person-years). The risk of heart failure and death was significantly higher for participants who maintained serum bicarbonate >26 mmol/L for the entire duration of follow-up (hazard ratio [HR] 1.66; 95% confidence interval [CI], 1.23 to 2.23, and HR 1.36, 95% CI 1.02 to 1.82, respectively) compared with participants who kept their bicarbonate 22 to 26 mmol/L, after adjusting for demographics, co-morbidities, medications including diuretics, eGFR, and proteinuria. Participants who maintained serum bicarbonate <22 mmol/L had almost a 2-fold increased risk of renal disease progression (HR 1.97; 95% CI, 1.50 to 2.57) compared with participants with bicarbonate 22 to 26 mmol/L. In this large CKD cohort, persistent serum bicarbonate >26 mmol/L was associated with increased risk of heart failure events and mortality. Further studies are needed to determine the optimal range of serum bicarbonate in CKD to prevent adverse clinical outcomes. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
NASA Astrophysics Data System (ADS)
Luo, X. M.; Zhang, B.; Zhang, G. P.
2014-09-01
Thermal fatigue failure of metallization interconnect lines subjected to alternating currents (AC) is becoming a severe threat to the long-term reliability of micro/nanodevices with increasing electrical current density/power. Here, thermal fatigue failure behaviors and damage mechanisms of nanocrystalline Au interconnect lines on the silicon glass substrate have been investigated by applying general alternating currents (the pure alternating current coupled with a direct current (DC) component) with different frequencies ranging from 0.05 Hz to 5 kHz. We observed both thermal fatigue damage caused by Joule heating-induced cyclic strain/stress and electromigration (EM) damage caused by the DC component. Moreover, the damage formation showed a strong electrically-thermally-mechanically coupled effect and frequency dependence. At lower frequencies, thermal fatigue damage was dominant and the main damage forms were grain coarsening with grain boundary (GB) cracking/voiding and grain thinning. At higher frequencies, EM damage took over and the main damage forms were GB cracking/voiding of smaller grains and hillocks. Furthermore, the healing effect of the reversing current was considered to elucidate damage mechanisms of the nanocrystalline Au lines under the general AC. Lastly, a modified model was proposed to predict the lifetime of the nanocrystalline metal interconnect lines, i.e., a competing drift-velocity-based approach built on the threshold time required for reverse diffusion/healing to occur.
A quality risk management model approach for cell therapy manufacturing.
Lopez, Fabio; Di Bartolo, Chiara; Piazza, Tommaso; Passannanti, Antonino; Gerlach, Jörg C; Gridelli, Bruno; Triolo, Fabio
2010-12-01
International regulatory authorities view risk management as an essential production need for the development of innovative, somatic cell-based therapies in regenerative medicine. The available risk management guidelines, however, provide little guidance on specific risk analysis approaches and procedures applicable in clinical cell therapy manufacturing. This raises a number of problems. Cell manufacturing is a poorly automated process, prone to operator-introduced variations, and affected by heterogeneity of the processed organs/tissues and lot-dependent variability of reagent (e.g., collagenase) efficiency. In this study, the principal challenges faced in a cell-based product manufacturing context (i.e., high dependence on human intervention and absence of reference standards for acceptable risk levels) are identified and addressed, and a risk management model approach applicable to manufacturing of cells for clinical use is described for the first time. The use of the heuristic and pseudo-quantitative failure mode and effect analysis/failure mode and critical effect analysis risk analysis technique associated with direct estimation of severity, occurrence, and detection is, in this specific context, as effective as, but more efficient than, the analytic hierarchy process. Moreover, a severity/occurrence matrix and Pareto analysis can be successfully adopted to identify priority failure modes on which to act to mitigate risks. The application of this approach to clinical cell therapy manufacturing in regenerative medicine is also discussed. © 2010 Society for Risk Analysis.
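The severity/occurrence/detection scoring and Pareto-style prioritization described above can be sketched as follows; the failure modes and ratings are hypothetical examples, not data from the study.

```python
# Minimal FMEA sketch: RPN = severity x occurrence x detection, each rated 1-10,
# followed by a Pareto-style ranking. Failure modes and ratings are hypothetical.

def rpn(severity, occurrence, detection):
    """Risk Priority Number on the 1-1000 scale."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("ratings must be in 1..10")
    return severity * occurrence * detection

failure_modes = {
    "operator pipetting error":    (7, 6, 4),
    "collagenase lot variability": (8, 5, 7),
    "incubator temperature drift": (6, 3, 2),
}

# Rank failure modes by RPN, highest risk first.
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name}: RPN={rpn(*ratings)}")
```

Acting on the top-ranked entries first mirrors the severity/occurrence matrix and Pareto analysis the abstract describes for identifying priority failure modes.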
Calculation of Centrally Loaded Thin-Walled Columns Above the Buckling Limit
NASA Technical Reports Server (NTRS)
Reinitzhuber, F.
1945-01-01
When thin-walled columns formed from flanged sheet, such as used in airplane construction, are subjected to axial load, their behavior at failure varies according to the slenderness ratio. On long columns the axis deflects laterally while the cross section form is maintained; buckling results. The respective breaking load in the elastic range is computed by Euler's formula and for the plastic range by the Engesser-Kármán formula. Its magnitude is essentially dependent upon the length. On intermediate length columns, especially where open sections are concerned, the cross section rotates while its form is preserved; twisting failure results. The buckling load in twisting is calculated according to Wagner and Kappus. On short columns the straight walls of low bending resistance that form the column are deflected at the same time that the cross section form changes; buckling occurs without immediate failure. Then the buckling load of the total section, computable from the buckling loads of the section walls, is not the ultimate load; quite often, especially on thin-walled sections, the ultimate load lies considerably higher and is secured by tests. Both loads, the buckling and the ultimate load, are only in a small measure dependent upon length. The present report is an attempt to theoretically investigate the behavior of such short, thin-walled columns above the buckling load with the conventional calculating methods.
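Euler's formula for the elastic buckling load mentioned above can be written out as a short computation; the column properties below are illustrative values, not data from the report.

```python
import math

def euler_buckling_load(E, I, L, k=1.0):
    """Elastic critical load P_cr = pi^2 * E * I / (k*L)^2, where k is the
    effective-length factor (k = 1 for pinned-pinned ends). In the plastic
    range, the Engesser-Karman approach replaces E with a reduced modulus."""
    return math.pi ** 2 * E * I / (k * L) ** 2

# Illustrative values (SI units): a steel-like modulus and a slender strut.
E = 200e9   # Pa, Young's modulus
I = 8.0e-9  # m^4, second moment of area
L = 1.5     # m, column length
P_cr = euler_buckling_load(E, I, L)
```

As the abstract notes, this elastic load governs only long columns; for short columns the ultimate load must instead be secured by tests.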
NASA Technical Reports Server (NTRS)
Stang, Ambrose H; Ramberg, Walter; Back, Goldie
1937-01-01
This report presents the results of tests of 63 chromium-molybdenum steel tubes and 102 17ST aluminum-alloy tubes of various sizes and lengths made to study the dependence of the torsional strength on both the dimensions of the tube and the physical properties of the tube material. Three types of failure are found to be important for sizes of tubes frequently used in aircraft construction: (1) failure by plastic shear, in which the tube material reached its yield strength before the critical torque was reached; (2) failure by elastic two-lobe buckling, which depended only on the elastic properties of the tube material and the dimensions of the tube; and (3) failure by a combination of (1) and (2), that is, by buckling taking place after some yielding of the tube material.
Slope failures in Northern Vermont, USA
Lee, F.T.; Odum, J.K.; Lee, J.D.
1997-01-01
Rockfalls and debris avalanches from steep hillslopes in northern Vermont are a continuing hazard for motorists, mountain climbers, and hikers. Huge blocks of massive schist and gneiss can reach the valley floor intact, whereas others may trigger debris avalanches on their downward travel. Block movement is facilitated by major joints both parallel and perpendicular to the glacially over-steepened valley walls. The slope failures occur most frequently in early spring, accompanying freeze/thaw cycles, and in the summer, following heavy rains. The study reported here began in August 1986 and ended in June 1989. Manual and automated measurements of temperature and displacement were made at two locations on opposing valley walls. Both cyclic-reversible and permanent displacements occurred during the 13-month monitoring period. The measurements indicate that freeze/thaw mechanisms produce small irreversible incremental movements, averaging 0.53 mm/yr, that displace massive blocks and produce rockfalls. The initial freeze/thaw weakening of the rock mass also makes slopes more susceptible to attrition by water, and heavy rains have triggered rockfalls and consequent debris flows and avalanches. Temperature changes on the rock surface produced time-dependent cyclic displacements of the rock blocks that were not instantaneous but lagged behind the temperature changes. Statistical analyses of the data were used to produce models of cyclic time-dependent rock block behavior. Predictions based solely on temperature changes gave poor results. A model using time and temperature and incorporating the lag effect predicts block displacement more accurately.
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pankaskie, P. J.
A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) Interactions (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on 1) standard statistical methods applied to available PCI fuel failure data and 2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: 1. pre-transient fuel rod power, P_I; 2. transient increase in fuel rod power, ΔP; 3. fuel burnup, Bu; and 4. the constitutive material property of the Zircaloy cladding, SEAF.
Vocal fold tissue failure: preliminary data and constitutive modeling.
Chan, Roger W; Siegmund, Thomas
2004-08-01
In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated nonlinear stress-strain and rate-dependent failure response, with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.
DAMP-Mediated Innate Immune Failure and Pneumonia after Trauma
2017-10-01
Correlation between chemotaxis and Ca2+ release AUC … similarity of amino acid sequences based upon their component residues. We used … correlation to chemotaxis studies. These findings give us confidence that our mechanistic studies in mice can be used translationally to … evaluated time-dependent changes in peripheral blood in trauma patients to identify changes correlated with infection. Methods: Total leukocytes were
Workshop on the Destruction of Bacterial Spores Held in Brussels, Belgium on May 1-3, 1985.
1985-05-03
pasteurization, sterilization, UHT, Association, Chipping Campden, fluidized beds, new developments - UK) failures in commercial heat processing 9. Window of … exposure of the food to high temperatures have been diminished by rotation autoclaves and/or HTST (high temperature short time) processes. For economic … effect commercial sterility and product safety is dependent not only on the inherent heat resistance of spores … but also on the numbers
Dioxin inhibition of swim bladder development in zebrafish: is it secondary to heart failure?
Yue, Monica S; Peterson, Richard E; Heideman, Warren
2015-05-01
The swim bladder is a gas-filled organ that is used for regulating buoyancy and is essential for survival in most teleost species. In zebrafish, swim bladder development begins during embryogenesis and inflation occurs within 5 days post fertilization (dpf). Embryos exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) before 96 h post fertilization (hpf) developed swim bladders normally until the growth/elongation phase, at which point growth was arrested. It is known that TCDD exposure causes heart malformations that lead to heart failure in zebrafish larvae, and that blood circulation is a key factor in normal development of the swim bladder. The adverse effects of TCDD exposure on the heart occur during the same period of time that swim bladder development and growth occurs. Based on this coincident timing, and the dependence of swim bladder development on proper circulatory development, we hypothesized that the adverse effects of TCDD on swim bladder development were secondary to heart failure. We compared swim bladder development in TCDD-exposed embryos to: (1) silent heart morphants, which lack cardiac contractility, and (2) transiently transgenic cmlc2:caAHR-2AtRFP embryos, which mimic TCDD-induced heart failure via heart-specific, constitutive activation of AHR signaling. Both of these treatment groups, which were not exposed to TCDD, developed hypoplastic swim bladders of comparable size and morphology to those found in TCDD-exposed embryos. Furthermore, in all treatment groups swim bladder development was arrested during the growth/elongation phase. Together, these findings support a potential role for heart failure in the inhibition of swim bladder development caused by TCDD. Copyright © 2015 Elsevier B.V. All rights reserved.
Failure mechanisms of uni-ply composite plates with a circular hole under static compressive loading
NASA Technical Reports Server (NTRS)
Khamseh, A. R.; Waas, A. M.
1992-01-01
The objective of the study was to identify and study the failure mechanisms associated with compressive-loaded uniply graphite/epoxy square plates with a central circular hole. It is found that the type of compressive failure depends on the hole size. For large holes with the diameter/width ratio exceeding 0.062, fiber buckling/kinking initiated at the hole is found to be the dominant failure mechanism. In plates with smaller hole sizes, failure initiates away from the hole edge or complete global failure occurs. Critical buckle wavelengths at failure are presented as a function of the normalized hole diameter.
Jörres, A; Gahl, G M; Dobis, C; Polenakovic, M H; Cakalaroski, K; Rutkowski, B; Kisielnicka, E; Krieter, D H; Rumpf, K W; Guenther, C; Gaus, W; Hoegel, J
1999-10-16
There is controversy as to whether haemodialysis-membrane biocompatibility (ie, the potential to activate complement and neutrophils) influences mortality of patients with acute renal failure. We did a prospective randomised multicentre trial in patients with dialysis-dependent acute renal failure treated with two different types of low-flux membrane. 180 patients with acute renal failure were randomly assigned bioincompatible Cuprophan (n=90) or polymethyl-methacrylate (n=90) membranes. The main outcome was survival 14 days after the end of therapy (treatment success). Odds ratios for survival were calculated and the two groups were compared by Fisher's exact test. Analyses were based on patients treated according to protocol (76 Cuprophan, 84 polymethyl methacrylate). At the start of dialysis, the groups did not differ significantly in age, sex, severity of illness (as calculated by APACHE II scores), prevalence of oliguria, or biochemical measures of acute renal failure. 44 patients (58% [95% CI 46-69]) assigned Cuprophan membranes and 50 patients (60% [48-70]) assigned polymethyl-methacrylate membranes survived. The odds ratio for treatment failure on Cuprophan compared with polymethyl-methacrylate membranes was 1.07 (0.54-2.11; p=0.87). No difference between Cuprophan and polymethyl-methacrylate membranes was detected when the analysis was adjusted for age and APACHE II score. 18 patients in the Cuprophan group and 20 in the polymethyl-methacrylate group had clinical complications of therapy (mainly hypotension). There were no differences in outcome for patients with dialysis-dependent acute renal failure between those treated with Cuprophan membranes and those treated with polymethyl-methacrylate membranes.
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
The study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system crosses the SLPM threshold, and each single unit is subject to a two-level maintenance: one level is initiated when the unit exceeds its preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any unit undergoes maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, since the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of the approach.
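The threshold rule described above can be sketched roughly as follows; the degradation-index semantics, the threshold values, and the `maintenance_actions` helper are illustrative assumptions, not the paper's exact semi-Markov formulation.

```python
# Hedged sketch of the multi-level threshold rule: perform system-level PM when a
# system degradation index crosses its threshold; otherwise perform unit-level PM
# for any unit past its own PM threshold, servicing the remaining units
# opportunistically at the same time (sharing the set-up cost).

def maintenance_actions(system_index, unit_indices, system_threshold, unit_thresholds):
    """unit_indices: {unit name: degradation index}; returns the action taken
    and the list of units serviced under it."""
    if system_index >= system_threshold:
        return {"system-level PM": sorted(unit_indices)}
    due = [u for u, x in unit_indices.items() if x >= unit_thresholds[u]]
    if due:
        # Opportunistic maintenance: all units are serviced together.
        return {"unit-level PM": sorted(unit_indices)}
    return {"no action": []}
```

The opportunistic branch is what captures the set-up cost dependence mentioned in the abstract: once one unit triggers maintenance, servicing the others at the same time avoids paying the set-up cost twice.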
Goldstein, Benjamin A; Thomas, Laine; Zaroff, Jonathan G; Nguyen, John; Menza, Rebecca; Khush, Kiran K
2016-07-01
Over the past two decades, there have been increasingly long waiting times for heart transplantation. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine the risk that conservative donor heart acceptance practices confer in terms of increasing the risk of failure among patients awaiting transplantation. We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan-Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time. Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the greatest risk of failure is during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% (1.01, 1.20) increase in the odds of failure within 1 year after transplantation. Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.
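The Kaplan-Meier method named above can be sketched with a minimal product-limit estimator; the toy follow-up times below are illustrative, and the inverse-probability censoring weights used in the study are omitted for brevity.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function S(t).
    times: follow-up times; events: 1 if the failure was observed, 0 if censored.
    Returns a list of (event_time, S(t)) pairs at distinct event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # failures at time t
        m = sum(1 for tt, _ in data if tt == t)   # subjects leaving the risk set at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

# Toy data: 5 subjects; censored observations are marked with event = 0.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

Each censored subject leaves the risk set without contributing a failure, which is why the survival curve steps down only at observed event times.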
NASA Astrophysics Data System (ADS)
Waas, Anthony M.
A series of experiments were performed to determine the mechanism of failure in compressively loaded laminated plates in the presence of stress gradients generated by a circular cutout. Real-time holographic interferometry and in-situ photomicrography of the hole surface were used to observe the progression of failure. The test specimens are multi-layered composite flat plates, which are loaded in compression. The plates are made of two material systems, T300/BP907 and IM7/8551-7. Two different lay-ups of T300/BP907 and four different lay-ups of IM7/8551-7 are investigated. The load on the specimen is slowly increased and a series of interferograms are produced during the load cycle. These interferograms are video-recorded. The results obtained from the interferograms and photomicrographs are substantiated by sectioning studies and ultrasonic C-scanning of some specimens which are unloaded prior to catastrophic failure, but beyond failure initiation. This is made possible by the servo-controlled loading mechanism that regulates the load application and offers the flexibility of unloading a specimen at any given instance in the load-time history. An underlying objective of the present investigation is the identification of the physics of the failure initiation process. This required testing specimens with different stacking sequences, for a fixed hole diameter, so that consistent trends in the failure process could be identified. It is revealed that the failure is initiated as a localized instability in the 0° plies at the hole surface, approximately at right angles to the loading direction. This instability, emanating at the hole edge and propagating into the interior of the specimen within the 0° plies, is found to be fiber microbuckling. The microbuckling is found to occur at a local strain level of approximately 8600 μstrain at the hole edge for the IM7 material system. This initial failure causes a narrow zone of fibers within the 0° plies to lose structural integrity.
Subsequent to the 0°-ply failure, extensive delamination cracking is observed with increasing load. The through-thickness location of these delaminations is found to depend on the position of the 0° plies. The delaminated portions spread to the undamaged areas of the laminate by a combination of delamination buckling and growth, the buckling further enhancing the growth. When the delaminated area reaches a critical size, about 75-100% of the hole radius in extent, an accelerated growth rate of the delaminated portions is observed. The culmination of this last event is the complete loss of flexural stiffness of each of the delaminated portions, leading to catastrophic failure of the plate. The levels of applied load and the rate at which these events occur depend on the plate stacking sequence. A simple mechanical model is presented for the microbuckling problem. This model addresses the buckling instability of a semi-infinite layered half-plane alternatingly stacked with fibers and matrix, loaded parallel to the surface of the half-plane. The fibers are modelled using Bernoulli-Navier beam theory, and the matrix is assumed to be a linearly elastic foundation. The predicted buckling strains are found to overestimate the experimental result. However, the dependence of the buckling strain on parameters such as the fiber volume fraction, the ratio of Young's moduli of the constituents, and Poisson's ratio of the matrix is obtained from the analysis. It is seen that a high fiber volume fraction, increased matrix stiffness, and perfect bonding between fiber and matrix are desirable properties for increasing the compressive strength.
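The beam-on-elastic-foundation microbuckling model summarized above is not reproduced here, but the classical Rosen shear-mode estimate captures the same qualitative trends (strength rising with fiber volume fraction and matrix stiffness). This is a sketch of that classical estimate, not the author's model; the moduli and volume fraction below are illustrative.

```python
def rosen_shear_mode_stress(G_m, V_f):
    """Classical Rosen estimate of the compressive stress at fiber microbuckling
    (shear mode): sigma_cr = G_m / (1 - V_f), with G_m the matrix shear modulus
    and V_f the fiber volume fraction."""
    if not 0.0 <= V_f < 1.0:
        raise ValueError("fiber volume fraction must be in [0, 1)")
    return G_m / (1.0 - V_f)

# Illustrative epoxy-like matrix shear modulus (1.2 GPa) and V_f = 0.6.
sigma_cr = rosen_shear_mode_stress(1.2e9, 0.6)
```

The formula makes the abstract's concluding trend explicit: the predicted strength grows with both matrix stiffness and fiber volume fraction.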
Hilton, Michael F; Whiteford, Harvey A
2010-12-01
This study investigates associations between psychological distress and workplace accidents, workplace failures and workplace successes. The Health and Work Performance Questionnaire (HPQ) was distributed to employees of 58 large employers. A total of 60,556 full-time employees were eligible for analysis. The HPQ probed whether the respondent had, in the past 30 days, a workplace accident, success or failure ("yes" or "no"). Psychological distress was quantified using the Kessler 6 (K6) scale and categorised into low, moderate and high psychological distress. Three binomial logistic regressions were performed with the dependent variables being workplace accident, success or failure. Covariates in the models were K6 category, gender, age, marital status, education level, job category, physical health and employment sector. Accounting for all other variables, moderate and high psychological distress significantly (P < 0.0001) increased the odds ratio (OR) for a workplace accident to 1.4 for both levels of distress. Moderate and high psychological distress significantly (P < 0.0001) increased the OR for a workplace failure (OR = 2.3 and 2.6, respectively) and significantly (P < 0.0001) decreased the OR for a workplace success (OR = 0.8 and 0.7, respectively). Moderate and high psychological distress increase the ORs for workplace accidents and workplace failures and decrease the OR of workplace successes at similar levels. As the prevalence of moderate psychological distress is approximately double that of high psychological distress, moderate distress consequently has a greater workplace impact.
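The odds ratios reported above come from multivariate logistic regressions; as a minimal illustration, an unadjusted odds ratio with a normal-approximation confidence interval can be computed from a 2x2 table. The counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
                 outcome+  outcome-
      exposed       a         b
      unexposed     c         d
    Returns (OR, (lower, upper)) using the standard log-OR normal
    approximation for a 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - 1.96 * se)
    upper = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lower, upper)

# Hypothetical counts: 20/100 distressed workers vs 10/100 non-distressed
# workers reporting a workplace accident.
point, ci = odds_ratio(20, 80, 10, 90)
```

Unlike the study's adjusted estimates, this crude OR ignores covariates such as age, job category and physical health.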
Processes of coastal bluff erosion in weakly lithified sands, Pacifica, California, USA
Collins, B.D.; Sitar, N.
2008-01-01
Coastal bluff erosion and landsliding are currently the major geomorphic processes sculpting much of the marine terrace dominated coastline of northern California. In this study, we identify the spatial and temporal processes responsible for erosion and landsliding in an area of weakly lithified sand coastal bluffs located south of San Francisco, California. Using the results of a five year observational study consisting of site visits, terrestrial lidar scanning, and development of empirical failure indices, we identify the lithologic and process controls that determine the failure mechanism and mode for coastal bluff retreat in this region and present concise descriptions of each process. Bluffs composed of weakly cemented sands (unconfined compressive strength, UCS, between 5 and 30 kPa) fail principally due to oversteepening by wave action, with maximum slope inclinations on the order of 65° at incipient failure. Periods of significant wave action were identified on the basis of an empirical wave run-up equation, predicting failure when wave run-up exceeds the seasonal average value and the bluff toe elevation. The empirical relationship was verified through recorded observations of failures. Bluffs composed of moderately cemented sands (UCS up to 400 kPa) fail due to precipitation-induced groundwater seepage, which leads to tensile strength reduction and fracture. An empirical rainfall threshold was also developed to predict failure on the basis of a 48-hour cumulative precipitation index but was found to be dependent on a time delay in groundwater seepage in some cases.
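The 48-hour cumulative precipitation index described above can be sketched as a rolling-window exceedance check; the window length follows the abstract, while the threshold value and hourly series below are illustrative.

```python
def exceeds_rainfall_threshold(hourly_mm, threshold_mm, window=48):
    """True if, at any point in the record, the cumulative precipitation over
    the trailing `window` hours exceeds threshold_mm."""
    total = 0.0
    for i, x in enumerate(hourly_mm):
        total += x
        if i >= window:
            total -= hourly_mm[i - window]  # drop the hour leaving the window
        if total > threshold_mm:
            return True
    return False

# Illustrative record: 50 hours of steady 1 mm/h rain against a 47 mm threshold.
alarm = exceeds_rainfall_threshold([1.0] * 50, 47.0)
```

As the abstract cautions, such a threshold is only a first-order predictor, since groundwater seepage can lag the rainfall in some cases.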
A comparative critical study between FMEA and FTA risk analysis methods
NASA Astrophysics Data System (ADS)
Cristea, G.; Constantinescu, DM
2017-10-01
Today an overwhelming number of different risk analysis techniques is in use, with acronyms such as: FMEA (Failure Modes and Effects Analysis) and its extension FMECA (Failure Mode, Effects, and Criticality Analysis), DRBFM (Design Review by Failure Mode), FTA (Fault Tree Analysis) and its extension ETA (Event Tree Analysis), HAZOP (Hazard & Operability Studies), HACCP (Hazard Analysis and Critical Control Points), and What-if/Checklist. However, the most used analysis techniques in the mechanical and electrical industry are FMEA and FTA. In FMEA, which is an inductive method, information about the consequences and effects of the failures is usually collected through interviews with experienced people with different knowledge, i.e., cross-functional groups. The FMEA is used to capture potential failures/risks and their impacts and to prioritize them on a numeric scale called the Risk Priority Number (RPN), which ranges from 1 to 1000. FTA is a deductive method, i.e., a general system state is decomposed into chains of more basic events of components. The logical interrelationship of how such basic events depend on and affect each other is often described analytically in a reliability structure which can be visualized as a tree. Both methods are very time-consuming to apply thoroughly, and this is why it is often not done. As a consequence, possible failure modes may not be identified. To address these shortcomings, it is proposed to use a combination of FTA and FMEA.
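The deductive tree structure of FTA described above can be sketched with gate combinators for independent basic events; the event probabilities and the example tree are illustrative, not taken from the paper.

```python
# Minimal FTA sketch: top-event probability from independent basic events,
# combined through OR and AND gates. Probabilities are illustrative.

def or_gate(*ps):
    """P(at least one input event occurs), assuming independence."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*ps):
    """P(all input events occur), assuming independence."""
    r = 1.0
    for p in ps:
        r *= p
    return r

# Example tree: the top event requires a component fault (pump fails OR valve
# sticks) AND the protective interlock failing on demand.
p_pump, p_valve, p_interlock = 0.01, 0.02, 0.5
top = and_gate(or_gate(p_pump, p_valve), p_interlock)
```

The independence assumption is what makes these closed-form gate formulas valid; dependent basic events would require the fuller reliability-structure analysis the abstract alludes to.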
[Biochemical failure after curative treatment for localized prostate cancer].
Zouhair, Abderrahim; Jichlinski, Patrice; Mirimanoff, René-Olivier
2005-12-07
Biochemical failure after curative treatment for localized prostate cancer is frequent. The diagnosis of biochemical failure is clear when PSA levels rise after radical prostatectomy, but may be more difficult after external beam radiation therapy. The main difficulty once biochemical failure is diagnosed is to distinguish between local and distant failure, given the low sensitivity of standard work-up exams. Metabolic imaging techniques currently under evaluation may in the future help us to localize the site of failures. There are several therapeutic options depending on the initial curative treatment, each with morbidity risks that should be considered in multidisciplinary decision-making.
Mechanical properties of cancellous bone in the human mandibular condyle are anisotropic.
Giesen, E B; Ding, M; Dalstra, M; van Eijden, T M
2001-06-01
The objective of the present study was (1) to test the hypothesis that the elastic and failure properties of the cancellous bone of the mandibular condyle depend on the loading direction, and (2) to relate these properties to bone density parameters. Uniaxial compression tests were performed on cylindrical specimens (n=47) obtained from the condyles of 24 embalmed cadavers. Two loading directions were examined, i.e., a direction coinciding with the predominant orientation of the plate-like trabeculae (axial loading) and a direction perpendicular to it (transverse loading). Archimedes' principle was applied to determine bone density parameters. In axial loading the cancellous bone was 3.4 times stiffer and 2.8 times stronger at failure than in transverse loading. High coefficients of correlation were found among the various mechanical properties, and between these properties and the apparent density and volume fraction. The anisotropic mechanical properties can possibly be considered a mechanical adaptation to the loading of the condyle in vivo.
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or on half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
Büdingen, Fiona V.; Gonzalez, Daniel; Tucker, Amelia N.
2014-01-01
The liver is a complex organ with great ability to influence drug pharmacokinetics (PK). Due to its wide array of functions, its impairment has the potential to affect bioavailability, enterohepatic circulation, drug distribution, metabolism, clearance, and biliary elimination. These alterations differ widely depending on the cause of the liver failure, whether it is acute or chronic in nature, the extent of impairment, and comorbid conditions. In addition, the effects on liver functions do not occur in a proportional or predictable manner for escalating degrees of liver impairment. The ability of hepatic alterations to influence PK is also dependent on drug characteristics, such as administration route, chemical properties, protein binding, and extraction ratio, among others. This complexity makes it difficult to predict what effects these changes will have on a particular drug. Unlike certain classes of agents, the efficacy of anti-infectives is most often dependent on fulfilling PK/pharmacodynamic targets, such as maximum concentration/minimum inhibitory concentration (Cmax/MIC), area under the curve/minimum inhibitory concentration (AUC/MIC), time above MIC (T>MIC), half-maximal inhibitory concentration (IC50) or half-maximal effective concentration (EC50), or the time above the concentration which inhibits viral replication by 95% (T>EC95). Loss of efficacy and/or an increased risk of toxicity may occur in certain circumstances of liver injury. Although it is important to consider these potential alterations and their effects on specific anti-infectives, for many agents the data needed to support specific dosing adjustments are lacking, making it important to monitor patients for the effectiveness and toxicity of therapy. PMID:24949199
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
A method for developing design diagrams for ceramic and glass materials using fatigue data
NASA Technical Reports Server (NTRS)
Heslin, T. M.; Magida, M. B.; Forrest, K. A.
1986-01-01
The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress, parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution, and using either median value or homologous ratio analysis of the test results.
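One curve of such a design diagram can be sketched under commonly assumed ceramic-fatigue relations: a power-law slow-crack-growth lifetime t_f = B * S_i**(N - 2) * sigma**(-N) with Weibull-distributed inert strength S_i. This is an illustrative sketch, not the report's method, and all parameter values are hypothetical placeholders:

```python
import math

# Time-to-failure vs applied stress at a fixed failure probability P,
# assuming (hypothetically) power-law slow crack growth and a Weibull
# distribution of inert strength.
def inert_strength(P, m, S0):
    """Inert strength at cumulative failure probability P (Weibull)."""
    return S0 * (-math.log(1.0 - P)) ** (1.0 / m)

def time_to_failure(sigma, P, B, N, m, S0):
    Si = inert_strength(P, m, S0)
    return B * Si ** (N - 2) * sigma ** (-N)

# Placeholder material parameters (illustrative, not measured values).
B, N, m, S0 = 1.0e-3, 20.0, 10.0, 100.0
# Lifetime drops steeply with stress: doubling sigma divides t_f by 2**N.
t_low  = time_to_failure(30.0, 0.01, B, N, m, S0)
t_high = time_to_failure(60.0, 0.01, B, N, m, S0)
print(round(t_low / t_high))  # 1048576 == 2**20
```

Sweeping sigma at several values of P traces out the family of curves that makes up the design diagram.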
Static and Impulsive Models of Solar Active Regions
NASA Technical Reports Server (NTRS)
Patsourakos, S.; Klimchuk, James A.
2008-01-01
The physical modeling of active regions (ARs) and of the global corona has been receiving increasing interest lately. Recent attempts to model ARs using static equilibrium models were quite successful in reproducing AR images of hot soft X-ray (SXR) loops. However, they failed to predict the bright warm EUV loops permeating ARs: the synthetic images were dominated by intense footpoint emission. We demonstrate that this failure is due to the very weak dependence of loop temperature on loop length, which cannot simultaneously account for both hot and warm loops in the same AR. We then consider time-dependent AR models based on nanoflare heating. We demonstrate that such models can simultaneously reproduce EUV and SXR loops in ARs. Moreover, they predict radial intensity variations consistent with the localized core and extended emissions in SXR and EUV AR observations, respectively. We finally show how the AR morphology can be used as a gauge of the properties (duration, energy, spatial dependence, repetition time) of the impulsive heating.
Non-Markovianity in atom-surface dispersion forces
Intravaia, F.; Behunin, R. O.; Henkel, C.; ...
2016-10-18
Here, we discuss the failure of the Markov approximation in the description of atom-surface fluctuation-induced interactions, both in equilibrium (Casimir-Polder forces) and out of equilibrium (quantum friction). Using general theoretical arguments, we show that the Markov approximation can lead to erroneous predictions of such phenomena with regard to both strength and functional dependencies on system parameters. Particularly, we show that the long-time power-law tails of two-time dipole correlations and their corresponding low-frequency behavior, neglected in the Markovian limit, affect the prediction of the force. These findings highlight the importance of non-Markovian effects in dispersion interactions.
Non-Markovianity in atom-surface dispersion forces
NASA Astrophysics Data System (ADS)
Intravaia, F.; Behunin, R. O.; Henkel, C.; Busch, K.; Dalvit, D. A. R.
2016-10-01
We discuss the failure of the Markov approximation in the description of atom-surface fluctuation-induced interactions, both in equilibrium (Casimir-Polder forces) and out of equilibrium (quantum friction). Using general theoretical arguments, we show that the Markov approximation can lead to erroneous predictions of such phenomena with regard to both strength and functional dependencies on system parameters. In particular, we show that the long-time power-law tails of two-time dipole correlations and their corresponding low-frequency behavior, neglected in the Markovian limit, affect the prediction of the force. Our findings highlight the importance of non-Markovian effects in dispersion interactions.
Milani-Nejad, Nima; Canan, Benjamin D; Elnakish, Mohammad T; Davis, Jonathan P; Chung, Jae-Hoon; Fedorov, Vadim V; Binkley, Philip F; Higgins, Robert S D; Kilic, Ahmet; Mohler, Peter J; Janssen, Paul M L
2015-12-15
Cross-bridge cycling rate is an important determinant of cardiac output, and its alteration can potentially contribute to reduced output in heart failure patients. Additionally, animal studies suggest that this rate can be regulated by muscle length. The purpose of this study was to investigate cross-bridge cycling rate and its regulation by muscle length under near-physiological conditions in intact right ventricular muscles of nonfailing and failing human hearts. We acquired freshly explanted nonfailing (n = 9) and failing (n = 10) human hearts. All experiments were performed on intact right ventricular cardiac trabeculae (n = 40) at physiological temperature and near the normal heart rate range. The failing myocardium showed the typical heart failure phenotype: a negative force-frequency relationship and β-adrenergic desensitization (P < 0.05), indicating the expected pathological myocardium in the right ventricles. We found that there exists a length-dependent regulation of cross-bridge cycling kinetics in human myocardium. Decreasing muscle length accelerated the rate of cross-bridge reattachment (ktr) in both nonfailing and failing myocardium (P < 0.05) equally; there were no major differences between nonfailing and failing myocardium at each respective length (P > 0.05), indicating that this regulatory mechanism is preserved in heart failure. Length-dependent assessment of twitch kinetics mirrored these findings; normalized dF/dt slowed down with increasing length of the muscle and was virtually identical in diseased tissue. This study shows for the first time that muscle length regulates cross-bridge kinetics in human myocardium under near-physiological conditions and that those kinetics are preserved in the right ventricular tissues of heart failure patients. Copyright © 2015 the American Physiological Society.
Garara, Bhavin; Wood, Alasdair; Marcus, Hani J; Tsang, Kevin; Wilson, Mark H; Khan, Mansoor
2016-03-01
Intramuscular diaphragmatic stimulation using an abdominal laparoscopic approach has been proposed as a safer alternative to traditional phrenic nerve stimulation. It has also been suggested that early implementation of diaphragmatic pacing may prevent diaphragm atrophy and lead to earlier ventilator independence. The aim of this study was therefore to systematically review the safety and effectiveness of intramuscular diaphragmatic stimulators in the treatment of patients with traumatic high cervical injuries resulting in long-term ventilator dependence, with particular emphasis on the effect of timing of insertion of such stimulators. The Cochrane database and PubMed were searched between January 2000 and June 2015. Reference lists of selected papers were also reviewed. The inclusion criteria used to select from the pool of eligible studies were: (1) reported on adult patients with traumatic high cervical injury who were ventilator-dependent, (2) patients underwent intramuscular diaphragmatic stimulation, and (3) commented on safety and/or effectiveness. 12 articles were included in the review. Reported safety issues post insertion of intramuscular electrodes included pneumothorax, infection, and interaction with a pre-existing cardiac pacemaker. Only one procedural failure was reported. The percentage of patients reported as independent of ventilatory support post procedure ranged between 40% and 72.2%. The mean delay of insertion ranged from 40 days to 9.7 years; of note, the study with the shortest average delay in insertion reported the greatest percentage of fully weaned patients. Although evidence for intramuscular diaphragmatic stimulation in patients with high cervical injuries and ventilator-dependent respiratory failure is currently limited, the technique appears to be safe and effective. Earlier implantation of such devices does not appear to be associated with greater surgical risk, and may be more effective.
Further high quality studies are warranted to investigate the impact of delay of insertion on ventilator weaning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directionality and Orientation Effects on the Resistance to Propagating Shear Failure
NASA Astrophysics Data System (ADS)
Leis, B. N.; Barbaro, F. J.; Gray, J. M.
Hydrocarbon pipelines transporting compressible products like methane or high-vapor-pressure (HVP) liquids under supercritical conditions can be susceptible to long-propagating failures. As the unplanned release of such hydrocarbons can lead to significant pollution and/or the horrific potential of explosion and/or a very large fire, design criteria to preclude such failures were essential to environmental and public safety. Thus, technology was developed to establish the minimum arrest requirements to avoid such failures shortly after this design concern was evident. Soon after this technology emerged in the early 1970s, it became evident that its predictions were increasingly non-conservative as the toughness of line-pipe steel increased. A second potentially critical factor for what was a one-dimensional technology was that changes in steel processing led to directional dependence in both the flow and fracture properties. While recognized, this dependence was tacitly ignored in quantifying arrest, as were early observations that indicated propagating shear failure was controlled by plastic collapse rather than by fracture processes.
Tham, Yow Keat; Bernardo, Bianca C; Ooi, Jenny Y Y; Weeks, Kate L; McMullen, Julie R
2015-09-01
The onset of heart failure is typically preceded by cardiac hypertrophy, a response of the heart to increased workload, a cardiac insult such as a heart attack or genetic mutation. Cardiac hypertrophy is usually characterized by an increase in cardiomyocyte size and thickening of ventricular walls. Initially, such growth is an adaptive response to maintain cardiac function; however, in settings of sustained stress and as time progresses, these changes become maladaptive and the heart ultimately fails. In this review, we discuss the key features of pathological cardiac hypertrophy and the numerous mediators that have been found to be involved in the pathogenesis of cardiac hypertrophy affecting gene transcription, calcium handling, protein synthesis, metabolism, autophagy, oxidative stress and inflammation. We also discuss new mediators including signaling proteins, microRNAs, long noncoding RNAs and new findings related to the role of calcineurin and calcium-/calmodulin-dependent protein kinases. We also highlight mediators and processes which contribute to the transition from adaptive cardiac remodeling to maladaptive remodeling and heart failure. Treatment strategies for heart failure commonly include diuretics, angiotensin converting enzyme inhibitors, angiotensin II receptor blockers and β-blockers; however, mortality rates remain high. Here, we discuss new therapeutic approaches (e.g., RNA-based therapies, dietary supplementation, small molecules) either entering clinical trials or in preclinical development. Finally, we address the challenges that remain in translating these discoveries to new and approved therapies for heart failure.
Correlated seed failure as an environmental veto to synchronize reproduction of masting plants.
Bogdziewicz, Michał; Steele, Michael A; Marino, Shealyn; Crone, Elizabeth E
2018-07-01
Variable, synchronized seed production, called masting, is a widespread reproductive strategy in plants. Resource dynamics, pollination success, and, as described here, environmental veto are possible proximate mechanisms driving masting. We explored the environmental veto hypothesis, which assumes that reproductive synchrony is driven by external factors preventing reproduction in some years, by extending the resource budget model of masting with correlated reproductive failure. We ran this model across its parameter space to explore how key parameters interact to drive seeding dynamics. Next, we parameterized the model based on 16 yr of seed production data for populations of red (Quercus rubra) and white (Quercus alba) oaks. We used these empirical models to simulate seeding dynamics, and compared simulated time series with patterns observed in the field. Simulations showed that resource dynamics and reproduction failure can produce masting even in the absence of pollen coupling. In concordance with this, in both oaks, among-year variation in resource gain and correlated reproductive failure were necessary and sufficient to reproduce masting, whereas pollen coupling, although present, was not necessary. Reproductive failure caused by environmental veto may drive large-scale synchronization without density-dependent pollen limitation. Reproduction-inhibiting weather events are prevalent in ecosystems, making described mechanisms likely to operate in many systems. © 2018 The Authors New Phytologist © 2018 New Phytologist Trust.
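The veto mechanism explored above can be illustrated with a toy resource budget simulation. All parameter values below are invented; this is a sketch of the model class, not the authors' parameterization for the oak populations:

```python
import random

# Toy resource budget model with an environmental veto: each plant
# accumulates resources with shared year-to-year variation and fruits once
# its reserves cross a threshold, unless a population-wide veto year
# (e.g. a late frost) cancels reproduction for everyone.
def simulate(n_plants=100, n_years=50, threshold=5.0, p_veto=0.2, seed=1):
    rng = random.Random(seed)
    reserves = [rng.uniform(0.0, threshold) for _ in range(n_plants)]
    crops = []
    for _ in range(n_years):
        gain = rng.uniform(0.5, 2.0)     # good/bad year, shared by all
        veto = rng.random() < p_veto     # correlated reproductive failure
        crop = 0.0
        for i in range(n_plants):
            reserves[i] += gain
            if not veto and reserves[i] >= threshold:
                crop += reserves[i] - threshold  # surplus spent on seeds
                reserves[i] = 0.0                # depleted after masting
        crops.append(crop)
    return crops

crops = simulate()
print(max(crops) > 0)  # True: some years yield large synchronized crops
```

Because veto years zero out reproduction population-wide and leave reserves intact, seeding alternates between failure years and large synchronized crops even without pollen coupling.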
Predicting Quarantine Failure Rates
2004-01-01
Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.
2016-12-01
Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests¹ (RF) to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The prediction improves as failure approaches, as additional data features contribute to it. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle². If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information.
¹Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). ²Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, and P. A. Johnson, Learning the physics of failure, in review (2016).
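The feature-based strategy described above can be sketched with stdlib Python: compute a rolling statistical feature (variance) of a synthetic AE-like signal whose fluctuations grow toward failure, then regress time-to-failure on it. The signal, constants, and the one-feature least-squares fit are all stand-ins (the actual study uses a Random Forest over many features); this only illustrates the windowed feature-to-target pipeline:

```python
import random
import statistics

# Synthetic "acoustic emission" signal whose fluctuations grow as failure
# approaches; T_FAIL and the noise model are invented for illustration.
rng = random.Random(0)
T_FAIL = 1000
signal = [rng.gauss(0.0, 1.0 + 5.0 * t / T_FAIL) for t in range(T_FAIL)]

# Rolling-window statistical feature (variance), paired with the time
# remaining until failure.
window = 50
features, targets = [], []
for start in range(0, T_FAIL - window, window):
    w = signal[start:start + window]
    features.append(statistics.variance(w))
    targets.append(T_FAIL - (start + window))

# One-feature least-squares fit of time-to-failure on variance.
n = len(features)
mx, my = sum(features) / n, sum(targets) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(features, targets))
         / sum((x - mx) ** 2 for x in features))
print(slope < 0)  # True: higher variance means less time to failure
```

A real RF would be trained the same way, on many such (feature vector, time-to-failure) pairs drawn from short windows of the AE record.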
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
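The two-stage structure described above (classify stable/unstable first, then regress time-to-failure only for predicted failures) can be sketched with stand-in nearest-neighbour models in place of the paper's Treed Gaussian Processes. The controller-gain training data below are invented for illustration:

```python
import math

# Invented training data: controller-parameter pairs labeled with flight
# outcome and, for failures, the observed time-to-failure (seconds).
train = [
    ((0.2, 0.9), (True,  None)),   # stable flight
    ((0.4, 0.8), (True,  None)),   # stable flight
    ((1.5, 0.3), (False, 12.0)),   # loss of control after 12 s
    ((1.8, 0.2), (False,  7.5)),   # loss of control after 7.5 s
]

def nearest(x):
    """Nearest training record in controller-parameter space."""
    return min(train, key=lambda rec: math.dist(x, rec[0]))

def predict(x):
    """Stage 1: classify stable/unstable. Stage 2: for predicted failures
    only, predict time-to-failure (both stages collapse here to one
    nearest-neighbour lookup)."""
    stable, ttf = nearest(x)[1]
    return ("stable", None) if stable else ("unstable", ttf)

print(predict((1.6, 0.25)))  # ('unstable', 12.0)
```

Separating the classifier from the per-class regressor, as the paper does, avoids forcing one model to fit both bounded stable responses and variable-length failure trajectories.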
Two-Dimensional Imaging Velocimetry of Heterogeneous Flow and Brittle Failure in Diamond
NASA Astrophysics Data System (ADS)
Ali, S. J.; Smith, R.; Erskine, D.; Eggert, J.; Celliers, P. M.; Collins, G. W.; Jeanloz, R.
2014-12-01
Understanding the nature and dynamics of heterogeneous flow in diamond subjected to shock compression is important for many fields of research, from inertial confinement fusion to the study of carbon rich planets. Waves propagating through a shocked material can be significantly altered by the various deformation mechanisms present in shocked materials, including anisotropic sound speeds, phase transformations, plastic/inelastic flow and brittle failure. Quantifying the spatial and temporal effects of these deformation mechanisms has been limited by a lack of diagnostics capable of obtaining simultaneous micron resolution spatial measurements and nanosecond resolution time measurements. We have utilized the 2D Janus High Resolution Velocimeter at LLNL to study the time and space dependence of fracture in shock-compressed diamond above the Hugoniot elastic limit. Previous work on the OMEGA laser facility (Rochester) has shown that the free-surface reflectivity of μm-grained diamond samples drops linearly with increasing sample pressure, whereas under the same conditions the reflectivity of nm-grained samples remains unaffected. These disparate observations can be understood by way of better documenting fracture in high-strain compression of diamond. To this end, we have imaged the development and evolution of elastic-wave propagation, plastic-wave propagation and fracture networks in the three primary orientations of single-crystal diamond, as well as in microcrystalline and nanocrystalline diamond, and find that the deformation behavior depends sensitively on the orientation and crystallinity of the diamonds.
[Reliability of a positron emission tomography system (CTI:PT931/04-12)].
Watanuki, Shoichi; Ishii, Keizo; Itoh, Masatoshi; Orihara, Hikonojyo
2002-05-01
The maintenance data of a PET system (PT931/04-12, CTI Inc.) were analyzed to evaluate its reliability. We examined whether the initial performance in system resolution and efficiency had been maintained. The reliability of the PET system was evaluated from the MTTF (mean time to failure) and MTBF (mean time between failures) of each part of the system, obtained from 13 years of maintenance data. The initial resolution was maintained, but the efficiency decreased to 72% of its initial value. 83% of the system's failures involved the detector block (DB) and the DB control module (BC). The MTTFs of DB and BC were 2,733 and 3,314 days, and the MTBFs of DB and BC per detector ring were 38 and 114 days. The MTBF of the system was 23 days. We found a seasonal dependence in the number of DB and BC failures, suggesting that the failures may be related to humidity. The reliability of the PET system strongly depends on the MTBF of DB and BC. Improving the quality of these parts and optimizing the operating environment may increase the reliability of the PET system. For the popularization of PET, it is effective to evaluate the reliability of the system and present it to users.
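The MTBF figure used above is simply the mean operating time between successive failures recorded in a maintenance log. A minimal sketch (the failure dates below are invented, not from the PET maintenance records):

```python
from datetime import date

# MTBF for a repairable unit: mean time between successive recorded
# failures. (MTTF, by contrast, is the mean time to the first failure
# of a non-repairable part.)
failure_dates = [date(2000, 1, 10), date(2000, 2, 14),
                 date(2000, 3, 2),  date(2000, 4, 20)]

def mtbf_days(dates):
    gaps = [(later - earlier).days
            for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

print(round(mtbf_days(failure_dates), 1))  # 33.7
```

Computing this per subsystem, as done for DB and BC above, identifies which parts dominate system downtime.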
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are incorporated into the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, with the inter-event time between two consecutive earthquakes defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared with more traditional ones.
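The effect of unobserved heterogeneity can be illustrated with a toy gamma-frailty simulation: each unit's hazard is a shared baseline rate times its own unobserved multiplier, which makes pooled inter-event times overdispersed relative to a homogeneous exponential model. All parameters are illustrative and not fitted to the Turkish catalogue:

```python
import random

# Each unit's hazard is an unobserved multiplier Z (gamma, mean 1,
# variance FRAILTY_VAR) times a shared baseline rate. Mixing over Z
# inflates the variance of pooled inter-event times beyond that of a
# plain exponential model, for which variance == mean**2.
rng = random.Random(42)
BASE_RATE = 1.0
FRAILTY_VAR = 0.5   # 0 would recover the homogeneous (no-frailty) model

times = []
for _ in range(20000):
    z = rng.gammavariate(1.0 / FRAILTY_VAR, FRAILTY_VAR)  # mean 1
    times.append(rng.expovariate(z * BASE_RATE))

mean = sum(times) / len(times)
var = sum((t - mean) ** 2 for t in times) / len(times)
print(var > mean ** 2)  # True: heterogeneity inflates the variance
```

Ignoring such frailty when it is present biases hazard estimates toward spuriously decreasing hazards, which is the motivation for including it in the models.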
Evolution of damage during deformation in porous granular materials (Louis Néel Medal Lecture)
NASA Astrophysics Data System (ADS)
Main, Ian
2014-05-01
'Crackling noise' occurs in a wide variety of systems that respond to external forcing in an intermittent way, leading to sudden bursts of energy release similar to those heard when crunching up a piece of paper or listening to a fire. In mineral magnetism ('Barkhausen') crackling noise occurs due to sudden changes in the size and orientation of microscopic ferromagnetic domains when the external magnetic field is changed. In rock physics sudden changes in internal stress associated with microscopically brittle failure events lead to acoustic emissions that can be recorded on the sample boundary, and used to infer the state of internal damage. Crackling noise is inherently stochastic, but the population of events often exhibits remarkably robust scaling properties, in terms of the source area, duration, energy, and in the waiting time between events. Here I describe how these scaling properties emerge and evolve spontaneously in a fully-dynamic discrete element model of sedimentary rocks subject to uniaxial compression at a constant strain rate. The discrete elements have structural disorder similar to that of a real rock, and this is the only source of heterogeneity. Despite the stationary loading and the lack of any time-dependent weakening processes, the results are all characterized by emergent power law distributions over a broad range of scales, in agreement with experimental observation. As deformation evolves, the scaling exponents change systematically in a way that is similar to the evolution of damage in experiments on real sedimentary rocks. The potential for real-time failure forecasting is examined by using synthetic and real data from laboratory tests and prior to volcanic eruptions. 
The combination of non-linearity and an irreducible stochastic component leads to significant variations in the precision and accuracy of the forecast failure time, producing a significant proportion of 'false alarms' (forecasts too early) and 'missed events' (forecasts too late), as well as over-optimistic assessments of forecasting power and quality when the failure time is known (the 'benefit of hindsight'). The evolution becomes progressively more complex, and the forecasting power diminishes, in going from ideal synthetics to controlled laboratory tests to open natural systems at larger scales in space and time.
NASA Astrophysics Data System (ADS)
Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.
2017-10-01
With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on-orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on-orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters.
It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both infant and wear-out failures, and thus would benefit from a reliability growth program that addresses both these types of problems.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
Graft failure after allogeneic hematopoietic stem cell transplantation.
Ozdemir, Zehra Narli; Civriz Bozdağ, Sinem
2018-04-18
Graft failure is a serious complication of allogeneic hematopoietic stem cell transplantation (allo-HSCT) defined as either lack of initial engraftment of donor cells (primary graft failure) or loss of donor cells after initial engraftment (secondary graft failure). Successful transplantation depends on the formation of engraftment, in which donor cells are integrated into the recipient's cell population. In this paper, we distinguish two different entities, graft failure (GF) and poor graft function (PGF), and review the current comprehension of the interactions between the immune and hematopoietic compartments in these conditions. Factors associated with graft failure include histocompatibility locus antigen (HLA)-mismatched grafts, underlying disease, type of conditioning regimen and stem cell source employed, low stem cell dose, ex vivo T-cell depletion, major ABO incompatibility, female donor grafts for male recipients, and disease status at transplantation. Although several approaches have been developed to prevent graft rejection, establish successful engraftment, and treat graft failure, GF remains a major obstacle to the success of allo-HSCT. Allo-HSCT remains the curative treatment option for various non-malignant and malignant hematopoietic diseases. The outcome of allo-HSCT primarily depends on the engraftment of the graft. GF is a life-threatening complication that requires prompt therapeutic management. In this paper, we focused on the definitions of graft failure and poor graft function, and we reviewed the current understanding of the pathophysiology, risk factors, and treatment approaches for these entities. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Gordon, Craig A.
This thesis examines the ability of a small, single-engine airplane to return to the runway following an engine failure shortly after takeoff. Two sets of trajectories are examined. One set of trajectories has the airplane fly a straight climb on the runway heading until engine failure. The other set of trajectories has the airplane perform a 90° turn at an altitude of 500 feet and continue until engine failure. Various combinations of wind speed, wind direction, and engine failure times are examined. The runway length required to complete the entire flight from the beginning of the takeoff roll to wheels stop following the return to the runway after engine failure is calculated for each case. The optimal trajectories following engine failure consist of three distinct segments: a turn back toward the runway using a large bank angle and angle of attack; a straight glide; and a reversal turn to align the airplane with the runway. The 90° turn results in much shorter required runway lengths at lower headwind speeds. At higher headwind speeds, both sets of trajectories are limited by the length of runway required for the landing rollout, but the straight climb cases generally require a lower angle of attack to complete the flight. The glide back to the runway is performed at an airspeed below the best glide speed of the airplane due to the need to conserve potential energy after the completion of the turn back toward the runway. The results are highly dependent on the rate of climb of the airplane during powered flight. The results of this study can aid the pilot in determining whether or not a return to the runway could be performed in the event of an engine failure given the specific wind conditions and runway length at the time of takeoff. The results can also guide the pilot in determining the takeoff profile that would offer the greatest advantage in returning to the runway.
Modeling Security Aspects of Network
NASA Astrophysics Data System (ADS)
Schoch, Elmar
With the increasingly widespread use of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance of, or protection against, all kinds of failures, errors and faults. Failures can be accidental, e.g., hardware errors or software bugs, or intentional, arising from some kind of malicious behavior. These intentional, malicious actions are the subject of security. A more complete overview of the relations between dependability and security can be found in [31]. In parallel with the increased use of technology, misuse has also grown significantly, requiring measures to deal with it.
Spatio-temporal changes in river bank mass failures in the Lockyer Valley, Queensland, Australia
NASA Astrophysics Data System (ADS)
Thompson, Chris; Croke, Jacky; Grove, James; Khanal, Giri
2013-06-01
Wet-flow river bank failure processes are poorly understood relative to the more commonly studied processes of fluvial entrainment and gravity-induced mass failures. Using high-resolution topographic data (LiDAR) and near-coincident aerial photography, this study documents the downstream distribution of river bank mass failures which occurred as a result of a catastrophic flood in the Lockyer Valley in January 2011. In addition, this distribution is compared with wet-flow mass failure features from previous large floods. The downstream analysis of these two temporal data sets indicated that failures occur across a range of river lengths, catchment areas, bank heights and angles, and do not appear to be scale-dependent or spatially restricted to certain downstream zones. The downstream trends of the two bank failure distributions show limited spatial overlap, with only 17% of wet flows common to both. The modification of these features during the catastrophic flood of January 2011 also indicated that such features tend to form at some 'optimum' shape and show limited evidence of subsequent enlargement, even when flow and energy conditions within the banks and channel were high. Elevation changes indicate that such features are infilled during subsequent floods. The preservation of these features in the landscape for a period of at least 150 years suggests that the seepage processes dominant in their initial formation play a limited role in their continuing enlargement over time. No evidence of gully extension or headwall retreat is apparent. It is estimated that at least 12 inundation events would be required to fill these failures, based on the average net elevation change recorded for the 2011 event. Existing conceptual models of downstream bank erosion process zones may need to consider a wider array of mass failure processes to accommodate wet-flow failures.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
Das, Stephanie L M; Papachristou, George I; De Campos, Tercio; Panek, Jozefa; Poves Prim, Ignasi; Serrablo, Alejandro; Parks, Rowan W; Uomo, Generoso; Windsor, John A; Petrov, Maxim S
2013-09-10
Organ failure is a major determinant of mortality in patients with acute pancreatitis. These patients usually require admission to high dependency or intensive care units and consume considerable health care resources. Given a low incidence rate of organ failure and a lack of large non-interventional studies in the field of acute pancreatitis, the characteristics of organ failure that influence outcomes of patients with acute pancreatitis remain largely unknown. Therefore, the Pancreatitis Across Nations Clinical Research and Education Alliance (PANCREA) aims to conduct a meta-analysis of individual patient data from prospective non-interventional studies to determine the influence of timing, duration, sequence, and combination of different organ failures on mortality in patients with acute pancreatitis. Pancreatologists currently active with acute pancreatitis clinical research will be invited to contribute. To be eligible for inclusion patients will have to meet the criteria of acute pancreatitis, develop at least one organ failure during the first week of hospitalization, and not be enrolled into an intervention study. Raw data will then be collated and checked. Individual patient data analysis based on a logistic regression model with adjustment for confounding variables will be done. For all analyses, corresponding 95% confidence intervals and P values will be reported. This collaborative individual patient data meta-analysis will answer important clinical questions regarding patients with acute pancreatitis that develop organ failure. Information derived from this study will be used to optimize routine clinical management and improve care strategies. It can also help validate outcome definitions, allow comparability of results and form a more accurate basis for patient allocation in further clinical studies.
Time-related patterns of ventricular shunt failure.
Kast, J; Duong, D; Nowzari, F; Chadduck, W M; Schiff, S J
1994-11-01
Proximal obstruction is reported to be the most common cause of ventriculoperitoneal (VP) shunt failure, suggesting that imperfect ventricular catheter placement and inadequate valve mechanisms are major causes. This study retrospectively examined patterns of shunt failure in 128 consecutive patients with symptoms of shunt malfunction over a 2-year period. Factors analyzed included site of failure, time from shunt placement or last revision to failure, age of patient at time of failure, infections, and primary etiology of the hydrocephalus. One hundred of these patients required revisions; 14 revisions were due to infections. In this series there was a higher incidence of distal (43%) than of proximal (35%) failure. The difference was not statistically significant when the overall series was considered; however, when factoring time to failure as a variable, marked differences were noted regardless of the underlying cause of hydrocephalus or the age of the patient. Of the 49 patients needing a shunt revision or replacement within 2 years of the previous operation, 50% had proximal malfunction, 14% distal, and 10% had malfunctions attributable directly to the valve itself. Also, 12 of the 14 infections occurred during this time interval. In sharp contrast, of the 51 patients having shunt failure from 2 to more than 12 years after the previous procedure, 72% had distal malfunction, 21% proximal, and only 6% had a faulty valve or infection. This difference between time to failure for proximal versus distal failures was statistically significant (P < 0.00001 for both Student's t-test and non-parametric Mann-Whitney U-test).(ABSTRACT TRUNCATED AT 250 WORDS)
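The study above compares time-to-failure between proximal and distal shunt malfunctions with a Student's t-test and a non-parametric Mann-Whitney U-test. A minimal sketch of the U statistic follows (the failure times are invented for illustration, not taken from the paper, and the implementation is the pair-counting definition without a normal approximation or p-value):

```python
# Hedged sketch of the Mann-Whitney U statistic used in the shunt study.
# Data below are hypothetical failure times in years, chosen only to show
# the statistic's behavior under complete separation of the two groups.

def mann_whitney_u(a, b):
    """U for sample a: count of (x, y) pairs with x > y; ties count 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

proximal_years = [0.5, 0.8, 1.2, 1.9]  # hypothetical early failures
distal_years = [3.0, 5.5, 8.0, 12.0]   # hypothetical late failures
print(mann_whitney_u(proximal_years, distal_years))  # 0.0: no proximal time exceeds a distal one
```

A U of 0 (or the maximum, len(a)*len(b)) indicates complete separation of the samples, the extreme version of the pattern the study reports: proximal failures concentrated early, distal failures late.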
[Review of the knowledge on acute kidney failure in the critical patient].
Romero García, M; Delgado Hito, P; de la Cueva Ariza, L
2013-01-01
Acute renal failure affects from 1% to 25% of patients admitted to intensive care units. These figures vary depending on the population studied and the criteria used. The complications of acute renal failure (fluid overload, metabolic acidosis, hyperkalemia, bleeding) can be treated. However, mortality remains high despite the technological advances of recent years, because acute renal failure is usually associated with sepsis, respiratory failure, serious injury, surgical complications or consumption coagulopathy. Mortality ranges from 30% to 90%. Although there is no universally accepted definition, the RIFLE classification gives us an operational tool to define the degree of acute renal failure, to standardize the initiation of renal replacement techniques, and to evaluate the results. Therefore, nurses working within the intensive care unit must be familiar with this disease, with its treatment (pharmacological or alternative) and with the prevention of possible complications. Equally, they must be capable of detecting the manifestations of dependency in each of the basic needs, and of identifying collaboration problems, in order to achieve an individualized care plan. Copyright © 2012 Elsevier España, S.L. y SEEIUC. All rights reserved.
Time- and temperature-dependent failures of a bonded joint
NASA Astrophysics Data System (ADS)
Sihn, Sangwook
This dissertation summarizes my study of the time- and temperature-dependent behavior of a tubular lap bonded joint, with the aim of providing a design methodology for windmill blade structures. The bonded joint is between a cast-iron rod and a GFRP composite pipe. The adhesive material is an epoxy containing chopped glass fibers. We proposed a new fabrication method to make concentric and void-free specimens of the tubular joint with a thick adhesive bondline to simulate the root bond of a blade. The thick bondline facilitates the joint assembly of actual blades. For a better understanding of the behavior of the bonded joint, we studied the viscoelastic behavior of the adhesive materials by measuring creep compliance at several temperatures during the loading period. We observed that the creep compliance depends strongly on the period of loading and the temperature. We applied time-temperature equivalence to the creep compliance of the adhesive material to obtain time-temperature shift factors. We also performed constant-rate, monotonically increasing uniaxial tensile tests to measure the static strength of the tubular lap joint at several temperatures and strain rates. We observed two failure modes from load-deflection curves and failed specimens. One is the brittle mode, caused by weakness of the interfacial strength and occurring at low temperature and short periods of loading. The other is the ductile mode, caused by weakness of the adhesive material at high temperature and long periods of loading. Transition from the brittle to the ductile mode appeared as the temperature or the loading period increased. We also performed tests under uniaxial tension-tension cyclic loadings to measure the fatigue strength of the bonded joint at several temperatures, frequencies and stress ratios. The fatigue data are analyzed statistically by applying the residual strength degradation model to calculate the statistical distribution of the fatigue life.
Combining the time-temperature equivalence and the residual strength degradation model enables us to estimate the fatigue life of the bonded joint at different load levels, frequencies and temperatures with a certain probability. A numerical example shows how to apply the life estimation method to a structure subjected to a random load history by rainflow cycle counting.
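The time-temperature equivalence used above can be sketched with a shift factor that maps loading time at one temperature onto a reduced time at a reference temperature. The dissertation derives its shift factors from measured creep compliance; the Arrhenius form and activation energy below are illustrative assumptions, not the author's fitted values:

```python
import math

# Hedged sketch of a time-temperature shift factor under an Arrhenius model.
# Ea (activation energy, J/mol) is an assumed illustrative value; in the
# study, shift factors came from superposing measured creep-compliance curves.

R = 8.314  # universal gas constant, J/(mol K)

def log10_shift_factor(T, T_ref, Ea=120e3):
    """log10 a_T = (Ea / (ln 10 * R)) * (1/T - 1/T_ref), temperatures in K."""
    return (Ea / (math.log(10.0) * R)) * (1.0 / T - 1.0 / T_ref)

# At the reference temperature the shift is zero; a hotter test shifts
# negative, i.e. the joint behaves as if loaded for a longer reduced time.
print(log10_shift_factor(353.0, 353.0))
print(log10_shift_factor(393.0, 353.0) < 0.0)
```

Combining such a shift with a residual strength degradation model is what lets fatigue lives measured at one temperature and frequency be mapped to other conditions, as the dissertation describes.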
Optoelectronic Devices with Complex Failure Modes
NASA Technical Reports Server (NTRS)
Johnston, A.
2000-01-01
This part of the NSREC-2000 Short Course discusses radiation effects in basic photonic devices along with effects in more complex optoelectronic devices where the overall radiation response depends on several factors, with the possibility of multiple failure modes.
Cilostazol May Improve Maturation Rates and Durability of Vascular Access for Hemodialysis.
Russell, Todd E; Kasper, Gregory C; Seiwert, Andrew J; Comerota, Anthony J; Lurie, Fedor
2017-04-01
Cilostazol is effective in controlling pathophysiological pathways similar or identical to those involved in nonmaturation and failure of the arteriovenous access. This case-control study examined whether cilostazol would improve maturation rates and durability of vascular access for hemodialysis. The treatment group included 33 patients who received cilostazol for ≥30 days prior to creation of a dialysis access and continued with cilostazol therapy for ≥60 days after surgery. The matched (gender, age, race, diabetes, and the year of surgery) control group included 116 patients who underwent the same procedure but did not receive cilostazol prior to and at least 3 months after surgery. Primary outcomes were maturation and, for those that matured, time of functioning access, defined as the time from the first use to irreparable failure of the access. Secondary outcomes were time to maturation, complications, and time to first complication. Study group patients were 3.8 times more likely to experience fistula maturation compared to the controls (88% vs 66%, RR = 3.8, 95% confidence interval: 1.3-11.6, P = .016). Fewer patients in the study group had complications (76% vs 92%, P = .025), and the time from construction of the fistula to the first complication was longer (345.6 ± 441 days vs 198.3 ± 185.0 days, P = .025). Time to maturation was similar in both groups (119.3 ± 62.9 days vs 100.2 ± 61.7 days, P = .2). However, once matured, time to failure was significantly longer in the treatment group (903.7 ± 543.6 vs 381.6 ± 317.2 days, P = .001). Multivariate analysis confirmed that the likelihood of maturation was significantly higher in the treatment group patients. These results suggest that dialysis access patients may benefit from preoperative and postoperative cilostazol therapy. If confirmed by a randomized trial, this treatment will have a major beneficial impact on patients dependent on a well-functioning access for their hemodialysis.
An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions
NASA Astrophysics Data System (ADS)
Hao, Shengwang; Yang, Hang; Elsworth, Derek
2017-09-01
Real-time prediction by monitoring of the evolution of response variables is a central goal in predicting rock failure. A linear relation Ω̇/Ω̈ = C(tf - t) has been developed to describe the time to failure, where Ω represents a response quantity, the dot denotes differentiation with respect to time, C is a constant and tf represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data deviating significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
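Because Ω̇/Ω̈ = C(tf - t) is linear in t, both C and the failure time tf can be recovered from a window of observations by ordinary least squares: the fitted line's slope is -C and its zero-crossing is tf. A minimal sketch on synthetic data (the values of C, tf, and the observation window are illustrative, not from the paper):

```python
# Hedged sketch: recovering the failure time t_f from the linear relation
# ratio = dΩ/dt / d²Ω/dt² = C * (t_f - t), via ordinary least squares.
# Synthetic, noise-free data; a real moving-window forecast would refit this
# on each new window of monitoring data.

def fit_time_to_failure(times, ratios):
    """Least-squares fit of ratio = C*(t_f - t); returns (C, t_f)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(ratios) / n
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, ratios))
    sxx = sum((t - mean_t) ** 2 for t in times)
    slope = sxy / sxx               # slope of the fitted line = -C
    intercept = mean_y - slope * mean_t
    C = -slope
    t_f = intercept / C             # the ratio extrapolates to 0 at t = t_f
    return C, t_f

# Synthetic observations generated with C = 0.5 and t_f = 100
times = [80.0, 85.0, 90.0, 95.0]
ratios = [0.5 * (100.0 - t) for t in times]
C, t_f = fit_time_to_failure(times, ratios)
print(round(C, 3), round(t_f, 1))  # -> 0.5 100.0
```

On noise-free data the fit is exact; with real monitoring noise, the paper's moving-window variant discards early data that deviate from the trend, which is why it is preferred for blind prediction.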
NASA Astrophysics Data System (ADS)
Zhu, W. C.; Niu, L. L.; Li, S. H.; Xu, Z. H.
2015-09-01
The tensile strength of rock subjected to dynamic loading is relevant to many engineering applications such as rock drilling and blasting. Dynamic Brazilian tests of rock specimens were conducted with a split Hopkinson pressure bar (SHPB) driven by a pendulum hammer, in order to determine the indirect tensile strength of rock under intermediate strain rates ranging from 5.2 to 12.9 s^-1, achieved by impacting the incident bar with the pendulum hammer at different velocities. The incident wave excited by the pendulum hammer is triangular in shape, featuring a long rise time, which is considered helpful for achieving a constant strain rate in the rock specimen. The dynamic indirect tensile strength of rock increases with strain rate. The numerical simulator RFPA-Dynamics, well-recognized software for simulating rock failure under dynamic loading, is then validated by reproducing the Brazilian test when the incident stress wave retrieved at the incident bar is input as the boundary condition, and it is subsequently employed to study the Brazilian test at higher strain rates. Based on the numerical simulation, the strain-rate dependency of the tensile strength and the failure pattern of the Brazilian disc specimen under intermediate strain rates are numerically simulated, and the associated failure mechanism is clarified. Material heterogeneity is deemed to be a reason for the strain-rate dependency of rock strength.
NASA Astrophysics Data System (ADS)
Benson, P. M.; Fahrner, D.; Harnett, C. E.; Fazio, M.
2014-12-01
Time-dependent deformation describes the process whereby brittle materials deform at a stress level below their short-term material strength (Ss), but over an extended time frame. Although generally well understood in engineering (where it is known as static fatigue or "creep"), knowledge of how rocks creep and fail has wide ramifications in areas as diverse as mine tunnel supports and the long-term stability of critically loaded rock slopes. A particular hazard relates to the instability of volcano flanks. A large number of flank collapses are known, such as at Stromboli (Aeolian Islands), Teide, and El Hierro (Canary Islands). Collapses on volcanic islands are especially complex as they necessarily involve the combination of active tectonics, heat, and fluids. Not only does the volcanic system generate stresses that reach close to the failure strength of the rocks involved, but when combined with active pore fluids, the process of stress corrosion allows the rock mass to deform and creep at stresses far lower than Ss. Despite the obvious geological hazard that edifice failure poses, the phenomenon of creep in volcanic rocks at elevated temperatures has yet to be thoroughly investigated in a well-controlled laboratory setting. We present new data using rocks taken from Stromboli, El Hierro and Teide volcanoes in order to better understand the interplay between the fundamental rock mechanics of these basalts and the effects of elevated-temperature fluids (activating stress corrosion mechanisms). Experiments were conducted over short (30-60 minute) and long (8-10 hour) time scales. For this, we use the method of Heap et al. (2011) to impose constant-stress (creep) deformation monitored via non-contact axial displacement transducers.
This is achieved via a conventional triaxial cell to impose shallow conditions of pressure (<25 MPa) and temperature (<200 °C), and equipped with a 3D laboratory seismicity array (known as acoustic emission, AE) to monitor the micro cracking due to the imposed deformation. By measuring the AE generated during deformation we are then able to apply fracture forecast models to predict, retrospectively, the time of failure. We find that higher temperatures increase the strain rate during creep for the same %Ss, and that the accuracy of the forecast does not change with increasing temperature.
Failure of a laminated composite under tension-compression fatigue loading
NASA Technical Reports Server (NTRS)
Rotem, A.; Nelson, H. G.
1989-01-01
The fatigue behavior of composite laminates under tension-compression loading is analyzed and compared with behavior under tension-tension and compression-compression loading. It is shown that for meaningful fatigue conditions, the tension-compression case is the dominant one. Both tension and compression failure modes can occur under the reversed loading, and failure is dependent on the specific lay-up of the laminate and the difference between the tensile static strength and the absolute value of the compressive static strength. The use of a fatigue failure envelope for determining the fatigue life and mode of failure is proposed and demonstrated.
NASA Technical Reports Server (NTRS)
Bowles, Kenneth J.; Roberts, Gary D.; Kamvouris, John E.
1996-01-01
A study was conducted to determine the effects of long-term isothermal thermo-oxidative aging on the compressive properties of T-650-35 fabric reinforced PMR-15 composites. The temperatures that were studied were 204, 260, 288, 316, and 343 C. Specimens of different geometries were evaluated. Cut edge-to-surface ratios of 0.03 to 0.89 were fabricated and aged. Aging times extended to a period in excess of 15,000 hours for the lower temperature runs. The unaged and aged specimens were tested in compression in accordance with ASTM D-695. Both thin and thick (plasma) specimens were tested. Three specimens were tested at each time/temperature/geometry condition. The failure modes appeared to be initiated by fiber kinking with longitudinal, interlaminar splitting. In general, it appears that the thermo-oxidative degradation of the compression strength of the composite material may occur by both thermal (time-dependent) and oxidative (weight-loss) mechanisms. Both mechanisms appear to be specimen-thickness dependent.
Seasonal water storage, stress modulation and California seismicity
NASA Astrophysics Data System (ADS)
Johnson, C. W.; Burgmann, R.; Fu, Y.
2017-12-01
Establishing what controls the timing of earthquakes is fundamental to understanding the nature of the earthquake cycle and critical to determining time-dependent earthquake hazard. Seasonal loading provides a natural laboratory to explore the crustal response to a quantifiable transient force. In California, the accumulation of winter snowpack in the Sierra Nevada, surface water in lakes and reservoirs, and groundwater in sedimentary basins follow the annual cycle of wet winters and dry summers. The surface loads resulting from the seasonal changes in water storage produce elastic deformation of the Earth's crust. We used 9 years of global positioning system (GPS) vertical deformation time series to constrain models of monthly hydrospheric loading and the resulting stress changes on fault planes of small earthquakes. Previous studies posit that temperature, atmospheric pressure, or hydrologic changes may strain the lithosphere and promote additional earthquakes above background levels. Depending on fault geometry, the addition or removal of water increases the Coulomb failure stress. The largest stress amplitudes are occurring on dipping reverse faults in the Coast Ranges and along the eastern Sierra Nevada range front. We analyze 9 years of M≥2.0 earthquakes with known focal mechanisms in northern and central California to resolve fault-normal and fault-shear stresses for the focal geometry. Our results reveal 10% more earthquakes occurring during slip-encouraging fault-shear stress conditions and suggest that earthquake populations are modulated at periods of natural loading cycles, which promote failure by stress changes on the order of 1-5 kPa. We infer that California seismicity rates are modestly modulated by natural hydrological loading cycles.
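The Coulomb failure stress change invoked above has a simple form: ΔCFS = Δτ + μΔσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ an effective friction coefficient. A one-function sketch at the kPa scale the study infers (the stress values and μ = 0.4 below are illustrative assumptions, not the paper's resolved values):

```python
# Hedged sketch of the Coulomb failure stress change used to judge whether
# seasonal hydrospheric loading encourages slip on a given fault geometry.
# Inputs are illustrative kPa-scale values; real ones come from resolving
# the modeled load-induced stress tensor onto each earthquake's focal plane.

def coulomb_stress_change(d_shear_kpa, d_normal_kpa, mu=0.4):
    """dCFS = d_tau + mu * d_sigma_n; positive values promote failure."""
    return d_shear_kpa + mu * d_normal_kpa

dcfs = coulomb_stress_change(d_shear_kpa=3.0, d_normal_kpa=2.0)
print(dcfs)  # 3.8 kPa: within the 1-5 kPa range the study finds modulating seismicity
```

Stress changes of this size are tiny compared with earthquake stress drops, which is why the study frames the effect as a modest modulation of seismicity rates rather than direct triggering.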
Compounding effects of sea level rise and fluvial flooding.
Moftakhari, Hamed R; Salvadori, Gianfausto; AghaKouchak, Amir; Sanders, Brett F; Matthew, Richard A
2017-09-12
Sea level rise (SLR), a well-documented and urgent aspect of anthropogenic global warming, threatens population and assets located in low-lying coastal regions all around the world. Common flood hazard assessment practices typically account for one driver at a time (e.g., either fluvial flooding only or ocean flooding only), whereas coastal cities vulnerable to SLR are at risk for flooding from multiple drivers (e.g., extreme coastal high tide, storm surge, and river flow). Here, we propose a bivariate flood hazard assessment approach that accounts for compound flooding from river flow and coastal water level, and we show that a univariate approach may not appropriately characterize the flood hazard if there are compounding effects. Using copulas and bivariate dependence analysis, we also quantify the increases in failure probabilities for 2030 and 2050 caused by SLR under representative concentration pathways 4.5 and 8.5. Additionally, the increase in failure probability is shown to be strongly affected by compounding effects. The proposed failure probability method offers an innovative tool for assessing compounding flood hazards in a warming climate.
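The compounding effect described above can be made concrete with any copula that models positive dependence between the two flood drivers. The sketch below uses a Clayton copula, which has a closed form; the copula family, the dependence parameter theta, and the 0.99 marginal levels are all illustrative assumptions, not the paper's fitted model:

```python
# Hedged sketch: joint exceedance probability of river flow and coastal
# water level under a Clayton copula vs. under assumed independence.
# theta = 2.0 and the 0.99 marginals are illustrative choices only.

def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) = 1 - u - v + C(u, v) (bivariate survival identity)."""
    return 1.0 - u - v + clayton(u, v, theta)

u = v = 0.99                       # e.g. 100-year marginal levels
p_indep = (1.0 - u) * (1.0 - v)    # 1e-4 if the drivers were independent
p_dep = joint_exceedance(u, v, theta=2.0)
print(p_indep, p_dep)  # dependence raises the joint failure probability
```

The dependent probability comes out roughly three times the independence value here, which is the paper's qualitative point: a univariate (single-driver) analysis can substantially understate the failure probability when the drivers compound.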
Cyclic Load Effects on Long Term Behavior of Polymer Matrix Composites
NASA Technical Reports Server (NTRS)
Shah, A. R.; Chamis, C. C.
1996-01-01
A methodology to compute the fatigue life for different ratios, r, of applied stress to the laminate strength based on first ply failure criteria combined with thermal cyclic loads has been developed and demonstrated. Degradation effects resulting from long term environmental exposure and thermo-mechanical cyclic loads are considered in the simulation process. A unified time-stress dependent multi-factor interaction equation model developed at NASA Lewis Research Center has been used to account for the degradation of material properties caused by cyclic and aging loads. Effect of variation in the thermal cyclic load amplitude on a quasi-symmetric graphite/epoxy laminate has been studied with respect to the impending failure modes. The results show that, for the laminate under consideration, the fatigue life under combined mechanical and low thermal amplitude cyclic loads is higher than that due to mechanical loads only. However, as the thermal amplitude increases, the life also decreases. The failure mode changes from tensile under mechanical loads only to the compressive and shear at high mechanical and thermal loads. Also, implementation of the developed methodology in the design process has been discussed.
Tsunamis caused by submarine slope failures along western Great Bahama Bank
Schnyder, Jara S.D.; Eberli, Gregor P.; Kirby, James T.; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-01-01
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with terminal landslide velocity of 20 ms−1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 to 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events. PMID:27811961
Tsunamis caused by submarine slope failures along western Great Bahama Bank
NASA Astrophysics Data System (ADS)
Schnyder, Jara S. D.; Eberli, Gregor P.; Kirby, James T.; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-11-01
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with terminal landslide velocity of 20 ms-1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 to 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events.
Dynamic tensile-failure-induced velocity deficits in rock
NASA Technical Reports Server (NTRS)
Rubin, Allan M.; Ahrens, Thomas J.
1991-01-01
Planar impact experiments were employed to induce dynamic tensile failure in Bedford limestone. Rock disks were impacted with aluminum and polymethyl methacrylate (PMMA) flyer plates at velocities of 10 to 25 m/s. Tensile stress magnitudes and durations were chosen so as to induce a range of microcrack growth insufficient to cause complete spalling of the samples. Ultrasonic P- and S-wave velocities of recovered targets were compared to the velocities prior to impact. Velocity reduction, and by inference microcrack production, occurred in samples subjected to stresses above 35 MPa in the 1.3 microsec PMMA experiments and 60 MPa in the 0.5 microsec aluminum experiments. Using a simple model for the time-dependent stress-intensity factor at the tips of existing flaws, apparent fracture toughnesses of 2.4 and 2.5 MPa·m^(1/2) are computed for the 1.3 and 0.5 microsec experiments. These are a factor of about 2 to 3 greater than quasi-static values. The greater dynamic fracture toughness observed may result from microcrack interaction during tensile failure. Data for water-saturated and dry targets are indistinguishable.
Tsunamis caused by submarine slope failures along western Great Bahama Bank.
Schnyder, Jara S D; Eberli, Gregor P; Kirby, James T; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-11-04
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with a terminal landslide velocity of 20 m/s will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 and 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events.
Assuring SS7 dependability: A robustness characterization of signaling network elements
NASA Astrophysics Data System (ADS)
Karmarkar, Vikram V.
1994-04-01
Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.
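The coverage sensitivity described above can be illustrated with a toy duplex-availability calculation. This is a sketch under a simple two-unit approximation; the function name, the model form, and all rates below are illustrative assumptions, not figures from the paper.

```python
def duplex_unavailability(lam, mttr, coverage):
    """Approximate steady-state unavailability of a duplex network element.

    lam: per-unit failure rate (1/h); mttr: mean repair time (h);
    coverage: probability a unit failure is detected and switched over.
    Covered failures only cause an outage if the mate also fails during
    repair; uncovered (correlated) failures take the pair down directly.
    """
    a = lam * mttr                      # per-unit unavailability
    covered = coverage * a * a          # both units down simultaneously
    uncovered = (1.0 - coverage) * a    # failover miss takes down the pair
    return covered + uncovered
```

Even with these made-up numbers, the uncovered term dominates once coverage drops below ~99.99%, which is the qualitative point of the sensitivity analysis.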
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Morris, D. H.; Brinson, H. F.
1981-01-01
An incremental numerical procedure based on lamination theory is developed to predict creep and creep rupture of general laminates. Existing unidirectional creep compliance and delayed failure data are used to develop analytical models for lamina response. The compliance model is based on a procedure proposed by Findley which incorporates the power law for creep into a nonlinear constitutive relationship. The matrix octahedral shear stress is assumed to control the stress interaction effect. A modified superposition principle is used to account for the effect of varying stress level on the creep strain. The lamina failure model is based on a modification of the Tsai-Hill theory which includes the time-dependent creep rupture strength. A linear cumulative damage law is used to monitor the remaining lifetime in each ply.
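The power-law creep form underlying the Findley procedure can be sketched as follows. The stress dependence is folded into the constants here, and all parameter values are illustrative assumptions, not the paper's fitted lamina data.

```python
def findley_creep_strain(t, eps0, m, n):
    """Power-law creep strain in the Findley form:
    eps(t) = eps0 + m * t**n,
    with eps0 the instantaneous strain, m the creep amplitude, and n the
    time exponent. In Findley's full model eps0 and m are nonlinear
    (hyperbolic-sine) functions of stress; here they are taken as given."""
    return eps0 + m * t ** n
```

An incremental laminate procedure would evaluate this per lamina at each time step and redistribute ply stresses accordingly.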
Contingency Trajectory Design for a Lunar Orbit Insertion Maneuver Failure by the LADEE Spacecraft
NASA Technical Reports Server (NTRS)
Genova, A. L.
2014-01-01
This paper presents results from a contingency trajectory analysis performed for the Lunar Atmosphere & Dust Environment Explorer (LADEE) mission in the event of a missed lunar-orbit insertion (LOI) maneuver by the LADEE spacecraft. The effects of varying solar perturbations in the vicinity of the weak stability boundary (WSB) in the Sun-Earth system on the trajectory design are analyzed and discussed. It is shown that geocentric recovery trajectory options existed for the LADEE spacecraft, depending on the spacecraft's recovery time to perform an Earth escape-prevention maneuver after the hypothetical LOI maneuver failure and subsequent path traveled through the Sun-Earth WSB. If Earth escape occurred, a heliocentric recovery option existed, but with reduced science capability for the spacecraft in an eccentric (rather than circular) near-equatorial retrograde lunar orbit.
Analysis of problems and failures in the measurement of soil-gas radon concentration.
Neznal, Martin; Neznal, Matěj
2014-07-01
Long-term experience in the field of soil-gas radon concentration measurements makes it possible to describe and explain the most frequent causes of failures that can appear in practice when various types of measurement methods and soil-gas sampling techniques are used. The concept of minimal sampling depth, which depends on the volume of the soil-gas sample and on the soil properties, is described in detail. Considering the minimal sampling depth when planning measurements avoids the most common mistakes. Ways to identify influencing parameters, to avoid dilution of soil-gas samples by atmospheric air, and to recognise inappropriate sampling methods are discussed.
Time dependency of strainrange partitioning life relationships
NASA Technical Reports Server (NTRS)
Kalluri, S.; Manson, S. S.
1984-01-01
The effect of exposure time (or creep rate) on the CP life relationship is established by conducting isothermal CP tests at varying exposure times on 316 SS at 1300 and 1500 F. A reduction in the CP cycle life is observed with an increase in the exposure time of the CP test at a given inelastic strain-range. This phenomenon is characterized by modifying the Manson-Coffin type of CP relationship. Two new life relationships, (1) the Steady State Creep Rate (SSCR) Modified CP life relationship and (2) the Failure Time (FT) Modified CP life relationship, are developed in this report. They account for the effect of creep rate and exposure time within the CP type of waveform. The reduction in CP cyclic life in the long exposure time tests is attributed to oxidation and the precipitation of carbides along the grain boundaries.
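A Manson-Coffin-type CP relation with an exposure-time correction of the general kind described above can be sketched as follows. The functional form and every constant are illustrative assumptions, not the report's fitted relations.

```python
def cp_cycles_to_failure(strain_range, C, c_exp, exposure_time=1.0, k=0.0):
    """Hypothetical exposure-time-modified Manson-Coffin CP relation:
        strain_range = C * Nf**c_exp * exposure_time**(-k)
    solved for Nf. C, c_exp (< 0), and k are material fitting constants;
    k = 0 recovers the unmodified Manson-Coffin relation."""
    return (strain_range * exposure_time ** k / C) ** (1.0 / c_exp)
```

With k > 0 the predicted CP life drops as exposure time grows, matching the trend reported for the long-exposure tests.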
Two Essays in Financial Economics
NASA Astrophysics Data System (ADS)
Putnam, Kyle J.
The following dissertation contains two distinct empirical essays which contribute to the overall field of Financial Economics. Chapter 1, entitled "The Determinants of Dynamic Dependence: An Analysis of Commodity Futures and Equity Markets," examines the determinants of the dynamic equity-commodity return correlations between five commodity futures sub-sectors (energy, foods and fibers, grains and oilseeds, livestock, and precious metals) and a value-weighted equity market index (S&P 500). The study utilizes the traditional DCC model, as well as three time-varying copulas: (i) the normal copula, (ii) the Student's t copula, and (iii) the rotated Gumbel copula as dependence measures. Subsequently, the determinants of these various dependence measures are explored by analyzing several macroeconomic, financial, and speculation variables over different sample periods. Results indicate that the dynamic equity-commodity correlations for the energy, grains and oilseeds, precious metals, and to a lesser extent the foods and fibers, sub-sectors have become increasingly explainable by broad macroeconomic and financial market indicators, particularly after May 2003. Furthermore, these variables exhibit heterogeneous effects in terms of both magnitude and sign on each sub-sector's equity-commodity correlation structure. Interestingly, the effects of increased financial market speculation are found to be extremely varied among the five sub-sectors. These results have important implications for portfolio selection, price formation, and risk management. Chapter 2, entitled "US Community Bank Failure: An Empirical Investigation," examines the declining but still pivotal role of the US community banking industry. The study utilizes survival analysis to determine which accounting and macroeconomic variables help to predict community bank failure.
Federal Deposit Insurance Corporation and Federal Reserve Bank data are utilized to compare 452 community banks which failed between 2000 and 2013, relative to a sample of surviving community banks. Empirical results indicate that smaller banks are less likely to fail than their larger community bank counterparts. Additionally, several unique bank-specific indicators of failure emerge which relate to asset quality and liquidity, as well as earnings ratios. Moreover, results show that the use of the macroeconomic indicator of liquidity, the TED spread, provides a substantial improvement in predicting community bank failure.
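Survival analysis of the kind used in Chapter 2 starts from an estimated survival curve. A minimal pure-Python Kaplan-Meier sketch is given below; the data in the test are invented, not the FDIC sample.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.
    times: follow-up times (e.g. years from 2000);
    events: 1 if the bank failed at that time, 0 if censored
    (still operating at the end of the study).
    Returns a list of (event_time, S(t)) pairs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        tied = [e for tt, e in data if tt == t]  # ties are contiguous
        deaths = sum(tied)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= len(tied)
        idx += len(tied)
    return curve
```

A full analysis would then relate covariates (capital, earnings, the TED spread) to the hazard, e.g. via a Cox model.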
A Case Study on Engineering Failure Analysis of Link Chain
Lee, Seong-Beom; Lee, Hong-Chul
2010-01-01
Objectives: The objective of this study was to investigate the effect of chain installation condition on stress distribution that could eventually cause disastrous failure through sudden deformation and geometric rupture. Methods: The fractographic method used for the failed chain indicates that over-stress was the root cause of failure. 3D modeling and finite element analysis of the chain, used in a crane hook, were performed with a three-dimensional interactive application program, CATIA, and commercial finite element analysis and computational fluid dynamics software, ANSYS. Results: The results showed that the state of stress changed depending on the initial position of the chain installed in the hook. In particular, the magnitude of the stress was strongly affected by bending forces, which are 2.5 times greater (under the simulation condition investigated here) than those from the plain tensile load. It was also noted that the change of load state is strongly related to the failure of parts. The chain can hold an ultimate load of about 8 tons when only the tensile load acts on it. Conclusion: The conclusions of this research clearly showed that losses from similar incidents can be reduced when an operator properly handles the installation of the chain. PMID:22953162
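The tension-versus-bending comparison can be illustrated with nominal beam formulas. This is a back-of-envelope sketch, not the FE analysis in the study; the function and all numbers in the test are assumptions.

```python
def link_max_stress(force, area, moment_arm, section_modulus):
    """Nominal stresses in a loaded link (illustrative formulas only).
    Pure tension: sigma_t = F / A.
    Off-axis installation adds a bending moment M = F * e, contributing
    sigma_b = M / Z, where Z is the section modulus. Returns
    (tensile_stress, combined_stress)."""
    tensile = force / area
    bending = force * moment_arm / section_modulus
    return tensile, tensile + bending
```

Even a small eccentricity e can make the combined stress a multiple of the plain tensile stress, which is the qualitative mechanism behind the reported 2.5x factor.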
The utility of levosimendan in the treatment of heart failure.
Lehtonen, Lasse; Põder, Pentti
2007-01-01
Calcium sensitizers are a new group of inotropic drugs. Levosimendan is the only calcium sensitizer in clinical use in Europe. Its mechanism of action includes both calcium sensitization of contractile proteins and the opening of adenosine triphosphate (ATP)-dependent potassium channels as the mechanism of vasodilation. The combination of K-channel opening with positive inotropy offers potential benefits in comparison to currently available intravenous inotropes, since K-channel opening protects myocardium during ischemia. Due to the calcium-dependent binding of levosimendan to troponin C, the drug increases contractility without negative lusitropic effects. In patients with heart failure levosimendan dose-dependently increases cardiac output and reduces pulmonary capillary wedge pressure. Since levosimendan has an active metabolite, OR-1896, with a half-life of some 80 hours, the duration of the hemodynamic effects significantly exceeds the 1-hour half-life of the parent compound. The hemodynamic effects of levosimendan support its use in acute and postoperative heart failure. Several moderate-size trials (LIDO, RUSSLAN, CASINO) have previously suggested that the drug might even improve the prognosis of patients with decompensated heart failure. These trials were carried out in patients with high filling pressures. Recently two larger trials (SURVIVE and REVIVE) in patients who were hospitalized because of worsening heart failure have been finalized. These trials did not require filling pressures to be measured. The two trials showed that levosimendan improves the symptoms of heart failure, but does not improve survival. The results raise the question whether a 24-hour levosimendan infusion can be used without invasive hemodynamic monitoring.
Modelling of Rainfall Induced Landslides in Puerto Rico
NASA Astrophysics Data System (ADS)
Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.
2010-12-01
We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to tRIBS-VEGGIE’s framework, after analyzing several failure criteria to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.
Popova, J A; Yadrihinskaya, V N; Krylova, M I; Sleptsovа, S S; Borisovа, N V
Frequent complications of hemodialysis treatment are coagulation disorders. These result from activation of coagulation as circulating blood contacts the dialysis membrane material, vascular prostheses, and the lines of the extracorporeal circuit. In addition, in hemodialysis patients receiving heparin for years, endothelial stores of tissue factor pathway inhibitor, which suppresses the extrinsic coagulation mechanism, become depleted. The aim of our study was to evaluate hemostatic parameters in patients with end-stage renal failure, depending on the cause of renal failure and the duration of hemodialysis treatment. The study included 100 patients observed in the department of chronic hemodialysis and nephrology of Republican National Medical Center Hospital No. 1 in the period 2013-2016. In patients with end-stage renal failure resulting from chronic glomerulonephritis, greater activation of blood coagulation was confirmed by an increased mean fibrinogen concentration, whereas in the group of patients with end-stage renal failure resulting from other diseases the mean did not differ from the norm, although hyperfibrinogenemia was identified in two-thirds of the patients in that group. It was found that the state of hemostasis in patients with end-stage renal failure is best characterized by the fibrinogen level and by markers of hemostatic activation: soluble fibrin monomer complexes and D-dimers.
Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling
NASA Astrophysics Data System (ADS)
Bomarito, G. F.; Warner, D. H.
2015-01-01
Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, void and particle volume fraction, etc.). In this study, the stress state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D Finite Element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many loading orientations until failure. Material failure is assumed to occur in the weakest orientation, and so the orientation in which failure occurs first is considered as the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to overpredict ductility. By excluding smaller nucleation events and load path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure in the model, emphasizing that its incorporation into micromechanical models is critical to developing quantitatively accurate physics-based ductile failure models.
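The two invariants that define each imposed stress state can be computed directly from the principal stresses; a minimal sketch (the function name and sample values are illustrative):

```python
def stress_state(s1, s2, s3):
    """Stress triaxiality and Lode parameter from ordered principal
    stresses s1 >= s2 >= s3 (with s1 != s3).
    Triaxiality = mean stress / von Mises stress;
    Lode = (2*s2 - s1 - s3) / (s1 - s3), ranging from -1 (axisymmetric
    tension) to +1 (axisymmetric compression)."""
    mean = (s1 + s2 + s3) / 3.0
    vm = (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5
    return mean / vm, (2.0 * s2 - s1 - s3) / (s1 - s3)
```

For uniaxial tension this gives the familiar triaxiality of 1/3 and Lode parameter of -1.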
Reliability Growth in Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
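One common mathematical reliability growth model of the kind mentioned is the Crow-AMSAA (NHPP) power law; below is a sketch of its standard growth-parameter estimate. The model choice is an assumption (the abstract does not name a specific model), and the failure times in the test are invented.

```python
import math

def crow_amsaa_beta(failure_times, total_time):
    """Crow-AMSAA MLE for one system observed over [0, total_time] with
    cumulative failures N(t) = lam * t**beta.
    beta < 1 indicates reliability growth (decreasing failure rate),
    beta ~ 1 a constant rate, beta > 1 wear-out.
    Returns (beta, instantaneous failure rate at total_time)."""
    n = len(failure_times)
    beta = n / sum(math.log(total_time / t) for t in failure_times)
    lam = n / total_time ** beta
    return beta, lam * beta * total_time ** (beta - 1)
```

Fitting shuttle-era failure data this way would quantify the growth rate; a flat ISS failure record would give beta near 1.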
Evaluation of nasal mucociliary activity in patients with chronic renal failure.
Kucur, Cuneyt; Ozbay, Isa; Gulcan, Erim; Kulekci, Semra; Aksoy, Sinan; Oghan, Fatih
2016-05-01
The ability of respiratory mucosal surfaces to eliminate foreign particles and pathogens and to keep mucosal surfaces moist and fresh depends on mucociliary activity. Chronic renal failure (CRF) is an irreversible medical condition that may result in important extrarenal systemic consequences, such as cardiovascular, metabolic, and respiratory system abnormalities. Although there are studies describing nasal manifestations of CRF, data are lacking concerning the effects of the condition on nasal mucosa. The goal of the current study was to evaluate nasal mucociliary clearance (NMC) time in patients with CRF. This prospective cohort study conducted in a tertiary referral center included 32 non-diabetic end-stage CRF patients and 30 control individuals. The control group consisted of voluntary participants who had been referred to our clinic for symptoms other than rhinological diseases. The mean NMC times in CRF patients and control individuals were 12.51 ± 3.74 min (range 7-22 min) and 8.97 ± 1.83 min (range 6-13 min), respectively. The mean NMC time in patients with CRF was significantly longer than that in control individuals (p < 0.001). Clinicians must keep in mind that NMC time in CRF patients is prolonged and must follow up these patients more closely for sinonasal and middle ear infections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
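The time-dependent non-suppression model in NUREG/CR-6850 is commonly written as an exponential in the time available for suppression before component damage; a sketch with illustrative inputs (the rate and times below are assumptions, not plant data):

```python
import math

def non_suppression_probability(t_damage, t_detect, suppression_rate):
    """NUREG/CR-6850-style non-suppression probability: the chance that
    suppression has NOT succeeded before component damage occurs.
    t_damage, t_detect: minutes, e.g. from CFAST fire modeling;
    suppression_rate: suppression events per minute (generic/plant data).
    P_NS = exp(-rate * max(t_damage - t_detect, 0))."""
    available = max(t_damage - t_detect, 0.0)
    return math.exp(-suppression_rate * available)
```

Multiplying a scenario's severity factor by its P_NS then yields the contribution used in the SSD component failure model.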
Links between grievance, complaint and different forms of entitlement.
Weintrobe, Sally
2004-02-01
The author argues that different kinds of object relationships underlie the phenomena of grievance and complaint. Grievance is addressed to an object held responsible for a failure of idealisation, and the object is scolded or punished for this failure. Nursing grievance can restore the ideal object in phantasy and block mourning the ideal. With pathological grievance the self is seen as ideal and awareness of dependence on the libidinal other is denied, as are the passage of time and the transience of experience. An attitude of narcissistic entitlement to be special and exempt from ordinary reality is seen as intrinsic to the more persistent and pathological forms of grievance, and this narcissistic entitlement fuels grievance. Turning to complaint, the author argues that complaint is addressed to an object that is less idealised; there is more open acknowledgement of the need for and dependence on the other to realise liveliness. Complaint is the voice of the authentic lively self and intrinsic to complaint is a sense of lively entitlement. The author presents clinical material to illustrate these themes, and to show movement between complaint and grievance. Some technical difficulties in working with grievance are discussed.
Chan, Kwun Chuen Gary; Wang, Mei-Cheng
2017-01-01
Recurrent event processes with marker measurements have largely been studied with forward-time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.
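The backward-time alignment these models are built on is a simple data transform; a minimal sketch (the function and the data in the test are invented for illustration):

```python
def backward_align(failure_time, event_times):
    """Align recurrent-event times backward from the failure event,
    which becomes the time origin: an event at calendar time s maps to
    backward time failure_time - s. Events after the failure time (if
    any, e.g. data errors) are dropped. Returns sorted backward times."""
    return sorted(failure_time - s for s in event_times if s <= failure_time)
```

A backward rate model then treats these transformed times as the counting-process time scale.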
A two-stage model of fracture of rocks
Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.
1996-01-01
In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure is expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.
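If stage-one damage is random and uncorrelated, the chance that a critical cluster has formed can be approximated with Poisson statistics; a sketch of that idea (the Poisson cell approximation and all numbers are assumptions, not the authors' exact criterion):

```python
import math

def cluster_probability(density, cell_volume, k):
    """Probability that a cell of given volume contains at least k
    uncorrelated (Poisson-distributed) failure sites, a proxy for the
    chance that a critical cluster of closely spaced cracks has formed.
    density: mean number of failure sites per unit volume."""
    mu = density * cell_volume
    below = sum(math.exp(-mu) * mu ** j / math.factorial(j) for j in range(k))
    return 1.0 - below
```

As the damage density rises toward the critical value, this probability grows sharply, which is the statistical trigger for stage two.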
NASA Astrophysics Data System (ADS)
Gurfinkel, Yuri I.; Mikhailov, Valery M.; Kudutkina, Marina I.
2004-06-01
Capillaries play a critical role in cardiovascular function as the point of exchange of nutrients and waste products between tissues and circulation. A common problem for healthy volunteers examined during isolation, and for patients suffering from heart failure, is the quantitative estimation of tissue oedema. Until now, no objective assessment of body-fluid retention in tissues existed. Optical imaging of living capillaries is a challenging and medically important scientific problem. The goal of the investigation was to study the dynamics of microcirculation parameters, including tissue oedema, in healthy volunteers during extended isolation and relative hypokinesia as a model of a mission to the International Space Station (ISS). The other aim was to study the dynamics of microcirculation parameters, including tissue oedema, in patients suffering from heart failure under treatment. We studied four healthy male subjects aged 41, 37, 40, and 48 before the experiment (June 1999) and during the 240-d isolation period starting from July 3, 1999. Hermetic chambers with artificial environmental parameters allowed the study to be performed with maximum similarity to real conditions on the ISS. Three times a week at the same time of day, each subject recorded three video episodes with a total length of one minute, using an optical computerized capillaroscope for noninvasive measurement of capillary diameters, capillary blood velocity, and the size of the perivascular zone. The same microcirculation parameters were determined over three weeks in 15 patients (10 male, 5 female, aged 62.2 +/- 8.8) suffering from heart failure and receiving furosemide 40 mg twice a week as a diuretic. Results: about 1500 episodes were recorded on laser disks and analyzed during this experiment. Every subject showed wave-like variations of capillary blood velocity within the minute, week, and month ranges.
It was found that the increase in perivascular zone size during isolation correlates with the subjects' body mass and probably depends on retention of body fluids in tissues. Computerized capillaroscopy provides a new opportunity for non-invasive quantitative estimation of tissue oedema and supports precise management of patients suffering from heart failure under diuretic treatment.
Time and Temperature Dependence of Viscoelastic Stress Relaxation in Gold and Gold Alloy Thin Films
NASA Astrophysics Data System (ADS)
Mongkolsuttirat, Kittisun
Radio frequency (RF) switches based on capacitive MicroElectroMechanical System (MEMS) devices have been proposed as replacements for traditional solid-state field effect transistor (FET) devices. However, one of the limitations of the existing capacitive switch designs is long-term reliability. Failure is generally attributed to electrical charging in the capacitor's dielectric layer that creates an attractive electrostatic force between a moving upper capacitor plate (a metal membrane) and the dielectric. This acts as an attractive stiction force between them that may cause the switch to stay permanently in the closed state. The force that is responsible for opening the switch is the elastic restoring force due to stress in the film membrane. If the restoring force decreases over time due to stress relaxation, the tendency for stiction failure behavior will increase. Au films have been shown to exhibit stress relaxation even at room temperature. The stress relaxation observed is a type of viscoelastic behavior that is more significant in thin metal films than in bulk materials. Metal films with a high relaxation resistance would have a lower probability of device failure due to stress relaxation. It has been shown that solid solution and oxide dispersion can strengthen a material without unacceptable decreases in electrical conductivity. In this study, the viscoelastic behavior of Au, AuV solid solution and AuV2O5 dispersion created by DC magnetron sputtering are investigated using the gas pressure bulge testing technique in the temperature range from 20 to 80°C. The effectiveness of the two strengthening approaches is compared with the pure Au in terms of relaxation modulus and 3 hour modulus decay. The time dependent relaxation curves can be fitted very well with a four-term Prony series model. From the temperature dependence of the terms of the series, activation energies have been deduced to identify the possible dominant relaxation mechanism. 
The measured modulus relaxation of Au films also proves that the films exhibit linear viscoelastic behavior. From this, a linear viscoelastic model is shown to fit very well to experimental steady state stress relaxation data and can predict time-dependent stress for complex loading histories, including the ability to predict stress-time behavior at other strain rates during loading. Two specific factors that are expected to influence the viscoelastic behavior, the degree of alloying and the grain size, are investigated to explore the influence of V concentration in solid solution and the grain size of pure Au. It is found that the normalized modulus of Au films is dependent on both concentration (C) and grain size (D), with proportionalities of C^(1/3) and D^2, respectively. A quantitative model of the rate equation for dislocation glide plasticity based on Frost and Ashby is proposed and fits well with steady state anelastic stress relaxation experimental data. The activation volume and the density of mobile dislocations are determined using repeated stress relaxation tests in order to further understand the viscoelastic relaxation mechanism. A rapid decrease of mobile dislocation density is found at the beginning of relaxation, which correlates well with a large reduction of viscoelastic modulus at the early stage of relaxation. The extracted activation volume and dislocation mobility can be ascribed to mobile dislocation loops with double kinks generated at grain boundaries, consistent with the dislocation mechanism proposed for the low activation energy measured in this study.
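A Prony-series relaxation modulus of the form fitted in the study can be evaluated as follows; the sketch uses two terms with invented values, versus the four fitted terms in the work.

```python
import math

def prony_modulus(t, e_inf, terms):
    """Relaxation modulus from a Prony series:
        E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
    terms: list of (E_i, tau_i) pairs. E_inf is the long-time
    (fully relaxed) modulus; E(0) = E_inf + sum(E_i)."""
    return e_inf + sum(ei * math.exp(-t / tau) for ei, tau in terms)
```

Fitting the (E_i, tau_i) pairs at several temperatures is what lets activation energies be extracted from the temperature dependence of the terms.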
2012-08-01
paper, we will first briefly discuss our recent results, using a coarse-grained bead-spring model, on the dependence of failure stress and failure...length of the resin strands. In the coarse-grained model used here the polymer network is treated as a bead-spring system. To create highly cross...simulations of Thermosets We have used a coarse-grained bead-spring model to study the dependence of the mechanical properties of thermosets on chain
Time dependent micromechanics in continuous graphite fiber/epoxy composites with fiber breaks
NASA Astrophysics Data System (ADS)
Zhou, Chao Hui
Time dependent micromechanics in graphite fiber/epoxy composites around fiber breaks was investigated with micro Raman spectroscopy (MRS) and two shear-lag based composite models, a multi-fiber model (VBI) and a single-fiber model (SFM), which aim at predicting the strain/stress evolution in the composite from the matrix creep behavior and fiber strength statistics. This work is motivated by the need to understand the micromechanics and predict the creep-rupture of the composites. Creep of the unfilled epoxy was characterized under different stress levels and at temperatures up to 80°C with two power-law functions, which provided the modeling parameters used as input for the composite models. Both the VBI and the SFM models showed good agreement with the experimental data obtained with MRS when inelasticity (interfacial debonding and/or matrix yielding) was not significant. The maximum shear stress near a fiber break relaxed as t^(-α/2) (or as (1 + t^α)^(-1/2)) and the load recovery length increased as t^(α/2) (or (1 + t^α)^(1/2)), following the model predictions. When the inelastic zone became non-negligible, the viscoelastic VBI model broke down, while the SFM with inelasticity showed good agreement with the MRS measurements. Instead of using the real fiber spacing, an effective fiber spacing was used in model predictions, taking into account the radial decay of the interfacial shear stress from the fiber surface. The comparisons between MRS data and the SFM showed that an inelastic zone would initiate when the shear strain at the fiber end exceeds a critical value γ_c, which was determined to be 5% for this composite system at room temperature and possibly a smaller value at elevated temperatures. The stress concentrations in neighboring intact fibers played important roles in the subsequent fiber failure and damage growth.
The VBI model predicts a constant stress concentration factor, 1.33, for the 1st nearest intact fiber, which is in good agreement with MRS measurements for most cases except for those with severely debonded interfaces. However, the VBI model usually gives a stress concentration profile narrower than the measured one due to the inelasticity near the fiber break. The low average fiber volume fraction in the model composites caused small relaxation in the stress concentration, which became more obvious at elevated temperatures, especially for large fiber spacing cases. When new break(s) occurred in the original intact neighboring fibers within an effective distance from the original break, the inelastic zones grew at a faster rate due to the strong interactions. Results on the creep-rupture of the bulk composites showed that the failure probability depends on the stress level and the loading time. The time dependent failure probability data could be fitted to a power law function, which suggested a link between the matrix creep, composite short-term strength and the composite creep-rupture.
Implications of Secondary Aftershocks for Failure Processes
NASA Astrophysics Data System (ADS)
Gross, S. J.
2001-12-01
When a seismic sequence with more than one mainshock or an unusually large aftershock occurs, there is a compound aftershock sequence. The secondary aftershocks need not have exactly the same decay as the primary sequence, with the differences having implications for the failure process. When the stress step from the secondary mainshock is positive but not large enough to cause immediate failure of all the remaining primary aftershocks, failure processes which involve accelerating slip will produce secondary aftershocks that decay more rapidly than primary aftershocks. This is because the primary aftershocks are an accelerated version of the background seismicity, and secondary aftershocks are an accelerated version of the primary aftershocks. Real stress perturbations may be negative, and heterogeneities in mainshock stress fields mean that the real world situation is quite complicated. I will first describe and verify my picture of secondary aftershock decay with reference to a simple numerical model of slipping faults which obeys rate and state dependent friction and lacks stress heterogeneity. With such a model, it is possible to generate secondary aftershock sequences with perturbed decay patterns, quantify those patterns, and develop an analysis technique capable of correcting for the effect in real data. The secondary aftershocks are defined in terms of frequency-linearized time s(T), which is equal to the number of primary aftershocks expected by a time T, s ≡ ∫_(t=0)^T n(t) dt, where the start time t=0 is the time of the primary aftershock, and the primary aftershock decay function n(t) is extrapolated forward to the times of the secondary aftershocks. In the absence of secondary sequences the function s(T) rescales the time so that approximately one event occurs per new time unit; the aftershock sequence is gone.
If this rescaling is applied in the presence of a secondary sequence, the secondary sequence is shaped like a primary aftershock sequence, and can be fit by the same modeling techniques applied to simple sequences. The later part of the presentation will concern the decay of Hector Mine aftershocks as influenced by the Landers aftershocks. Although attempts to predict the abundance of Hector aftershocks based on stress overlap analysis are not very successful, the analysis does do a good job fitting the decay of secondary sequences.
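The frequency-linearized time s(T) above has a closed form when the primary sequence follows a modified Omori law n(t) = K/(t + c)^p. A minimal sketch of the rescaling (all parameter values hypothetical):

```python
import math

def omori_rate(t, K, c, p):
    """Modified Omori aftershock rate n(t) = K / (t + c)**p."""
    return K / (t + c) ** p

def linearized_time(T, K, c, p):
    """s(T) = integral_0^T n(t) dt: the expected number of primary
    aftershocks by time T."""
    if p == 1.0:
        return K * math.log((T + c) / c)
    return K * ((T + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)

def rescale(event_times, K, c, p):
    """Map secondary-event times into s-time; in s-time the primary
    sequence looks like a unit-rate Poisson process, so any remaining
    clustering is the secondary sequence."""
    return [linearized_time(t, K, c, p) for t in event_times]
```

In s-time a secondary sequence can then be fit with the same Omori-law machinery used for simple sequences.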
Kuehl, R; Tschudin-Sutter, S; Morgenstern, M; Dangel, M; Egli, A; Nowakowski, A; Suhm, N; Theilacker, C; Widmer, A F
2018-04-10
Little information has been published on orthopaedic internal fixation-associated infections. We aimed to analyse time-dependent microbiology, treatment, and outcome. Over a 10-year period, all consecutive patients with internal fixation-associated infections at the University Hospital of Basel were prospectively followed, and clinical, microbiological and outcome data were acquired. Infections were classified as early (0-2 weeks after implantation), delayed (3-10 weeks), and late (>10 weeks). Two hundred and twenty-nine patients were included, with a median follow-up of 773 days (IQR 334-1400). Staphylococcus aureus was the most prevalent pathogen (in 96/229 patients, 41.9%). Enterobacteriaceae were frequent in early infections (13/49, 26.5%), whereas coagulase-negative staphylococci (36/92, 39.1%), anaerobes (15/92, 16.3%) and streptococci (10/92, 10.9%) increased in late revisions. Failure was observed in 27/229 (11.7%). Implants were retained in 42/49 (85.7%) in early, in 51/88 (57.9%) in delayed, and in 9/92 (9.8%) in late revisions (p < 0.01). Early revisions failed in 6/49 (12.2%), delayed in 9/88 (10.2%), and late in 11/92 (13.0%) (p = 0.81). Debridement and retention failed in 6/42 (14.3%) for early, in 6/51 (11.8%) for delayed, and in 3/9 (33.3%) for late revisions (p = 0.21). Biofilm-active antibiotic therapy tailored to resistance correlated with improved outcome for late revision failure (6/72, 7.7% versus 6/12, 50.0%; p < 0.01) but not for early revision failure (5/38, 13.2% versus 1/11, 9.1%; p = 1.0). Treatment of internal fixation-associated infections showed a high success rate of 87-90% over all time periods. Implant retention was highly successful in early and delayed infections but only limited in late infections. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data
Yue, Meng; Toto, Tami; Jensen, Michael P.; ...
2017-05-18
Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.
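The abstract does not give the BOP algorithm's equations. One standard way to blend a failure-rate-model prior with uncertain real-time reports is a conjugate Gamma-Poisson update, sketched below; the exponential wind-speed form, the function names, and all parameter values are assumptions for illustration, not the paper's method.

```python
import math

def frm_expected_outages(wind_speed, base_rate, sensitivity):
    """Hypothetical weather-dependent failure-rate model: expected
    outage count in a small area grows exponentially with wind speed."""
    return base_rate * math.exp(sensitivity * wind_speed)

def bayesian_outage_update(frm_mean, frm_strength, reported, report_weight):
    """Conjugate Gamma-Poisson update. The FRM supplies a Gamma prior
    with mean frm_mean and pseudo-observation weight frm_strength;
    reported outages are weighted by their reliability (0 = ignore
    reports, large = trust reports fully)."""
    alpha = frm_mean * frm_strength + report_weight * reported  # Gamma shape
    beta = frm_strength + report_weight                         # Gamma rate
    return alpha / beta  # posterior mean outage count
```

With report_weight = 0 the estimate falls back on the model prediction; as trusted reports accumulate, the posterior mean moves toward the reported count, which mirrors the "strength and uncertainties" trade-off described above.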
Time-dependent strength degradation of a siliconized silicon carbide determined by dynamic fatigue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breder, K.
1995-10-01
Both fast-fracture strength and strength as a function of stressing rate at room temperature, 1,100, and 1,400 C were measured for a siliconized SiC. The fast-fracture strength increased slightly from 386 MPa at room temperature to 424 MPa at 1,100 C and then dropped to 308 MPa at 1,400 C. The Weibull moduli at room temperature and 1,100 C were 10.8 and 7.8, respectively, whereas, at 1,400 C, the Weibull modulus was 2.8. The very low Weibull modulus at 1,400 C was due to the existence of two exclusive flaw populations with very different characteristic strengths. The data were reanalyzed using two exclusive flaw populations. The ceramic showed no slow crack growth (SCG), as measured by dynamic fatigue at 1,100 C, but, at 1,400 C, an SCG parameter, n, of 15.5 was measured. Fractography showed SCG zones consisting of cracks grown out from silicon-rich areas. Time-to-failure predictions at given levels of failure probabilities were performed.
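The quantities reported above (Weibull modulus m, SCG exponent n) feed directly into standard time-to-failure predictions. The sketch below uses the common two-parameter Weibull strength distribution and the power-law slow-crack-growth life formula t_f = B·S_i^(n-2)/σ^n; the constant B and the numeric values in the usage are placeholders, not data from this study.

```python
import math

def weibull_failure_probability(stress, sigma0, m):
    """Two-parameter Weibull CDF: P_f = 1 - exp(-(stress/sigma0)**m)."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

def stress_at_failure_probability(pf, sigma0, m):
    """Invert the Weibull CDF for the stress at a target P_f."""
    return sigma0 * (-math.log(1.0 - pf)) ** (1.0 / m)

def time_to_failure(applied_stress, inert_strength, n, B):
    """Power-law slow-crack-growth life: t_f = B * S_i**(n-2) / sigma**n,
    with SCG exponent n (measured as 15.5 at 1,400 C in this study;
    B here is a placeholder material constant)."""
    return B * inert_strength ** (n - 2) / applied_stress ** n
```

Combining the two gives the usual design recipe: pick an acceptable P_f, invert the Weibull CDF for the corresponding inert strength, then evaluate t_f at the service stress.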
Mechanical Circulatory Support Devices for Acute Right Ventricular Failure.
Kapur, Navin K; Esposito, Michele L; Bader, Yousef; Morine, Kevin J; Kiernan, Michael S; Pham, Duc Thinh; Burkhoff, Daniel
2017-07-18
Right ventricular (RV) failure remains a major cause of global morbidity and mortality for patients with advanced heart failure, pulmonary hypertension, or acute myocardial infarction and after major cardiac surgery. Over the past 2 decades, percutaneously delivered acute mechanical circulatory support pumps specifically designed to support RV failure have been introduced into clinical practice. RV acute mechanical circulatory support now represents an important step in the management of RV failure and provides an opportunity to rapidly stabilize patients with cardiogenic shock involving the RV. As experience with RV devices grows, their role as mechanical therapies for RV failure will depend less on the technical ability to place the device and more on improved algorithms for identifying RV failure, patient monitoring, and weaning protocols for both isolated RV failure and biventricular failure. In this review, we discuss the pathophysiology of acute RV failure and both the mechanism of action and clinical data exploring the utility of existing RV acute mechanical circulatory support devices. © 2017 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Sheikh, Muhammad; Elmarakbi, Ahmed; Elkady, Mustafa
2017-12-01
This paper focuses on state of charge (SOC) dependent mechanical failure analysis of an 18650 lithium-ion battery to detect signs of thermal runaway. Quasi-static loading conditions are used with four test protocols (rod, circular punch, three-point bend and flat plate) to analyse the propagation of mechanical failures and failure-induced temperature changes. Finite element analysis (FEA) is used to model a single battery cell with a concentric layered formation which represents a complete cell. The numerical simulation model is designed with a solid element formation where the steel casing and all layers follow the same formation, and a fine mesh is used for all layers. Experimental work is also performed to analyse deformation of the 18650 lithium-ion cell. The numerical simulation model is validated with experimental results. Deformation of the cell mimics thermal runaway, and various thermal runaway detection strategies are employed in this work, including force-displacement, voltage-temperature, stress-strain, SOC dependency and separator failure. Results show that the cell can undergo severe conditions even with no fracture or rupture; these conditions may be slow to develop but they can lead to catastrophic failures. The numerical simulation technique proves useful in predicting initial battery failures, and results are in good correlation with the experimental results.
Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.
Li, Shanshan; Ning, Yang
2015-09-01
Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitation of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. © 2015, The International Biometric Society.
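The weighting idea in the abstract can be illustrated with a Horvitz-Thompson-style estimator: each subject with an observed biomarker is weighted by the inverse of its selection probability. This toy sketch estimates a weighted mean rather than a full covariate-specific time-dependent ROC curve, which additionally requires the fitted proportional hazards and biomarker location models.

```python
def ipw_mean(values, observed, select_prob):
    """Inverse-probability-weighted mean of a biomarker: subjects with
    an observed biomarker are up-weighted by 1 / P(observed), so that
    under-sampled strata are not under-represented."""
    num = sum(v / p for v, o, p in zip(values, observed, select_prob) if o)
    den = sum(1.0 / p for _, o, p in zip(values, observed, select_prob) if o)
    return num / den
```

The augmented estimator described above would add a model-based term for the subjects with missing biomarkers, which is what yields the double-robustness property.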
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available.
The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
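The SAFE tool itself is not reproduced here, but the Monte Carlo core of such a framework can be sketched in a few lines: sample component failure times, apply the system success logic, and count surviving trials. The toy architecture (2-of-3 redundant engines in series with a single-string avionics unit) and all failure rates below are invented for illustration.

```python
import math
import random

def sample_failure_time(rate, rng):
    """Exponentially distributed time to failure for a constant hazard rate."""
    return rng.expovariate(rate)

def mission_success_probability(mission_time, engine_rate, n_engines,
                                engines_needed, avionics_rate,
                                trials=20000, seed=1):
    """Monte Carlo estimate of mission reliability for a toy architecture:
    k-of-n redundant engines in series with one avionics string."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        engines_alive = sum(
            sample_failure_time(engine_rate, rng) > mission_time
            for _ in range(n_engines))
        avionics_ok = sample_failure_time(avionics_rate, rng) > mission_time
        if engines_alive >= engines_needed and avionics_ok:
            successes += 1
    return successes / trials
```

The simulation structure, unlike a static fault tree, extends naturally to time-dependent hazard rates and phase-dependent success logic, which is the dynamic-modeling point made above.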
NASA Astrophysics Data System (ADS)
dell'Isola, Francesco; Lekszycki, Tomasz; Pawlikowski, Marek; Grygoruk, Roman; Greco, Leopoldo
2015-12-01
In this paper, we study a metamaterial constructed with an isotropic material organized following a geometric structure which we call pantographic lattice. This relatively complex fabric was studied using a continuous model (which we call pantographic sheet) by Rivlin and Pipkin and includes two families of flexible fibers connected by internal pivots which are, in the reference configuration, orthogonal. A rectangular specimen having one side three times longer than the other is cut at 45° with respect to the fibers in reference configuration, and it is subjected to large-deformation plane-extension bias tests imposing a relative displacement of the shorter sides. The continuum model used, the presented numerical models and the extraordinary advancements of the technology of 3D printing allowed for the design of some first experiments, whose preliminary results are shown and seem to be rather promising. Experimental evidence shows three distinct deformation regimes. In the first regime, the equilibrium total deformation energy depends quadratically on the relative displacement of the terminal specimen sides: the applied resultant force depends linearly on the relative displacement. In the second regime, the applied force varies nonlinearly with relative displacement, but the behavior remains elastic. In the third regime, damage phenomena start to occur until total failure, but the exerted resultant force continues to increase and reaches a value up to several times larger than the maximum shown in the linear regime before failure actually occurs. Moreover, the total energy needed to reach structural failure is larger than the maximum stored elastic energy. Finally, the volume occupied by the material in the fabric is a small fraction of the total volume, so that the ratio weight/resistance to extension is very advantageous.
The results seem to require a refinement of the used theoretical and numerical methods to transform the presented concept into a promising technological prototype.
An approach to the drone fleet survivability assessment based on a stochastic continuous-time model
NASA Astrophysics Data System (ADS)
Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos
2017-09-01
An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of drone failure-free operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account factors of system accidents, are suggested. Dependences of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
Murphy, M A; Mun, Sungkwang; Horstemeyer, M F; Baskes, M I; Bakhtiary, A; LaPlaca, Michelle C; Gwaltney, Steven R; Williams, Lakiesha N; Prabhu, R K
2018-04-09
Continuum finite element material models used for traumatic brain injury lack local injury parameters, necessitating that nanoscale mechanical injury mechanisms be incorporated. One such mechanism is membrane mechanoporation, which can occur during physical insults and can be devastating to cells, depending on the level of disruption. The current study investigates the strain state dependence of phospholipid bilayer mechanoporation and failure. Using molecular dynamics, a simplified membrane, consisting of 72 1-palmitoyl-2-oleoyl-phosphatidylcholine (POPC) phospholipids, was subjected to equibiaxial, 2:1 non-equibiaxial, 4:1 non-equibiaxial, strip biaxial, and uniaxial tensile deformations at a von Mises strain rate of 5.45 × 10^8 s^-1, resulting in velocities in the range of 1 to 4.6 m·s^-1. A water bridge forming through both phospholipid bilayer leaflets was used to determine structural failure. The stress magnitude, failure strain, headgroup clustering, and damage responses were found to be strain state-dependent. The strain state order of detrimentality in descending order was equibiaxial, 2:1 non-equibiaxial, 4:1 non-equibiaxial, strip biaxial, and uniaxial. The phospholipid bilayer failed at von Mises strains of 0.46, 0.47, 0.53, 0.77, and 1.67 during these respective strain path simulations. Additionally, a Membrane Failure Limit Diagram (MFLD) was created using the pore nucleation, growth, and failure strains to demonstrate safe and unsafe membrane deformation regions. This MFLD allowed representative equations to be derived to predict membrane failure from in-plane strains. These results provide the basis to implement a more accurate mechano-physiological internal state variable continuum model that captures lower length scale damage and will aid in developing higher fidelity injury models.
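The reported von Mises failure strains can serve as a crude Membrane Failure Limit Diagram lookup: compute the equivalent strain of an in-plane strain state and compare it with the failure strain for that loading path. The incompressibility assumption used to reconstruct the out-of-plane strain (e3 = -(e1 + e2)) is ours, for illustration only, and the study's actual MFLD equations are not reproduced here.

```python
import math

# Von Mises failure strains for each strain path, as reported above.
FAILURE_STRAIN = {"equibiaxial": 0.46, "2:1": 0.47, "4:1": 0.53,
                  "strip": 0.77, "uniaxial": 1.67}

def von_mises_strain(e1, e2, e3=None):
    """Equivalent (von Mises) strain of a principal strain state; if e3
    is not given, assume an incompressible membrane, e3 = -(e1 + e2)."""
    if e3 is None:
        e3 = -(e1 + e2)
    return math.sqrt(2.0 / 3.0 * (e1 * e1 + e2 * e2 + e3 * e3))

def is_safe(path, e1, e2):
    """Crude MFLD check: is the equivalent strain below the reported
    failure strain for this loading path?"""
    return von_mises_strain(e1, e2) < FAILURE_STRAIN[path]
```

Note that equibiaxial stretching produces the highest equivalent strain for a given in-plane strain, consistent with it being the most detrimental path above.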
Chen, Wan-Ling; Chen, Chin-Ming; Kung, Shu-Chen; Wang, Ching-Min; Lai, Chih-Cheng; Chao, Chien-Ming
2018-01-23
This retrospective cohort study investigated the outcomes and prognostic factors in nonagenarians (patients 90 years old or older) with acute respiratory failure. Between 2006 and 2016, all nonagenarians with acute respiratory failure requiring invasive mechanical ventilation (MV) were enrolled. Outcomes including in-hospital mortality and ventilator dependency were measured. A total of 173 nonagenarians with acute respiratory failure were admitted to the intensive care unit (ICU). A total of 56 patients died during the hospital stay and the rate of in-hospital mortality was 32.4%. Patients with higher APACHE (Acute Physiology and Chronic Health Evaluation) II scores (adjusted odds ratio [OR], 5.91; 95% CI, 1.55-22.45; p = 0.009, APACHE II scores ≥ 25 vs APACHE II scores < 15), use of a vasoactive agent (adjusted OR, 2.67; 95% CI, 1.12-6.37; p = 0.03) and more organ dysfunction (adjusted OR, 11.13; 95% CI, 3.38-36.36, p < 0.001; ≥ 3 organ dysfunction vs ≤ 1 organ dysfunction) were more likely to die. Among the 117 survivors, 25 (21.4%) patients became dependent on MV. Female gender (adjusted OR, 3.53; 95% CI, 1.16-10.76, p = 0.027) and poor consciousness level (adjusted OR, 4.98; 95% CI, 1.41-17.58, p = 0.013) were associated with MV dependency. In conclusion, the mortality rate of nonagenarians with acute respiratory failure was high, especially for those with higher APACHE II scores or more organ dysfunction.
Flow-accelerated corrosion in power plants. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chexal, B.; Horowitz, J.; Dooley, B.
1998-07-01
Flow-Accelerated Corrosion (FAC) is a phenomenon that results in metal loss from piping, vessels, and equipment made of carbon steel. FAC occurs only under certain conditions of flow, chemistry, geometry, and material. Unfortunately, those conditions are common in much of the high-energy piping in nuclear and fossil-fueled power plants. Undetected, FAC will cause leaks and ruptures. Consequently, FAC has become a major issue, particularly for nuclear plants. Although major failures are rare, the consequences can be severe. In 1986, four men in the area of an FAC-induced pipe rupture were killed. Fossil plants too, are subject to FAC. In 1995, a failure at a fossil-fired plant caused two fatalities. In addition to concerns about personnel safety, FAC failures can pose challenges to plant safety. Regulatory agencies have therefore required nuclear utilities to institute formal programs to address FAC. Finally, a major FAC failure (like the one that happened in 1997 at a US nuclear power plant) can force a plant to shutdown and purchase replacement power at a price approaching a million dollars per day depending upon the MWe rating of the plant. A great deal of time and money has been spent to develop the technology to predict, detect, and mitigate FAC in order to prevent catastrophic failures. Over time, substantial progress has been made towards understanding and preventing FAC. The results of these efforts include dozens of papers, reports, calculations, and manuals, as well as computer programs and other tools. This book is written to provide a detailed treatment of the entire subject in a single document. Any complex issue requires balancing know-how, the risk of decision making, and a pragmatic engineering solution. This book addresses these by carrying out the necessary R and D and engineering along with plant knowledge to cover all quadrants of Chexal's four-quadrant known-unknown diagram, as seen in Figure i.
Failure Forecasting in Triaxially Stressed Sandstones
NASA Astrophysics Data System (ADS)
Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.
2017-12-01
Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation of a geophysical precursor in order to perform 'hindcasts' by solving for failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened, when it happened, and we can search data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real-time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate = 1.0 × 10^-5 s^-1) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
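Hindcasts with Voight's relation commonly use the inverse-rate form: for the common exponent α = 2, the reciprocal of the precursor rate decays linearly in time, so a straight-line fit through (t, 1/rate) crosses zero at the failure onset time. A minimal sketch of that inverse-rate method (this is the generic failure forecast method, not necessarily the exact fitting procedure used in the study):

```python
def inverse_rate_failure_forecast(times, rates):
    """Inverse-rate failure forecast (Voight relation, alpha = 2):
    least-squares line through (t, 1/rate); its zero crossing is the
    forecast failure time t_f."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(inv) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, inv)) / \
            sum((t - tbar) ** 2 for t in times)
    intercept = ybar - slope * tbar
    return -intercept / slope  # time at which 1/rate extrapolates to zero
```

Applying this to growing fractions of the precursory data, as described above, shows how the forecast converges (or fails to converge) toward the true failure time.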
Nagel, Thomas; Kelly, Daniel J
2013-04-01
The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.
Lessons from (triggered) tremor
Gomberg, Joan
2010-01-01
I test a “clock-advance” model that implies triggered tremor is ambient tremor that occurs at a sped-up rate as a result of loading from passing seismic waves. This proposed model predicts that triggering probability is proportional to the product of the ambient tremor rate and a function describing the efficacy of the triggering wave to initiate a tremor event. Using data mostly from Cascadia, I have compared qualitatively a suite of teleseismic waves that did and did not trigger tremor with ambient tremor rates. Many of the observations are consistent with the model if the efficacy of the triggering wave depends on wave amplitude. One triggered tremor observation clearly violates the clock-advance model. The model prediction that larger triggering waves result in larger triggered tremor signals also appears inconsistent with the measurements. I conclude that the tremor source process is a more complex system than that described by the clock-advance model predictions tested. Results of this and previous studies also demonstrate that (1) conditions suitable for tremor generation exist in many tectonic environments, but, within each, only occur at particular spots whose locations change with time; (2) any fluid flow must be restricted to less than a meter; (3) the degree to which delayed failure and secondary triggering occur is likely insignificant; and (4) both shear and dilatational deformations may trigger tremor. Triggered and ambient tremor rates correlate more strongly with stress than stressing rate, suggesting tremor sources result from time-dependent weakening processes rather than simple Coulomb failure.
Nucleation, growth and localisation of microcracks: implications for predictability of rock failure
NASA Astrophysics Data System (ADS)
Main, I. G.; Kun, F.; Pál, G.; Jánosi, Z.
2016-12-01
The spontaneous emergence of localized co-operative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uniaxial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. 
The consistency of this model with experimental and field results confirms the critical roles of pre-existing heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
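The accelerating event rate described above can be written as a time-reversed Omori law, n(t) = k / (c + t_f − t)^p. A minimal Python sketch with illustrative constants, not fitted values:

```python
def inverse_omori_rate(t, t_f, k=100.0, c=1.0, p=1.0):
    """Time-reversed Omori law: event rate accelerating as a power law of
    the time remaining before macroscopic failure at t_f.
    k, c, and p are illustrative constants, not fitted values."""
    if t >= t_f:
        raise ValueError("t must precede the failure time t_f")
    return k / (c + (t_f - t)) ** p

# The rate grows monotonically as the system approaches failure at t_f = 100.
rates = [inverse_omori_rate(t, 100.0) for t in (0.0, 50.0, 90.0, 99.0)]
```

Fitting k, c, and p to the observed fracture count or energy release rate is what permits the forecasts of failure time discussed above.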
Abrahamyan, G
2017-01-01
Occurrence of pregnancy after in vitro fertilization depends on two components: the functional adequacy of the embryo at the blastocyst stage and the receptivity of the endometrium, which, according to modern understanding, are determinant in achieving optimal conditions for implantation. From the point of view of pregnancy occurrence, as well as its further development, implantation is the most crucial phase of IVF/ICSI and ET. At the same time, this phase is also the most vulnerable. Multiple studies have proven the role of maternal thrombophilia in the genesis of gestational complications and early embryo losses, but in the context of IVF there is still much to be detailed. The objective of this work was to increase the efficiency of IVF and to investigate the causes of IVF failures related to thrombophilic genetic mutations and polymorphisms. To achieve this goal, 354 women with infertility who presented to the department of assisted reproductive technologies (ART) for infertility treatment by means of IVF were examined. Of these, 237 (66.9%) had primary infertility and 117 (33.1%) secondary infertility. For 228 of these women the IVF (in vitro fertilization) program was undertaken for the first time (study group 1); 126 patients had a history of failed IVF (1 to 9 failed attempts). Patients were 23 to 43 years of age. The obtained results confirm the relation between hemostasis defects, changes in hemostasis system activity, and the efficiency of IVF. One of the main reasons for IVF failure and, probably, for infertility is disturbance of the hemostasis system of a thrombophilic nature. A high correlation is established between hemostasis system disturbances of a thrombophilic nature, preconditioned by genetic mutations and polymorphisms, and failed IVFs. Failure of IVF is an indication for expanded examination of genetically determined factors of the hemostasis system.
When genetic defects of a thrombophilic nature are present in the hemostasis system, the risk of failure in an IVF program is two or more times higher.
Computational implications of activity-dependent neuronal processes
NASA Astrophysics Data System (ADS)
Goldman, Mark Steven
Synapses, the connections between neurons, often fail to transmit a large percentage of the action potentials that they receive. I describe several models of synaptic transmission at a single stochastic synapse with an activity-dependent probability of transmission and demonstrate how synaptic transmission failures may increase the efficiency with which a synapse transmits information. Spike trains in the visual cortex of freely viewing monkeys have positive autocorrelations that are indicative of a redundant representation of the information they contain. I show how a synapse with activity-dependent transmission failures modeled after those occurring in visual cortical synapses can remove this redundancy by transmitting a decorrelated subset of the spike trains it receives. I suggest that redundancy reduction at individual synapses saves synaptic resources while increasing the sensitivity of the postsynaptic neuron to information arriving along many inputs. For a neuron receiving input from many decorrelating synapses, my analysis leads to a prediction of the number of visual inputs to a neuron and the cross-correlations between these inputs and suggests that the time scale of synaptic dynamics observed in sensory areas corresponds to a fundamental time scale for processing sensory information. Systems with activity-dependent changes in their parameters, or plasticity, often display a wide variability in their individual components that belies the stability of their function. Motivated by experiments demonstrating that identified neurons with stereotyped function can have a large variability in the densities of their ion channels, or ionic conductances, I build a conductance-based model of a single neuron. The neuron's firing activity is relatively insensitive to changes in certain combinations of conductances, but markedly sensitive to changes in other combinations. 
Using a combined modeling and experimental approach, I show that neuromodulators and regulatory processes target sensitive combinations of conductances. I suggest that the variability observed in conductance measurements occurs along insensitive combinations of conductances and could result from homeostatic processes that allow the neuron's conductances to drift without triggering activity- dependent feedback mechanisms. These results together suggest that plastic systems may have a high degree of flexibility and variability in their components without a loss of robustness in their response properties.
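The decorrelating effect of activity-dependent transmission failures can be illustrated with a toy depressing synapse: release probability drops after each transmitted spike and recovers exponentially between spikes, so closely spaced (redundant) spikes are filtered more heavily than sparse ones. This is a generic depression model with illustrative parameters, not the fitted model from the work above:

```python
import math
import random

def transmit_train(spike_times, p_max=0.9, depress=0.5, tau=5.0, rng=None):
    """Stochastic synapse with activity-dependent failures: transmission
    probability is multiplied by `depress` after each transmitted spike and
    relaxes back toward p_max with time constant tau. Values illustrative."""
    rng = rng or random.Random(0)
    p, last_t, transmitted = p_max, None, []
    for t in spike_times:
        if last_t is not None:
            # Recover the probability deficit exponentially over the gap.
            p = p_max - (p_max - p) * math.exp(-(t - last_t) / tau)
        if rng.random() < p:
            transmitted.append(t)
            p *= depress
        last_t = t
    return transmitted

burst = [0.1 * i for i in range(2000)]    # closely spaced, redundant spikes
sparse = [50.0 * i for i in range(2000)]  # widely spaced spikes
```

A burst is thinned far more aggressively than a sparse train, which is the redundancy-reduction effect described above.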
Prochaska, Judith J
2010-08-01
In mental health and addiction treatment settings, failure to treat tobacco dependence has been rationalized by some as a clinical approach to harm reduction. That is, tobacco use is viewed as a less harmful alternative to alcohol or illicit drug use and/or other self-harm behaviors. This paper examines the impact of providers' failure to treat tobacco use on patients' alcohol and illicit drug use and associated high-risk behaviors. The weight of the evidence in the literature indicates: (1) tobacco use is a leading cause of death in patients with psychiatric illness or addictive disorders; (2) tobacco use is associated with worsened substance abuse treatment outcomes, whereas treatment of tobacco dependence supports long-term sobriety; (3) tobacco use is associated with increased (not decreased) depressive symptoms and suicidal risk behavior; (4) tobacco use adversely impacts psychiatric treatment; (5) tobacco use is a lethal and ineffective long-term coping strategy for managing stress; and (6) treatment of tobacco use does not harm mental health recovery. Failure to treat tobacco dependence in mental health and addiction treatment settings is not consistent with a harm reduction model. In contrast, emerging evidence indicates treatment of tobacco dependence may even improve addiction treatment and mental health outcomes. Providers in mental health and addiction treatment settings have an ethical duty to intervene on patients' tobacco use and provide available evidence-based treatments. Copyright (c) 2010. Published by Elsevier Ireland Ltd.
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Because of their relatively thick reaction zones, current characterization requires large-scale testing to obtain steady detonation waves for analysis. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
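The core probabilistic design calculation rests on the two-parameter Weibull distribution for brittle strength. A minimal sketch for a uniform uniaxial stress (CARES/Life itself integrates over the component's stress field from the finite element results):

```python
import math

def weibull_failure_probability(stress, sigma_0, m):
    """Two-parameter Weibull probability of failure for a brittle specimen
    under a uniform applied stress. sigma_0 is the characteristic strength
    (63.2% failure probability) and m is the Weibull modulus; higher m
    means less strength scatter."""
    return 1.0 - math.exp(-((stress / sigma_0) ** m))
```

At a stress equal to sigma_0 the failure probability is 1 − 1/e ≈ 0.632 regardless of m; the modulus controls how sharply the probability rises around that point, which is why widely varying strength values (low m) make brittle components hard to design deterministically.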
Analysis of Weibull Grading Test for Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
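The closing point, that acceleration factors follow from a log-linear relationship for the characteristic life measured at two stress levels, can be sketched for a single stress (voltage) as an inverse power law. The numbers below are illustrative, not the paper's fitted parameters:

```python
import math

def voltage_exponent(eta1, v1, eta2, v2):
    """Solve the inverse-power-law exponent n from Weibull characteristic
    lives eta1, eta2 measured at voltages v1, v2 (ln eta linear in ln V)."""
    return math.log(eta1 / eta2) / math.log(v2 / v1)

def acceleration_factor(v_use, v_test, n):
    """Life acceleration factor between use and test voltage."""
    return (v_test / v_use) ** n

# Illustrative: characteristic life drops 100x when the voltage doubles.
n = voltage_exponent(1000.0, 10.0, 10.0, 20.0)
af = acceleration_factor(10.0, 16.0, n)
```

Temperature enters the general log-linear model the same way, as an additional term in ln(eta), typically Arrhenius (linear in 1/T); getting these exponents wrong is exactly what produces the orders-of-magnitude failure-rate errors noted above.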
Reliability of High-Voltage Tantalum Capacitors, Parts 3 and 4
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
Basaria, Shehzad
2014-04-05
Male hypogonadism is a clinical syndrome that results from failure to produce physiological concentrations of testosterone, normal amounts of sperm, or both. Hypogonadism may arise from testicular disease (primary hypogonadism) or dysfunction of the hypothalamic-pituitary unit (secondary hypogonadism). Clinical presentations vary depending on the time of onset of androgen deficiency, whether the defect is in testosterone production or spermatogenesis, associated genetic factors, or history of androgen therapy. The clinical diagnosis of hypogonadism is made on the basis of signs and symptoms consistent with androgen deficiency and low morning testosterone concentrations in serum on multiple occasions. Several testosterone-replacement therapies are approved for treatment and should be selected according to the patient's preference, cost, availability, and formulation-specific properties. Contraindications to testosterone-replacement therapy include prostate and breast cancers, uncontrolled congestive heart failure, severe lower-urinary-tract symptoms, and erythrocytosis. Treatment should be monitored for benefits and adverse effects. Copyright © 2014 Elsevier Ltd. All rights reserved.
45 CFR 1303.7 - Effect of failure to file or serve documents in a timely manner.
Code of Federal Regulations, 2010 CFR
2010-10-01
45 CFR Title 45, Public Welfare (2010-10-01): § 1303.7, Prospective Delegate Agencies, General. Effect of failure to file or serve documents in a timely manner: a filing misses the requisite deadlines or time frames if it exceeds them by any amount; paragraph (d) addresses the time to file an appeal.
NASA Technical Reports Server (NTRS)
Saltsman, James F.; Halford, Gary R.
1989-01-01
Procedures are presented for characterizing an alloy and predicting cyclic life for isothermal and thermomechanical fatigue conditions by using the total strain version of strainrange partitioning (TS-SRP). Numerical examples are given. Two independent alloy characteristics are deemed important: failure behavior, as reflected by the inelastic strainrange versus cyclic life relations; and flow behavior, as indicated by the cyclic stress-strain-time response (i.e., the constitutive behavior). Failure behavior is characterized by conducting creep-fatigue tests in the strain regime, wherein the testing times are reasonably short and the inelastic strains are large enough to be determined accurately. At large strainranges, stress-hold, strain-limited tests are preferred because a high rate of creep damage per cycle is inherent in this type of test. At small strainranges, strain-hold cycles are more appropriate. Flow behavior is characterized by conducting tests wherein the specimen is usually cycled far short of failure and the wave shape is appropriate for the duty cycle of interest. In characterizing an alloy, pure fatigue (PP) failure tests are conducted first. Then, depending on the needs of the analyst, a series of creep-fatigue tests is conducted. As many of the three generic SRP cycles are featured as are required to characterize the influence of creep on fatigue life (i.e., CP, PC, and CC cycles, respectively, for tensile creep only, compressive creep only, and both tensile and compressive creep). Any mean stress effects on life must also be determined and accounted for when determining the SRP inelastic strainrange versus life relations for cycles featuring creep. This is particularly true for small strainranges. The life relations are thus established for a theoretical zero mean stress condition.
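The failure-behavior half of the characterization reduces, for each generic SRP cycle type, to an inelastic strainrange versus cyclic life relation of the Manson-Coffin power-law form, delta_eps_in = C * N_f**c. A sketch with illustrative constants (the method fits separate C, c pairs for the PP, CP, PC, and CC cycle types):

```python
def cyclic_life(inelastic_strainrange, C=0.5, c=-0.6):
    """Invert the Manson-Coffin style SRP life relation
    delta_eps_in = C * N_f**c for the cycles to failure N_f.
    C and c are illustrative constants for one cycle type."""
    return (inelastic_strainrange / C) ** (1.0 / c)
```

Smaller inelastic strainranges give longer lives, which is why the failure tests are run at strainranges large enough for the inelastic part to be measured accurately.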
NASA Astrophysics Data System (ADS)
Belapurkar, Rohit K.
Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.
Reactor fuel failure data sets in the form of initial power (Pi), final power (Pf), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM, and a nonlinear regression based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain rate dependent stress-strain properties of the Zircaloy cladding.
On the failure load and mechanism of polycrystalline graphene by nanoindentation
Sha, Z. D.; Wan, Q.; Pei, Q. X.; Quek, S. S.; Liu, Z. S.; Zhang, Y. W.; Shenoy, V. B.
2014-01-01
Nanoindentation has been recently used to measure the mechanical properties of polycrystalline graphene. However, the measured failure loads are found to be scattered widely and vary from lab to lab. We perform molecular dynamics simulations of nanoindentation on polycrystalline graphene at different sites including grain center, grain boundary (GB), GB triple junction, and holes. Depending on the relative position between the indenter tip and defects, significant scattering in failure load is observed. This scattering is found to arise from a combination of the non-uniform stress state, varied and weakened strengths of different defects, and the relative location between the indenter tip and the defects in polycrystalline graphene. Consequently, the failure behavior of polycrystalline graphene by nanoindentation is critically dependent on the indentation site, and is thus distinct from uniaxial tensile loading. Our work highlights the importance of the interaction between the indentation tip and defects, and the need to explicitly consider the defect characteristics at and near the indentation site in polycrystalline graphene during nanoindentation. PMID:25500732
Phase dependent fracture and damage evolution of polytetrafluoroethylene (PTFE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, E. N.; Rae, P.; Orler, E. B.
2004-01-01
Compared with other polymers, polytetrafluoroethylene (PTFE) presents several advantages for load-bearing structural components, including higher strength at elevated temperatures and higher toughness at lowered temperatures. Failure sensitive applications of PTFE include surgical implants, aerospace components, and chemical barriers. Polytetrafluoroethylene is semicrystalline in nature, with its linear chains forming complicated phases near room temperature and ambient pressure. The presence of three unique phases near room temperature implies that failure during standard operating conditions may be strongly dependent on the phase. This paper presents a comprehensive and systematic study of fracture and damage evolution in PTFE to elicit the effects of temperature-induced phase on fracture mechanisms. The fracture behavior of PTFE is observed to undergo transitions from brittle fracture below 19 °C, to ductile fracture with crazing and some stable crack growth, to plastic flow above 30 °C. The bulk failure properties are correlated to failure mechanisms through fractography and analysis of the crystalline structure.
Swaminathan, Soumya; Pasipanodya, Jotam G.; Ramachandran, Geetha; Hemanth Kumar, A. K.; Srivastava, Shashikant; Deshpande, Devyani; Nuermberger, Eric; Gumbo, Tawanda
2016-01-01
Background. The role of drug concentrations in clinical outcomes in children with tuberculosis is unclear. Target concentrations for dose optimization are unknown. Methods. Plasma drug concentrations measured in Indian children with tuberculosis were modeled using compartmental pharmacokinetic analyses. The children were followed until end of therapy to ascertain therapy failure or death. An ensemble of artificial intelligence algorithms, including random forests, was used to identify predictors of clinical outcome from among 30 clinical, laboratory, and pharmacokinetic variables. Results. Among the 143 children with known outcomes, there was high between-child variability of isoniazid, rifampin, and pyrazinamide concentrations: 110 (77%) completed therapy, 24 (17%) failed therapy, and 9 (6%) died. The main predictors of therapy failure or death were a pyrazinamide peak concentration <38.10 mg/L and a rifampin peak concentration <3.01 mg/L. The relative risk of these poor outcomes below these peak concentration thresholds was 3.64 (95% confidence interval [CI], 2.28–5.83). Isoniazid had concentration-dependent antagonism with rifampin and pyrazinamide, with an adjusted odds ratio for therapy failure of 3.00 (95% CI, 2.08–4.33) in the antagonistic concentration range. In regard to death alone as an outcome, the same drug concentrations, plus z scores (indicators of malnutrition) and age <3 years, were highly ranked predictors. In children <3 years old, an isoniazid 0- to 24-hour area under the concentration-time curve <11.95 mg/L × hour and/or a rifampin peak <3.10 mg/L were the best predictors of therapy failure, with a relative risk of 3.43 (95% CI, .99–11.82). Conclusions. We have identified new antibiotic target concentrations, which are potential biomarkers associated with treatment failure and death in children with tuberculosis. PMID:27742636
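The headline relative risk (3.64 for poor outcomes below the peak-concentration thresholds) is the standard 2x2-table estimator with a Katz log-method confidence interval. A sketch with made-up counts, since the study's actual cell counts are not given in the abstract:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk for a 2x2 table: a/b = events/non-events below the
    concentration threshold, c/d = events/non-events above it, with a
    Katz log-method 95% CI. Counts here are illustrative only."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi
```

A CI whose lower bound stays above 1 (as with 2.28 in the study) is what supports treating the threshold as a candidate biomarker.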
Revisiting the stability of mini-implants used for orthodontic anchorage.
Yao, Chung-Chen Jane; Chang, Hao-Hueng; Chang, Jenny Zwei-Chieng; Lai, Hsiang-Hua; Lu, Shao-Chun; Chen, Yi-Jane
2015-11-01
The aim of this study is to comprehensively analyze the potential factors affecting the failure rates of three types of mini-implants used for orthodontic anchorage. Data were collected on 727 mini-implants (miniplates, predrilled titanium miniscrews, and self-drilling stainless steel miniscrews) in 220 patients. The factors related to mini-implant failure were investigated using a Chi-square test for univariate analysis and a generalized estimating equation model for multivariate analysis. The failure rate for miniplates was significantly lower than for miniscrews. All types of mini-implants, especially the self-drilling stainless steel miniscrews, showed decreased stability if the previous implantation had failed. The stability of predrilled titanium miniscrews and self-drilling stainless steel miniscrews was comparable at the first implantation. However, the failure rate of stainless steel miniscrews increased at the second implantation. The univariate analysis showed that the following variables had a significant influence on the failure rates of mini-implants: age of patient, type of mini-implant, site of implantation, and characteristics of the soft tissue around the mini-implants. The generalized estimating equation analysis revealed that mini-implants with miniscrews used in patients younger than 35 years, subjected to orthodontic loading after 30 days and implanted on the alveolar bone ridge, have a significantly higher risk of failure. This study revealed that once the dental surgeon becomes familiar with the procedure, the stability of orthodontic mini-implants depends on the type of mini-implant, age of the patient, implantation site, and the healing time of the mini-implant. Miniplates are a more feasible anchorage system when miniscrews fail repeatedly. Copyright © 2014. Published by Elsevier B.V.
A Novel Solution-Technique Applied to a Novel WAAS Architecture
NASA Technical Reports Server (NTRS)
Bavuso, J.
1998-01-01
The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time-to-failure distribution. Moreover, the cold spare satellite build time is Weibull distributed, and upon launch the spare is considered to be a good-as-new system, again with an increasing failure rate and a Weibull time-to-failure distribution. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure for the launched spare satellite. A powerful dynamic fault tree modeling notation and a Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10 year mission.
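Because all three clocks are Weibull distributed, the non-Markovian model is a natural fit for Monte Carlo simulation. A heavily simplified sketch with one satellite and one cold spare, ignoring coverage gaps and importance sampling; all parameters are illustrative, not the WAAS study's values:

```python
import random

def mission_survives(rng, mission=10.0, sat_scale=15.0, sat_shape=2.0,
                     build_scale=1.0, build_shape=2.0):
    """One Monte Carlo trial: a Weibull-lifetime satellite (shape > 1 gives
    an increasing failure rate); on failure a cold spare is built (Weibull
    build time) and launched good-as-new. Parameters are illustrative."""
    t_fail = rng.weibullvariate(sat_scale, sat_shape)
    if t_fail >= mission:
        return True
    t_launch = t_fail + rng.weibullvariate(build_scale, build_shape)
    if t_launch >= mission:
        return False  # spare not ready before the end of the mission
    return t_launch + rng.weibullvariate(sat_scale, sat_shape) >= mission

rng = random.Random(42)
trials = 20_000
reliability = sum(mission_survives(rng) for _ in range(trials)) / trials
```

The actual study additionally requires a predetermined number of failures before fabrication begins, and uses importance sampling because plain Monte Carlo needs many trials to resolve rare mission failures.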
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Ammoser, Hendrik; Kühnert, Christian
2006-04-01
In this paper we discuss the problem of information losses in organizations and how they depend on the organization network structure. Hierarchical networks are an optimal organization structure only when the failure rate of nodes or links is negligible. Otherwise, redundant information links are important to reduce the risk of information losses and the related costs. However, as redundant information links are expensive, the optimal organization structure is not a fully connected one. It rather depends on the failure rate. We suggest that sidelinks and temporary, adaptive shortcuts can improve the information flows considerably by generating small-world effects. This calls for modified organization structures to cope with today's challenges of businesses and administrations, in particular, to successfully respond to crises or disasters.
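The small-world effect of a single redundant sidelink can be demonstrated on a toy hierarchy with breadth-first search; the network below is illustrative, not one of the paper's organization models:

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# A three-level hierarchy (undirected): 0 supervises 1 and 2, which each
# supervise two subordinates.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6],
        3: [1], 4: [1], 5: [2], 6: [2]}
# The same hierarchy plus one redundant sidelink between subordinate groups.
with_sidelink = {k: list(v) for k, v in tree.items()}
with_sidelink[3].append(5)
with_sidelink[5].append(3)
```

One extra link shortens the mean path (from 96/42 to 82/42 here) and keeps information flowing if a hierarchical link fails, at the cost of maintaining the redundant channel.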
Tensile Strength of Carbon Nanotubes Under Realistic Temperature and Strain Rate
NASA Technical Reports Server (NTRS)
Wei, Chen-Yu; Cho, Kyeong-Jae; Srivastava, Deepak; Biegel, Bryan (Technical Monitor)
2002-01-01
Strain rate and temperature dependence of the tensile strength of single-wall carbon nanotubes has been investigated with molecular dynamics simulations. The tensile failure or yield strain is found to be strongly dependent on the temperature and strain rate. A transition state theory based predictive model is developed for the tensile failure of nanotubes. Based on the parameters fitted from high-strain rate and temperature dependent molecular dynamics simulations, the model predicts that a defect free micrometer long single-wall nanotube at 300 K, stretched with a strain rate of 1%/hour, fails at about 9 plus or minus 1% tensile strain. This is in good agreement with recent experimental findings.
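A transition-state-theory model of this kind typically makes the failure strain decrease linearly with temperature and logarithmically with decreasing strain rate, since slower loading leaves more time for thermally activated bond breaking. A sketch with constants chosen only to reproduce the quoted ~9% figure, not the paper's fitted parameters:

```python
import math

def failure_strain(temperature, strain_rate, c1=0.18, c2=9e-6, c3=1e9):
    """Transition-state-theory style estimate of tensile failure strain as a
    function of temperature (K) and strain rate (1/s). c1, c2, c3 are
    illustrative constants tuned to give ~9% at 300 K and 1%/hour."""
    return c1 - c2 * temperature * math.log(c3 / strain_rate)

strain_rate_per_s = 0.01 / 3600.0          # 1% per hour
eps_fail = failure_strain(300.0, strain_rate_per_s)
```

This functional form is why the extrapolation matters: molecular dynamics can only simulate very high strain rates, and the logarithmic rate dependence bridges the many orders of magnitude down to laboratory loading rates.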
Compression Fracture of CFRP Laminates Containing Stress Intensifications.
Leopold, Christian; Schütt, Martin; Liebig, Wilfried V; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl; Fiedler, Bodo
2017-09-05
For brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist, which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by the buckling fibres; a shear-driven fibre compressive failure also promotes or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation on the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. Different effects that influence the compression failure and the role the stacking sequence has on damage development and the resulting compressive strength are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out for comparison as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as the final failure mechanisms. 
The formation of a kink-band and, finally, steady-state kinking are shifted to higher compressive strains with decreasing ply thickness. The final failure mode in laminates with stress intensification depends on ply thickness. In thick or inner plies, damage initiates as shear failure and fibre buckling into the drilled hole. The kink-band orientation angle changes with increasing strain. In outer or thin plies, shear failure of single fibres is observed as the first damage, and the kink-band orientation angle is constant until final failure. Decreasing ply thickness increases the unnotched compressive strength. When stress intensifications are present, the position of the 0°-layer is critical for stability under compression and is thus more important than the ply thickness. Central 0°-layers show the best results for OHC and CAI strength due to higher bending stiffness and the better supporting effect of the adjacent layers.
Compression Fracture of CFRP Laminates Containing Stress Intensifications
Schütt, Martin; Philipkowski, Timo; Kürten, Jonas; Schulte, Karl
2017-01-01
For brittle fracture behaviour of carbon fibre reinforced plastics (CFRP) under compression, several approaches exist, which describe different mechanisms during failure, especially at stress intensifications. The failure process is not only initiated by the buckling fibres: a shear-driven fibre compressive failure facilitates or initiates the formation of fibres into a kink-band. Starting from this kink-band, further damage can be detected, which leads to the final failure. The subject of this work is an experimental investigation of the influence of ply thickness and stacking sequence in quasi-isotropic CFRP laminates containing stress intensifications under compression loading. Different effects that influence the compression failure, and the role the stacking sequence has on damage development and the resulting compressive strength, are identified and discussed. The influence of stress intensifications is investigated in detail at a hole in open hole compression (OHC) tests. A proposed interrupted test approach allows identifying the mechanisms of damage initiation and propagation from the free edge of the hole by causing a distinct damage state and examining it at a precise instant of time during the fracture process. Compression after impact (CAI) tests are executed in order to compare the OHC results to a different type of stress intensification. Unnotched compression tests are carried out as a reference. With this approach, a more detailed description of the failure mechanisms during the sudden compression failure of CFRP is achieved. By microscopic examination of single plies from various specimens, the different effects that influence the compression failure are identified. First damage of fibres always occurs in the 0°-ply. Fibre shear failure leads to local microbuckling and the formation and growth of a kink-band as the final failure mechanisms.
The formation of a kink-band and, finally, steady-state kinking are shifted to higher compressive strains with decreasing ply thickness. The final failure mode in laminates with stress intensification depends on ply thickness. In thick or inner plies, damage initiates as shear failure and fibre buckling into the drilled hole. The kink-band orientation angle changes with increasing strain. In outer or thin plies, shear failure of single fibres is observed as the first damage, and the kink-band orientation angle is constant until final failure. Decreasing ply thickness increases the unnotched compressive strength. When stress intensifications are present, the position of the 0°-layer is critical for stability under compression and is thus more important than the ply thickness. Central 0°-layers show the best results for OHC and CAI strength due to higher bending stiffness and the better supporting effect of the adjacent layers. PMID:28872623
ERIC Educational Resources Information Center
Dante, Angelo; Fabris, Stefano; Palese, Alvisa
2013-01-01
Empirical studies and conceptual frameworks presented in the extant literature offer a static imagining of academic failure. Time-to-event analysis, which captures the dynamism of individual factors (such as when they determine failure) so that timely strategies can be properly tailored, requires longitudinal studies, which are still lacking within the field. The…
Leukocyte diversity in resolving and nonresolving mechanisms of cardiac remodeling.
Tourki, Bochra; Halade, Ganesh
2017-10-01
In response to myocardial infarction (MI), time-dependent leukocyte infiltration is critical to program the acute inflammatory response. Post-MI leukocyte density, residence time in the infarcted area, and exit from the infarcted injury predict resolving or nonresolving inflammation. Overactive or unresolved inflammation is the primary determinant in heart failure pathology post-MI. Here, our review describes supporting evidence that the acute inflammatory response also guides the generation of healing and regenerative mediators after cardiac damage. Time-dependent leukocyte density and diversity and the magnitude of myocardial injury are responsible for the resolving and nonresolving pathways in myocardial healing. Post-MI, the diversity of leukocytes, such as neutrophils, macrophages, and lymphocytes, that regulate the clearance of deceased cardiomyocytes through the classic and reparative pathways has been explored. Among the innovative factors and intermediates that have been recognized as essential in the acute self-healing and clearance mechanism, we highlight specialized proresolving mediators as the emerging factor for post-MI reparative mechanisms, along with translational leukocyte modifiers such as aging, the source of leukocytes, and the milieu around the leukocytes. In the clinical setting, it is possible that leukocyte diversity is more prominent as a result of risk factors, such as obesity, diabetes, and hypertension. Pharmacologic agents are critical modifiers of leukocyte diversity in healing mechanisms that may impair or stimulate the clearance mechanism. Future research is needed, with a focused approach to understand the molecular targets, cellular effectors, and receptors. A clear understanding of resolving and nonresolving inflammation in myocardial healing will help to develop novel targets with major emphasis on the resolution of inflammation in heart failure pathology.-Tourki, B., Halade, G.
Leukocyte diversity in resolving and nonresolving mechanisms of cardiac remodeling. © FASEB.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statins, and statin type (lipophilic or hydrophilic), improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to the time of all-cause, cardiovascular, and HF mortality or the end of the study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin use compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with a significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
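The inverse-probability-of-treatment weighting named in the abstract can be sketched in a few lines. This is a hedged illustration of the general technique, not the study's code: the treatment indicators and propensity scores below are hypothetical.

```python
# Inverse-probability-of-treatment weighting (IPTW): each treated subject
# is weighted by 1/ps, each untreated subject by 1/(1-ps), where ps is the
# estimated propensity score P(treatment | covariates).

def iptw_weights(treated, propensity):
    """Unstabilized IPTW weights for binary treatment."""
    return [1.0 / ps if t else 1.0 / (1.0 - ps)
            for t, ps in zip(treated, propensity)]

# Hypothetical subjects: treatment flag and propensity of statin use
treated = [1, 1, 0, 0]
propensity = [0.8, 0.5, 0.5, 0.2]
weights = iptw_weights(treated, propensity)
print(weights)  # → [1.25, 2.0, 2.0, 1.25]
```

The weighted pseudo-population balances measured confounders between treatment groups, after which a marginal structural (e.g. Cox) model can be fit with these weights.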
Komada, Fusao
2018-01-01
The aim of this study was to investigate the time-to-onset of drug-induced interstitial lung disease (DILD) following the administration of small molecule molecularly-targeted drugs via the use of the spontaneous adverse reaction reporting system of the Japanese Adverse Drug Event Report database. DILD datasets for afatinib, alectinib, bortezomib, crizotinib, dasatinib, erlotinib, everolimus, gefitinib, imatinib, lapatinib, nilotinib, osimertinib, sorafenib, sunitinib, temsirolimus, and tofacitinib were used to calculate the median onset times of DILD and the Weibull distribution parameters, and to perform the hierarchical cluster analysis. The median onset times of DILD for afatinib, bortezomib, crizotinib, erlotinib, gefitinib, and nilotinib were within one month. The median onset times of DILD for dasatinib, everolimus, lapatinib, osimertinib, and temsirolimus ranged from 1 to 2 months. The median onset times of the DILD for alectinib, imatinib, and tofacitinib ranged from 2 to 3 months. The median onset times of the DILD for sunitinib and sorafenib ranged from 8 to 9 months. Weibull distributions for these drugs when using the cluster analysis showed that there were 4 clusters. Cluster 1 described a subgroup with early to later onset DILD and early failure type profiles or a random failure type profile. Cluster 2 exhibited early failure type profiles or a random failure type profile with early onset DILD. Cluster 3 exhibited a random failure type profile or wear out failure type profiles with later onset DILD. Cluster 4 exhibited an early failure type profile or a random failure type profile with the latest onset DILD.
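The Weibull analysis described above relies on the shape parameter to classify hazard profiles and on a closed-form median. A minimal sketch, with invented parameters rather than the study's fitted values:

```python
import math

# For a Weibull(shape, scale) time-to-onset distribution:
#   shape < 1  -> decreasing hazard  (early failure type)
#   shape ≈ 1  -> constant hazard    (random failure type)
#   shape > 1  -> increasing hazard  (wear-out failure type)
# and the median onset time is scale * ln(2)**(1/shape).

def weibull_median(shape, scale):
    return scale * math.log(2) ** (1.0 / shape)

def failure_type(shape, tol=0.05):
    if shape < 1 - tol:
        return "early failure type (decreasing hazard)"
    if shape > 1 + tol:
        return "wear-out failure type (increasing hazard)"
    return "random failure type (roughly constant hazard)"

# Hypothetical fit, scale in days:
print(weibull_median(1.0, 40.0))  # ≈ 27.7 days
print(failure_type(0.8))          # early failure type
```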
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity analysis and relative sensitivity analysis of the system reliability and the mean time to failure with respect to system parameters are also investigated.
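The reliability-to-MTTF relation underlying such derivations is MTTF = ∫₀^∞ R(t) dt. A minimal sketch for a toy two-unit parallel system with exponential failure times (an illustration of the relation, not the paper's M primary / W standby / R station model; the failure rate is hypothetical):

```python
import math

lam = 0.5  # failure rate of each unit (hypothetical)

def R(t):
    # P(at least one of two independent exponential units still works)
    return 1.0 - (1.0 - math.exp(-lam * t)) ** 2

# Closed form: MTTF = 2/lam - 1/(2*lam) = 3/(2*lam)
mttf_exact = 3.0 / (2.0 * lam)

# Numerical check: integrate R(t) on a fine grid
dt = 0.001
mttf_num = sum(R(i * dt) * dt for i in range(200000))
print(round(mttf_exact, 2), round(mttf_num, 2))  # 3.0 3.0
```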
NASA Technical Reports Server (NTRS)
Schneeweiss, W.
1977-01-01
It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system whose subsystems are maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
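The signal-based detection idea described above reduces to flagging measurements that depart from their predetermined mean by more than some number of standard deviations. A hedged sketch of that idea — the channel statistics, readings, and threshold are invented, not actual SSME values or SAFD logic:

```python
# Flag sample indices whose deviation from the channel mean exceeds
# k standard deviations.

def detect_anomalies(samples, mean, std, k=3.0):
    return [i for i, x in enumerate(samples)
            if abs(x - mean) > k * std]

# Hypothetical sensor channel: mean 100.0, std 2.0, so the 3-sigma
# band is [94.0, 106.0].
readings = [99.5, 101.2, 100.3, 93.0, 100.1, 107.5]
print(detect_anomalies(readings, 100.0, 2.0))  # → [3, 5]
```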
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection, it is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time, and stresses the need for its precise estimation; ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism; and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
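For the independent-markers case, one natural reading of a "double hit" model is that onset requires both markers to have failed, so the onset time is T = max(T1, T2) and its CDF factorizes as F(t) = F1(t)·F2(t). The sketch below illustrates that factorization only; the exponential marginals and rates are assumptions for illustration, not the paper's specification:

```python
import math

def onset_cdf(t, rate1, rate2):
    # CDF of max(T1, T2) for independent exponential marker times:
    # both "hits" must have occurred by time t.
    F1 = 1.0 - math.exp(-rate1 * t)
    F2 = 1.0 - math.exp(-rate2 * t)
    return F1 * F2

# Hypothetical rates (per year) for the two markers
print(round(onset_cdf(10.0, 0.2, 0.1), 4))  # → 0.5466
```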
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
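The decision rule defined above can be stated in a few lines: the transient event margin is the gap between CTmax and LTmax, and model failure in a new locale is judged likely when that margin is large relative to environmental fluctuations. A small sketch with hypothetical temperatures:

```python
# TEM = gap between energetic performance failure (CTmax) and the
# upper lethal limit (LTmax). If TEM is large relative to environmental
# fluctuations, the biogeographic model is likely to fail in new locales.

def model_likely_to_fail(ct_max, lt_max, env_fluctuation):
    tem = lt_max - ct_max
    return tem > env_fluctuation

print(model_likely_to_fail(30.0, 42.0, 5.0))  # large margin: True
print(model_likely_to_fail(30.0, 33.0, 5.0))  # small margin: False
```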
Interfacial characterization of flexible hybrid electronics
NASA Astrophysics Data System (ADS)
Najafian, Sara; Amirkhizi, Alireza V.; Stapleton, Scott
2018-03-01
Flexible Hybrid Electronics (FHEs) are the new generation of electronics combining flexible plastic film substrates with electronic devices. Besides the electrical features, design improvements of FHEs depend on the prediction of their mechanical and failure behavior. Debonding of electronic components from the flexible substrate is one of the most common and critical failures of these devices; therefore, the experimental determination of material and interface properties is of great importance in the prediction of failure mechanisms. Traditional interface characterization involves isolated shear and normal mode tests such as the double cantilever beam (DCB) and end notch flexure (ENF) tests. However, due to the thin, flexible nature of the materials and manufacturing restrictions, tests mirroring traditional interface characterization experiments may not always be possible. The ideal goal of this research is to design experiments such that each mode of fracture is isolated. However, due to the complex nonlinear nature of the response and the small geometries of FHEs, designing the proper tests to characterize the interface properties can be significantly time-consuming and costly. Hence, numerical modeling has been implemented to design these novel characterization experiments. This research involves loading case and specimen geometry parametric studies using numerical modeling to design future experiments where either shear or normal fracture modes are dominant. These virtual experiments will provide a foundation for designing similar tests for many different types of flexible electronics and predicting the failure mechanism independent of the specific FHE materials.
Neonatal Marfan syndrome: Report of two cases.
Jurko, Tomas; Jurko, Alexander; Minarik, Milan; Micieta, Vladimir; Tonhajzerova, Ingrid; Kolarovszka, Hana; Zibolen, Mirko
2017-07-01
Marfan syndrome is rarely diagnosed in the neonatal period because of variable expression and the age-dependent appearance of clinical signs. The prognosis is usually poor due to a high probability of congestive heart failure, mitral and tricuspid regurgitation with suboptimal response to medical therapy, and difficulties in surgical management. The authors have studied two cases of Marfan syndrome in the newborn period. Two cases of neonatal Marfan syndrome, one male and one female, were diagnosed by characteristic physical appearance. Both infants had significant cardiovascular abnormalities diagnosed by ultrasonography. Genetic DNA analysis in the second case confirmed mutations in the fibrillin-1 gene located on chromosome 15q21, which is responsible for the development of Marfan syndrome. The boy died at six weeks of age with signs of rapidly progressive left ventricular failure associated with pneumonia. The second infant had only mild signs of congestive heart failure and was treated with beta blockers. At the age of 4 years, her symptoms of congestive heart failure had worsened due to progression of mitral and tricuspid insufficiency and the development of significant cardiomegaly. Mitral and tricuspid valvuloplasty had to be performed at that time. Early diagnosis of Marfan syndrome in the newborn period can allow treatment in the early stages of cardiovascular abnormalities and may improve the prognosis. It also helps to explain to the family the serious health problem of their child.
Performance results of cooperating expert systems in a distributed real-time monitoring system
NASA Technical Reports Server (NTRS)
Schwuttke, U. M.; Veregge, J. R.; Quan, A. G.
1994-01-01
There are numerous definitions for real-time systems, the most stringent of which involve guaranteeing correct system response within a domain-dependent or situationally defined period of time. For applications such as diagnosis, in which the time required to produce a solution can be non-deterministic, this requirement poses a unique set of challenges in dynamic modification of solution strategy that conforms with maximum possible latencies. However, another definition of real time is relevant in the case of monitoring systems where failure to supply a response in the proper (and often infinitesimal) amount of time allowed does not make the solution less useful (or, in the extreme example of a monitoring system responsible for detecting and deflecting enemy missiles, completely irrelevant). This more casual definition involves responding to data at the same rate at which it is produced, and is more appropriate for monitoring applications with softer real-time constraints, such as interplanetary exploration, which results in massive quantities of data transmitted at the speed of light for a number of hours before it even reaches the monitoring system. The latter definition of real time has been applied to the MARVEL system for automated monitoring and diagnosis of spacecraft telemetry. An early version of this system has been in continuous operational use since it was first deployed in 1989 for the Voyager encounter with Neptune. This system remained under incremental development until 1991 and has been under routine maintenance in operations since then, while continuing to serve as an artificial intelligence (AI) testbed in the laboratory. The system architecture has been designed to facilitate concurrent and cooperative processing by multiple diagnostic expert systems in a hierarchical organization. 
The diagnostic modules adhere to concepts of data-driven reasoning, constrained but complete nonoverlapping domains, metaknowledge of global consequences of anomalous data, hierarchical reporting of problems that extend beyond a single domain, and shared responsibility for problems that overlap domains. The system enables efficient diagnosis of complex system failures in real-time environments with high data volumes and moderate failure rates, as indicated by extensive performance measurements.
NASA Astrophysics Data System (ADS)
Avery, Katherine R.
Isothermal low cycle fatigue (LCF) and anisothermal thermomechanical fatigue (TMF) tests were conducted on a high silicon molybdenum (HiSiMo) cast iron for temperatures up to 1073K. LCF and out-of-phase (OP) TMF lives were significantly reduced when the temperature was near 673K due to an embrittlement phenomenon which decreases the ductility of HiSiMo at this temperature. In this case, intergranular fracture was predominant, and magnesium was observed at the fracture surface. When the thermal cycle did not include 673K, the failure mode was predominantly transgranular, and magnesium was not present on the fracture surface. The in-phase (IP) TMF lives were unaffected when the thermal cycle included 673K, and the predominant failure mode was found to be transgranular fracture, regardless of the temperature. No magnesium was present on the IP TMF fracture surfaces. Thus, the embrittlement phenomenon was found to contribute to fatigue damage only when the temperature was near 673K and a tensile stress was present. To account for the temperature- and stress-dependence of the embrittlement phenomenon on the TMF life of HiSiMo cast iron, an original model based on the cyclic inelastic energy dissipation is proposed which accounts for temperature-dependent differences in the rate of fatigue damage accumulation in tension and compression. The proposed model has few empirical parameters. Despite the simplicity of the model, the predicted fatigue life shows good agreement with more than 130 uniaxial low cycle and thermomechanical fatigue tests, cyclic creep tests, and tests conducted at slow strain rates and with hold times. The proposed model was implemented in a multiaxial formulation and applied to the fatigue life prediction of an exhaust manifold subjected to severe thermal cycles. The simulation results show good agreement with the failure locations and number of cycles to failure observed in a component-level experiment.
Britt, Todd; Sturm, Ryan; Ricardi, Rick; Labond, Virginia
2015-01-01
Thoracic trauma accounts for 10%-15% of all trauma admissions. Rib fractures are the most common injury following blunt thoracic trauma. Epidural analgesia improves patient outcomes but is not without problems. The use of continuous intercostal nerve blockade (CINB) may offer superior pain control with fewer side effects. This study's objective was to compare the rate of pulmonary complications when traumatic rib fractures were treated with CINB vs epidurals. A hospital trauma registry provided retrospective data from 2008 to 2013 for patients with 2 or more traumatic rib fractures. All subjects were admitted and were treated with either an epidural or a subcutaneously placed catheter for continuous intercostal nerve blockade. Our primary outcome was a composite of either pneumonia or respiratory failure. Secondary outcomes included total hospital days, total ICU days, and days on the ventilator. 12.5% (N=8) of the CINB group developed pneumonia or had respiratory failure compared to 16.3% (N=7) in the epidural group. No statistical difference (P=0.58) in the incidence of pneumonia or vent dependent respiratory failure was observed. There was a significant reduction (P=0.05) in hospital days from 9.72 (SD 9.98) in the epidural compared to 6.98 (SD 4.67) in the CINB group. The rest of our secondary outcomes showed no significant difference. This study did not show a difference in the rate of pneumonia or ventilator-dependent respiratory failure in the CINB vs epidural groups. It was not sufficiently powered. Our data supports a reduction in hospital days when CINB is used vs epidural. CINB may have advantages over epidurals such as fewer complications, fewer contraindications, and a shorter time to placement. Further studies are needed to confirm these statements.
Buikema, H; Monnink, S H J; Tio, R A; Crijns, H J G M; de Zeeuw, D; van Gilst, W H
2000-01-01
We evaluated the role of SH-groups in the improvement of endothelial dysfunction with ACE-inhibitors in experimental heart failure. To this end, we compared the vasoprotective effect of chronic treatment with zofenopril (plus SH-group) versus lisinopril (no SH-group), or N-acetylcysteine (only SH-group) in myocardial infarcted (MI) heart failure rats. After 11 weeks of treatment, aortas were obtained and studied as ring preparations for endothelium-dependent and -independent dilatation in the continuous presence of indomethacin to avoid interference of vasoactive prostanoids, and the selective presence of the NOS-inhibitor L-NMMA to determine NO-contribution. Total dilatation after receptor-dependent stimulation with acetylcholine (ACh) was attenuated (−49%, P<0.05) in untreated MI (n=11), compared to control rats with no-MI (n=8). This was in part due to impaired NO-contribution in MI (−50%, P<0.05 versus no-MI). At the same time, the capacity for generation of biologically active NO after receptor-independent stimulation with A23187 remained intact. Chronic treatment with N-acetylcysteine (n=8) selectively restored NO-contribution in total dilatation to ACh. In contrast, both ACE-inhibitors fully normalized total dilatation to ACh, including the part mediated by NO (no significant differences between zofenopril (n=10) and lisinopril (n=8)). Zofenopril, but not lisinopril, additionally potentiated the effect of endogenous NO after A23187-induced release from the endothelium (+100%) as well as that of exogenous NO provided by nitroglycerin (+22%) and sodium nitrite (+36%) (for all, P<0.05 versus no-MI). We conclude that ACE-inhibition with a SH-group has a potential advantage in the improvement of endothelial dysfunction through increased activity of NO after release from the endothelium into the vessel wall.
Furthermore, this is the first study demonstrating the selective normalizing effect of N-acetylcysteine on NO-contribution to ACh-induced dilatation in experimental heart failure. PMID:10952693
Insulation Resistance Degradation in Ni-BaTiO3 Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang (David)
2015-01-01
Insulation resistance (IR) degradation in Ni-BaTiO3 multilayer ceramic capacitors has been characterized by the measurement of both time to failure and direct-current (DC) leakage current as a function of stress time under highly accelerated life test conditions. The measured leakage current-time dependence data fit well to an exponential form, and a characteristic growth time τ_SD can be determined. A greater value of τ_SD represents a slower IR degradation process. Oxygen vacancy migration and localization at the grain boundary region results in the reduction of the Schottky barrier height and has been found to be the main reason for IR degradation in Ni-BaTiO3 capacitors. The reduction of barrier height as a function of time follows an exponential relation φ(t) = φ(0)e^(−2γt), where the degradation rate constant γ = γ₀exp(−E_a/kT) is inversely proportional to the mean time to failure (MTTF) and can be determined using an Arrhenius plot. For oxygen vacancy electromigration, a lower barrier height φ(0) will favor a slow IR degradation process, but a lower φ(0) will also promote electronic carrier conduction across the barrier and decrease the insulation resistance. As a result, a moderate barrier height φ(0) (and therefore a moderate IR value) with a longer MTTF (smaller degradation rate constant γ) will result in a minimized IR degradation process and the most improved reliability in Ni-BaTiO3 multilayer ceramic capacitors.
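The two relations above can be sketched numerically: an exponentially decaying barrier height and an Arrhenius-type rate constant whose inverse tracks MTTF. All numbers below (prefactor, activation energy, temperatures) are invented for illustration, not fitted values from the study:

```python
import math

def barrier_height(phi0, g, t):
    # Schottky barrier height decaying as phi(t) = phi0 * exp(-2*g*t)
    return phi0 * math.exp(-2.0 * g * t)

def rate_constant(g0, Ea_eV, T_kelvin):
    # Arrhenius form: g = g0 * exp(-Ea / (kB * T))
    k_B = 8.617e-5  # Boltzmann constant in eV/K
    return g0 * math.exp(-Ea_eV / (k_B * T_kelvin))

g_150C = rate_constant(1e6, 1.0, 423.15)
g_125C = rate_constant(1e6, 1.0, 398.15)
# Higher stress temperature -> larger rate constant -> shorter MTTF (~1/g)
print(g_150C > g_125C)  # True
print(round(barrier_height(1.0, 0.01, 10.0), 4))  # ≈ 0.8187
```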
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
[Acute renal failure in a 75-year-old woman with a high-output ileostoma].
Teege, S; Wiech, T; Steinmetz, O M
2017-05-01
We report on a 75-year-old woman who presented with acute oliguric renal failure. The kidney biopsy revealed calcium oxalate depositions in the tubular lumen, caused by an overload of intravenous ascorbic acid (a cumulative dose of 240 g). Due to a lack of specific therapeutic interventions, the patient remained dialysis-dependent. Iatrogenic causes of kidney failure play an important role in the pathogenesis of kidney diseases and should always be considered in patients with acute renal failure. Detailed evaluation of the patient history is often suggestive, while renal biopsy can establish the diagnosis.
Register of specialized sources for information on mechanics of structural failure
NASA Technical Reports Server (NTRS)
Carpenter, J. L., Jr.; Denny, F. J.
1973-01-01
Specialized information sources that generate information relative to six problem areas in aerospace mechanics of structural failure are identified. Selection for inclusion was based upon information obtained from the individual knowledge and professional contacts of Martin Marietta Aerospace staff members and the information uncovered by the staff of technical reviewers. Activities listed perform basic or applied research related to the mechanics of structural failure and publish the results of such research. The purpose of the register is to present, in easy reference form, original sources for dependable information regarding failure modes and mechanisms of aerospace structures.
Nonlinear fracture mechanics-based analysis of thin wall cylinders
NASA Technical Reports Server (NTRS)
Brust, Frederick W.; Leis, Brian N.; Forte, Thomas P.
1994-01-01
This paper presents a simple analysis technique to predict the crack initiation, growth, and rupture of large-radius, R, to thickness, t, ratio (thin wall) cylinders. The method is formulated to deal both with stable tearing as well as fatigue mechanisms in applications to both surface and through-wall axial cracks, including interacting surface cracks. The method can also account for time-dependent effects. Validation of the model is provided by comparisons of predictions to more than forty full scale experiments of thin wall cylinders pressurized to failure.
Neutron beam irradiation study of workload dependence of SER in a microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Sarah E; Graves, Todd L; Hong, Ted
It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure, MTFIF, defined in Section II, may be different for some of the applications.
Conquering common breast-feeding problems.
Walker, Marsha
2008-01-01
Meeting mothers' personal breast-feeding goals depends on a number of factors, including the timely resolution of any problems she encounters. Nurses are often the first providers who interact with the mother during the perinatal period and are positioned to guide mothers through the prevention and solving of breast-feeding problems. Although many problems may be "common," failure to remedy conditions that cause pain, frustration, and anxiety can lead to premature weaning and avoidance of breast-feeding subsequent children. This article describes strategies and interventions to alleviate common problems that breast-feeding mothers frequently encounter.
26 CFR 1.547-5 - Deduction denied in case of fraud or wilful failure to file timely return.
Code of Federal Regulations, 2010 CFR
2010-04-01
... failure to file timely return. 1.547-5 Section 1.547-5 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Personal Holding Companies § 1.547-5 Deduction denied in case of fraud or wilful failure to file timely return. No deduction...
26 CFR 1.547-5 - Deduction denied in case of fraud or wilful failure to file timely return.
Code of Federal Regulations, 2011 CFR
2011-04-01
... failure to file timely return. 1.547-5 Section 1.547-5 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Personal Holding Companies § 1.547-5 Deduction denied in case of fraud or wilful failure to file timely return. No deduction...
NASA Technical Reports Server (NTRS)
Onwubiko, Chin-Yere; Onyebueke, Landon
1996-01-01
The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted, so a high factor of safety is used; this approach is most useful where the structures being designed are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
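The contrast between the two methodologies can be made concrete with a stress-strength interference sketch: failure occurs when a random load exceeds a random strength. The distributions and numbers below are illustrative assumptions, not taken from the paper:

```python
import random

def prob_of_failure(n_trials=100_000, seed=42):
    """Monte Carlo estimate of P(load > strength) for a simple
    stress-strength interference model.  The normal distributions
    and their parameters are illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        strength = rng.gauss(500.0, 40.0)  # material strength, MPa
        load = rng.gauss(350.0, 60.0)      # applied stress, MPa
        if load > strength:
            failures += 1
    return failures / n_trials
```

A deterministic design would hide this small but nonzero failure probability behind a single factor of safety (here 500/350, about 1.4); the probabilistic method exposes it directly.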
MRSA acquisition in an intensive care unit.
Dancer, Stephanie J; Coyne, Michael; Speekenbrink, A; Samavedam, Sam; Kennedy, Julie; Wallace, Peter G M
2006-02-01
This paper describes a retrospective investigation of methicillin-resistant Staphylococcus aureus (MRSA) acquisition in an 8-bed intensive care unit (ICU) over a 5-month period. Clinical and microbiologic data were collected from the ICU, including MRSA detection dates, patient dependency scores, standardized environmental screening data, weekly bed occupancies, number of admissions, and nurse staffing levels. MRSA acquisition weeks were defined as weeks during which initial delivery of MRSA occurred before sampling and laboratory confirmation. Weekly workloads were plotted against staffing levels and modelled against MRSA acquisition weeks and hygiene failures. Of 174 patients admitted into the ICU, 28 (16%) were found to have MRSA; 12 of these (7%) acquired MRSA on the ICU within 7 of the 23 weeks studied. Six of these 7 weeks were associated with a deficit of trained nurses during the day and 5 with hygiene failures (data unavailable for 2). Pulsed-field gel electrophoresis (PFGE) profiles demonstrated relationships between staphylococci from staff hands, hand-touch sites, and patients' blood. MRSA acquisition in the ICU was temporally associated with reduced numbers of trained nurses and hygiene failures predominantly involving hand-touch sites. Epidemiologic analysis suggested that patient acquisitions were 7 times more likely to occur during periods of nurse understaffing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredrich, J.T.; Argueello, J.G.; Thorne, B.J.
1996-11-01
This paper describes an integrated geomechanics analysis of well casing damage induced by compaction of the diatomite reservoir at the Belridge Field, California. Historical data from the five field operators were compiled and analyzed to determine correlations between production, injection, subsidence, and well failures. The results of this analysis were used to develop a three-dimensional geomechanical model of South Belridge, Section 33 to examine the diatomite reservoir and overburden response to production and injection at the interwell scale and to evaluate potential well failure mechanisms. The time-dependent reservoir pressure field was derived from a three-dimensional finite difference reservoir simulation and used as input to three-dimensional non-linear finite element geomechanical simulations. The reservoir simulation included approximately 200 wells and covered 18 years of production and injection. The geomechanical simulation contained 437,100 nodes and 374,130 elements with the overburden and reservoir discretized into 13 layers with independent material properties. The results reveal the evolution of the subsurface stress and displacement fields with production and injection and suggest strategies for reducing the occurrence of well casing damage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredrich, J.T.; Argueello, J.G.; Thorne, B.J.
1996-12-31
This paper describes an integrated geomechanics analysis of well casing damage induced by compaction of the diatomite reservoir at the Belridge Field, California. Historical data from the five field operators were compiled and analyzed to determine correlations between production, injection, subsidence, and well failures. The results of this analysis were used to develop a three-dimensional geomechanical model of South Belridge, Section 33 to examine the diatomite reservoir and overburden response to production and injection at the interwell scale and to evaluate potential well failure mechanisms. The time-dependent reservoir pressure field was derived from a three-dimensional finite difference reservoir simulation and used as input to three-dimensional non-linear finite element geomechanical simulations. The reservoir simulation included approximately 200 wells and covered 18 years of production and injection. The geomechanical simulation contained 437,100 nodes and 374,130 elements with the overburden and reservoir discretized into 13 layers with independent material properties. The results reveal the evolution of the subsurface stress and displacement fields with production and injection and suggest strategies for reducing the occurrence of well casing damage.
Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.
Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications
Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco
2012-01-01
Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
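The kind of fault-tree evaluation the paper automates reduces, at its simplest, to combining component reliabilities through series blocks (all must work) and parallel blocks (redundant). A minimal sketch, with a hypothetical topology and numbers not taken from the paper:

```python
def series(*reliabilities):
    """System works only if every component works
    (an OR gate over component failures in the fault tree)."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """Redundant block: works if at least one component works
    (an AND gate over component failures in the fault tree)."""
    q = 1.0
    for x in reliabilities:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical WSN path: a sink, two redundant routers, one field device.
system_reliability = series(0.99, parallel(0.90, 0.90), 0.95)
```

Here the redundant router pair is more reliable than either router alone; real fault-tree tools generalize this reduction to arbitrary gate structures and repairable (availability) models.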
Grossetti, Francesco; Ieva, Francesca; Paganoni, Anna Maria
2018-06-01
Healthcare administrative databases are becoming more and more important and reliable sources of clinical and epidemiological information. They are able to track several interactions between a patient and the public healthcare system. In the present study, we make use of data extracted from the administrative data warehouse of Regione Lombardia, a region located in the northern part of Italy whose capital is Milan. The data are part of a project aiming to describe the epidemiology of Heart Failure (HF) patients at the regional level, to profile health service utilization over time, and to investigate variations in patient care according to geographic area, socio-demographic characteristics and other clinical variables. We use multi-state models to estimate the probability of transition from (re)admission to discharge and death, adjusting for covariates which are state dependent. To the best of our knowledge, this is the first Italian attempt to investigate the effects of pharmacological and outpatient care covariates on patients' readmissions and death. This allows us to better characterise disease progression and to identify the main determinants of hospital admission and death in patients with Heart Failure.
Time-dependent fiber bundles with local load sharing.
Newman, W I; Phoenix, S L
2001-02-01
Fiber bundle models, where fibers have random lifetimes depending on their load histories, are useful tools in explaining time-dependent failure in heterogeneous materials. Such models shed light on diverse phenomena such as fatigue in structural materials and earthquakes in geophysical settings. Various asymptotic and approximate theories have been developed for bundles with various geometries and fiber load-sharing mechanisms, but numerical verification has been hampered by severe computational demands in larger bundles. To gain insight at large size scales, interest has returned to idealized fiber bundle models in 1D. Such simplified models typically assume either equal load sharing (ELS) among survivors, or local load sharing (LLS) where a failed fiber redistributes its load onto its two nearest flanking survivors. Such models can often be solved exactly or asymptotically in increasing bundle size, N, yet still capture the essence of failure in real materials. The present work focuses on 1D bundles under LLS. As in previous works, a fiber has failure rate following a power law in its load level with breakdown exponent rho. Surviving fibers under fixed loads have remaining lifetimes that are independent and exponentially distributed. We develop both new asymptotic theories and new computational algorithms that greatly increase the bundle sizes that can be treated in large replications (e.g., one million fibers in thousands of realizations). In particular we develop an algorithm that adapts several concepts and methods that are well-known among computer scientists, but relatively unknown among physicists, to dramatically increase the computational speed with no attendant loss of accuracy. We consider various regimes of rho that yield drastically different behavior as N increases. 
For 1/2 ≤ rho ≤ 1, ELS and LLS have remarkably similar behavior (they have identical lifetime distributions at rho = 1), with approximately Gaussian bundle lifetime statistics and a finite limiting mean. For rho > 1 this Gaussian behavior also applies to ELS, whereas LLS behavior diverges sharply, showing brittle, weakest-volume behavior in terms of characteristic elements derived from critical cluster formation. For 0
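The LLS failure process described above can be sketched as a Gillespie-style simulation: each surviving fiber fails at a rate proportional to its load raised to the breakdown exponent rho, and a failed fiber's load is split between its nearest flanking survivors. This is an illustrative sketch of the model class, not the authors' optimized algorithm:

```python
import random

def lls_bundle_lifetime(n=100, rho=2.0, seed=0):
    """Gillespie-style sketch of a 1-D fiber bundle under local load
    sharing (LLS): each surviving fiber fails at rate load**rho, and a
    failed fiber's load is split equally between its nearest flanking
    survivors (periodic boundary).  Returns the bundle lifetime, i.e.
    the time at which the last fiber fails."""
    rng = random.Random(seed)
    load = [1.0] * n                     # unit initial load per fiber
    alive = list(range(n))
    t = 0.0
    while alive:
        rates = [load[i] ** rho for i in alive]
        total = sum(rates)
        t += rng.expovariate(total)      # waiting time to the next failure
        x = rng.uniform(0.0, total)      # pick the failing fiber with
        for k, r in enumerate(rates):    # probability proportional to rate
            x -= r
            if x <= 0.0:
                break
        i = alive.pop(k)
        if alive:                        # redistribute the lost load
            left = max((j for j in alive if j < i), default=max(alive))
            right = min((j for j in alive if j > i), default=min(alive))
            if left == right:            # only one survivor remains
                load[left] += load[i]
            else:
                load[left] += load[i] / 2.0
                load[right] += load[i] / 2.0
    return t
```

This naive version costs O(n^2) per realization; the algorithmic point of the paper is precisely to avoid that cost so that bundles of a million fibers can be replicated thousands of times.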
Activation of Background Knowledge for Inference Making: Effects on Reading Comprehension
ERIC Educational Resources Information Center
Elbro, Carsten; Buch-Iversen, Ida
2013-01-01
Failure to "activate" relevant, existing background knowledge may be a cause of poor reading comprehension. This failure may cause particular problems with inferences that depend heavily on prior knowledge. Conversely, teaching how to use background knowledge in the context of gap-filling inferences could improve reading comprehension in…
Incretin-related drug therapy in heart failure.
Vest, Amanda R
2015-02-01
The new pharmacological classes of GLP-1 agonists and DPP-4 inhibitors are now widely used in diabetes and have been postulated as beneficial in heart failure. These proposed benefits arise from the inter-related pathophysiologies of diabetes and heart failure (diabetes increases the risk of heart failure, and heart failure can induce insulin resistance) and also in light of the dysfunctional myocardial energetics seen in heart failure. The normal heart utilizes predominantly fatty acids for energy production, but there is some evidence to suggest that increased myocardial glucose uptake may be beneficial for the failing heart. Thus, GLP-1 agonists, which stimulate glucose-dependent insulin release and enhance myocardial glucose uptake, have become a focus of investigation in both animal models and humans with heart failure. Limited pilot data for GLP-1 agonists shows potential improvements in systolic function, hemodynamics, and quality of life, forming the basis for current phase II trials.
Developing Ultra Reliable Life Support for the Moon and Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2009-01-01
Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed-unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
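The spares trade described above can be illustrated with a small Poisson-model sketch; the failure rate and mission length below are hypothetical numbers, not NASA's:

```python
import math

def survival_with_spares(failure_rate, mission_hours, n_spares):
    """P(a component function survives the mission) when failures arrive
    as a Poisson process and each failure consumes one spare: success
    iff the number of failures does not exceed the spares carried."""
    lam = failure_rate * mission_hours   # expected number of failures
    return sum(math.exp(-lam) * lam**j / math.factorial(j)
               for j in range(n_spares + 1))

# Hypothetical numbers: 1e-4 failures/hour over a 20,000-hour mission.
no_spares = survival_with_spares(1e-4, 20_000, 0)    # ~0.135
four_spares = survival_with_spares(1e-4, 20_000, 4)  # ~0.947
```

A handful of spares turns a near-certain loss of function into acceptable reliability, which is why the added spares mass is comparable to the original system mass for a multi-year mission.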
Algorithm for Determination of Orion Ascent Abort Mode Achievability
NASA Technical Reports Server (NTRS)
Tedesco, Mark B.
2011-01-01
For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.
The impact of vaccine failure rate on epidemic dynamics in responsive networks.
Liang, Yu-Hao; Juang, Jonq
2015-04-01
An SIS model based on the microscopic Markov-chain approximation is considered in this paper. It is assumed that the individual vaccination behavior depends on the contact awareness, local and global information of an epidemic. To better simulate the real situation, the vaccine failure rate is also taken into consideration. Our main conclusions are given in the following. First, we show that if the vaccine failure rate α is zero, then the epidemic eventually dies out regardless of what the network structure is or how large the effective spreading rate and the immunization response rates of an epidemic are. Second, we show that for any positive α, there exists a positive epidemic threshold depending on an adjusted network structure, which is only determined by the structure of the original network, the positive vaccine failure rate and the immunization response rate for contact awareness. Moreover, the epidemic threshold increases with respect to the strength of the immunization response rate for contact awareness. Finally, if the vaccine failure rate and the immunization response rate for contact awareness are positive, then there exists a critical vaccine failure rate αc > 0 so that the disease free equilibrium (DFE) is stable (resp., unstable) if α < αc (resp., α > αc). Numerical simulations to see the effectiveness of our theoretical results are also provided.
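The paper's first conclusion (with vaccine failure rate α = 0 the epidemic always dies out, while positive α admits an endemic state) can be illustrated with a crude mean-field compartmental stand-in for the network Markov-chain model; all rates here are hypothetical:

```python
def sis_vaccination(beta=0.3, mu=0.1, v=0.05, alpha=0.02,
                    i0=0.01, steps=5000, dt=0.1):
    """Mean-field S-I-V sketch (not the paper's network model):
    susceptibles are vaccinated at rate v, vaccine protection is lost
    at failure rate alpha, with infection rate beta and recovery rate
    mu.  Returns the infected fraction after integrating with Euler
    steps; 0 means the epidemic has died out."""
    s, i, vac = 1.0 - i0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i               # new infections per unit time
        ds = -new_inf + mu * i - v * s + alpha * vac
        di = new_inf - mu * i
        dv = v * s - alpha * vac
        s, i, vac = s + dt * ds, i + dt * di, vac + dt * dv
    return i
```

With alpha = 0 the vaccinated compartment absorbs everyone and infection vanishes; with alpha > 0 leakage back to the susceptible pool can sustain an endemic level, mirroring the threshold behavior the paper proves for networks.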
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding and an evolutionary construction method that accounts for failure in the evolutionary construction of Block-Based Neural Networks (BBNN). In the modified gene coding, we arrange the genes of the weights on a chromosome in consideration of the positional relation between the weight genes and the structure genes. The modified gene coding increases the efficiency of the search performed by crossover, which is expected to improve the convergence rate of construction and shorten construction time. In the construction method that accounts for failure, a structure adapted to the failure is built in the state where the failure occurred, so that the BBNN can be reconstructed in a short time when a failure occurs. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.
Hao, Shengwang; Liu, Chao; Lu, Chunsheng; Elsworth, Derek
2016-06-16
A theoretical explanation of a time-to-failure relation is presented, and the relationship is then used to describe the failure of materials. This provides the potential to predict the remaining time to failure (tf - t) by extrapolating the trajectory as it asymptotes to zero, with no need to fit unknown exponents as previously proposed in critical power-law behaviors. This generalized relation is verified by comparison with approaches to criticality for volcanic eruptions and creep failure. A new relation based on changes with stress is proposed as an alternative expression of Voight's relation, which is widely used to describe the accelerating precursory signals before material failure and is broadly applied to volcanic eruptions, landslides and other phenomena. The new generalized relation reduces to Voight's relation if stress is limited to increase at a constant rate with time. This implies that the time derivatives in Voight's analysis may be a subset of a more general expression connecting stress derivatives, and thus provides a potential method for forecasting these events.
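For context, Voight's relation, as commonly stated in the volcanology and creep literature (the abstract does not reproduce it), links an observable rate and its acceleration; the alpha = 2 case shows why an extrapolated inverse rate predicts the remaining time (tf - t):

```latex
% Voight's relation for an observable \Omega (e.g., strain or
% displacement); A and \alpha are empirical constants:
\ddot{\Omega} = A\,\dot{\Omega}^{\alpha}
% For \alpha = 2 the inverse rate decays linearly in time,
\frac{d}{dt}\!\left(\frac{1}{\dot{\Omega}}\right)
  = -\frac{\ddot{\Omega}}{\dot{\Omega}^{2}} = -A ,
% so the failure time t_f is read off where 1/\dot{\Omega} hits zero:
\frac{1}{\dot{\Omega}(t)} = A\,(t_f - t)
```

The paper's generalization replaces these time derivatives with stress derivatives, recovering the form above when stress increases at a constant rate.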
Majeed, Raphael W; Stöhr, Mark R; Röhrig, Rainer
2012-01-01
Notifications and alerts play an important role in daily clinical routine. The rising prevalence of clinical decision support systems and electronic health records also results in increasing demands on notification systems. Failure to adequately communicate a critical value is a potential cause of adverse events. Critical laboratory values and changing vital data depend on timely notification of medical staff. Vital monitors and medical devices rely on acoustic signals for alerting, which are prone to "alert fatigue" and require medical staff to be present within audible range. Personal computers are unsuitable for displaying time-critical notification messages, since the targeted medical staff are not always operating or watching the computer. On the other hand, mobile phones and smart devices enjoy increasing popularity. Previous notification systems sending text messages to mobile phones depend on asynchronous confirmations. By utilizing an automated telephony server, we provide a method to deliver notifications quickly and independently of the recipients' whereabouts while allowing immediate feedback and confirmation. Evaluation results suggest the feasibility of the proposed notification system for real-time notifications.
Retention modeling for ultra-thin density of Cu-based conductive bridge random access memory (CBRAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aga, Fekadu Gochole; Woo, Jiyong; Lee, Sangheon
We investigate the effect of Cu concentration on the On-state resistance retention characteristics of a W/Cu/Ti/HfO{sub 2}/Pt memory cell. The development of RRAM devices for applications depends on understanding the failure mechanism and the key parameters for device optimization. In this study, we develop an analytical expression for the cation (Cu{sup +}) diffusion model using a Gaussian distribution for detailed analysis of data retention time at high temperature. It is found that the improvement of data retention time depends not only on the conductive filament (CF) size but also on the Cu atom concentration density in the CF. Based on the simulation results, better data retention time is observed for electron wave functions associated with Cu{sup +} overlap and an extended state formation. This can be verified by analytical calculation of Cu atom defects inside the filament, based on the Cu{sup +} diffusion model. The importance of Cu diffusion for device reliability and the corresponding local temperature of the filament were analyzed by COMSOL Multiphysics simulation.
Kung, Shu-Chen; Wang, Ching-Min; Lai, Chih-Cheng; Chao, Chien-Ming
2018-01-01
This retrospective cohort study investigated the outcomes and prognostic factors in nonagenarians (patients 90 years old or older) with acute respiratory failure. Between 2006 and 2016, all nonagenarians with acute respiratory failure requiring invasive mechanical ventilation (MV) were enrolled. Outcomes including in-hospital mortality and ventilator dependency were measured. A total of 173 nonagenarians with acute respiratory failure were admitted to the intensive care unit (ICU). A total of 56 patients died during the hospital stay and the rate of in-hospital mortality was 32.4%. Patients with higher APACHE (Acute Physiology and Chronic Health Evaluation) II scores (adjusted odds ratio [OR], 5.91; 95% CI, 1.55-22.45; p = 0.009, APACHE II scores ≥ 25 vs APACHE II scores < 15), use of vasoactive agents (adjusted OR, 2.67; 95% CI, 1.12-6.37; p = 0.03) and more organ dysfunction (adjusted OR, 11.13; 95% CI, 3.38-36.36; p < 0.001; ≥ 3 organ dysfunctions vs ≤ 1 organ dysfunction) were more likely to die. Among the 117 survivors, 25 (21.4%) patients became dependent on MV. Female gender (adjusted OR, 3.53; 95% CI, 1.16-10.76; p = 0.027) and poor consciousness level (adjusted OR, 4.98; 95% CI, 1.41-17.58; p = 0.013) were associated with MV dependency. In conclusion, the mortality rate of nonagenarians with acute respiratory failure was high, especially for those with higher APACHE II scores or more organ dysfunction. PMID:29467961
Homer, Michael V.; Charo, Lindsey M.; Natarajan, Loki; Haunschild, Carolyn; Chung, Karine; Mao, Jun J.; DeMichele, Angela M.; Su, H. Irene
2016-01-01
Objective To determine if inter-individual genetic variation in single nucleotide polymorphisms related to age at natural menopause are associated with risk of ovarian failure in breast cancer survivors. Methods A prospective cohort of 169 premenopausal breast cancer survivors recruited at diagnosis with Stages 0 to III disease were followed longitudinally for menstrual pattern via self-reported daily menstrual diaries. Participants were genotyped for 13 single nucleotide polymorphisms (SNPs) previously found to be associated with age at natural menopause: EXO1, TLK1, HELQ, UIMC1, PRIM1, POLG, TMEM224, BRSK1, and MCM8. A risk variable summed the total number of risk alleles in each participant. The association between individual genotypes, as well as the risk variable, and time to ovarian failure (> 12 months of amenorrhea) was tested using time-to-event methods. Results Median age at enrollment was 40.5 years old (range 20.6–46.1). The majority of participants were white (69%) and underwent chemotherapy (76%). Thirty-eight participants (22%) experienced ovarian failure. None of the candidate SNPs or the summary risk variable were significantly associated with time to ovarian failure. Sensitivity analysis restricted to whites or only to participants receiving chemotherapy yielded similar findings. Older age, chemotherapy exposure and lower BMI were related to shorter time to ovarian failure. Conclusions Thirteen previously identified genetic variants associated with time to natural menopause were not related to timing of ovarian failure in breast cancer survivors. PMID:28118297
Homer, Michael V; Charo, Lindsey M; Natarajan, Loki; Haunschild, Carolyn; Chung, Karine; Mao, Jun J; DeMichele, Angela M; Su, H Irene
2017-06-01
To determine if interindividual genetic variation in single-nucleotide polymorphisms (SNPs) related to age at natural menopause is associated with risk of ovarian failure in breast cancer survivors. A prospective cohort of 169 premenopausal breast cancer survivors recruited at diagnosis with stages 0 to III disease were followed longitudinally for menstrual pattern via self-reported daily menstrual diaries. Participants were genotyped for 13 SNPs previously found to be associated with age at natural menopause: EXO1, TLK1, HELQ, UIMC1, PRIM1, POLG, TMEM224, BRSK1, and MCM8. A risk variable summed the total number of risk alleles in each participant. The association between individual genotypes, and also the risk variable, and time to ovarian failure (>12 months of amenorrhea) was tested using time-to-event methods. Median age at enrollment was 40.5 years (range 20.6-46.1). The majority of participants were white (69%) and underwent chemotherapy (76%). Thirty-eight participants (22%) experienced ovarian failure. None of the candidate SNPs or the summary risk variable was significantly associated with time to ovarian failure. Sensitivity analysis restricted to whites or only to participants receiving chemotherapy yielded similar findings. Older age, chemotherapy exposure, and lower body mass index were related to shorter time to ovarian failure. Thirteen previously identified genetic variants associated with time to natural menopause were not related to timing of ovarian failure in breast cancer survivors.
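As a concrete sketch of the time-to-event machinery used in studies like the two records above, here is a minimal Kaplan-Meier estimator handling right-censored observations (the data are toy values, not the study's):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates for right-censored data.
    `events[i]` is True if the failure was observed at times[i],
    False if the subject was censored (e.g., still menstruating at
    last follow-up).  Returns (time, S(t)) pairs at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    idx = 0
    while idx < len(data):
        t = data[idx][0]
        d = 0                      # failures observed at time t
        c = 0                      # censorings at time t
        while idx < len(data) and data[idx][0] == t:
            if data[idx][1]:
                d += 1
            else:
                c += 1
            idx += 1
        if d:                      # survival drops only at event times
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= d + c         # both events and censorings leave risk set
    return curve
```

Comparing such curves between genotype groups (e.g., with a log-rank test or a Cox model) is the standard way to test whether a SNP shifts the time-to-failure distribution.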
Hristoskova, Anna; Sakkalis, Vangelis; Zacharioudakis, Giorgos; Tsiknakis, Manolis; De Turck, Filip
2014-01-01
A major challenge related to caring for patients with chronic conditions is the early detection of exacerbations of the disease. Medical personnel should be contacted immediately in order to intervene in time before an acute state is reached, ensuring patient safety. This paper proposes an approach to an ambient intelligence (AmI) framework supporting real-time remote monitoring of patients diagnosed with congestive heart failure (CHF). Its novelty is the integration of: (i) personalized monitoring of the patient's health status and risk stage; (ii) intelligent alerting of the dedicated physician through the construction of medical workflows on-the-fly; and (iii) dynamic adaptation of the vital signs' monitoring environment on any available device or smart phone located in close proximity to the physician, depending on new medical measurements, additional disease specifications or the failure of the infrastructure. The intelligence lies in the adoption of semantics providing for a personalized and automated emergency alerting that smoothly interacts with the physician, regardless of his location, ensuring timely intervention during an emergency. It is evaluated on a medical emergency scenario, where in the case of exceeded patient thresholds, medical personnel are localized and contacted, presenting ad hoc information on the patient's condition on the most suitable device within the physician's reach. PMID:24445411
SLAMM: Visual monocular SLAM with continuous mapping using multiple maps
Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed
2018-01-01
This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario, the algorithm generates a new map at the time of tracking failure and later merges maps in the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to state-of-the-art methods in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as in ORB-SLAM, and the retrieved map can reach up to 90 percent more in terms of information preservation, depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697
A Sensor Failure Simulator for Control System Reliability Studies
NASA Technical Reports Server (NTRS)
Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.
1986-01-01
A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real time. A user-defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu-driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.
A sensor failure simulator for control system reliability studies
NASA Astrophysics Data System (ADS)
Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.
A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real time. A user-defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu-driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.
NASA Technical Reports Server (NTRS)
Peters, K. A.; Atkinson, P. F.; Hammond, E. C., Jr.
1986-01-01
Reciprocity failure was examined for IIaO spectroscopic film. Three separate experiments were performed in order to study film batch variations, thermal and aging effects in relation to reciprocity failure, and shifting of reciprocity failure points as a function of thermal and aging effects. The failure was examined over exposure times between 5 and 60 seconds. The variation in illuminance was obtained using thirty neutral-density filters. A standard sensitometer device imprinted the wedge pattern on the film as the exposure time was varied. The results indicate that film batch differences, temperature, and aging play an important role in the reciprocity failure of IIaO spectroscopic film. A shifting of the failure points was also observed in various batches of film.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" is presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and in series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
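The failure-unit model itself is not reproduced in the abstract, but the quantity it predicts, median time to failure under electromigration, is conventionally baselined by Black's equation. A minimal sketch; the prefactor `a`, current-density exponent `n`, and activation energy `ea` below are illustrative placeholders, not values from this work:

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.7):
    """Black's equation: MTTF = A * j**-n * exp(Ea / (kB * T)).
    j is current density, temp_k is absolute temperature; a, n, and ea
    are hypothetical parameters, not fitted values from the paper."""
    return a * j ** (-n) * math.exp(ea / (K_B * temp_k))
```

With `n = 2`, doubling the current density cuts the median lifetime by a factor of four, and raising the temperature shortens it exponentially, which is the baseline behavior any grain-structure-aware refinement must reduce to.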
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, M.G.; Kohles, S.S.; Stevens, T.L.
1996-12-31
Duality of failure mechanisms (slow crack growth from pre-existing defects versus cumulative creep damage) is examined in a silicon nitride advanced ceramic recently tested at elevated temperatures. Static (constant stress over time), dynamic (monotonically increasing stress over time), and cyclic (fluctuating stress over time) fatigue behaviors were evaluated in tension in ambient air at temperatures of 1150, 1260, and 1370°C for a hot-isostatically pressed monolithic β-silicon nitride. At 1150°C, all three types of fatigue results showed the same failure mechanism of slow crack growth (SCG). At 1260 and 1370°C the failure mechanism was more complex. Failure under static fatigue was dominated by the accumulation of creep damage via diffusion-controlled cavities. In dynamic fatigue, failure occurred by SCG at high stress rates (>10⁻² MPa/s) and by creep damage at low stress rates (≤10⁻² MPa/s). For cyclic fatigue, such rate effects influenced the stress-rupture results, in which times to failure were greater for dynamic and cyclic fatigue than for static fatigue. Elucidation of failure mechanisms is necessary for accurate prediction of the long-term survivability and reliability of structural ceramics.
Gilbert, P B; Ribaudo, H J; Greenberg, L; Yu, G; Bosch, R J; Tierney, C; Kuritzkes, D R
2000-09-08
At present, many clinical trials of anti-HIV-1 therapies compare treatments by a primary endpoint that measures the durability of suppression of HIV-1 replication. Several durability endpoints are compared by their implicit assumptions regarding surrogacy for clinical outcomes, sample-size requirements, and accommodations for inter-patient differences in baseline plasma HIV-1 RNA levels and in initial treatment response. Virological failure is defined by the non-suppression of virus levels at a prespecified follow-up time T (early virological failure), or by relapse. A binary virological failure endpoint is compared with three time-to-virological-failure endpoints: time from (i) randomization, which assigns early failures a failure time of T weeks; (ii) randomization, which extends the early failure time T for slowly responding subjects; and (iii) virological response, which assigns non-responders a failure time of 0 weeks. Endpoint differences are illustrated with Agouron's trial 511. In comparing high- with low-dose nelfinavir (NFV) regimens in Agouron 511, the difference in Kaplan-Meier estimates of the proportion not failing by 24 weeks is 16.7% (P = 0.048), 6.5% (P = 0.29) and 22.9% (P = 0.0030) for endpoints (i), (ii) and (iii), respectively. The results differ because NFV suppresses virus more quickly at the higher dose, and the endpoints weigh this treatment difference differently. This illustrates that careful consideration needs to be given to choosing a primary endpoint that will detect treatment differences of interest. A time-from-randomization endpoint is usually recommended because of its advantages in flexibility and sample size, especially at interim analyses, and for its interpretation for patient management.
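The building block behind all three endpoint comparisons is the Kaplan-Meier estimate of the proportion not yet failing. A self-contained sketch; the times and events below are hypothetical, not data from Agouron 511:

```python
import numpy as np

def km_survival(times, events, t_eval):
    """Kaplan-Meier estimate of P(T > t_eval) from right-censored data
    (events = 1 for virological failure, 0 for censoring)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    # sort by time; at ties, process events before censorings (usual convention)
    order = np.lexsort((1 - events, times))
    times, events = times[order], events[order]
    s, n = 1.0, len(times)
    for i in range(n):
        if times[i] > t_eval:
            break
        if events[i]:
            # handling tied events one at a time gives the same product
            # as the grouped (1 - d/n) Kaplan-Meier factor
            s *= 1.0 - 1.0 / (n - i)
    return s

# hypothetical weeks-to-failure under one endpoint convention
s_7wk = km_survival([2, 4, 4, 6, 8], [1, 0, 1, 1, 0], t_eval=7)
```

The three endpoint definitions in the abstract differ only in how they construct the `times`/`events` inputs (e.g., assigning early failures a time of T weeks versus non-responders a time of 0 weeks); the same estimator is then applied to each.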
NASA Astrophysics Data System (ADS)
Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.
Brittle materials today are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing, while using the minimum amount of material, requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structure/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). For this article, an overview of the transient reliability methodology and how this methodology is extended to account for proof testing is described. The CARES/Life code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability in many materials, particularly brittle and quasibrittle materials: the coefficient of variation can reach 100% or even more, and the distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
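The paper's network models are not reproduced here, but the flavor of such a creep-failure Monte Carlo can be sketched with a classical equal-load-sharing fiber bundle, in which each surviving fiber fails at a stress-dependent rate; the exponential rate law and all parameter values below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def bundle_lifetime(n_fibers, stress0, eta, rng):
    """Lifetime of an equal-load-sharing fiber bundle under constant load.
    Each intact fiber fails at rate exp(eta * stress); when a fiber breaks,
    the load redistributes over the survivors, raising their stress."""
    t, intact = 0.0, n_fibers
    while intact > 0:
        stress = stress0 * n_fibers / intact        # load shared by survivors
        total_rate = intact * np.exp(eta * stress)  # time to next break is exponential
        t += rng.exponential(1.0 / total_rate)
        intact -= 1
    return t

rng = np.random.default_rng(1)
lifetimes = np.array([bundle_lifetime(50, 1.0, 2.0, rng) for _ in range(2000)])
cv = lifetimes.std() / lifetimes.mean()   # coefficient of variation of lifetime
```

Repeating the simulation over many replicates gives an empirical lifetime distribution whose spread and skew can then be compared across bundle sizes and disorder levels, which is the kind of relationship the abstract's study maps out on full networks.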
Reid, Ryan; Ezekowitz, Justin A.; Brown, Paul M.; McAlister, Finlay A.; Rowe, Brian H.; Braam, Branko
2015-01-01
Background: Worsening and improving renal function during acute heart failure have been associated with adverse outcomes, but few studies have considered the admission level of renal function upon which these changes are superimposed. Objectives: The objective of this study was to evaluate definitions that incorporate both admission renal function and change in renal function. Methods: 696 patients with acute heart failure and calculable eGFR were classified by admission renal function (Reduced [R, eGFR<45 ml/min] or Preserved [P, eGFR≥45 ml/min]) and change over the hospital admission (worsening [WRF]: eGFR ≥20% decline; stable [SRF]; and improving [IRF]: eGFR ≥20% increase). The primary outcome was all-cause mortality. The prevalence of Preserved and Reduced renal function was 47.8% and 52.2%, respectively. The frequency of R-WRF, R-SRF, and R-IRF was 11.4%, 28.7%, and 12.1%, respectively; the frequency of P-WRF, P-SRF, and P-IRF was 5.7%, 35.3%, and 6.8%, respectively. Survival was shorter for patients with R-WRF than for those with R-IRF (median survival times 13.9 months (95%CI 7.7–24.9) and 32.5 months (95%CI 18.8–56.1), respectively), corresponding to an acceleration factor of 2.3 (p = 0.016). Thus, among patients with Reduced renal function, an increase in renal function was associated with more than two times longer survival than a decrease. PMID:26380982
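The admission-by-trajectory classification follows directly from the cut-offs stated in the abstract (eGFR 45 ml/min at admission; ±20% in-hospital change); a small sketch:

```python
def classify_renal(egfr_admit, egfr_discharge):
    """Label a patient by admission renal function (Reduced: eGFR < 45 ml/min,
    Preserved: >= 45) and in-hospital trajectory (>=20% decline = WRF,
    >=20% increase = IRF, otherwise stable = SRF)."""
    base = "R" if egfr_admit < 45 else "P"
    change = (egfr_discharge - egfr_admit) / egfr_admit
    if change <= -0.20:
        traj = "WRF"   # worsening renal function
    elif change >= 0.20:
        traj = "IRF"   # improving renal function
    else:
        traj = "SRF"   # stable renal function
    return f"{base}-{traj}"
```

For example, a patient admitted at eGFR 40 who declines to 30 (a 25% fall) is classified R-WRF, the highest-risk group in the study.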
Constraints on deep moonquake focal mechanisms through analyses of tidal stress
Weber, R.C.; Bills, B.G.; Johnson, C.L.
2009-01-01
A relationship between deep moonquake occurrence and tidal forcing is suggested by the monthly periodicities observed in the occurrence times of events recorded by the Apollo Passive Seismic Experiment. In addition, the typically large S-wave to P-wave arrival amplitude ratios observed on deep moonquake seismograms are indicative of shear failure. Tidal stress, induced in the lunar interior by the gravitational influence of the Earth, may influence moonquake activity. We investigate the relationship between tidal stress and deep moonquake occurrence by searching for a linear combination of the normal and shear components of tidal stress that best approximates a constant value when evaluated at the times of moonquakes from 39 different moonquake clusters. We perform a grid search at each cluster location, computing the stresses resolved onto a suite of possible failure planes, to obtain the best-fitting fault orientation at each location. We find that while linear combinations of stresses (and in some cases stress rates) can fit moonquake occurrence at many clusters quite well, for other clusters the fit is not strongly dependent on plane orientation. This suggests that deep moonquakes may occur in response to factors other than, or in addition to, tidal stress. Several of our inferences support the hypothesis that deep moonquakes might be related to transformational faulting, in which shear failure is induced by mineral phase changes at depth. The occurrence of this process would have important implications for the lunar interior. Copyright 2009 by the American Geophysical Union.
Nattala, Prasanthi; Murthy, Pratima; Leung, Kit Sang; Rentala, Sreevani; Ramakrishna, Jayashree
2017-04-25
Returning to alcohol use following inpatient treatment occurs due to various real-life cues/triggers. It is a challenge to demonstrate to patients how to deal with these triggers during inpatient treatment. The aims of the current study were (a) to evaluate the effectiveness of a video-enabled cue-exposure-based intervention (VE-CEI) in influencing treatment outcomes in alcohol dependence, and (b) to identify postdischarge predictors of intervention failure (returning to ≥50% of baseline alcohol consumption quantity/day). The VE-CEI comprises live-action videos in which human characters model various alcohol use cues and strategies to deal with them effectively. The VE-CEI was administered to an inpatient alcohol-dependent sample (n = 43) and compared with treatment as usual (TAU) (n = 42) at a government addiction treatment setting in India. Patients were followed up over 6 months postdischarge to evaluate the effectiveness of the VE-CEI on specific drinking outcomes. Over the 6-month follow-up, the VE-CEI group (vs. TAU) reported significantly lower alcohol consumption quantity, fewer drinking days, and lower intervention failure rates. Results of multivariate Cox regression showed that participants who did not receive the VE-CEI had an elevated risk of intervention failure (hazard ratio: 11.14; 95% confidence interval [4.93, 25.15]); other intervention failure predictors were early-onset dependence and increased baseline drinking. Findings provide evidence from India for the effectiveness of cue-exposure-based intervention delivered using video technology in improving postdischarge treatment outcomes.
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2016-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a class of dependent failures that can be caused by, for example: system environments; manufacturing; transportation; storage; maintenance; and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
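The response-surface idea (independent failure rate versus common-cause beta factor) can be illustrated with the standard beta-factor approximation for a one-out-of-two system; the `q` and `beta` values swept below are illustrative, not program numbers:

```python
def one_out_of_two_failure_prob(q, beta):
    """Beta-factor approximation for a 1-out-of-2 redundant system:
    a component's total failure probability q splits into an independent
    part (1 - beta) * q and a common-cause part beta * q; the system fails
    if both components fail independently or a common-cause event occurs."""
    q_ind = (1.0 - beta) * q
    return q_ind ** 2 + beta * q

# sweep one slice of the response surface: even a small beta quickly
# dominates the squared independent term
surface = {b: one_out_of_two_failure_prob(1e-3, b) for b in (0.0, 0.01, 0.05, 0.1)}
```

At q = 1e-3, beta = 0 gives a system failure probability near 1e-6, while beta = 0.05 raises it by more than an order of magnitude, which is why redundancy alone cannot buy back reliability once common-cause coupling is present.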
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2015-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a class of dependent failures that can be caused by, for example: system environments; manufacturing; transportation; storage; maintenance; and assembly. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF, and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
Agenesis of the ductus venosus-A case with favorable outcome after early signs of cardiac failure.
Hofmann, Sigrun R; Heilmann, Antje; Häusler, Hans J; Kamin, Gabriele; Nitzsche, Katharina I
2013-01-01
Absence of the ductus venosus (ADV) is a rare vascular anomaly. Its prognosis depends on the pathway of the umbilical flow to the systemic venous circulation, and the presence or absence of associated structural or chromosomal anomalies, sometimes resulting in hydrops fetalis. In cases with isolated ADV in the absence of associated anomalies, survival rates are as high as 85%, depending on the shunt situation. Here, we report a patient with ADV and extrahepatic umbilical vein drainage with favorable outcome after intrauterine reversal of early signs of cardiac failure. Diagnosis was made after the appearance of moderate cardiomegaly in the 25th gestational week. Thus, in the case of cardiomegaly with or without further signs of cardiac failure, ultrasound imaging of the venous duct should be considered. Copyright © 2012 Wiley Periodicals, Inc.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons why independently developed software programs fail dependently, and to examine fault-tolerant software structures that maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures, and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently of the software components are available. The authors present a theory of general program checkers that has potential application for acceptance tests.
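Under the idealized assumption that versions fail independently and the acceptance test is perfect, the two structures compared in the abstract reduce to elementary probability; coincident failures, the subject of this study, would degrade both figures:

```python
def n_version_2oo3(p):
    """2-out-of-3 majority voting: each version fails independently with
    probability p; the system succeeds if at least two versions are correct."""
    q = 1.0 - p
    return q ** 3 + 3 * q ** 2 * p

def recovery_block(p, n=3):
    """n alternates tried in order until one passes a (here assumed perfect)
    acceptance test; the system fails only if every alternate fails."""
    return 1.0 - p ** n
```

At p = 0.1, the recovery block reaches 0.999 success probability versus 0.972 for 2-out-of-3 voting, which matches the abstract's qualitative conclusion that recovery blocks win when an independently failing acceptance check is available.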
Approximation of Failure Probability Using Conditional Sampling
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.
2008-01-01
In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
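A toy version of the scheme, with a hypothetical uniform parameter domain and a failure set chosen so the exact answer (0.005) is known: the bounding set's probability is computed analytically, and all sampling is confined to it, so no samples are wasted outside the region where failure can occur.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: parameters (x, y) uniform on the unit square; failure set
# {x + y > 1.9}.  The true failure probability is the corner triangle,
# 0.1**2 / 2 = 0.005.  The bounding set B = {x > 0.9} U {y > 0.9}
# contains the failure set and has known probability P(B) = 1 - 0.81 = 0.19.

def sample_in_bounding_set(n):
    """Draw uniformly over B by splitting it into disjoint pieces
    A1 = {x > 0.9} (area 0.1) and A2 = {x <= 0.9, y > 0.9} (area 0.09)."""
    in_a1 = rng.random(n) < 0.1 / 0.19
    x = np.where(in_a1, rng.uniform(0.9, 1.0, n), rng.uniform(0.0, 0.9, n))
    y = np.where(in_a1, rng.uniform(0.0, 1.0, n), rng.uniform(0.9, 1.0, n))
    return x, y

n = 100_000
x, y = sample_in_bounding_set(n)
p_fail_given_b = np.mean(x + y > 1.9)   # conditional Monte Carlo inside B
p_fail = 0.19 * p_fail_given_b          # P(fail) = P(B) * P(fail | B)
```

Naive Monte Carlo over the whole square would see a failure only once per 200 samples on average; conditioning on B concentrates every sample in the informative region and shrinks the estimator's variance accordingly.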
Life Support with Failures and Variable Supply
NASA Technical Reports Server (NTRS)
Jones, Harry
2010-01-01
The life support system for long-duration missions will recycle oxygen and water to reduce the material resupply mass from Earth. The impact of life support failures was investigated by dynamic simulation of a lunar outpost habitat life support model. The model was modified to simulate resupply delays, power failures, recycling system failures, and storage failures. Many failures impact the lunar outpost water supply directly or indirectly, depending on the water balance and water storage. Failure effects on the water supply are reduced if Extra Vehicular Activity (EVA) water use is low and the water supply is ample. Additional oxygen can be supplied by scavenging unused propellant or by production from regolith, but the amounts obtained can vary significantly. The requirements for oxygen and water can also vary significantly, especially for EVA. Providing storage buffers can improve efficiency and reliability, and minimize the chance of supply failing to meet demand. Life support failures and supply variations can be survivable if effective solutions are provided by the system design.
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using the concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, under either assumption: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concepts of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
Jipp, Meike
2016-12-01
This study explored whether working memory and sustained attention influence cognitive lock-up, a delay in the response to consecutive automation failures. Previous research has demonstrated that the information that automation provides about failures, and the time pressure associated with a task, influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants, which is why individual differences might influence it. The present study tested whether working memory (including flexibility in executive functioning) and sustained attention might be crucial in this regard. Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants' working-memory and sustained-attention abilities were assessed with standardized tests. As expected, participants' reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so for consecutive than for initial failures. The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, and their advantages increase across consecutive failures. The results stress the need to consider personnel-selection strategies to mitigate cognitive lock-up in general, and training procedures to enhance the performance of low-ability operators. © 2016, Human Factors and Ergonomics Society.
Simultaneous specimen current and time-dependent cathodoluminescence measurements on gallium nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campo, E. M., E-mail: e.campo@bangor.ac.uk; Hopkins, L.; Pophristic, M.
2016-06-28
Time-dependent cathodoluminescence (CL) and specimen current (SC) are monitored to evaluate trapping behavior and the evolution of charge storage. Examination of CL and SC suggests that the near-band-edge emission in GaN is reduced primarily by the activation of traps upon irradiation, and gallium vacancies are prime candidates. At the steady state, measurement of the stored charge by empiric-analytical methods suggests that all available traps within the interaction volume have been filled, and that additional charge is being stored interstitially, necessarily beyond the interaction volume. Once established, the space-charge region is responsible for the steady-state CL emission and, prior to build-up, it is responsible for the generation of diffusion currents. Since the non-recombination effects resulting from diffusion currents that develop early on are analogous to those leading to device failure upon aging, this study is fundamental toward a holistic insight into optical properties in GaN.
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected under this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.
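Not the authors' estimator verbatim, but a sketch of the weighted-partial-likelihood idea: simulate a cohort, keep all cases and a fraction of controls (an ODS-style scheme), and solve the Cox score equation with inverse selection-probability weights (single covariate, Breslow risk sets; all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n, beta_true = 2000, 0.7

# Full cohort: exponential failure times with hazard exp(beta * x), censoring
x_full = rng.normal(size=n)
t_fail = rng.exponential(1.0 / np.exp(beta_true * x_full))
t_cens = rng.exponential(1.5, size=n)
time_full = np.minimum(t_fail, t_cens)
event_full = (t_fail <= t_cens).astype(int)

# ODS-style selection: keep every case, sample controls with probability 0.3
p_sel = np.where(event_full == 1, 1.0, 0.3)
keep = rng.random(n) < p_sel
x, time, event = x_full[keep], time_full[keep], event_full[keep]
w = 1.0 / p_sel[keep]                   # inverse selection-probability weights

def weighted_score(beta):
    """Weighted Cox partial-likelihood score (Breslow risk sets)."""
    r = w * np.exp(beta * x)
    s = 0.0
    for ti, xi, wi in zip(time[event == 1], x[event == 1], w[event == 1]):
        at_risk = time >= ti
        s += wi * (xi - np.dot(r[at_risk], x[at_risk]) / r[at_risk].sum())
    return s

beta_hat = brentq(weighted_score, -3.0, 3.0)   # root of the estimating equation
```

With all cases retained and controls downweighted back by 1/0.3, the root of the weighted score consistently recovers the true log hazard ratio despite the biased sampling.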
Uncertainty and Intelligence in Computational Stochastic Mechanics
NASA Technical Reports Server (NTRS)
Ayyub, Bilal M.
1996-01-01
Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of an abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state, and from serviceability criteria to strength criteria, is continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with a monotonically increasing level of violation, progressively leading into the violation of strength criteria. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of a realistic delineation of structural damage based on the subjective judgment of engineers.
For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should depend on the underlying level of damage and the uncertainties associated with its definition. A mathematical model for structural reliability assessment that includes both ambiguity and vagueness types of uncertainty was suggested to result in the likelihood of failure over a damage spectrum. The resulting structural reliability estimates properly represent the continuous transition from serviceability to strength limit states over the ultimate time exposure of the structure. In this section, a structural reliability assessment method based on a fuzzy definition of failure is suggested to meet these practical needs. A failure definition can be developed to indicate the relationship between failure level and structural response. In this fuzzy model, a subjective index is introduced to represent all levels of damage (or failure). This index can be interpreted as either a measure of failure level or a measure of a degree of belief in the occurrence of some performance condition (e.g., failure). The index allows expressing the transition state between complete survival and complete failure for some structural response based on subjective evaluation and judgment.
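The subjective failure index described above can be sketched numerically. The snippet below maps a structural response onto a degree of failure in [0, 1] with a smooth transition between a serviceability level and an ultimate (strength) level; the smoothstep shape and the parameter names `r_serv` and `r_ult` are illustrative assumptions, not the author's specific fuzzy model.

```python
def failure_degree(response, r_serv, r_ult):
    """Subjective failure index in [0, 1]: 0 below the serviceability
    level r_serv, 1 above the ultimate level r_ult, and a gradual,
    monotonic transition in between (no crisp survival/failure jump)."""
    if response <= r_serv:
        return 0.0
    if response >= r_ult:
        return 1.0
    x = (response - r_serv) / (r_ult - r_serv)
    return 3 * x**2 - 2 * x**3  # smoothstep: continuous damage spectrum

# Damage grades across the transition from survival to collapse
for r in (0.8, 1.0, 1.5, 2.0, 2.2):
    print(r, round(failure_degree(r, 1.0, 2.0), 3))
```

The key contrast with the classical dichotomous limit-state definition is that intermediate responses yield intermediate failure grades rather than a 0/1 outcome.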
Launch Vehicle Failure Dynamics and Abort Triggering Analysis
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hill, Ashely D.; Beard, Bernard B.
2011-01-01
Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on-board when the failure becomes catastrophic. For some failure scenarios, there is plenty of time for the crew to be warned and to depart, whereas in others there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if the detection and action happen quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude rate abort triggers to ensure that abort occurs as quickly as possible when needed, while avoiding false positives, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with analysis used to determine the success rate of getting the crew off prior to vehicle demise.
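A rate-based abort trigger of the kind discussed above can be sketched as a threshold test with a persistence counter, which rejects single-sample noise spikes so that false positives are avoided. The threshold, the persistence scheme, and the parameter names are illustrative assumptions, not the Ares I trigger design.

```python
def abort_trigger(rate_history, limit, persistence=3):
    """Trip an abort when the attitude rate exceeds `limit` for
    `persistence` consecutive GN&C cycles; isolated spikes reset the
    counter, trading a few cycles of latency for false-positive rejection."""
    count = 0
    for i, rate in enumerate(rate_history):
        count = count + 1 if abs(rate) > limit else 0
        if count >= persistence:
            return i  # cycle index at which abort is commanded
    return None  # no abort condition detected
```

For example, a single out-of-limit sample does not trip the trigger, while three consecutive exceedances do.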
The reliability-quality relationship for quality systems and quality risk management.
Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M
2012-01-01
Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. 
This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.
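A minimal numerical reading of the engineering-reliability definition above, assuming a constant failure rate (the exponential model); the MTBF value used here is hypothetical.

```python
import math

def reliability(t, mtbf):
    """R(t) = exp(-t / MTBF): the probability that a process performs
    its required function for a period t, under a constant failure rate
    (the simplest engineering-reliability model)."""
    return math.exp(-t / mtbf)

# Hypothetical process with a 1000-hour mean time between failures:
# chance of completing a 100-hour campaign without a quality failure.
print(round(reliability(100, 1000), 3))  # → 0.905
```

Unlike a static assay repeated over time, R(t) is predictive: it anticipates the chance of failure over the product life cycle.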
Using street view imagery for 3-D survey of rock slope failures
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Abellán, Antonio; Nicolet, Pierrick; Penna, Ivanna; Chanut, Marie-Aurélie; Derron, Marc-Henri; Jaboyedoff, Michel
2017-12-01
We discuss here different challenges and limitations of surveying rock slope failures using 3-D reconstruction from image sets acquired from street view imagery (SVI). We show how rock slope surveying can be performed using two or more image sets using online imagery with photographs from the same site but acquired at different instances. Three sites in the French Alps were selected as pilot study areas: (1) a cliff beside a road where a protective wall collapsed, consisting of two image sets (60 and 50 images in each set) captured within a 6-year time frame; (2) a large-scale active landslide located on a slope at 250 m from the road, using seven image sets (50 to 80 images per set) from five different time periods with three image sets for one period; (3) a cliff over a tunnel which has collapsed, using two image sets captured in a 4-year time frame. The analysis includes the use of different structure from motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a lidar-derived mesh that was used as a ground truth. Results show that both landslide deformation and fallen volumes were clearly identified in the different point clouds. Results are site- and software-dependent, as a function of the image set and number of images, with model accuracies ranging between 0.2 and 3.8 m in the best and worst scenario, respectively. Although some limitations derived from the generation of 3-D models from SVI were observed, this approach allowed us to obtain preliminary 3-D models of an area without on-field images, allowing extraction of the pre-failure topography that would not be available otherwise.
Applications of crude incidence curves.
Korn, E L; Dorey, F J
1992-04-01
Crude incidence curves display the cumulative number of failures of interest as a function of time. With competing causes of failure, they are distinct from cause-specific incidence curves that treat secondary types of failures as censored observations. After briefly reviewing their definition and estimation, we present five applications of crude incidence curves to show their utility in a broad range of studies. In some of these applications it is helpful to model survival-time distributions with use of two different time metameters, for example, time from diagnosis and age of the patient. We describe how one can incorporate published vital statistics into the models when secondary types of failure correspond to common causes of death.
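The nonparametric crude (cumulative) incidence estimator that these curves are based on can be sketched directly: at each failure time, the increment is the overall survival just before that time multiplied by the fraction at risk failing from the cause of interest. The event coding below (0 = censored, otherwise a cause code) is an assumption for illustration.

```python
def crude_incidence(times, events, cause):
    """Crude cumulative incidence for one failure cause in the presence
    of competing causes.  events: 0 = censored, otherwise a cause code.
    Returns (time, CIF) pairs; competing failures reduce overall survival
    but are NOT treated as censored, unlike cause-specific curves."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, cif = 1.0, 0.0   # overall survival just before t; running CIF
    at_risk, i, out = n, 0, []
    while i < n:
        t = data[i][0]
        d_all = d_cause = c = 0
        while i < n and data[i][0] == t:   # gather ties at time t
            if data[i][1] == 0:
                c += 1
            else:
                d_all += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        cif += surv * d_cause / at_risk
        surv *= 1 - d_all / at_risk        # all causes deplete survival
        at_risk -= d_all + c
        out.append((t, cif))
    return out
```

A useful check of the competing-risks bookkeeping: the cause-specific CIFs and the overall survival sum to one at any time point.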
NASA Astrophysics Data System (ADS)
Eck, M.; Mukunda, M.
The proliferation of space vehicle launch sites and the projected utilization of these facilities portends an increase in the number of on-pad, ascent, and on-orbit solid-rocket motor (SRM) casings and liquid-rocket tanks which will randomly fail or will fail from range destruct actions. Beyond the obvious safety implications, these failures may have serious resource implications for mission system and facility planners. SRM-casing failures and liquid-rocket tankage failures result in the generation of large, high velocity fragments which may be serious threats to the safety of launch support personnel if proper bunkers and exclusion areas are not provided. In addition, these fragments may be indirect threats to the general public's safety if they encounter hazardous spacecraft payloads which have not been designed to withstand shrapnel of this caliber. They may also become threats to other spacecraft if, by failing on-orbit, they add to the ever increasing space-junk collision cross-section. Most prior attempts to assess the velocity of fragments from failed SRM casings have simply assigned the available chamber impulse to available casing and fuel mass and solved the resulting momentum balance for velocity. This method may predict a fragment velocity which is high or low by a factor of two depending on the ratio of fuel to casing mass extant at the time of failure. Recognizing the limitations of existing methods, the authors devised an analytical approach which properly partitions the available impulse to each major system-mass component. This approach uses the Physics International developed PISCES code to couple the forces generated by an Eulerian modeled gas flow field to a Lagrangian modeled fuel and casing system. The details of a predictive analytical modeling process as well as the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed in this paper. 
Methods for applying similar modeling techniques to liquid-tankage-over-pressure failures are also discussed. These methods have been calibrated against observed SRM ascent failures and on-orbit tankage failures. Casing-quadrant sized fragments with velocities exceeding 100 m/s resulted from Titan 34D-SRM range destruct actions at 10 s mission elapsed time (MET). Casing-quadrant sized fragments with velocities of approx. 200 m/s resulted from STS-SRM range destruct actions at 110 s MET. Similar sized fragments for Ariane third stage and Delta second stage tankage were predicted to have maximum velocities of 260 and 480 m/s respectively. Good agreement was found between the predictions and observations for five specific events and it was concluded that the methods developed have good potential for use in predicting the fragmentation process of a number of generically similar casing and tankage systems.
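The naive momentum balance that the authors improve upon can be written in one line; the impulse and mass values below are hypothetical, and the paper's point is precisely that this estimate can err by roughly a factor of two depending on the fuel-to-casing mass ratio at failure time.

```python
def naive_fragment_velocity(impulse, m_casing, m_fuel):
    """Prior-art estimate: assign the whole available chamber impulse
    (N·s) to the combined casing and remaining fuel mass (kg) and solve
    the momentum balance for velocity (m/s).  Ignores how impulse is
    actually partitioned among system-mass components."""
    return impulse / (m_casing + m_fuel)

# Hypothetical SRM at failure: 1e6 N·s impulse, 5 t casing, 5 t fuel
print(naive_fragment_velocity(1e6, 5e3, 5e3))  # → 100.0 m/s
```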
Flexural Progressive Failure of Carbon/Glass Interlayer and Intralayer Hybrid Composites.
Wang, Qingtao; Wu, Weili; Gong, Zhili; Li, Wei
2018-04-17
The flexural progressive failure modes of carbon fiber and glass fiber (C/G) interlayer and intralayer hybrid composites were investigated in this work. Results showed that the bending failure modes of interlayer hybrid composites are determined by the layup structure, and that bending failure is characterized by compression failure of the upper layer. When carbon fiber is distributed in the upper layer, the interlayer hybrid composite fails early, the failure force shows a multi-stage, slightly fluctuating decline, and the fracture area exhibits a diamond shape. When carbon fiber is distributed in the middle or bottom layer, failure starts later and the failure process exhibits a single-stage sharp force/stress drop; the fracture zone of glass fiber above the carbon layers presents an inverted trapezoid shape, while the fracture of glass fiber below the carbon layers exhibits an inverted triangular shape. With regard to the intralayer hybrid composites, the C/G hybrid ratio plays the dominant role in bending failure, which can be considered a mixture of the failures of the four structures. Bending failure of intralayer hybrid composites occurs earlier because carbon fibers are located in every layer; the failure process shows a multi-stage fluctuating decline that slows as carbon fiber content increases, and the fracture sound release has the characteristics of low intensity and high frequency over a long time. By contrast, as glass fiber content increases, the bending failure of intralayer composites features a multi-stage cliff-like decline, with high-amplitude, low-frequency fracture sound released over a short time.
NASA Astrophysics Data System (ADS)
Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun
2007-04-01
Human space travel is inherently dangerous. Hazardous conditions will exist. Real time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions, and formats failure protocols.
Intelligent Design and Intelligent Failure
NASA Technical Reports Server (NTRS)
Jerman, Gregory
2015-01-01
Good Evening, my name is Greg Jerman and for nearly a quarter century I have been performing failure analysis on NASA's aerospace hardware. During that time I had the distinct privilege of keeping the Space Shuttle flying for two thirds of its history. I have analyzed a wide variety of failed hardware from simple electrical cables to cryogenic fuel tanks to high temperature turbine blades. During this time I have found that for all the time we spend intelligently designing things, we need to be equally intelligent about understanding why things fail. The NASA Flight Director for Apollo 13, Gene Kranz, is best known for the expression "Failure is not an option." However, NASA history is filled with failures both large and small, so it might be more accurate to say failure is inevitable. It is how we react and learn from our failures that makes the difference.
Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.
2018-02-01
Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.
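The inverse power-law acceleration described above can be sketched as a rate function that diverges as time approaches the failure time. The amplitude `k` and the failure time are hypothetical; the exponent 0.71 is the posterior mean reported in the abstract.

```python
def event_rate(t, t_f, k=1.0, p=0.71):
    """Material-failure-style precursory rate: rate(t) = k * (t_f - t)**-p,
    increasing without bound as t approaches the failure time t_f.
    p < 1 (e.g. 0.71) implies a particularly rapid late acceleration."""
    return k * (t_f - t) ** (-p)

# Earthquake rate accelerating toward a hypothetical failure at t_f = 100
for t in (0, 50, 90, 99):
    print(t, round(event_rate(t, 100.0), 4))
```

In a forecasting setting, `t_f` and `p` would be inferred from the observed event times (e.g. via a Bayesian point-process fit, as in the abstract) rather than assumed.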
Physical nature of longevity of light actinides in dynamic failure phenomenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchaev, A. Ya., E-mail: uchaev@expd.vniief.ru; Punin, V. T.; Selchenkova, N. I.
It is shown in this work that the physical nature of the longevity of light actinides under extreme conditions, in a range of nonequilibrium states of t ∼ 10⁻⁶–10⁻¹⁰ s, is determined by the time needed for the formation of a critical concentration of a cascade of failure centers, which changes the connectivity of the body. These centers form a percolation cluster. The longevity is composed of the waiting time t_w for the appearance of failure centers and the clusterization time t_c of the cascade of failure centers, when connectivity in the system of failure centers and the percolation cluster arise. A unique mechanism of the dynamic failure process, a unique order parameter, and an equal dimensionality of the space in which the process occurs determine the physical nature of the longevity of metals, including fissionable materials.
Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative function. The resulting model is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals, taking the average failure value in each interval, and calculating the average failure time of the model on each interval, the p value obtained from the test is 0.3296.
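The exponential base model referred to above admits a one-line fit: under a constant hazard, the maximum-likelihood hazard is the reciprocal of the sample mean, and the predicted mean failure time is that mean. The lamp failure times below are hypothetical, and this sketch covers only the exponential basis, not the full composite hazard model.

```python
import math

def fit_exponential(times):
    """MLE for a constant-hazard (exponential) survival model:
    hazard = 1 / mean(t); predicted mean failure time = mean(t)."""
    mean_t = sum(times) / len(times)
    return 1.0 / mean_t, mean_t

def survival(t, hazard):
    """S(t) = exp(-hazard * t) for the exponential base model."""
    return math.exp(-hazard * t)

# Hypothetical lamp failure times, in hours
hazard, mean_time = fit_exponential([100.0, 200.0, 300.0])
print(mean_time, round(survival(mean_time, hazard), 3))
```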
Dubben, H H; Beck-Bornholdt, H P
2000-12-01
The statistical quality of the contributions to "Strahlentherapie und Onkologie" is assessed, aiming for improvement of the journal and consequently its impact factor. All 181 articles published during 1998 and 1999 in the categories "review", "original contribution", and "short communication" were analyzed concerning actuarial analysis of time-failure data. One hundred and twenty-three publications without time-failure data were excluded from analysis. Forty-five of the remaining 58 publications with time-failure data were evaluated actuarially. This corresponds to 78% (95% confidence interval: 64 to 88%) of papers in which data were adequately analyzed. Complications were reported in 16 of 58 papers, but in only 3 cases actuarially. The number of patients at risk during the course of follow-up was documented adequately in 22 of the 45 publications with actuarial analysis. Authors, peer reviewers, and editors could contribute to improving the quality of the journal by setting value on actuarial analysis of time-failure data.
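The actuarial analysis the authors call for is, in its simplest form, a product-limit (Kaplan-Meier) estimate that handles censored follow-up correctly instead of discarding it. The sketch below is a minimal version; the event coding (`observed=False` for censored times) is an assumption.

```python
def kaplan_meier(times, observed):
    """Product-limit estimate of the failure-free fraction over time.
    observed=False marks a censored follow-up time: it leaves the curve
    unchanged but removes the patient from the risk set thereafter."""
    data = sorted(zip(times, observed))
    n_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = c = 0
        while i < len(data) and data[i][0] == t:  # gather ties at time t
            d += data[i][1]       # events at t
            c += not data[i][1]   # censorings at t
            i += 1
        if d:
            s *= 1 - d / n_risk
            curve.append((t, s))
        n_risk -= d + c
    return curve
```

Reporting the number at risk (`n_risk`) alongside the curve addresses the documentation gap the survey identifies.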
Conceptual modeling of coincident failures in multiversion software
NASA Technical Reports Server (NTRS)
Littlewood, Bev; Miller, Douglas R.
1989-01-01
Recent work by Eckhardt and Lee (1985) shows that independently developed program versions fail dependently (specifically, the probability of simultaneous failure of several versions is greater than would be the case under true independence). The present authors show there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better than independent failure behavior. The authors try to formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results which do not depend on constraints which cannot be verified in practice.
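The Eckhardt-Lee dependence effect can be illustrated with a two-input toy profile: if the per-input failure probability varies (some inputs are "harder"), two independently developed versions fail together with probability E[θ²], which exceeds the naive independence value (E[θ])². The difficulty values and usage profile below are hypothetical.

```python
def coincident_failure_prob(theta, usage):
    """theta[x]: probability a randomly chosen version fails on input x.
    usage[x]: probability input x occurs.  Returns (p1, p2) where p1 is
    the failure probability of one version and p2 the probability two
    independently developed versions fail on the same random input."""
    p1 = sum(u * th for th, u in zip(theta, usage))       # E[theta]
    p2 = sum(u * th * th for th, u in zip(theta, usage))  # E[theta^2]
    return p1, p2

theta = [0.001, 0.5]   # one easy region, one "hard" region (hypothetical)
usage = [0.99, 0.01]
p1, p2 = coincident_failure_prob(theta, usage)
print(p1 * p1, p2)  # naive independence estimate vs. model prediction
```

Here p2 is orders of magnitude larger than p1², showing how variation in input difficulty induces dependent failures even between truly independent development efforts.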