A bivariate model for analyzing recurrent multi-type automobile failures
NASA Astrophysics Data System (ADS)
Sunethra, A. A.; Sooriyarachchi, M. R.
2017-09-01
The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of the failure serve as response variables. However, these two response variables are highly correlated with each other, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting of time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions found suitable for the two responses, time to failure and type of failure, were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by user-programming the appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and the bivariate model was found to perform better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.
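The abstract does not give the joint likelihood in closed form, so the following is only a minimal Python sketch (rather than the SAS PROC NLMIXED program the authors used) of a shared-random-effect likelihood for one vehicle: a Weibull component for failure times and a multinomial component for failure types, linked through a common normal random effect that is integrated out by Gauss-Hermite quadrature. The data, parameterization, and linking structure are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import weibull_min

# Illustrative data for one automobile: failure times and failure modes (0, 1, 2).
times = np.array([120.0, 340.0, 610.0])      # e.g. days to each recurrent failure
modes = np.array([0, 2, 1])                  # failure mode of each recurrence
K = 3                                        # number of failure modes

def joint_loglik(params, times, modes, n_quad=20):
    """Shared-random-effect joint log-likelihood for one vehicle.

    Weibull (time) component: shape k, scale exp(log_scale + b)
    Multinomial (type) component: softmax(alpha + gamma * b)
    b ~ Normal(0, sigma^2) is the shared random effect, integrated out
    by Gauss-Hermite quadrature.
    """
    k = np.exp(params[0])                    # Weibull shape > 0
    log_scale = params[1]                    # Weibull log-scale intercept
    alpha = np.concatenate([[0.0], params[2:2 + K - 1]])  # multinomial intercepts
    gamma = params[2 + K - 1]                # loading of b on the type model
    sigma = np.exp(params[2 + K])            # SD of the random effect

    nodes, weights = hermgauss(n_quad)       # rule for integrals of exp(-x^2) * f(x)
    b_vals = np.sqrt(2.0) * sigma * nodes    # change of variables for N(0, sigma^2)

    contrib = np.zeros(n_quad)
    for j, b in enumerate(b_vals):
        scale = np.exp(log_scale + b)
        lp_time = weibull_min.logpdf(times, k, scale=scale).sum()
        logits = alpha + gamma * b
        logp_mode = logits - np.log(np.exp(logits).sum())
        lp_type = logp_mode[modes].sum()
        contrib[j] = lp_time + lp_type
    # log of the quadrature sum (weights / sqrt(pi) turn it into an expectation)
    m = contrib.max()
    return m + np.log(np.sum(weights / np.sqrt(np.pi) * np.exp(contrib - m)))

start = np.zeros(K + 3)   # [log k, log-scale, alpha_2..alpha_K, gamma, log sigma]
print("log-likelihood at starting values:", joint_loglik(start, times, modes))
```

For a full data set, this per-vehicle log-likelihood would be summed over vehicles and maximized, for example with scipy.optimize.minimize, which is essentially what PROC NLMIXED does internally.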
Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)
2002-01-01
To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant-load-rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.
Chan, Kwun Chuen Gary; Wang, Mei-Cheng
2017-01-01
Recurrent event processes with marker measurements are largely studied with forward-time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before the occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior with a backward-time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large-sample properties of the proposed estimators are established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time model used as the basis is the exponential distribution, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The model obtained is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is then calculated from each interval; the p-value obtained from the test is 0.3296.
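As a rough illustration of the exponential-baseline idea in the abstract, the sketch below estimates a constant hazard rate from grouped (interval) lamp-failure counts and converts it to a mean failure time. The interval data, counts, and midpoint approximation are assumptions for illustration, not the paper's data or estimator.

```python
import numpy as np

# Hypothetical grouped lamp-failure data: interval endpoints (hours) and failures per interval.
edges = np.array([0, 500, 1000, 1500, 2000, 2500])
failures = np.array([30, 22, 18, 10, 5])        # lamps failing in each interval
survivors = 15                                   # lamps still working at 2500 h (right-censored)

# For an exponential model with hazard rate lam, the MLE is
#   lam_hat = (number of observed failures) / (total exposure time).
# Exposure is approximated by assuming failures occur at interval midpoints.
midpoints = 0.5 * (edges[:-1] + edges[1:])
exposure = np.sum(failures * midpoints) + survivors * edges[-1]
lam_hat = failures.sum() / exposure

mean_ttf = 1.0 / lam_hat                         # mean time to failure for the exponential model
print(f"estimated hazard rate: {lam_hat:.2e} per hour")
print(f"predicted mean failure time: {mean_ttf:.0f} hours")
```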
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but it can describe them only qualitatively. In our model, the probability functions of failure units in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and presents several advantages compared to more traditional approaches.
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Multiaxial Temperature- and Time-Dependent Failure Model
NASA Technical Reports Server (NTRS)
Richardson, David; McLennan, Michael; Anderson, Gregory; Macon, David; Batista-Rodriquez, Alicia
2003-01-01
A temperature- and time-dependent mathematical model predicts the conditions for failure of a material subjected to multiaxial stress. The model was initially applied to a filled epoxy below its glass-transition temperature, and is expected to be applicable to other materials, at least below their glass-transition temperatures. The model is justified simply by the fact that it closely approximates the experimentally observed failure behavior of this material: the multiaxiality of the model has been confirmed (see figure), and the model has been shown to be applicable at temperatures from -20 to 115 F (-29 to 46 C) and to predict tensile failures of constant-load and constant-load-rate specimens with failure times ranging from minutes to months.
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the time intervals covering the early-life failure region and the expected losses given failure characterizing the corresponding intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and expected losses from failures. A Monte Carlo simulation-based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
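The two points above, that the expected loss given failure is a probability-weighted sum of mode-specific losses and that a Monte Carlo NPV model captures variation an expected-value-per-interval model cannot, can be illustrated with a short sketch. The Python fragment below uses entirely hypothetical mode probabilities, losses, failure rate, and discount rate (none are taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Mode-specific conditional probabilities of initiating failure and expected losses for
# each mode (hypothetical). E[L | failure] = sum_k p_k * E[L_k], as stated in the abstract.
p_mode = np.array([0.6, 0.3, 0.1])
loss_mode = np.array([20e3, 80e3, 400e3])        # currency units per failure
expected_loss_given_failure = np.sum(p_mode * loss_mode)

# Monte Carlo NPV of losses from failures over a planning horizon: the number of failures
# per year is random (Poisson here), which a per-interval expected-value model ignores.
years, rate_per_year, discount = 10, 0.8, 0.08
n_sim = 100_000
npv = np.zeros(n_sim)
for year in range(1, years + 1):
    n_failures = rng.poisson(rate_per_year, size=n_sim)
    modes = rng.choice(len(p_mode), size=(n_sim, n_failures.max() + 1), p=p_mode)
    # sum losses only over the failures that actually occurred in each simulation
    losses = np.where(np.arange(modes.shape[1]) < n_failures[:, None],
                      loss_mode[modes], 0.0).sum(axis=1)
    npv += losses / (1.0 + discount) ** year

print(f"E[L | failure] = {expected_loss_given_failure:,.0f}")
print(f"mean NPV of losses: {npv.mean():,.0f}, 95th percentile: {np.percentile(npv, 95):,.0f}")
```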
Predictive modeling of dynamic fracture growth in brittle materials with machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel
2018-02-22
We use simulation data from a high fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
2017-01-01
Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (which only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard based statistical model, the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chance of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
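The conditional next-year failure probabilities quoted in the abstract follow directly from a fitted survival (or discrete hazard) curve. The sketch below shows the calculation with a stand-in Weibull survival curve; the parameters are arbitrary and do not reproduce the paper's Wilshire-based model or its 35%/80% figures.

```python
import numpy as np

def conditional_failure_prob(surv, t, dt=1.0):
    """P(fail in (t, t + dt] | survived to t) = (S(t) - S(t + dt)) / S(t)."""
    return (surv(t) - surv(t + dt)) / surv(t)

# Stand-in survival curve for creep life at fixed stress and temperature (Weibull form,
# parameters chosen only for illustration, not the paper's Wilshire-based model).
eta, beta = 22.0, 4.0                        # scale (years), shape
surv = lambda t: np.exp(-(t / eta) ** beta)

for t_service in (15.0, 25.0):
    p = conditional_failure_prob(surv, t_service)
    print(f"after {t_service:.0f} years in service, P(failure within 1 year) = {p:.2f}")
```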
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during those intervals. Six of the former and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
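As an example of the kind of growth-model prediction being evaluated, the sketch below fits one classical software reliability growth model, the Goel-Okumoto nonhomogeneous Poisson process (which may or may not be among the models in the paper), to hypothetical failure times by maximum likelihood and predicts the number of failures in a future interval.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical failure times (hours of testing) observed up to time T.
t = np.array([3, 8, 15, 25, 40, 57, 80, 110, 150, 210, 300, 420], dtype=float)
T = 500.0

def neg_loglik(params):
    """NHPP log-likelihood for the Goel-Okumoto model m(t) = a * (1 - exp(-b t))."""
    log_a, log_b = params
    a, b = np.exp(log_a), np.exp(log_b)
    intensity = a * b * np.exp(-b * t)               # lambda(t_i) at each failure time
    m_T = a * (1.0 - np.exp(-b * T))                 # expected number of failures by T
    return -(np.sum(np.log(intensity)) - m_T)

fit = minimize(neg_loglik, x0=[np.log(20.0), np.log(0.01)], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)

# Predicted number of failures in the future interval (T, T + 200].
horizon = 200.0
pred = a_hat * (np.exp(-b_hat * T) - np.exp(-b_hat * (T + horizon)))
print(f"a = {a_hat:.1f}, b = {b_hat:.4f}, predicted failures in next {horizon:.0f} h: {pred:.1f}")
```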
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature, and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounts for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent (MATT) failure criterion. Due to the intricate nature of the failure criterion, some parameters had to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion that allow failure conditions to be calculated without complex equations or numerical techniques.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
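The paper derives explicit expressions for these probabilities; the sketch below instead estimates a finite-time failure probability by brute-force simulation for one illustrative risk process (initial level plus drift minus compound Poisson losses) and a time-dependent critical level. All parameters and the barrier are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def finite_time_failure_prob(n_sim=20_000, T=10.0, dt=0.02,
                             x0=10.0, drift=1.5, jump_rate=1.0, jump_mean=2.0):
    """Estimate P(risk process reaches the critical level within [0, T]) by simulation.

    Risk process: X(t) = x0 + drift * t - (compound Poisson losses up to t).
    Critical level: time-dependent barrier c(t) (linear here, purely illustrative).
    """
    c = lambda t: 0.5 * t                          # critical risk level at time t
    times = np.arange(0.0, T + dt, dt)
    failed = np.zeros(n_sim, dtype=bool)
    for i in range(n_sim):
        n_jumps = rng.poisson(jump_rate * T)
        jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
        jump_sizes = rng.exponential(jump_mean, n_jumps)
        # cumulative losses up to each grid time
        counts = np.searchsorted(jump_times, times, side="right")
        cum_loss = np.concatenate([[0.0], np.cumsum(jump_sizes)])[counts]
        x = x0 + drift * times - cum_loss
        failed[i] = np.any(x <= c(times))
    return failed.mean()

print("estimated finite-time failure probability:", finite_time_failure_prob())
```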
Fault management for the Space Station Freedom control center
NASA Technical Reports Server (NTRS)
Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet
1992-01-01
This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
Real-time automated failure analysis for on-orbit operations
NASA Technical Reports Server (NTRS)
Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James
1993-01-01
A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
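The BPT distribution coincides with the inverse Gaussian with mean μ and shape μ/α², so its hazard and conditional probabilities can be computed with scipy; the mapping to scipy's invgauss parameters is spelled out in the comments. The recurrence parameters below are illustrative, not the Parkfield values.

```python
import numpy as np
from scipy.stats import invgauss

def bpt(mu, alpha):
    """Brownian passage time (inverse Gaussian) with mean mu and aperiodicity alpha.

    scipy's invgauss(mu_s, scale=s) has mean mu_s * s and shape parameter s,
    so mean mu and shape lam = mu / alpha**2 map to mu_s = alpha**2, s = mu / alpha**2.
    """
    return invgauss(alpha ** 2, scale=mu / alpha ** 2)

mu, alpha = 25.0, 0.5                   # mean recurrence time (years), provisional aperiodicity
dist = bpt(mu, alpha)

t = np.linspace(1.0, 4 * mu, 400)
hazard = dist.pdf(t) / dist.sf(t)       # instantaneous failure rate of survivors
mean_rate = 1.0 / mu
print("hazard first exceeds the mean rate near t =",
      round(float(t[np.argmax(hazard > mean_rate)]), 1), "years (cf. mu/2 =", mu / 2, ")")

# Conditional probability of an event in the next 30 years given t_since years of quiescence.
t_since, window = 20.0, 30.0
p_cond = (dist.cdf(t_since + window) - dist.cdf(t_since)) / dist.sf(t_since)
print(f"P(event within {window:.0f} yr | {t_since:.0f} yr elapsed) = {p_cond:.2f}")
```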
Semiparametric regression analysis of failure time data with dependent interval censoring.
Chen, Chyong-Mei; Shen, Pao-Sheng
2017-09-20
Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association of the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and the visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
A Novel Solution-Technique Applied to a Novel WAAS Architecture
NASA Technical Reports Server (NTRS)
Bavuso, J.
1998-01-01
The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold-sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting-satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time-to-failure distribution. Moreover, the cold-spare satellite build time is Weibull distributed, and upon launch the spare is considered to be a good-as-new system, again with an increasing failure rate and a Weibull time-to-failure distribution. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure of the launched spare satellite. A powerful dynamic fault tree modeling notation and a Monte Carlo simulation technique with importance sampling are used to arrive at a reliability prediction for a 10-year mission.
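A plain Monte Carlo version of such a three-clock, non-Markovian model is sketched below (without the dynamic fault tree notation or the importance sampling used in the paper): Weibull lifetimes for the orbiting satellites, a Weibull build time for the cold spare triggered by a preset number of failures, and a Weibull lifetime for the launched spare. The constellation size, k-out-of-n success rule, and all distribution parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mission_reliability(n_sim=50_000, mission=10.0, n_sats=4, k_required=3,
                        shape_orbit=1.8, scale_orbit=12.0,
                        shape_build=2.0, scale_build=1.5,
                        shape_spare=1.8, scale_spare=12.0,
                        failures_to_trigger=1):
    """Monte Carlo reliability of a k-out-of-n satellite constellation with one cold spare.

    Three distinct clocks drive the model (all Weibull with increasing failure rate):
    the orbiting satellites' lifetimes, the spare's build time (started only after a
    preset number of on-orbit failures), and the launched spare's own lifetime.
    """
    successes = 0
    for _ in range(n_sim):
        lifetimes = np.sort(scale_orbit * rng.weibull(shape_orbit, n_sats))
        trigger = lifetimes[failures_to_trigger - 1]            # fabrication start
        launch = trigger + scale_build * rng.weibull(shape_build)
        spare_fail = launch + scale_spare * rng.weibull(shape_spare)

        # Evaluate the operational count just after every failure event in the mission.
        events = [t for t in lifetimes if t <= mission]
        if spare_fail <= mission:
            events.append(spare_fail)
        failed = False
        for t in sorted(events):
            operational = int(np.sum(lifetimes > t)) + int(launch <= t < spare_fail)
            if operational < k_required:
                failed = True
                break
        successes += not failed
    return successes / n_sim

print("estimated 10-year mission reliability:", mission_reliability())
```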
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
Ning, Jing; Qin, Jing; Shen, Yu
2014-01-01
The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed into independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small-sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Development of a subway operation incident delay model using accelerated failure time approaches.
Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang
2014-12-01
This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, namely the log-logistic, lognormal, and Weibull models, each with fixed and with random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, the Weibull model with gamma heterogeneity is also considered to compare model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating the subway incident delay. First, the results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption, and crashes involving a casualty. Vehicle failure has the least impact on the increment of subway operation incident delay. According to these results, several possible measures, such as the use of short-distance wireless communication technologies (e.g., Wifi and Zigbee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.
Power degradation and reliability study of high-power laser bars at quasi-CW operation
NASA Astrophysics Data System (ADS)
Zhang, Haoyu; Fan, Yong; Liu, Hui; Wang, Jingwei; Zah, Chungen; Liu, Xingsheng
2017-02-01
The solid-state laser relies on the laser diode (LD) pumping array. Typically, for high peak power quasi-CW (QCW) operation, both the energy output per pulse and the long-term reliability are critical. With improved bonding techniques, especially indium-free bonded diode laser bars, most device failures are caused by failure within the laser diode itself (wearout failure), induced by dark line defects (DLD), bulk failure, point defect generation, facet mirror damage, and so on. Measuring the reliability of LDs under QCW conditions takes a rather long time; alternatively, an acceleration model can provide a quicker estimate of LD lifetime under QCW operation. In this report, diode laser bars were mounted on micro-channel coolers (MCC) and operated under QCW conditions with different current densities and junction temperatures (Tj). The junction temperature is varied by modulating pulse width and repetition frequency. The major concern here is the power degradation due to facet failure. Reliability models for QCW operation and its corresponding failures are studied. In conclusion, a QCW accelerated lifetime model is discussed, with a few variable parameters, and it is compared with the CW model to find their relationship.
Applications of crude incidence curves.
Korn, E L; Dorey, F J
1992-04-01
Crude incidence curves display the cumulative number of failures of interest as a function of time. With competing causes of failure, they are distinct from cause-specific incidence curves that treat secondary types of failures as censored observations. After briefly reviewing their definition and estimation, we present five applications of crude incidence curves to show their utility in a broad range of studies. In some of these applications it is helpful to model survival-time distributions with use of two different time metameters, for example, time from diagnosis and age of the patient. We describe how one can incorporate published vital statistics into the models when secondary types of failure correspond to common causes of death.
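A minimal implementation of the crude (cumulative) incidence estimator with competing causes is sketched below; secondary causes reduce the risk set through the all-cause Kaplan-Meier factor rather than being treated as failures of the cause of interest. The toy data are made up.

```python
import numpy as np

def crude_incidence(times, causes, cause_of_interest):
    """Nonparametric crude incidence curve with competing risks.

    times  : failure/censoring times
    causes : 0 = censored, 1, 2, ... = cause of failure
    Returns event times and the cumulative incidence of `cause_of_interest`,
    using CIF(t) = sum over event times u <= t of S(u-) * d_cause(u) / n(u),
    where S is the Kaplan-Meier estimate of overall (all-cause) survival.
    """
    order = np.argsort(times)
    times, causes = np.asarray(times)[order], np.asarray(causes)[order]
    n = len(times)
    at_risk = n
    surv_prev = 1.0                               # S(u-), all-cause survival just before u
    grid, cif = [0.0], [0.0]
    for i in range(n):
        if causes[i] == cause_of_interest:
            cif.append(cif[-1] + surv_prev / at_risk)
            grid.append(times[i])
        if causes[i] != 0:
            surv_prev *= (1.0 - 1.0 / at_risk)    # update all-cause KM after any failure
        at_risk -= 1                              # remove subject i from the risk set
    return np.array(grid), np.array(cif)

# Small synthetic example: cause 1 = failure of interest, cause 2 = competing failure.
t = [2, 3, 3.5, 5, 6, 7, 9, 10, 12, 15]
c = [1, 2, 1, 0, 2, 1, 0, 1, 2, 0]
grid, cif = crude_incidence(t, c, cause_of_interest=1)
print(np.c_[grid, cif])
```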
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
The Inclusion of Arbitrary Load Histories in the Strength Decay Model for Stress Rupture
NASA Technical Reports Server (NTRS)
Reeder, James R.
2014-01-01
Stress rupture is a failure mechanism where failures can occur after a period of time, even though the material has seen no increase in load. Carbon/epoxy composite materials have demonstrated the stress rupture failure mechanism. In a previous work, a model was proposed for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures based on strength degradation. However, the original model was limited to constant load periods (holds) at constant load. The model was expanded in this paper to address arbitrary loading histories and specifically the inclusion of ramp loadings up to holds and back down. The broadening of the model allows failures on loading to be treated as any other failure that may occur during testing, instead of having to be treated as a special case. The inclusion of ramps can also influence the length of the "safe period" following proof loading that was previously predicted by the model. No stress rupture failures are predicted in a safe period because time is required for strength to decay from above the proof level to the lower level of loading. Although the model can predict failures during the ramp periods, no closed-form solution for the failure times could be derived; therefore, two solution techniques were proposed. Finally, the model was used to design an experiment that could detect the difference between the strength decay model and a commonly used model for stress rupture. Although these types of models are necessary to help guide experiments for stress rupture, only experimental evidence will determine how well the model may predict actual material response. If the model can be shown to be accurate, current proof loading requirements may result in predicted safe periods as long as 10^13 years. COPV design requirements for stress rupture may then be relaxed, allowing more efficient designs while still maintaining an acceptable level of safety.
Development of failure model for nickel cadmium cells
NASA Technical Reports Server (NTRS)
Gupta, A.
1980-01-01
The development of a method for the life prediction of nickel cadmium cells is discussed. The approach described involves acquiring an understanding of the mechanisms of degradation and failure and at the same time developing nondestructive evaluation techniques for the nickel cadmium cells. The development of a statistical failure model which will describe the mechanisms of degradation and failure is outlined.
The Influence of a High Salt Diet on a Rat Model of Isoproterenol-Induced Heart Failure
Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4 weeks) isoproterenol (ISO) infusion exacerbated cardiomyopathy in Spontaneously Hypertensive Heart Failure (SHHF) rats. Others have shown...
A RAT MODEL OF HEART FAILURE INDUCED BY ISOPROTERENOL AND A HIGH SALT DIET
Rat models of heart failure (HF) show varied pathology and time to disease outcome, dependent on induction method. We found that subchronic (4wk) isoproterenol (ISO) infusion in Spontaneously Hypertensive Heart Failure (SHHF) rats caused cardiac injury with minimal hypertrophy. O...
Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott
2008-01-01
A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
On a Stochastic Failure Model under Random Shocks
NASA Astrophysics Data System (ADS)
Cha, Ji Hwan
2013-02-01
In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
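The model can be checked numerically by simulation: shocks arrive from a nonhomogeneous Poisson process and each independently triggers a failure after a random delay, the system failing at the earliest triggered time. The sketch below uses thinning for the NHPP and an exponential delay purely for illustration; the paper's results are analytical.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_failure_time(T_max=50.0, rate=lambda t: 0.1 + 0.02 * t,
                          rate_bound=1.2, delay_mean=5.0):
    """One realization of the delayed-failure shock model.

    Shocks arrive from a nonhomogeneous Poisson process with intensity rate(t)
    (simulated by thinning); shock j triggers a system failure at time s_j + D_j,
    with D_j an independent random delay (exponential here, purely illustrative).
    The system fails at the earliest such triggered time.
    """
    t, failure_time = 0.0, np.inf
    while t < T_max:
        t += rng.exponential(1.0 / rate_bound)            # candidate arrival (thinning)
        if t >= T_max:
            break
        if rng.random() < rate(t) / rate_bound:           # accept with prob rate(t)/bound
            delay = rng.exponential(delay_mean)
            failure_time = min(failure_time, t + delay)
    return failure_time

samples = np.array([simulate_failure_time() for _ in range(20_000)])
for u in (5.0, 10.0, 20.0, 40.0):
    print(f"estimated survival P(T > {u:4.0f}) = {(samples > u).mean():.3f}")
```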
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these approaches into a framework based on a simple, generalized rate-change formulation and applied it to both to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault. In the first application, the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
76 FR 8661 - Airworthiness Directives; Lycoming Engines, Fuel Injected Reciprocating Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
... engine models requiring inspections. We are proposing this AD to prevent failure of the fuel injector... repetitive inspection compliance time. We issued that AD to prevent failure of the fuel injector fuel lines... engine models requiring inspection. We are issuing this AD to prevent failure of the fuel injector fuel...
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, the gyro and accelerometer failure rates together, false alarms, the probability of failure detection, the probability of failure isolation, the probability of damage effects, and the mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
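The paper's 27-state model is too large to reproduce here, but the mechanics of a Markov reliability evaluation can be shown on a toy three-state analogue (fully operational, one failure detected and isolated, system failure) solved with a matrix exponential. The failure rate and isolation probability below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state Markov reliability model (a stand-in for the paper's 27-state RSDIMU model).
# States: 0 = fully operational, 1 = one sensor failed and correctly isolated (fail-op),
#         2 = system failure (absorbing).
lam = 1.0e-4          # per-hour sensor failure rate (hypothetical)
p_isolate = 0.95      # probability a failure is detected and isolated correctly

# Generator matrix Q (rows sum to zero); unisolated failures go straight to state 2.
Q = np.array([
    [-2 * lam,  2 * lam * p_isolate,  2 * lam * (1 - p_isolate)],
    [0.0,      -lam,                  lam                      ],
    [0.0,       0.0,                  0.0                      ],
])

p0 = np.array([1.0, 0.0, 0.0])
for hours in (10.0, 100.0, 1000.0):
    p_t = p0 @ expm(Q * hours)          # state probabilities at the mission time
    reliability = 1.0 - p_t[2]          # probability the system has not failed
    print(f"t = {hours:6.0f} h: reliability = {reliability:.6f}")
```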
Rasmussen, Gregers Brünnich; Håkansson, Katrin E; Vogelius, Ivan R; Rasmussen, Jacob H; Friborg, Jeppe T; Fischer, Barbara M; Schumaker, Lisa; Cullen, Kevin; Therkildsen, Marianne H; Bentzen, Søren M; Specht, Lena
2017-11-01
To identify a failure site-specific prognostic model by combining immunohistochemistry (IHC) and molecular imaging information to predict long-term failure type in squamous cell carcinoma of the head and neck. Tissue microarray blocks of 196 head and neck squamous cell carcinoma cases were stained for a panel of biomarkers using IHC. Gross tumor volume (GTV) from the PET/CT radiation treatment planning CT scan, maximal standardized uptake value (SUVmax) of fludeoxyglucose (FDG), and clinical information were included in the model building using Cox proportional hazards models, stratified for p16 status in oropharyngeal carcinomas. Separate models were built for time to locoregional failure and time to distant metastasis. Higher-than-median p53 expression on IHC tended to be a risk factor for locoregional failure but was protective for distant metastasis (χ² for difference, p = .003). The final model for locoregional failure included p53 (HR: 1.9; p: .055), concomitant cisplatin (HR: 0.41; p: .008), β-tubulin-1 (HR: 1.8; p: .08), β-tubulin-2 (HR: 0.49; p: .057) and SUVmax (HR: 2.1; p: .046). The final model for distant metastasis included p53 (HR: 0.23; p: .025), Bcl-2 (HR: 2.6; p: .08), SUVmax (HR: 3.5; p: .095) and GTV (HR: 1.7; p: .063). The models successfully distinguished between the risk of locoregional failure and the risk of distant metastasis, which is important information for clinical decision-making. High p53 expression has opposite prognostic effects for the two endpoints, increasing the risk of locoregional failure but decreasing the risk of metastatic failure; external validation of this finding is needed.
Fournier, Marie-Cécile; Foucher, Yohann; Blanche, Paul; Buron, Fanny; Giral, Magali; Dantan, Etienne
2016-05-01
In renal transplantation, serum creatinine (SCr) is the main biomarker routinely measured to assess patient health, with chronic increases being strongly associated with long-term graft failure risk (death with a functioning graft or return to dialysis). Joint modeling may be useful to identify the specific role of risk factors in the chronic evolution of kidney transplant recipients: some can be related to the SCr evolution, finally leading to graft failure, whereas others can be associated with graft failure without any modification of SCr. Sample data for 2749 patients transplanted between 2000 and 2013 with a functioning kidney at 1-year post-transplantation were obtained from the DIVAT cohort. A shared random effect joint model for longitudinal SCr values and time to graft failure was fitted. We show that graft failure risk depended on both the current value and the slope of the SCr. Patients with a deceased-donor graft seemed to have a higher SCr increase, similar to patients with a history of diabetes, while no significant association of these two features with graft failure risk was found. Patients with a second graft were at higher risk of graft failure, independent of changes in SCr values. Anti-HLA immunization was associated with both processes simultaneously. Joint models for repeated and time-to-event data bring new opportunities to improve the epidemiological knowledge of chronic diseases. For instance, in renal transplantation, several features should receive additional attention, as we demonstrated that their correlation with graft failure risk was independent of the SCr evolution.
Eslami, Mohammad H; Zhu, Clara K; Rybin, Denis; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik
2016-08-01
Native arteriovenous fistulas (AVFs) have a high 1-year failure rate, leading to a need for secondary procedures. We set out to create a predictive model of early failure in patients undergoing first-time AVF creation, to identify failure-associated factors and stratify initial failure risk. The Vascular Study Group of New England (VSGNE) (2010-2014) was queried to identify patients undergoing first-time AVF creation. Patients with early (within 3 months postoperation) AVF failure (EF) or no failure (NF) were compared, failure being defined as any AVF that could not be used for dialysis. A multivariate logistic regression predictive model of EF based on perioperative clinical variables was created. Backward elimination with an alpha level of 0.2 was used to create a parsimonious model. We identified 376 first-time AVF patients with follow-up data available in VSGNE. The EF rate was 17.5%. Patients in the EF group had lower rates of hypertension (80.3% vs. 93.2%, P = 0.003) and diabetes (47.0% vs. 61.3%, P = 0.039). EF patients were also more likely to have radial artery inflow (57.6% vs. 38.4%, P = 0.011) and forearm cephalic vein outflow (57.6% vs. 36.5%, P = 0.008). Additionally, the EF group was noted to have significantly smaller mean diameters of the target artery (3.1 ± 0.9 vs. 3.6 ± 1.1, P = 0.002) and vein (3.1 ± 0.7 vs. 3.6 ± 0.9, P < 0.001). Multivariate analyses revealed that hypertension, diabetes, and a vein larger than 3 mm were protective against EF (P < 0.05). The discriminating ability of this model was good (C-statistic = 0.731) and the model fitted the data well (Hosmer-Lemeshow P = 0.149). β-estimates of significant factors were used to create a point system and assign probabilities of EF. We developed a simple model that robustly predicts first-time AVF EF and suggests that anatomical and clinical factors directly affect early AVF outcomes. The risk score has the potential to be used in clinical settings to stratify risk and make informed follow-up plans for AVF patients. Copyright © 2016 Elsevier Inc. All rights reserved.
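The general recipe described above (fit a multivariate logistic model, then scale and round the β-estimates into an integer point score) can be sketched as follows on synthetic data; the variable names mirror those reported, but the data, coefficients, and resulting points are not the VSGNE results.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Synthetic cohort (NOT the VSGNE data): binary predictors of early AVF failure.
n = 1000
hypertension = rng.binomial(1, 0.9, n)
diabetes = rng.binomial(1, 0.6, n)
vein_gt_3mm = rng.binomial(1, 0.5, n)
X = np.column_stack([np.ones(n), hypertension, diabetes, vein_gt_3mm])
# True (made-up) log-odds: protective effects, matching the direction reported above.
eta = X @ np.array([0.2, -1.0, -0.5, -0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def neg_loglik(beta):
    z = X @ beta
    return -np.sum(y * z - np.log1p(np.exp(z)))   # Bernoulli log-likelihood

beta_hat = minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS").x

# Convert beta-estimates into an integer point score (one common convention:
# scale by the smallest absolute non-intercept coefficient and round).
coefs = beta_hat[1:]
points = np.round(coefs / np.abs(coefs).min()).astype(int)
print("coefficients:", np.round(coefs, 2), "-> points:", points)

# Predicted probability of early failure for a given risk profile.
profile = np.array([1, 0, 0, 1])                  # no HTN, no diabetes, vein > 3 mm
p_fail = 1.0 / (1.0 + np.exp(-(profile @ beta_hat)))
print(f"predicted early-failure probability for this profile: {p_fail:.2f}")
```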
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based and entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
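A toy version of such signal-based monitoring is sketched below: each channel is compared against a predetermined mean and standard deviation, and an anomaly is declared only after several consecutive out-of-band samples to limit false alarms. The channels, limits, and persistence rule are illustrative, not the SAFD algorithm's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def safd_like_monitor(signals, means, stds, k=4.0, persistence=3):
    """Flag an anomaly when a channel stays outside mean +/- k*sigma
    for `persistence` consecutive samples (to reduce false alarms).

    signals : array of shape (n_samples, n_channels)
    Returns, per channel, the first sample index at which an anomaly is declared (or -1).
    """
    outside = np.abs(signals - means) > k * stds
    first_alarm = np.full(signals.shape[1], -1)
    run = np.zeros(signals.shape[1], dtype=int)
    for i, row in enumerate(outside):
        run = np.where(row, run + 1, 0)
        newly = (run >= persistence) & (first_alarm < 0)
        first_alarm[newly] = i
    return first_alarm

# Two synthetic engine channels; channel 1 drifts away from nominal after sample 600.
n = 1000
nominal_mean, nominal_std = np.array([1500.0, 420.0]), np.array([12.0, 5.0])
data = nominal_mean + nominal_std * rng.standard_normal((n, 2))
data[600:, 1] += np.linspace(0.0, 60.0, n - 600)     # simulated degradation

print("first alarm sample per channel:", safd_like_monitor(data, nominal_mean, nominal_std))
```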
Investigation of advanced fault insertion and simulator methods
NASA Technical Reports Server (NTRS)
Dunn, W. R.; Cottrell, D.
1986-01-01
The cooperative agreement partly supported research leading to the open-literature publication cited. Additional efforts under the agreement included research into fault modeling of semiconductor devices. Results of this research are presented in this report and summarized in the following paragraphs. As a result of the cited research, it appears that semiconductor failure mechanism data are abundant but of little use in developing pin-level device models. Failure mode data, on the other hand, do exist but are too sparse to be of any statistical use in developing fault models. What is significant in the failure mode data is that, unlike classical logic, MSI and LSI devices exhibit more than 'stuck-at' and open/short failure modes. Specifically, they are dominated by parametric failures and functional anomalies that can include intermittent faults and multiple-pin failures. The report discusses methods of developing composite pin-level models based on extrapolation of semiconductor device failure mechanisms, failure modes, results of temperature stress testing, and functional modeling. Limitations of this model, particularly with regard to determination of fault-detection coverage and latency-time measurement, are discussed. Indicated research directions are presented.
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
Reliability Growth in Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
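One common mathematical reliability growth model of the kind referred to above is the Crow-AMSAA (power-law NHPP) model, in which a shape parameter below one indicates a decreasing failure intensity. A sketch of its closed-form maximum likelihood estimates on hypothetical failure times follows; it is not the specific model or data used in the paper.

```python
import numpy as np

def crow_amsaa(failure_times, T):
    """Crow-AMSAA (power-law NHPP) MLE for time-truncated data observed on [0, T].

    Expected cumulative failures: N(t) = lam * t**beta.
    beta < 1  -> decreasing failure intensity (reliability growth)
    beta ~ 1  -> constant failure rate
    beta > 1  -> increasing failure intensity (wear-out)
    """
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T ** beta
    intensity_at_T = lam * beta * T ** (beta - 1)     # current (instantaneous) failure rate
    return beta, lam, intensity_at_T

# Hypothetical failure times (operating hours) for an upgraded subsystem.
times = [35, 110, 260, 500, 900, 1700, 3100, 5400]
beta, lam, rho_T = crow_amsaa(times, T=6000.0)
print(f"beta = {beta:.2f} (<1 indicates reliability growth), "
      f"current failure intensity = {rho_T:.5f} per hour")
```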
Light water reactor lower head failure analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rempe, J.L.; Chavez, S.A.; Thinnes, G.L.
1993-10-01
This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and for examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
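The following minimal sketch illustrates the "constant delay times" case by Monte Carlo for a single WL and a single SL. The precursor-time distributions, delay values, and the simplified definition of loss of assured safety (the SL failing before the WL) are all assumptions for illustration, not the report's formal representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # Monte Carlo samples

# Illustrative precursor-occurrence-time distributions (Weibull), not the report's CDFs.
t_wl_precursor = rng.weibull(2.0, n) * 50.0   # weak link precursor times
t_sl_precursor = rng.weibull(2.0, n) * 60.0   # strong link precursor times

# Constant delay between precursor occurrence and actual link failure.
delay_wl, delay_sl = 5.0, 2.0
t_wl_fail = t_wl_precursor + delay_wl
t_sl_fail = t_sl_precursor + delay_sl

# Simplified definition for one WL and one SL: loss of assured safety
# occurs if the strong link fails before the weak link.
ploas = np.mean(t_sl_fail < t_wl_fail)
print(f"Estimated PLOAS = {ploas:.4f}")
```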
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, since pipelines transport corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodical inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and generation of corrosion defects over time is characterized by a homogeneous Poisson process. The unknown parameters of the growth models are estimated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well and that a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (referred to as a sub-system), where each sub-system is considered as a series system of detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine which of the parameters incorporated in the growth models the reliability of the studied pipeline is most sensitive to. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when the failure probability from any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared to the inspection and failure costs.
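The sketch below illustrates the power-law defect-growth idea with a simple least-squares fit and a crude Monte Carlo failure probability. The thesis itself uses Bayesian updating with MCMC and system reliability per km; the ILI depths, initiation time, wall thickness, and scatter model here are hypothetical.

```python
import numpy as np

# Hypothetical ILI defect-depth measurements (mm) at inspection times (years since installation).
t_insp = np.array([8.0, 12.0, 16.0, 20.0])
depth = np.array([0.9, 1.4, 1.9, 2.3])
t0 = 5.0   # assumed corrosion initiation time (years)

# Power-law growth d(t) = a * (t - t0)**b, fitted here by ordinary least squares on logs
# (a simplification of the Bayesian MCMC estimation used in the thesis).
b, log_a = np.polyfit(np.log(t_insp - t0), np.log(depth), 1)
a = np.exp(log_a)

# Crude failure probability at a future time: P[d(t) > 0.8 * wall thickness],
# with lognormal scatter standing in for model and measurement uncertainty.
rng = np.random.default_rng(1)
wall = 6.0          # wall thickness (mm), hypothetical
t_future = 30.0
d_pred = a * (t_future - t0) ** b * rng.lognormal(0.0, 0.15, 100_000)
print(f"a = {a:.3f}, b = {b:.2f}, "
      f"P(depth > 0.8*wall at {t_future:.0f} y) = {np.mean(d_pred > 0.8 * wall):.4f}")
```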
On rate-state and Coulomb failure models
Gomberg, J.; Beeler, N.; Blanpied, M.
2000-01-01
We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a non-Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
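A minimal numerical illustration of the Coulomb property discussed above: the clock advance is simply the stress step divided by the tectonic stressing rate and is independent of when in the cycle the step is applied. The numbers are hypothetical.

```python
def coulomb_clock_advance(stress_step_mpa: float, stressing_rate_mpa_per_yr: float) -> float:
    """Clock advance (years) for a Coulomb failure model: dt = d(tau) / (stressing rate).

    Independent of the time t0 at which the step is applied, which is the property
    contrasted with rate-state behavior in the abstract above.
    """
    return stress_step_mpa / stressing_rate_mpa_per_yr

# Hypothetical values: a 0.1 MPa coseismic stress step on a fault loaded at 0.01 MPa/yr.
print(f"Coulomb clock advance = {coulomb_clock_advance(0.1, 0.01):.1f} years")
```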
NASA Astrophysics Data System (ADS)
Manconi, A.; Giordan, D.
2015-07-01
We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening the Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time over different temporal windows and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of the landslide emergency.
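A minimal sketch of the inverse velocity approach described above: 1/velocity is fitted with a straight line and extrapolated to zero to estimate the failure time, with a bootstrap giving a rough confidence interval. The displacement-velocity series is synthetic, not La Saxe monitoring data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic velocity series (mm/day) accelerating toward a known failure time.
t = np.arange(0.0, 30.0, 1.0)                      # days
true_tf = 35.0
v = 400.0 / (true_tf - t) * rng.lognormal(0.0, 0.1, t.size)

# Inverse velocity method: 1/v decreases roughly linearly toward zero at the failure
# time, so t_f is estimated from the zero crossing of a straight-line fit.
def forecast_failure_time(t, v):
    slope, intercept = np.polyfit(t, 1.0 / v, 1)
    return -intercept / slope

tf_hat = forecast_failure_time(t, v)

# Simple bootstrap confidence interval on the forecast failure time.
boot = []
for _ in range(2000):
    idx = rng.integers(0, t.size, t.size)
    boot.append(forecast_failure_time(t[idx], v[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Forecast failure time: {tf_hat:.1f} days (95% CI {lo:.1f}-{hi:.1f}), true value {true_tf}")
```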
Micromechanics of failure waves in glass. 2: Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa, H.D.; Xu, Y.; Brar, N.S.
1997-08-01
In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10^6 s^-1. The agreement is believed to be the result of the model capability in simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
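A toy sketch of the idea of timing-consistent source identification on a fault propagation digraph: each edge carries a propagation time interval, and a candidate source is kept only if it can explain every observed alarm time. The node names, intervals, alarm times, and the single-fault, one-hop simplification are all hypothetical; the paper's restartable graph algorithms are more general.

```python
# Minimal sketch of timing-consistent fault-source identification on a fault
# propagation digraph; node names, edge intervals and alarm times are hypothetical.
propagation = {                      # edge: (min_delay, max_delay) in seconds
    ("valve", "pressure_sensor"): (1.0, 3.0),
    ("valve", "flow_sensor"): (2.0, 5.0),
    ("pump", "flow_sensor"): (0.5, 2.0),
}
alarms = {"pressure_sensor": 2.0, "flow_sensor": 4.0}   # observed alarm times

def consistent_sources(propagation, alarms):
    """Return candidate failure sources whose one-hop propagation intervals
    can explain every observed alarm time (single-fault assumption)."""
    sources = {src for src, _ in propagation}
    candidates = []
    for src in sources:
        ok = True
        for node, t_alarm in alarms.items():
            interval = propagation.get((src, node))
            if interval is None or not (interval[0] <= t_alarm <= interval[1]):
                ok = False
                break
        if ok:
            candidates.append(src)
    return candidates

print(consistent_sources(propagation, alarms))   # -> ['valve']
```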
NASA Astrophysics Data System (ADS)
Su, Po-Cheng; Hsu, Chun-Chi; Du, Sin-I.; Wang, Tahui
2017-12-01
Read operation induced disturbance of the SET-state in a tungsten oxide resistive switching memory is investigated. We observe that the reduction of oxygen vacancy density during read-disturb follows a power-law dependence on cumulative read-disturb time. Our study shows that the SET-state read-disturb immunity progressively degrades by orders of magnitude as the SET/RESET cycle number increases. To explore the cause of the read-disturb degradation, we perform a constant voltage stress to emulate high-field stress effects in SET/RESET cycling. We find that the read-disturb failure time degradation is attributed to high-field stress-generated oxide traps. Since the stress-generated traps may substitute for some of the oxygen vacancies in forming conductive percolation paths in a switching dielectric, a stressed cell has a reduced oxygen vacancy density in the SET-state, which in turn results in a shorter read-disturb failure time. We develop an analytical read-disturb degradation model including both cycling-induced oxide trap creation and read-disturb-induced oxygen vacancy reduction. Our model reproduces well the measured read-disturb failure time degradation in a cycled cell without using fitting parameters.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components, along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure, were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) are also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, the inclusion of new sensor detection of faults, and other upgrades were evaluated in determining overall turbine engine reliability.
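The sketch below shows one simple way such component-level failure time distributions and mitigation factors can be combined into a system-level loss-of-mission (LOM) probability by Monte Carlo. The component names, distributions, and mitigation probabilities are hypothetical, and the series logic used here is an assumption, not the QRAS model from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
mission_hours = 500.0

# Hypothetical component failure-time models and mitigation factors (probability that a
# component failure is contained and does NOT propagate to loss of mission).
components = {
    #  name:         (failure-time sampler,                               mitigation prob.)
    "turbine_blade": (lambda: rng.weibull(3.0, n) * 2000.0,               0.90),
    "controller":    (lambda: rng.exponential(8000.0, n),                 0.50),
    "fuel_pump":     (lambda: rng.lognormal(np.log(5000.0), 0.5, n),      0.70),
}

lom_time = np.full(n, np.inf)
for name, (sampler, mitigation) in components.items():
    t_fail = sampler()
    propagates = rng.random(n) > mitigation        # failure escapes mitigation
    t_lom = np.where(propagates, t_fail, np.inf)   # only unmitigated failures cause LOM
    lom_time = np.minimum(lom_time, t_lom)         # series logic: earliest unmitigated failure

p_lom = np.mean(lom_time < mission_hours)
print(f"P(loss of mission within {mission_hours:.0f} h) = {p_lom:.4f}")
```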
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that the system-level reliability modeling would constitute inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of the PoF-based system reliability modeling, new approaches to the learning and the autonomy properties of the intelligent agents, and modeling interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
NASA Astrophysics Data System (ADS)
Jayawardena, Adikaramge Asiri
The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and thermal resistance were identified as good indicators of contact failure of LED packages. This study investigated the long-term changes in series electrical resistance and thermal resistance of LED packages at three different current and junction temperature stress conditions. Experiment results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for the series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance is dependent on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open circuit failure of the contact metal layer. Short circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increase in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimation of LED package failure time due to metal diffusion. The model is based on the experimental results and theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship between the thermal resistance change rate and the case temperature and temperature gradient within the LED package was developed. The results showed that dislocation creep is responsible for creep-induced plastic deformation in the die-attach solder. The temperatures inside the LED package reached the melting point of the die-attach solder due to delamination just before catastrophic open circuit failure. A combined model that can estimate the life of LED packages based on catastrophic failure of thermal and electrical contacts is presented for the first time. This model can be used to make a priori or real-time estimates of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings from this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.
Zhang, Xu; Zhang, Mei-Jie; Fine, Jason
2012-01-01
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
Modeling the roles of damage accumulation and mechanical healing on rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Fan, Linfeng; Lehmann, Peter; Or, Dani
2014-05-01
The abrupt release of rainfall-induced shallow landslides is preceded by local failures that may abruptly coalesce and form a continuous failure plane within a hillslope. The mechanical status of hillslopes reflects a competition between the severity of local damage accumulated during prior rainfall events and the rate of mechanical healing (i.e., regaining of strength) by closure of micro-cracks, regrowth of roots, etc. The interplay of these processes affects the initial conditions for landslide modeling and shapes potential failure patterns during future rainfall events. We incorporated these competing mechanical processes in a hydro-mechanical landslide triggering model subjected to a sequence of rainfall scenarios. The model employs the fiber bundle model (FBM), with bundles of fibers of prescribed strength thresholds linking adjacent soil columns and linking soil to bedrock. Prior damage was represented by a fraction of fibers broken during previous rainfall events, and the healing of broken fibers was described by strength-regaining models for soil and roots at different characteristic time scales. Results show that prior damage and healing introduce a highly nonlinear response to landslide triggering. For small prior damage, mechanical bonds at the soil-bedrock interface may fail early in the next rainfall event but lead to small perturbations of lateral bonds without triggering a landslide. For more severe damage weakening lateral bonds, excess load due to failure at the soil-bedrock interface accumulates at downslope soil columns, resulting in early soil failure with patterns strongly correlated with the prior damage distribution. Increasing prior damage over the hillslope decreases the volume of the first landslide and prolongs the time needed to trigger the second landslide due to mechanical relaxation of the system. The mechanical healing of fibers diminishes the effects of prior damage on the time of failure and shortens the waiting time between the first and second landslides. These findings highlight the need to improve the definition of initial conditions and the shortcomings of assuming pristine hillslopes.
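As a minimal illustration of the fiber bundle model (FBM) invoked above, the sketch below computes the peak sustainable load of an equal-load-sharing bundle with uniform strength thresholds. The hillslope model uses bonds between soil columns and different threshold distributions; this is only the generic FBM mechanism.

```python
import numpy as np

def fbm_failure_load(n_fibers=10_000, seed=4):
    """Equal-load-sharing fiber bundle model: return the peak load per original fiber
    the bundle sustains before catastrophic failure. Thresholds are uniform(0, 1) here;
    the landslide model in the abstract uses bonds between soil columns instead."""
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
    # If the k weakest fibers have broken, the remaining (n - k) intact fibers all have
    # thresholds >= thresholds[k], so the bundle still holds a total load of
    # thresholds[k] * (n - k) at that step; the bundle strength is the maximum over k.
    k = np.arange(n_fibers)
    sustainable = thresholds * (n_fibers - k)
    return sustainable.max() / n_fibers

print(f"Critical load per fiber ~= {fbm_failure_load():.3f}  (theory: 0.25 for uniform thresholds)")
```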
North, Frederick; Fox, Samuel; Chaudhry, Rajeev
2016-07-20
Risk calculation is increasingly used in lipid management, congestive heart failure, and atrial fibrillation. The risk scores are then used for decisions about statin use, anticoagulation, and implantable defibrillator use. Calculating risks for patients and making decisions based on these risks is often done at the point of care and is an additional time burden for clinicians that can be decreased by automating the tasks and using clinical decision-making support. Using Morae Recorder software, we timed 30 healthcare providers tasked with calculating the overall risk of cardiovascular events, sudden death in heart failure, and thrombotic event risk in atrial fibrillation. Risk calculators used were the American College of Cardiology Atherosclerotic Cardiovascular Disease risk calculator (AHA-ASCVD risk), Seattle Heart Failure Model (SHFM risk), and CHA2DS2VASc. We also timed the 30 providers using Ask Mayo Expert care process models for lipid management, heart failure management, and atrial fibrillation management based on the calculated risk scores. We used the Mayo Clinic primary care panel to estimate time for calculating an entire panel risk. Mean provider times to complete the CHA2DS2VASc, AHA-ASCVD risk, and SHFM were 36, 45, and 171 s respectively. For decision making about atrial fibrillation, lipids, and heart failure, the mean times (including risk calculations) were 85, 110, and 347 s respectively. Even under best case circumstances, providers take a significant amount of time to complete risk assessments. For a complete panel of patients this can lead to hours of time required to make decisions about prescribing statins, use of anticoagulation, and medications for heart failure. Informatics solutions are needed to capture data in the medical record and serve up automatically calculated risk assessments to physicians and other providers at the point of care.
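For reference, the sketch below implements the standard CHA2DS2-VASc scoring rules mentioned above (CHF 1, hypertension 1, age >= 75 two points, diabetes 1, prior stroke/TIA/thromboembolism 2, vascular disease 1, age 65-74 one point, female sex 1). It is a generic illustration of the calculation being timed, not the Mayo Clinic tool or a clinical decision aid.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular_disease, female):
    """Standard CHA2DS2-VASc score: CHF(1), HTN(1), Age>=75(2), DM(1),
    prior Stroke/TIA/thromboembolism(2), Vascular disease(1), Age 65-74(1), female Sex(1)."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# Example: a 72-year-old woman with hypertension and diabetes scores 4.
print(cha2ds2_vasc(chf=False, hypertension=True, age=72, diabetes=True,
                   stroke_tia=False, vascular_disease=False, female=True))
```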
Functional correlation approach to operational risk in banking organizations
NASA Astrophysics Data System (ADS)
Kühn, Reimer; Neu, Peter
2003-05-01
A Value-at-Risk-based model is proposed to compute the adequate equity capital necessary to cover potential losses due to operational risks, such as human and system process failures, in banking organizations. Exploring the analogy to a lattice gas model from physics, correlations between sequential failures are modeled as functionally defined, heterogeneous couplings between mutually supportive processes. In contrast to traditional risk models for market and credit risk, where correlations are described as equal-time correlations by a covariance matrix, the dynamics of the model shows collective phenomena such as bursts and avalanches of process failures.
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for an automated, bufferless serial production system that consists of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line have been developed using a Markov process. The strength of this study is in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment have been used to estimate reliability and maintainability for each machine, each workstation, and the entire line based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults and suggests optimal modifications to upgrade the system and improve its performance.
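Under the exponential failure/repair assumption stated above, the steady-state availability of one workstation is mu / (lambda + mu), and for a bufferless series line every workstation must be up, so the line availability is the product of the workstation availabilities. The rates below are hypothetical, not the plant's data.

```python
import numpy as np

# Hypothetical per-workstation failure and repair rates (per hour), not the plant's data.
failure_rate = np.array([0.010, 0.006, 0.012, 0.008, 0.015, 0.005])   # lambda_i
repair_rate = np.array([0.50, 0.40, 0.60, 0.45, 0.55, 0.50])          # mu_i

# With exponential times to failure and to repair, steady-state availability of one
# workstation is A_i = mu_i / (lambda_i + mu_i); for a bufferless series line every
# workstation must be up, so the line availability is the product of the A_i.
A_i = repair_rate / (failure_rate + repair_rate)
A_line = np.prod(A_i)
print("Workstation availabilities:", np.round(A_i, 4))
print(f"Line availability: {A_line:.4f}")
```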
NASA Technical Reports Server (NTRS)
Brinson, R. F.
1985-01-01
A method for lifetime or durability predictions for laminated fiber-reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature-superposition principle for polymers. The method is better described as an analytical adaptation of time-stress-superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time-dependent failure models are discussed and are related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between predictions and experiment is presented using data from creep tests of about two months' duration.
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.
2011-08-01
Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including the design, operation and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
The failure of earthquake failure models
Gomberg, J.
2001-01-01
In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
Failure Modes in Capacitors When Tested Under a Time-Varying Stress
NASA Technical Reports Server (NTRS)
Liu, David (Donhang)
2011-01-01
Steady step surge testing (SSST) is widely applied to screen out potential power-on failures in solid tantalum capacitors. The test simulates the power supply's on and off characteristics. Power-on failure has been the prevalent failure mechanism for solid tantalum capacitors in decoupling applications. On the other hand, the SSST can also be viewed as an electrically destructive test under a time-varying stress. It consists of rapidly charging the capacitor with incremental voltage increases, through a low resistance in series, until the capacitor under test is electrically shorted. Highly accelerated life testing (HALT) is usually a time-efficient method for determining the failure mechanism in capacitors; however, a destructive test under a time-varying stress like the SSST is even more effective. It normally takes days to complete a HALT test, but it only takes minutes for a time-varying stress test to produce failures. The advantage of incorporating a specific time-varying stress into a statistical model is significant in providing an alternative life test method for quickly revealing the failure modes in capacitors. In this paper, a time-varying stress has been incorporated into the Weibull model to characterize the failure modes. The SSST circuit and the transient conditions needed to correctly test the capacitors are discussed. Finally, the SSST was applied to test polymer aluminum capacitors (PA capacitors), Ta capacitors, and multi-layer ceramic capacitors with both precious-metal electrodes (PME) and base-metal electrodes (BME). It appears that the test results are directly associated with the dielectric layer breakdown in PA and Ta capacitors and are independent of the capacitor values, the way the capacitors are built, and the manufacturers. The test results also reveal that ceramic capacitors exhibit breakdown voltages more than 20 times the rated voltage, and that the breakdown voltages are inversely proportional to the dielectric layer thickness. The possibility of using ceramic capacitors in front-end decoupling applications to block surge noise from a power supply is also discussed.
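As a simplified sketch of the statistical side of such testing, the code below fits a two-parameter Weibull distribution to a set of breakdown voltages and evaluates a survival probability. The voltages are hypothetical, and the fit does not include the time-varying stress term that the paper incorporates into its Weibull model.

```python
import numpy as np
from scipy import stats

# Hypothetical SSST breakdown voltages (V) for one capacitor lot, not measured data.
breakdown_v = np.array([41.2, 44.8, 46.5, 47.1, 48.3, 49.0, 50.2, 51.6, 52.4, 54.1])

# Two-parameter Weibull fit (location fixed at 0): shape beta and scale eta.
beta, loc, eta = stats.weibull_min.fit(breakdown_v, floc=0)
print(f"Weibull shape (beta) = {beta:.1f}, scale (eta) = {eta:.1f} V")

# Probability that a part survives a given surge voltage step under the fitted model.
v_step = 45.0
p_survive = stats.weibull_min.sf(v_step, beta, loc=0, scale=eta)
print(f"P(breakdown voltage > {v_step} V) = {p_survive:.3f}")
```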
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
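The Brownian passage-time distribution is the inverse Gaussian distribution, so its hazard rate can be evaluated directly; the sketch below does this for hypothetical recurrence parameters (mean recurrence 200 years, aperiodicity 0.5) using the common IG(mu, lambda) parameterization, which I believe maps onto scipy's invgauss as shown.

```python
import numpy as np
from scipy import stats

# Brownian passage-time (inverse Gaussian) recurrence model: mean recurrence mu,
# aperiodicity (coefficient of variation) alpha. Values below are hypothetical.
mu, alpha = 200.0, 0.5                      # years
lam = mu / alpha**2                         # shape parameter of IG(mu, lam)
bpt = stats.invgauss(mu / lam, scale=lam)   # scipy parameterization of IG(mu, lam)

t = np.array([1.0, 100.0, 200.0, 400.0, 800.0])   # years since the last event
hazard = bpt.pdf(t) / bpt.sf(t)
for ti, hi in zip(t, hazard):
    print(f"t = {ti:6.0f} yr   hazard = {hi:.5f} per yr")
# The hazard starts near zero, peaks near the mean recurrence time, and then settles
# toward a quasi-stationary level, matching properties (1)-(3) in the abstract.
```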
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
Age-Dependent Risk of Graft Failure in Young Kidney Transplant Recipients.
Kaboré, Rémi; Couchoud, Cécile; Macher, Marie-Alice; Salomon, Rémi; Ranchin, Bruno; Lahoche, Annie; Roussey-Kesler, Gwenaelle; Garaix, Florentine; Decramer, Stéphane; Pietrement, Christine; Lassalle, Mathilde; Baudouin, Véronique; Cochat, Pierre; Niaudet, Patrick; Joly, Pierre; Leffondré, Karen; Harambat, Jérôme
2017-06-01
The risk of graft failure in young kidney transplant recipients has been found to increase during adolescence and early adulthood. However, this question has not been addressed outside the United States so far. Our objective was to investigate whether the hazard of graft failure also increases during this age period in France, irrespective of age at transplantation. Data on all first kidney transplantations performed before 30 years of age between 1993 and 2012 were extracted from the French kidney transplant database. The hazard of graft failure was estimated at each current age using a 2-stage modelling approach that accounted for both age at transplantation and time since transplantation. Hazard ratios comparing the risk of graft failure during adolescence or early adulthood to other periods were estimated from time-dependent Cox models. A total of 5983 renal transplant recipients were included. The risk of graft failure was found to increase around the age of 13 years until the age of 21 years, and to decrease thereafter. Results from the Cox model indicated that the hazard of graft failure during the age period 13 to 23 years was almost twice as high as during the age period 0 to 12 years, and 25% higher than after 23 years. Among first kidney transplant recipients younger than 30 years in France, those currently in adolescence or early adulthood have the highest risk of graft failure.
NASA Astrophysics Data System (ADS)
Manconi, A.; Giordan, D.
2015-02-01
We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements over different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method, and show an example of application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on this case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of the landslide emergency.
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated failure probabilities of jack-up units within the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method, using the eigenfunctions of prolate spheroidal wave functions, in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability for excessive deck displacement in the framework of time-dependent reliability analysis was performed by developing Matlab codes on a personal computer. Results from the study indicated that the failure probability increases with an increase in the severity of the sea state, representing a longer return period. Although the results obtained are in agreement with the results of a study of a similar jack-up model using a time-independent method at higher values of the maximum allowable deck displacement, they are in contrast at lower values of the criterion, where that study reported that the failure probability decreases with an increase in the severity of the sea state.
Comín-Colet, Josep; Enjuanes, Cristina; Lupón, Josep; Cainzos-Achirica, Miguel; Badosa, Neus; Verdú, José María
2016-10-01
Despite advances in the treatment of heart failure, mortality, the number of readmissions, and their associated health care costs are very high. Heart failure care models inspired by the chronic care model, also known as heart failure programs or heart failure units, have shown clinical benefits in high-risk patients. However, while traditional heart failure units have focused on patients detected in the outpatient phase, the increasing pressure from hospital admissions is shifting the focus of interest toward multidisciplinary programs that concentrate on transitions of care, particularly between the acute phase and the postdischarge phase. These new integrated care models for heart failure revolve around interventions at the time of transitions of care. They are multidisciplinary and patient-centered, designed to ensure continuity of care, and have been demonstrated to reduce potentially avoidable hospital admissions. Key components of these models are early intervention during the inpatient phase, discharge planning, early postdischarge review and structured follow-up, advanced transition planning, and the involvement of physicians and nurses specialized in heart failure. It is hoped that such models will be progressively implemented across the country.
A Simulation Model for Setting Terms for Performance Based Contract Terms
2010-05-01
torpedo self-noise and the use of ruggedized, embedded, digital microprocessors. The latter capability made it possible for digitally controlled... inventories are: System Reliability, Product Reliability, Operational Availability, Mean Time to Repair (MTTR), Mean Time to Failure (MTTF)... Dependent metrics include Mean Time to Failure (MTTF), Mean Logistics Delay Time (MLDT), Mean Supply Response Time (MSRT), and Mean Accumulated Down Time (MADT)...
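A commonly used relation ties several of the listed metrics together: operational availability is uptime over uptime plus downtime, where downtime per failure includes repair time plus logistics delays. The sketch below is an illustration of that standard relation with hypothetical values, not the report's simulation model.

```python
def operational_availability(mttf: float, mttr: float, mldt: float) -> float:
    """Commonly used approximation: Ao = MTTF / (MTTF + MTTR + MLDT), i.e. uptime over
    uptime plus downtime, where MLDT lumps logistics/supply delays (MSRT is often
    treated as a component of MLDT). Illustrative only; units must match."""
    return mttf / (mttf + mttr + mldt)

# Hypothetical values in hours.
print(f"Ao = {operational_availability(mttf=1200.0, mttr=8.0, mldt=48.0):.3f}")
```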
Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G
2008-01-01
To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT). A discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the specific prostaglandin-timolol sequence LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective at avoiding starting a third line treatment than the L-LT sequence. T-TT treated patients developed less glaucoma progression.
Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.
2018-02-01
Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.
A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2017-01-01
The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high-quality capacitors is much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M
2018-04-01
The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess risk of each lesion being the first site of failure. The two models were used in combination to predict probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48, p < 0.001). Multivariable logistic regression of individual lesions at the time of first failure showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17, p < 0.001). Increasing peak standardized uptake value was significantly associated with lesion failure (OR = 1.26 per unit increase, 95% CI: 1.12-1.40, p < 0.001). The electronic model is available at http://bit.ly/LungModelFDG. We developed a failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models.
Failure Forecasting in Triaxially Stressed Sandstones
NASA Astrophysics Data System (ADS)
Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.
2017-12-01
Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation of a geophysical precursor in order to perform 'hindcasts' by solving for failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened, when it happened, and we can search data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate 1.0 × 10^-5 s^-1) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods.
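The sketch below shows the general form of such a multivariable linear model (CO regressed on lung-to-finger circulation time, age, and overnight heart rate) fitted by least squares. The data and resulting coefficients are entirely hypothetical and are not the published model.

```python
import numpy as np

# Hypothetical training data: lung-to-finger circulation time (s), age (yr),
# overnight heart rate (bpm), and invasively measured cardiac output (L/min).
lfct = np.array([22.0, 28.0, 35.0, 40.0, 48.0, 55.0, 30.0, 45.0])
age = np.array([58.0, 65.0, 72.0, 70.0, 75.0, 80.0, 60.0, 68.0])
hr = np.array([72.0, 68.0, 64.0, 60.0, 58.0, 55.0, 70.0, 62.0])
co = np.array([5.1, 4.6, 3.9, 3.6, 3.1, 2.7, 4.4, 3.3])

# Multivariable linear model CO ~ b0 + b1*LFCT + b2*age + b3*HR, fitted by least squares.
X = np.column_stack([np.ones_like(lfct), lfct, age, hr])
coef, *_ = np.linalg.lstsq(X, co, rcond=None)

co_hat = X @ coef
pct_error = 100.0 * np.sqrt(np.mean((co_hat - co) ** 2)) / np.mean(co)
print("Coefficients [b0, b_LFCT, b_age, b_HR]:", np.round(coef, 3))
print(f"In-sample percentage error: {pct_error:.1f}%")
```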
Reliability and degradation of oxide VCSELs due to reaction to atmospheric water vapor
NASA Astrophysics Data System (ADS)
Dafinca, Alexandru; Weidberg, Anthony R.; McMahon, Steven J.; Grillo, Alexander A.; Farthouat, Philippe; Ziolkowski, Michael; Herrick, Robert W.
2013-03-01
850nm oxide-aperture VCSELs are susceptible to premature failure if operated while exposed to atmospheric water vapor, and not protected by hermetic packaging. The ATLAS detector in CERN's Large Hadron Collider (LHC) has had approximately 6000 channels of Parallel Optic VCSELs fielded under well-documented ambient conditions. Exact time-to-failure data has been collected on this large sample, providing for the first time actual failure data at use conditions. In addition, the same VCSELs were tested under a variety of accelerated conditions to allow us to construct a more accurate acceleration model. Failure analysis information will also be presented to show what we believe causes corrosion-related failure for such VCSELs.
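For humidity-driven degradation of this kind, a commonly used acceleration model is Peck's humidity-temperature relation, TTF ∝ RH^(-n) · exp(Ea/kT). The sketch below computes an acceleration factor from it; the exponent, activation energy, and stress conditions are illustrative defaults, not the model the authors constructed from their VCSEL data.

```python
import math

def peck_acceleration_factor(rh_use, rh_stress, t_use_c, t_stress_c, n=2.7, ea_ev=0.7):
    """Peck's humidity-temperature model, often used for corrosion-driven failures:
    TTF ~ RH**(-n) * exp(Ea / kT). The exponent n and activation energy Ea defaults
    are illustrative, not values fitted for these VCSELs."""
    k = 8.617e-5                       # Boltzmann constant, eV/K
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return (rh_stress / rh_use) ** n * math.exp((ea_ev / k) * (1.0 / t_use - 1.0 / t_stress))

# Acceleration of an 85C/85%RH test relative to 25C/40%RH use conditions (hypothetical).
af = peck_acceleration_factor(rh_use=40.0, rh_stress=85.0, t_use_c=25.0, t_stress_c=85.0)
print(f"Acceleration factor ~= {af:.0f}x")
```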
Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.
2002-01-01
Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as falling within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best-fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
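A minimal sketch of the simplest candidate model named in the abstract, the Poisson (exponential return-time) process: maximum-likelihood estimation of the mean return time from a hypothetical sequence of deposit ages, with AIC for comparison against richer models. The handling of age-dating uncertainty and open time intervals described in the paper is deliberately omitted.

```python
# Hedged sketch: exponential (Poisson-process) MLE for inter-event times
# derived from dated deposits, plus AIC for model comparison. The ages
# below are hypothetical, not the Ursa Basin data.
import numpy as np

ages_ka = np.array([12.0, 19.5, 31.0, 44.2, 58.9, 70.1])   # hypothetical deposit ages (ka)
dt = np.diff(np.sort(ages_ka))                              # inter-event (return) times

lam_hat = 1.0 / dt.mean()                  # exponential MLE: rate = 1 / mean return time
loglik = np.sum(np.log(lam_hat) - lam_hat * dt)
aic = 2 * 1 - 2 * loglik                   # k = 1 free parameter
print(f"mean return time: {1 / lam_hat:.1f} ka, AIC: {aic:.2f}")
```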
Dual permeability FEM models for distributed fiber optic sensors development
NASA Astrophysics Data System (ADS)
Aguilar-López, Juan Pablo; Bogaard, Thom
2017-04-01
Fiber optic cables are widely known as robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connectivity and real-time information sharing. Yet fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past to detect temperature changes in the inner core and foundation, which allows water infiltration to be estimated during high water events. The DOMINO research project aims to develop a fiber optic based dike monitoring system that can directly sense and measure any pore pressure change inside the dike structure. For this purpose, questions such as the sensor locations, the number of sensors, the measuring frequency and the required accuracy must be answered for the sensor development. All of these questions may be initially answered with a finite element model that estimates the effects of pore pressure changes at different locations along the cross section while providing a time-dependent estimate of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time: the piping erosion failure mechanism and the macro-stability failure mechanism. Both mechanisms are modeled and assessed in detail with a finite element based dual permeability Darcy-Richards numerical solution. In that manner, it is possible to assess different sensing configurations under different loading scenarios (e.g. high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are later evaluated with an entropy based performance measure. The added value of this kind of modelling approach for the sensor development is that it allows the piping erosion and macro-stability failure mechanisms to be modeled simultaneously in a time dependent manner. In that way, the estimated pore pressures may be related to the monitored ones and to both failure mechanisms. Furthermore, the approach is intended to be used at a later stage for real time monitoring of the failure.
NASA Astrophysics Data System (ADS)
Zhang, Bin; Deng, Congying; Zhang, Yi
2018-03-01
Rolling element bearings are mechanical components used in most rotating machinery, and they are also vulnerable links that represent the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To predict failure of a rolling element bearing accurately and efficiently, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with a delay-time-based model in this paper. In addition, a hybrid feature selection and health indicator construction scheme is proposed for extracting bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, the results obtained in modeling the cumulative number of failures versus execution time showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
[Hazard function and life table: an introduction to the failure time analysis].
Matsushita, K; Inaba, H
1987-04-01
Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
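For readers unfamiliar with the terminology, the following block restates (it is not taken from the paper) the standard definitions the abstract alludes to: the hazard rate, its equivalence to the force of mortality via the survivor function, and the exponential and proportional-hazards special cases.

```latex
% Standard definitions (not from the paper): hazard rate, survivor function,
% and the exponential and proportional-hazards special cases.
\[
  h(t) \;=\; \lim_{\Delta t \to 0}
  \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
  \;=\; \frac{f(t)}{S(t)}
  \;=\; -\frac{d}{dt}\,\ln S(t),
  \qquad
  S(t) \;=\; \exp\!\Big(-\!\int_0^t h(u)\,du\Big).
\]
\[
  \text{Exponential: } h(t) = \lambda \;\Rightarrow\; S(t) = e^{-\lambda t};
  \qquad
  \text{Proportional hazards: } h(t \mid x) = h_0(t)\,e^{x'\beta}.
\]
```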
Failure rate analysis of Goddard Space Flight Center spacecraft performance during orbital life
NASA Technical Reports Server (NTRS)
Norris, H. P.; Timmins, A. R.
1976-01-01
Space life performance data on 57 Goddard Space Flight Center spacecraft are analyzed from the standpoint of determining an appropriate reliability model and the associated reliability parameters. Data from published NASA reports, which cover the space performance of GSFC spacecraft launched in the 1960-1970 decade, form the basis of the analyses. The results of the analyses show that the time distribution of 449 malfunctions, of which 248 were classified as failures (not necessarily catastrophic), follow a reliability growth pattern that can be described with either the Duane model or a Weibull distribution. The advantages of both mathematical models are used in order to: identify space failure rates, observe chronological trends, and compare failure rates with those experienced during the prelaunch environmental tests of the flight model spacecraft.
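A minimal sketch of a Duane-model fit, assuming the usual formulation in which the expected cumulative number of failures grows as a power of cumulative operating time; the failure times below are hypothetical and are not the GSFC spacecraft data.

```python
# Hedged sketch: fitting the Duane reliability-growth model, in which
# the cumulative failure count follows N(t) ~ lambda * t^beta, so that
# log N(t) is linear in log t. Failure times are hypothetical.
import numpy as np

fail_times = np.array([5., 18., 40., 75., 120., 180., 260., 360., 480., 630.])  # days (hypothetical)
n = np.arange(1, fail_times.size + 1)          # cumulative failure count

slope, intercept = np.polyfit(np.log(fail_times), np.log(n), 1)
beta_hat = slope                               # growth exponent (beta < 1 => improving reliability)
lam_hat = np.exp(intercept)
cum_mtbf = fail_times / n                      # cumulative MTBF; grows as t^(1 - beta) under Duane
print(f"beta = {beta_hat:.2f}, lambda = {lam_hat:.3f}, "
      f"last cumulative MTBF = {cum_mtbf[-1]:.1f} days")
```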
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at the infrastructure component level, by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert
A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.
Scaling of coupled dilatancy-diffusion processes in space and time
NASA Astrophysics Data System (ADS)
Main, I. G.; Bell, A. F.; Meredith, P. G.; Brantut, N.; Heap, M.
2012-04-01
Coupled dilatancy-diffusion processes resulting from microscopically brittle damage due to precursory cracking have been observed in the laboratory and suggested as a mechanism for earthquake precursors. One reason precursors have proven elusive may be the scaling in space: recent geodetic and seismic data place strong limits on the spatial extent of the nucleation zone for recent earthquakes. Another may be the scaling in time: recent laboratory results on axi-symmetric samples show both a systematic decrease in circumferential extensional strain at failure and a delayed and sharper acceleration of acoustic emission event rate as strain rate is decreased. Here we examine the scaling of such processes in time from laboratory to field conditions using brittle creep (constant stress loading) to failure tests, in an attempt to bridge part of the strain rate gap to natural conditions, and discuss the implications for forecasting the failure time. Dilatancy rate is strongly correlated with strain rate, and decreases to zero in the steady-rate creep phase at strain rates around 10^-9 s^-1 for a basalt from Mount Etna. The data are well described by a creep model based on the linear superposition of transient (decelerating) and accelerating micro-crack growth due to stress corrosion. The model produces good fits to the failure time in retrospect using the accelerating acoustic emission event rate, but in prospective tests on synthetic data with the same properties we find failure-time forecasting is subject to systematic epistemic and aleatory uncertainties that degrade predictability. The next stage is to use the technology developed to attempt failure forecasting in real time, using live streamed data and a public web-based portal to quantify the prospective forecast quality under such controlled laboratory conditions.
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and to initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe, how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
Liu, Chengyu; Zheng, Dingchang; Zhao, Lina; Liu, Changchun
2014-01-01
It has been reported that Gaussian functions could accurately and reliably model both carotid and radial artery pressure waveforms (CAPW and RAPW). However, the physiological relevance of the characteristic features from the modeled Gaussian functions has been little investigated. This study thus aimed to determine characteristic features from the Gaussian functions and to make comparisons of them between normal subjects and heart failure patients. Fifty-six normal subjects and 51 patients with heart failure were studied with the CAPW and RAPW signals recorded simultaneously. The two signals were normalized first and then modeled by three positive Gaussian functions, with their peak amplitude, peak time, and half-width determined. Comparisons of these features were finally made between the two groups. Results indicated that the peak amplitude of the first Gaussian curve was significantly decreased in heart failure patients compared with normal subjects (P<0.001). Significantly increased peak amplitude of the second Gaussian curves (P<0.001) and significantly shortened peak times of the second and third Gaussian curves (both P<0.001) were also presented in heart failure patients. These results were true for both CAPW and RAPW signals, indicating the clinical significance of the Gaussian modeling, which should provide essential tools for further understanding the underlying physiological mechanisms of the artery pressure waveform.
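To make the modelling step concrete, this sketch fits the sum of three positive Gaussian functions to a synthetic normalized pulse (not the study's recordings) and extracts each component's peak amplitude, peak time, and half-width; here half-width is taken as half of the full width at half maximum, which may differ from the paper's exact definition.

```python
# Hedged sketch: three-Gaussian decomposition of a normalized pressure
# waveform with scipy.optimize.curve_fit. Waveform and starting values
# are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

t = np.linspace(0.0, 1.0, 200)                        # one normalized cardiac cycle
true = three_gaussians(t, 0.9, 0.15, 0.05, 0.5, 0.35, 0.08, 0.25, 0.6, 0.1)
y = true + np.random.default_rng(1).normal(0, 0.01, t.size)

p0 = [1.0, 0.1, 0.05, 0.5, 0.3, 0.1, 0.3, 0.6, 0.1]   # rough starting guesses
popt, _ = curve_fit(three_gaussians, t, y, p0=p0)
for i in range(3):
    a, m, s = popt[3 * i: 3 * i + 3]
    # half-width reported as FWHM/2 = 2.355 * sigma / 2 (one possible convention)
    print(f"Gaussian {i + 1}: peak amplitude {a:.2f}, peak time {m:.2f}, half-width {1.177 * s:.3f}")
```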
Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C
2015-11-01
Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
High Risk of Graft Failure in Emerging Adult Heart Transplant Recipients.
Foster, B J; Dahhou, M; Zhang, X; Dharnidharka, V; Ng, V; Conway, J
2015-12-01
Emerging adulthood (17-24 years) is a period of high risk for graft failure in kidney transplant. Whether a similar association exists in heart transplant recipients is unknown. We sought to estimate the relative hazards of graft failure at different current ages, compared with patients between 20 and 24 years old. We evaluated 11 473 patients recorded in the Scientific Registry of Transplant Recipients who received a first transplant at <40 years old (1988-2013) and had at least 6 months of graft function. Time-dependent Cox models were used to estimate the association between current age (time-dependent) and failure risk, adjusted for time since transplant and other potential confounders. Failure was defined as death following graft failure or retransplant; observation was censored at death with graft function. There were 2567 failures. Crude age-specific graft failure rates were highest in 21-24 year olds (4.2 per 100 person-years). Compared to individuals with the same time since transplant, 21-24 year olds had significantly higher failure rates than all other age periods except 17-20 years (HR 0.92 [95%CI 0.77, 1.09]) and 25-29 years (0.86 [0.73, 1.03]). Among young first heart transplant recipients, graft failure risks are highest in the period from 17 to 29 years of age. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
Joint scale-change models for recurrent events and failure time.
Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun
2017-01-01
Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.
Failure Modes in Capacitors When Tested Under a Time-Varying Stress
NASA Technical Reports Server (NTRS)
Liu, David (Donhang)
2011-01-01
Power-on failure has been the prevalent failure mechanism for solid tantalum capacitors in decoupling applications. A surge step stress test (SSST) has been previously applied to identify the critical stress level of a capacitor batch to give some predictability to the power-on failure mechanism [1]. But SSST can also be viewed as an electrically destructive test under a time-varying stress (voltage). It consists of rapidly charging the capacitor with incremental voltage increases, through a low resistance in series, until the capacitor under test is electrically shorted. When the reliability of capacitors is evaluated, a highly accelerated life test (HALT) is usually adopted since it is a time-efficient method of determining the failure mechanism; however, a destructive test under a time-varying stress such as SSST is even more time efficient. It usually takes days or weeks to complete a HALT test, but it only takes minutes for a time-varying stress test to produce failures. The advantage of incorporating a specific time-varying stress profile into a statistical model is significant in providing an alternative life test method for quickly revealing the failure mechanism in capacitors. In this paper, a time-varying stress that mimics a typical SSST has been incorporated into the Weibull model to characterize the failure mechanism in different types of capacitors. The SSST circuit and transient conditions for correctly surge testing capacitors are discussed. Finally, the SSST was applied for testing Ta capacitors, polymer aluminum capacitors (PA capacitors), and multi-layer ceramic (MLC) capacitors with both precious metal electrodes (PME) and base metal electrodes (BME). The test results are found to be directly associated with the dielectric layer breakdown in Ta and PA capacitors and are independent of the capacitor values, the way the capacitors were built, and the capacitors manufacturers. The test results also show that MLC capacitors exhibit surge breakdown voltages much higher than the rated voltage and that the breakdown field is inversely proportional to the dielectric layer thickness. The SSST data can also be used to comparatively evaluate the voltage robustness of capacitors for decoupling applications.
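One common way to fold a time-varying stress into a Weibull life model is the cumulative-exposure (step-stress) formulation sketched below. It is offered as a hedged illustration and may not match the paper's exact construction; every numeric value (step size, dwell time, power-law exponent, Weibull parameters) is a placeholder.

```python
# Hedged sketch: cumulative-exposure treatment of a surge-step stress
# profile under a Weibull life model with inverse-power-law voltage
# acceleration. All constants are placeholders.
import math

def step_stress_failure_prob(steps, v_ref, n, eta, beta):
    """steps: list of (voltage, dwell_time); P(failure) by the end of the profile."""
    # Each dwell contributes "effective time" at the reference stress,
    # scaled by a power law in voltage.
    t_eff = sum(dwell * (v / v_ref) ** n for v, dwell in steps)
    return 1.0 - math.exp(-((t_eff / eta) ** beta))

# SSST-like profile: voltage stepped up in increments, with a short dwell at each step
profile = [(10.0 + 2.0 * k, 0.5) for k in range(20)]     # (volts, seconds), hypothetical
p_fail = step_stress_failure_prob(profile, v_ref=10.0, n=8.0, eta=1.0e6, beta=0.5)
print(f"P(breakdown by end of profile) = {p_fail:.3f}")
```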
Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.
2017-10-01
Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B
2013-01-01
We apply an innovative and novel analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow for improved prediction of outcome after peripheral endovascular intervention, specifically with regard to identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2, P < 0.001), respectively. The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots, with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R(2) = 0.98084), whereas interventions for critical limb ischemia demonstrated a decreasing failure rate, suggesting the predominance of early failures (β = 0.7395, θ = 6.8, mean time to failure = 8.2, index of fit = 0.99391, R(2) = 0.98786). By 3.1 months, 10% of interventions had failed. This point (90% reliability) was identified as the basic rating life. Using multivariate analysis of failure data, independent predictors of early failure (before 3.1 months) included tissue loss, long lesion length, chronic total occlusions, heart failure, and end-stage renal disease. Application of an RE framework to the assessment of clinical outcomes after peripheral interventions is feasible, and potentially more informative than traditional techniques. Conceptualization of interventions as "products" permits application of product life-cycle models that allow for empiric definition of "early failure," which may facilitate comparative effectiveness analysis and enable the development of individualized surveillance programs after endovascular interventions. Copyright © 2013 Annals of Vascular Surgery Inc. Published by Elsevier Inc. All rights reserved.
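The "basic rating life" used above is the B10 life of a Weibull fit, i.e. the time at which reliability drops to 90%. The sketch below shows the closed-form computation for arbitrary shape and scale parameters; the values passed in are placeholders rather than the study's pooled fit, whose basic rating life was 3.1 months.

```python
# Hedged sketch: B10 ("basic rating life") from a two-parameter Weibull,
# the time t at which R(t) = exp(-(t/theta)^beta) = 0.90.
import math

def weibull_b10(beta, theta):
    """Return the time at which Weibull reliability falls to 90%."""
    return theta * (-math.log(0.90)) ** (1.0 / beta)

# Placeholder parameters (months), for illustration only:
print(f"B10 life for beta=1.2, theta=13.5: {weibull_b10(1.2, 13.5):.1f} months")
```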
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.
2016-12-01
Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information. [1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
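A toy version of the workflow described above, assuming synthetic acoustic-emission windows rather than the laboratory records: a Random Forest regressor is trained on a few window-level statistical features to predict the time remaining before the next (synthetic) failure.

```python
# Hedged sketch: Random Forest regression of "time to failure" from
# statistical features of short AE windows. Signal, features, and labels
# are synthetic stand-ins for the laboratory data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_cycles, cycle_len, win = 20, 1000, 50
X, y = [], []
for _ in range(n_cycles):
    # synthetic AE signal whose variance grows as failure approaches
    amp = 1.0 + 4.0 * np.arange(cycle_len) / cycle_len
    sig = rng.normal(0.0, amp)
    for start in range(0, cycle_len - win, win):
        w = sig[start:start + win]
        X.append([w.var(), np.abs(w).mean(), w.max(), np.percentile(np.abs(w), 90)])
        y.append(cycle_len - (start + win))        # time remaining to failure (samples)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out windows: {rf.score(X_te, y_te):.2f}")
```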
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. The failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed, and the effects of model degradation are studied.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries; however, only a few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis; thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both power system maintenance models and preventive maintenance models. PMID:23936346
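As a hedged illustration of the distributional fit described above, the sketch below estimates a three-parameter Dagum distribution (scale b, shapes a and p) by maximum likelihood from a hypothetical sample of times between failures and reports the implied reliability after 30 days; it is not the station data or the authors' code.

```python
# Hedged sketch: MLE fit of a three-parameter Dagum distribution to
# time-between-failures data, with the implied 30-day reliability.
# The sample below is hypothetical.
import numpy as np
from scipy.optimize import minimize

tbf = np.array([3.2, 7.5, 11.0, 14.8, 20.1, 26.4, 33.0, 41.7, 55.3, 72.9])  # days (hypothetical)

def neg_loglik(params):
    a, b, p = np.exp(params)                        # keep parameters positive
    z = (tbf / b) ** a
    logpdf = (np.log(a * p) - np.log(tbf) + a * p * np.log(tbf / b)
              - (p + 1) * np.log1p(z))
    return -logpdf.sum()

res = minimize(neg_loglik, x0=np.log([1.5, 20.0, 1.0]), method="Nelder-Mead")
a, b, p = np.exp(res.x)
reliability_30d = 1.0 - (1.0 + (30.0 / b) ** (-a)) ** (-p)    # R(30) = 1 - F(30)
print(f"a={a:.2f}, b={b:.1f}, p={p:.2f}, R(30 days)={reliability_30d:.2f}")
```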
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Factors Predicting Meniscal Allograft Transplantation Failure
Parkinson, Ben; Smith, Nicholas; Asplin, Laura; Thompson, Peter; Spalding, Tim
2016-01-01
Background: Meniscal allograft transplantation (MAT) is performed to improve symptoms and function in patients with a meniscal-deficient compartment of the knee. Numerous studies have shown a consistent improvement in patient-reported outcomes, but high failure rates have been reported by some studies. The typical patients undergoing MAT often have multiple other pathologies that require treatment at the time of surgery. The factors that predict failure of a meniscal allograft within this complex patient group are not clearly defined. Purpose: To determine predictors of MAT failure in a large series to refine the indications for surgery and better inform future patients. Study Design: Cohort study; Level of evidence, 3. Methods: All patients undergoing MAT at a single institution between May 2005 and May 2014 with a minimum of 1-year follow-up were prospectively evaluated and included in this study. Failure was defined as removal of the allograft, revision transplantation, or conversion to a joint replacement. Patients were grouped according to the articular cartilage status at the time of the index surgery: group 1, intact or partial-thickness chondral loss; group 2, full-thickness chondral loss 1 condyle; and group 3, full-thickness chondral loss both condyles. The Cox proportional hazards model was used to determine significant predictors of failure, independently of other factors. Kaplan-Meier survival curves were produced for overall survival and significant predictors of failure in the Cox proportional hazards model. Results: There were 125 consecutive MATs performed, with 1 patient lost to follow-up. The median follow-up was 3 years (range, 1-10 years). The 5-year graft survival for the entire cohort was 82% (group 1, 97%; group 2, 82%; group 3, 62%). The probability of failure in group 1 was 85% lower (95% CI, 13%-97%) than in group 3 at any time. The probability of failure with lateral allografts was 76% lower (95% CI, 16%-89%) than medial allografts at any time. Conclusion: This study showed that the presence of severe cartilage damage at the time of MAT and medial allografts were significantly predictive of failure. Surgeons and patients should use this information when considering the risks and benefits of surgery. PMID:27583257
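For readers who want to reproduce this style of analysis, the sketch below runs a Cox proportional-hazards fit and a Kaplan-Meier curve with the `lifelines` package on a synthetic cohort whose covariates loosely mimic the chondral-status and graft-side predictors; it is not the study's data or code.

```python
# Hedged sketch: Cox proportional hazards and Kaplan-Meier survival on a
# synthetic cohort (hypothetical hazard ratios, administrative censoring
# at 6 years).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(3)
n = 120
severe = rng.integers(0, 2, n)
medial = rng.integers(0, 2, n)
# hypothetical: hazard doubles with severe chondral damage, rises 50% with medial grafts
baseline = rng.exponential(8.0, n)
time = baseline / (2.0 ** severe * 1.5 ** medial)
observed = time < 6.0                       # administrative censoring at 6 years
time = np.minimum(time, 6.0)

df = pd.DataFrame({"years": time, "failed": observed.astype(int),
                   "severe_chondral": severe, "medial_graft": medial})

cph = CoxPHFitter().fit(df, duration_col="years", event_col="failed")
cph.print_summary()

km = KaplanMeierFitter().fit(df["years"], event_observed=df["failed"])
print(km.survival_function_.tail())
```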
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1990-01-01
The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness of fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
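As a compact illustration of the Weibull (power-law intensity) process referred to above, this sketch computes the standard maximum-likelihood estimates for time-terminated data and the implied instantaneous MTBF at the end of test; the failure times are hypothetical rather than the SSME data, and the confidence-interval tables discussed in the report are not reproduced.

```python
# Hedged sketch: MLEs for the Weibull intensity of an inhomogeneous
# Poisson process from time-terminated failure data, plus the
# instantaneous MTBF at the end of test. Failure times are hypothetical.
import numpy as np

t = np.array([35., 110., 210., 380., 600., 900., 1300., 1800.])   # cumulative failure times
T = 2000.0                                                         # total test time (time-terminated)
n = t.size

beta_hat = n / np.sum(np.log(T / t))           # shape of the Weibull intensity
lam_hat = n / T ** beta_hat                    # scale parameter
mtbf_inst = 1.0 / (lam_hat * beta_hat * T ** (beta_hat - 1.0))     # instantaneous MTBF at time T
print(f"beta = {beta_hat:.2f}, lambda = {lam_hat:.4g}, instantaneous MTBF at T: {mtbf_inst:.0f}")
```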
Control of Flexible Systems in the Presence of Failures
NASA Technical Reports Server (NTRS)
Magahami, Peiman G.; Cox, David E.; Bauer, Frank H. (Technical Monitor)
2001-01-01
Control of flexible systems under degradation or failure of sensors/actuators is considered. A Linear Matrix Inequality framework is used to synthesize H(sub infinity)-based controllers, which provide good disturbance rejection while capable of tolerating real parameter uncertainties in the system model, as well as potential degradation or failure of the control system hardware. In this approach, a one-at-a-time failure scenario is considered, wherein no more than one sensor or actuator is allowed to fail at any given time. A numerical example involving control synthesis for a two-dimensional flexible system is presented to demonstrate the feasibility of the proposed approach.
Zebrafish Heart Failure Models for the Evaluation of Chemical Probes and Drugs
Monte, Aaron; Cook, James M.; Kabir, Mohd Shahjahan; Peterson, Karl P.
2013-01-01
Heart failure is a complex disease that involves genetic, environmental, and physiological factors. As a result, current medication and treatment for heart failure produces limited efficacy, and better medication is in demand. Although mammalian models exist, simple and low-cost models will be more beneficial for drug discovery and mechanistic studies of heart failure. We previously reported that aristolochic acid (AA) caused cardiac defects in zebrafish embryos that resemble heart failure. Here, we showed that cardiac troponin T and atrial natriuretic peptide were expressed at significantly higher levels in AA-treated embryos, presumably due to cardiac hypertrophy. In addition, several human heart failure drugs could moderately attenuate the AA-induced heart failure by 10%–40%, further verifying the model for drug discovery. We then developed a drug screening assay using the AA-treated zebrafish embryos and identified three compounds. Mitogen-activated protein kinase kinase inhibitor (MEK-I), an inhibitor for the MEK-1/2 known to be involved in cardiac hypertrophy and heart failure, showed nearly 60% heart failure attenuation. C25, a chalcone derivative, and A11, a phenolic compound, showed around 80% and 90% attenuation, respectively. Time course experiments revealed that, to obtain 50% efficacy, these compounds were required within different hours of AA treatment. Furthermore, quantitative polymerase chain reaction showed that C25, not MEK-I or A11, strongly suppressed inflammation. Finally, C25 and MEK-I, but not A11, could also rescue the doxorubicin-induced heart failure in zebrafish embryos. In summary, we have established two tractable heart failure models for drug discovery and three potential drugs have been identified that seem to attenuate heart failure by different mechanisms. PMID:24351044
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M.H.; Coon, D.M.
Time-dependent failure at elevated temperatures currently governs the service life of oxynitride glass-joined silicon nitride. Creep, devitrification, stress-aided oxidation-controlled slow crack growth, and viscous cavitation-controlled failure are examined as possible controlling mechanisms. Creep deformation failure is observed above 1000°C. Fractographic evidence indicates cavity formation and growth below 1000°C. Auger electron spectroscopy verified that the oxidation rate of the joining glass is governed by the oxygen supply rate. Time-to-failure data are compared with those predicted using the Tsai and Raj, and Raj and Dang viscous cavitation models. It is concluded that viscous relaxation and isolated cavity growth control the rate of failure in oxynitride glass-filled silicon nitride joints below 1000°C. Several possible methods are also proposed for increasing the service lives of these joints.
Experimental models of hepatotoxicity related to acute liver failure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maes, Michaël; Vinken, Mathieu, E-mail: mvinken@vub.ac.be; Jaeschke, Hartmut
Acute liver failure can be the consequence of various etiologies, with most cases arising from drug-induced hepatotoxicity in Western countries. Despite advances in this field, the management of acute liver failure continues to be one of the most challenging problems in clinical medicine. The availability of adequate experimental models is of crucial importance to provide a better understanding of this condition and to allow identification of novel drug targets, testing the efficacy of new therapeutic interventions and acting as models for assessing mechanisms of toxicity. Experimental models of hepatotoxicity related to acute liver failure rely on surgical procedures, chemical exposure or viral infection. Each of these models has a number of strengths and weaknesses. This paper specifically reviews commonly used chemical in vivo and in vitro models of hepatotoxicity associated with acute liver failure. - Highlights: • The murine APAP model is very close to what is observed in patients. • The Gal/ET model is useful to study TNFα-mediated apoptotic signaling mechanisms. • Fas receptor activation is an effective model of apoptosis and secondary necrosis. • The ConA model is a relevant model of auto-immune hepatitis and viral hepatitis. • Multiple time point evaluation needed in experimental models of acute liver injury.
Failure prediction using machine learning and time series in optical network.
Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo
2017-08-07
In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
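A hedged sketch of the two building blocks named in the abstract, using synthetic placeholders for the monitoring data: a support vector machine classifying equipment state, and a small hand-rolled double exponential smoothing (Holt) forecaster extrapolating a monitored parameter forward in time.

```python
# Hedged sketch: SVM state classification plus double exponential
# smoothing (Holt's method). All data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# --- SVM: classify "healthy" vs "at-risk" from two monitoring features ---
X = np.vstack([rng.normal([0, 0], 1.0, (100, 2)),        # healthy
               rng.normal([3, 3], 1.0, (100, 2))])       # at-risk
y = np.array([0] * 100 + [1] * 100)
clf = SVC(kernel="rbf").fit(X, y)
print("predicted state for a new sample:", clf.predict([[2.5, 2.8]])[0])

# --- Double exponential smoothing: forecast a drifting monitored value ---
def des_forecast(series, alpha=0.5, beta=0.3, horizon=5):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

power_dbm = [-10.0, -10.2, -10.5, -10.9, -11.4, -12.0]   # declining optical power (hypothetical)
print("DES forecast of next 5 readings:", [round(v, 2) for v in des_forecast(power_dbm)])
```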
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, D. I.; Han, S. H.
A PSA analyst has been manually determining fire-induced component failure modes and modeling them in the PSA logic. These can be difficult and time-consuming tasks, as they need much information and many events are to be modeled. KAERI has been developing the IPRO-ZONE (interface program for constructing zone effect table) to facilitate fire PSA work for identifying and modeling fire-induced component failure modes, and to construct a one-top fire event PSA model. With the output of the IPRO-ZONE, the AIMS-PSA, and an internal event one-top PSA model, a one-top fire event PSA model is automatically constructed. The outputs of the IPRO-ZONE include information on fire zones/fire scenarios, fire propagation areas, equipment failure modes affected by a fire, internal PSA basic events corresponding to fire-induced equipment failure modes, and fire events to be modeled. This paper introduces the IPRO-ZONE and its application results to the fire PSA of Ulchin Unit 3 and SMART (System-integrated Modular Advanced Reactor). (authors)
NASA Astrophysics Data System (ADS)
Sawant, M.; Christou, A.
2012-12-01
While use of LEDs in Fiber Optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: the active layer (where electron-hole recombination occurs to emit light), the electrodes (which provide electrical contact to the semiconductor chip), the Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), the plastic encapsulation (protective polymer layer), and the packaging (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), failure mode ratio (α), failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were the generation of prior time-to-failure distributions and comparison with our accelerated life test data. To generate the prior distributions, data and results from previous investigations [5-33], where reliability test results of similar LEDs were reported, were utilized. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), semiconductor structures (DH, MQW) and modes of testing (DC, Pulsed) was carried out. The data were categorized according to the materials system and LED structure, such as AlGaInP-DH-DC, AlGaInP-MQW-DC, GaN-DH-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperatures and currents, the reported data were converted to the present application conditions of the medical environment. Comparisons between the model data and the accelerated test results carried out in the present work are reported. The use of accelerating agent modeling and regression analysis was also carried out. We have used the Inverse Power Law model with the current density J as the accelerating agent and the Arrhenius model with temperature as the accelerating agent. Finally, our reported methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
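To make the criticality calculation concrete, the snippet below evaluates the standard failure-mode criticality number Cm = β·α·λp·t (failure effect probability, failure mode ratio, part failure rate, operating time) for a few illustrative LED failure modes; every numeric value is made up for illustration and does not come from the paper.

```python
# Hedged sketch: FMECA failure-mode criticality numbers, Cm = beta * alpha * lambda_p * t,
# for illustrative LED failure modes with placeholder values.
failure_modes = [
    # (name, beta, alpha, lambda_p [failures/hour], operating time [hours])
    ("active layer degradation",  1.0, 0.40, 2e-7, 50_000),
    ("electrode degradation",     0.8, 0.25, 2e-7, 50_000),
    ("ITO layer degradation",     0.6, 0.15, 2e-7, 50_000),
    ("encapsulant degradation",   0.5, 0.10, 2e-7, 50_000),
    ("package/bond-wire failure", 1.0, 0.10, 2e-7, 50_000),
]

# Rank modes by criticality, highest first
for name, beta, alpha, lam, t in sorted(
        failure_modes, key=lambda m: m[1] * m[2] * m[3] * m[4], reverse=True):
    cm = beta * alpha * lam * t
    print(f"{name:28s} Cm = {cm:.4f}")
```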
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate Rock Physicist (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between Geoscientists, Rock Physicists and Computer Scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in imperfect conditions. The forecasting model testing centre uses a repository to hold all the data and models and a catalogue to hold all the corresponding metadata. It allows users to:
• Upload experimental data (data transfer): we have developed FAST (Flexible Automated Streaming Transfer), a tool that uploads data from RP laboratories to the repository; FAST sets up the data transfer requirements and selects the transfer protocol automatically, and metadata are automatically created and stored.
• Create synthetic data (web data access): users choose a generator and supply parameters; the synthetic data are automatically stored with corresponding metadata.
• Select data and models: the metadata are searched using criteria designed for RP; the metadata of each data set (synthetic or from a laboratory) and of each model are well described through their respective catalogues, accessible via the web portal.
• Upload models: a model is uploaded and stored with associated metadata, providing an opportunity to share models; the web portal solicits and creates metadata describing each model.
• Run models and visualise results: selected data and a model are submitted to a high-performance computing resource, hiding the technical details; results are displayed in accelerated time and stored, allowing retrieval, inspection and aggregation.
The forecasting model testing centre proposed here could be integrated into EPOS. Its expected benefits are: improved understanding of brittle failure prediction and its scalability to natural phenomena; accelerated and extensive testing and rapid sharing of insights; increased impact and visibility of RP and GeoScience research; and resources for education and training. A key challenge is to agree the framework for sharing RP data and models. Our work is a provocative first step.
A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential law relationship to the applied electrical field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
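To make the structure of such a model concrete, the following sketch combines a two-parameter Weibull reliability curve with a power-law voltage acceleration factor. The numerical values (shape, characteristic life, voltage exponent) are hypothetical placeholders, not values from the presentation.

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(np.asarray(t) / eta) ** beta)

def power_law_acceleration(v_stress, v_use, n):
    """Power-law voltage acceleration factor AF = (V_stress / V_use)**n."""
    return (v_stress / v_use) ** n

# Hypothetical parameters (not from the presentation):
beta, eta_stress = 2.0, 1.0e3      # shape and scale (hours) under stress test
af = power_law_acceleration(v_stress=2.0, v_use=1.0, n=5.0)

# Scale the characteristic life from stress conditions back to use conditions.
eta_use = eta_stress * af
t = np.array([1e3, 1e4, 1e5])      # hours
print(weibull_reliability(t, beta, eta_use))
```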
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
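As a rough illustration of the kind of computation involved, the sketch below estimates a small early-failure probability for a Weibull time-to-failure distribution by importance sampling, drawing from a proposal Weibull with a shorter characteristic life and reweighting. The shape, scale, and mission time are hypothetical, not taken from the presentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_pdf(x, k, lam):
    """Weibull probability density with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * np.exp(-(x / lam) ** k)

# Hypothetical component: decreasing hazard (k < 1), long characteristic life.
k, lam = 0.8, 1.0e5          # shape, scale (hours); illustrative only
t_mission = 1.0e3            # mission time (hours)

# Importance sampling: draw from a proposal with a much smaller scale so that
# early failures (the rare event) are sampled often, then reweight.
lam_q = 2.0e3
n = 100_000
x = lam_q * rng.weibull(k, size=n)          # samples from the proposal
w = weibull_pdf(x, k, lam) / weibull_pdf(x, k, lam_q)
p_hat = np.mean((x < t_mission) * w)

# Exact value for comparison: P(T < t) = 1 - exp(-(t/lam)**k)
p_exact = 1.0 - np.exp(-(t_mission / lam) ** k)
print(p_hat, p_exact)
```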
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and it performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
Hu, Bo; Liu, Dong-Xing; Zhang, Yu-Qing; Song, Jian-Tao; Ji, Xian-Fei; Hou, Zhi-Qiang; Zhang, Zhen-Hai
2016-05-01
In this study we sequenced the complete mitochondrial genome of a heart failure model, the cardiomyopathic Syrian hamster (Mesocricetus auratus), for the first time. The total length of the mitogenome was 16,267 bp. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region.
Wang, Peijie; Zhao, Hui; Sun, Jianguo
2016-12-01
Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and for estimation, a two-step procedure is presented. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.
Cabin Environment Physics Risk Model
NASA Technical Reports Server (NTRS)
Mattenberger, Christopher J.; Mathias, Donovan Leigh
2014-01-01
This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.
Viscoelastic behavior and life-time predictions
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Brinson, H. F.
1985-01-01
Fiber reinforced plastics were considered for many structural applications in automotive, aerospace and other industries. A major concern was and remains the failure modes associated with the polymer matrix, which serves to bind the fibers together and transfer the load through connections, from fiber to fiber and ply to ply. An accelerated characterization procedure for the prediction of delayed failures was developed. This method utilizes time-temperature-stress-moisture superposition principles in conjunction with laminated plate theory. Because failures are inherently nonlinear, the testing and analytic modeling for both moduli and strength are based upon nonlinear viscoelastic concepts.
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2017-04-01
Large mountain slopes in alpine environments undergo a complex long-term evolution from glacial to postglacial environments, through a transient period of paraglacial readjustment. During and after this transition, the interplay among rock strength, topographic relief, and morpho-climatic drivers varying in space and time can lead to the development of different types of slope instability, from sudden catastrophic failures to large, slow, long-lasting yet potentially catastrophic rockslides. Understanding the long-term evolution of large rock slopes requires accounting for the time-dependence of deglaciation unloading, permeability and fluid pressure distribution, displacements and failure mechanisms. In turn, this is related to a convincing description of rock mass damage processes and to their transition from a sub-critical (progressive failure) to a critical (catastrophic failure) character. Although mechanisms of damage occurrence in rocks have been extensively studied in the laboratory, the description of time-dependent damage under gravitational load and variable external actions remains difficult. In this perspective, starting from a time-dependent model conceived for laboratory rock deformation, we developed DaDyn-RS, a tool to simulate the long-term evolution of real, large rock slopes. DaDyn-RS is a 2D, FEM model programmed in Matlab, which combines damage and time-to-failure laws to reproduce both diffused damage and strain localization while tracking long-term slope displacements from primary to tertiary creep stages. We implemented in the model the ability to account for rock mass heterogeneity and property upscaling, time-dependent deglaciation, as well as damage-dependent fluid pressure occurrence and stress corrosion. We first tested DaDyn-RS performance on synthetic case studies, to investigate the effect of the different model parameters on the mechanisms and timing of long-term slope behavior. The model reproduces complex interactions between topography, deglaciation rate, mechanical properties and fluid pressure occurrence, resulting in different kinematics, damage patterns and timing of slope instabilities. We assessed the role of groundwater on slope damage and deformation mechanisms by introducing time-dependent pressure cycling within simulations. Then, we applied DaDyn-RS to real slopes located in the Italian Central Alps, affected by an active rockslide and a Deep Seated Gravitational Slope Deformation, respectively. From the Last Glacial Maximum to present conditions, our model allows us to reproduce, in an explicitly time-dependent framework, the progressive development of damage-induced permeability, strain localization and shear band differentiation at different times between the Lateglacial period and the Mid-Holocene climatic transition. Different mechanisms and timings characterize different styles of slope deformation, consistent with available dating constraints. DaDyn-RS is able to account for different long-term slope dynamics, from slow creep to the delayed transition to fast-moving rockslides.
NASA Astrophysics Data System (ADS)
Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick
2017-09-01
An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
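For readers unfamiliar with the model class, the sketch below simulates grouped failure times and fits a basic discrete-time hazard model with a complementary log-log link by direct likelihood maximization. It illustrates the general discrete hazard framework only, not the censoring-robust estimator proposed in the manuscript; all data and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K, n = 8, 400                                   # number of visits, subjects
z = rng.integers(0, 2, size=n)                  # treatment indicator
beta_true = 0.7
# Per-interval hazard under a complementary log-log model:
# h(z) = 1 - (1 - h0)**exp(beta*z), with baseline hazard h0 = 0.10.
haz = 1.0 - 0.90 ** np.exp(beta_true * z)

def first_failure_interval(h, K, rng):
    """First interval with a failure, or K if none occurs (censored at K)."""
    draws = rng.random(K) < h
    return int(np.argmax(draws)) + 1 if draws.any() else K

T = np.array([first_failure_interval(h, K, rng) for h in haz])
event = T < K                                   # administratively censored at K

def neg_loglik(params):
    """Discrete-time hazard likelihood, h_k(z) = 1 - exp(-exp(alpha_k + beta*z))."""
    alpha, beta = params[:K], params[K]
    ll = 0.0
    for ti, ev, zi in zip(T, event, z):
        h = np.clip(1.0 - np.exp(-np.exp(alpha[:ti] + beta * zi)), 1e-12, 1 - 1e-12)
        ll += np.sum(np.log(1.0 - h[:-1]))      # survived intervals 1..ti-1
        ll += np.log(h[-1]) if ev else np.log(1.0 - h[-1])
    return -ll

res = minimize(neg_loglik, x0=np.zeros(K + 1), method="BFGS")
print("estimated treatment effect (true 0.7):", round(res.x[K], 3))
```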
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
Arzilli, Chiara; Aimo, Alberto; Vergaro, Giuseppe; Ripoli, Andrea; Senni, Michele; Emdin, Michele; Passino, Claudio
2018-05-01
Background The Seattle heart failure model or the cardiac and comorbid conditions (3C-HF) scores may help define patient risk in heart failure. Direct comparisons between them or versus N-terminal fraction of pro-B-type natriuretic peptide (NT-proBNP) have never been performed. Methods Data from consecutive patients with stable systolic heart failure and 3C-HF data were examined. A subgroup of patients had the Seattle heart failure model data available. The endpoints were one year all-cause or cardiovascular death. Results The population included 2023 patients, aged 68 ± 12 years; 75% were men. At the one year time-point, 198 deaths were recorded (10%), 124 of them (63%) from cardiovascular causes. While areas under the curve were not significantly different, NT-proBNP displayed better reclassification capability than the 3C-HF score for the prediction of one year all-cause and cardiovascular death. Adding NT-proBNP to the 3C-HF score resulted in a significant improvement in risk prediction. Among patients with Seattle heart failure model data available (n = 798), the area under the curve values for all-cause and cardiovascular death were similar for the Seattle heart failure model score (0.790 and 0.820), NT-proBNP (0.783 and 0.803), and the 3C-HF score (0.770 and 0.800). The combination of the 3C-HF score and NT-proBNP displayed a similar prognostic performance to the Seattle heart failure model score for both endpoints. Adding NT-proBNP to the Seattle heart failure model score performed better than the Seattle heart failure model alone in terms of reclassification, but not discrimination. Conclusions Among systolic heart failure patients, NT-proBNP levels had better reclassification capability for all-cause and cardiovascular death than the 3C-HF score. The inclusion of NT-proBNP to the 3C-HF and Seattle heart failure model score resulted in significantly better risk stratification.
Explosive Model Tarantula 4d/JWL++ Calibration of LX-17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souers, P C; Vitello, P A
2008-09-30
Tarantula is an explosive kinetic package intended to do detonation, shock initiation, failure, corner-turning with dead zones, gap tests and air gaps in reactive flow hydrocode models. The first, 2007-2008 version with monotonic Q is here run inside JWL++ with square zoning from 40 to 200 zones/cm on ambient LX-17. The model splits the rate behavior in every zone into sections set by the hydrocode pressure, P + Q. As the pressure rises, we pass through the no-reaction, initiation, ramp-up/failure and detonation sections sequentially. We find that the initiation and pure detonation rate constants are largely insensitive to zoning but that the ramp-up/failure rate constant is extremely sensitive. At no time does the model pass every test, but the pressure-based approach generally works. The best values for the ramp/failure region are listed here in Mb units.
NASA Technical Reports Server (NTRS)
Raju, Ivatury S; Glaessgen, Edward H.; Mason, Brian H; Krishnamurthy, Thiagarajan; Davila, Carlos G
2005-01-01
A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. From the analyses conducted and presented in this paper, the following conclusions were drawn. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.
Structural Analysis of the Right Rear Lug of American Airlines Flight 587
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.
2006-01-01
A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.
Voltage stress effects on microcircuit accelerated life test failure rates
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface related failure data resulted in bimodal failure distributions comprising two lognormal distributions; a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
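The two reaction-rate models reduce, in practice, to simple acceleration-factor calculations. The sketch below shows one common parameterization, with an illustrative activation energy and a simplified exponential voltage term that stand in for, rather than reproduce, the fitted models in the study.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_stress_c, t_use_c):
    """Arrhenius acceleration factor between a stress and a use temperature."""
    t_s, t_u = t_stress_c + 273.15, t_use_c + 273.15
    return np.exp((ea_ev / K_B) * (1.0 / t_u - 1.0 / t_s))

def eyring_af(ea_ev, t_stress_c, t_use_c, v_stress, v_use, b):
    """A simple Eyring-type factor combining temperature and voltage terms.
    The exponential-in-voltage form and the constant b are illustrative only."""
    return arrhenius_af(ea_ev, t_stress_c, t_use_c) * np.exp(b * (v_stress - v_use))

# Hypothetical values (not from the study): Ea = 1.0 eV, 225 C stress vs 70 C use.
print(arrhenius_af(1.0, 225.0, 70.0))
print(eyring_af(1.0, 225.0, 70.0, v_stress=15.0, v_use=5.0, b=0.1))
```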
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, the adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, network complexity and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and, on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
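As a much-simplified illustration of an adaptive heartbeat failure detector (not the B-AFD algorithm itself), the sketch below estimates the next heartbeat deadline from a sliding window of observed arrivals plus a safety margin; the window size and margin are arbitrary choices that trade detection time against accuracy.

```python
from collections import deque
import time

class SimpleAdaptiveDetector:
    """Minimal adaptive heartbeat failure detector sketch (not B-AFD).

    The next deadline is the last arrival plus the mean observed inter-arrival
    time plus a safety margin alpha; a larger alpha reduces false suspicions
    at the cost of slower detection.
    """

    def __init__(self, window=100, alpha=0.5):
        self.alpha = alpha
        self.arrivals = deque(maxlen=window)

    def heartbeat(self, t=None):
        """Record a heartbeat arrival time (seconds)."""
        self.arrivals.append(time.monotonic() if t is None else t)

    def next_deadline(self):
        """Estimated freshness point of the next heartbeat."""
        if len(self.arrivals) < 2:
            return float("inf")
        a = list(self.arrivals)
        mean_gap = (a[-1] - a[0]) / (len(a) - 1)
        return a[-1] + mean_gap + self.alpha

    def suspect(self, now=None):
        """True if the monitored peer should currently be suspected."""
        now = time.monotonic() if now is None else now
        return now > self.next_deadline()

# Example with explicit timestamps (seconds): regular beats, then silence.
fd = SimpleAdaptiveDetector(alpha=0.3)
for t in [0.0, 1.0, 2.1, 3.0, 4.05]:
    fd.heartbeat(t)
print(fd.suspect(now=4.5), fd.suspect(now=6.0))   # False, True
```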
Schackman, Bruce R; Ribaudo, Heather J; Krambrink, Amy; Hughes, Valery; Kuritzkes, Daniel R; Gulick, Roy M
2007-12-15
Blacks had higher rates of virologic failure than whites on efavirenz-containing regimens in the AIDS Clinical Trials Group (ACTG) A5095 study; preliminary analyses also suggested an association with adherence. We rigorously examined associations over time among race, virologic failure, 4 self-reported adherence metrics, and quality of life (QOL). ACTG A5095 was a double-blind placebo-controlled study of treatment-naive HIV-positive patients randomized to zidovudine/lamivudine/abacavir versus zidovudine/lamivudine plus efavirenz versus zidovudine/lamivudine/abacavir plus efavirenz. Virologic failure was defined as confirmed HIV-1 RNA ≥200 copies/mL at ≥16 weeks on study. The zidovudine/lamivudine/abacavir arm was discontinued early because of virologic inferiority. We examined virologic failure differences for efavirenz-containing arms according to missing 0 (adherent) versus at least 1 dose (nonadherent) during the past 4 days, alternative self-reported adherence metrics, and QOL. Analyses used the Fisher exact, log rank tests, and Cox proportional hazards models. The study population included white (n = 299), black (n = 260), and Hispanic (n = 156) patients with ≥1 adherence evaluation. Virologic failure was associated with week 12 nonadherence during the past 4 days for blacks (53% nonadherent failed vs. 25% adherent; P < 0.001) but not for whites (20% nonadherent failed vs. 20% adherent; P = 0.91). After adjustment for baseline covariates and treatment, there was a significant interaction between race and week 12 adherence (P = 0.02). In time-dependent Cox models using self-reports over time to reflect recent adherence, there was a significantly higher failure risk for nonadherent subjects (hazard ratio [HR] = 2.07; P < 0.001). Significant race-adherence interactions were seen in additional models of adherence: missing at least 1 medication dose ever (P = 0.04), past month (P < 0.01), or past weekend (P = 0.05). Lower QOL was significantly associated with virologic failure (P < 0.001); there was no evidence of an interaction between QOL and race (P = 0.39) or adherence (P = 0.51) in predicting virologic failure. There was a greater effect of nonadherence on virologic failure in blacks given efavirenz-containing regimens than in whites. Self-reported adherence and QOL are independent predictors of virologic failure.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Methodology for Physics and Engineering of Reliable Products
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Gibbel, Mark
1996-01-01
Physics of failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.
SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M; Abazeed, M; Woody, N
Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009 and 2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3 ± 10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
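A minimal 2D version of the feature extraction described above might look like the following sketch, which builds a gray-level co-occurrence matrix by hand and reports its entropy together with the histogram standard deviation. The patch, quantization level, and offset are hypothetical stand-ins for the 3D GTV analysis performed in the study.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    m = m + m.T                       # make the matrix symmetric
    return m / m.sum()

def glcm_entropy(p):
    """Entropy of a normalized GLCM: -sum p*log(p) over nonzero entries."""
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

# Toy "GTV" patch standing in for the planning-CT voxels (synthetic data).
rng = np.random.default_rng(0)
patch = rng.normal(40.0, 15.0, size=(32, 32))
p = glcm(patch - patch.min(), levels=8)
print("entropy:", glcm_entropy(p), "histogram std:", float(patch.std()))
```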
Liu, Xu-hua; Chen, Yu; Wang, Tai-ling; Lu, Jun; Zhang, Li-jie; Song, Chen-zhao; Zhang, Jing; Duan, Zhong-ping
2007-10-01
To establish a practical and reproducible animal model of human acute-on-chronic liver failure for further study of the pathophysiological mechanism of acute-on-chronic liver failure and for drug screening and evaluation in its treatment. Immunological hepatic fibrosis was induced by human serum albumin in Wistar rats. In rats with early-stage cirrhosis (fibrosis stage IV), D-galactosamine and lipopolysaccharide were administered. Mortality and survival time were recorded in 20 rats. Ten rats were sacrificed at 4, 8, and 12 hours. Liver function tests and plasma cytokine levels were measured after D-galactosamine/lipopolysaccharide administration and liver pathology was studied. Cell apoptosis was detected by terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling assay. Most of the rats treated with human albumin developed cirrhosis and fibrosis, and 90% of them died from acute liver failure after administration of D-galactosamine/lipopolysaccharide, with a mean survival time of (16.1 ± 3.7) hours. Liver histopathology showed massive or submassive necrosis of the regenerated nodules, while the fibrous septa were intact. Liver function tests were compatible with massive necrosis of hepatocytes. The plasma level of TNF-α increased significantly, in parallel with the degree of hepatocyte apoptosis. Plasma IL-10 levels increased similarly, as seen in patients with acute-on-chronic liver failure. We established an animal model of acute-on-chronic liver failure by treating rats with human serum albumin and later with D-galactosamine and lipopolysaccharide. TNF-α-mediated liver cell apoptosis plays a very important role in the pathogenesis of acute liver failure.
Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Giordan, Daniele
2014-05-01
Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides mainly depend on the spatial scale analyzed (regional vs. local), the temporal range of forecast (long- vs. short-term), as well as the triggering factor and the landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In the last decades, different procedures have been proposed to estimate ToF by considering simplified empirical and/or graphical methods applied to time series of deformation data. Fukuzono (1985) proposed a failure forecast method based on experience gained during large-scale laboratory experiments, which were aimed at observing the kinematic evolution of a landslide induced by rain. This approach, known also as the inverse-velocity method, considers the evolution over time of the inverse value of the surface velocity (v) as an indicator of the ToF, by assuming that failure approaches while 1/v tends to zero. Here we present an innovative method aimed at achieving failure forecasts of landslide phenomena by considering near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows, and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergency conditions, also when the predefined displacement and/or velocity thresholds are exceeded. In addition, our statistical approach for the definition of confidence intervals and forecast reliability can also be applied to different failure forecast methods. We applied the approach presented herein for the first time in near real time during the emergency scenario relevant to the reactivation of the La Saxe rockslide, a large mass movement threatening the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient way to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
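A minimal implementation of the inverse-velocity forecast on a single temporal window might look like the sketch below; repeating the fit over several window lengths, as described above, gives a spread of forecasts from which a crude confidence interval can be estimated. The synthetic displacement record and window size are illustrative.

```python
import numpy as np

def inverse_velocity_tof(t, velocity, window=10):
    """Fukuzono-style forecast: fit a line to 1/v over the latest window of
    observations and extrapolate to 1/v = 0 to estimate the time of failure."""
    t_w, v_w = np.asarray(t[-window:]), np.asarray(velocity[-window:])
    slope, intercept = np.polyfit(t_w, 1.0 / v_w, 1)
    if slope >= 0:                      # 1/v not decreasing: no forecast possible
        return None
    return -intercept / slope           # time at which the fitted 1/v reaches 0

# Synthetic accelerating record (hypothetical, true failure near t = 100).
t = np.linspace(0.0, 95.0, 200)
v = 1.0 / (100.0 - t)                   # velocity diverging at t = 100
v_noisy = v * (1.0 + 0.05 * np.random.default_rng(2).standard_normal(t.size))
print("forecast time of failure:", inverse_velocity_tof(t, v_noisy, window=50))
```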
NASA Astrophysics Data System (ADS)
Reid, Mark; Iverson, Richard; Brien, Dianne; Iverson, Neal; LaHusen, Richard; Logan, Matthew
2017-04-01
Shallow landslides and ensuing debris flows are a common hazard worldwide, yet forecasting their initiation at a specific site is challenging. These challenges arise, in part, from diverse near-surface hydrologic pathways under different wetting conditions, 3D failure geometries, and the effects of suction in partially saturated soils. Simplistic hydrologic models typically used for regional hazard assessment disregard these complexities. As an alternative to field studies where the effects of these governing factors can be difficult to isolate, we used the USGS debris-flow flume to conduct controlled, field-scale landslide initiation experiments. Using overhead sprinklers or groundwater injectors on the flume bed, we triggered failures using three different wetting conditions: groundwater inflow from below, prolonged moderate-intensity precipitation, and bursts of high-intensity precipitation. Failures occurred in 6 m3 (0.65-m thick and 2-m wide) prisms of loamy sand on a 31° slope; these field-scale failures enabled realistic incorporation of nonlinear scale-dependent effects such as soil suction. During the experiments, we monitored soil deformation, variably saturated pore pressures, and moisture changes using ~50 sensors sampling at 20 Hz. From ancillary laboratory tests, we determined shear strength, saturated hydraulic conductivities, and unsaturated moisture retention characteristics. The three different wetting conditions noted above led to different hydrologic pathways and influenced instrumental responses and failure timing. During groundwater injection, pore-water pressures increased from the bed of the flume upwards into the sediment, whereas prolonged moderate infiltration wet the sediment from the ground surface downward. In both cases, pore pressures acting on the impending failure surface slowly rose until abrupt failure. In contrast, a burst of intense sprinkling caused rapid failure without precursory development of widespread positive pore pressures. Using coupled 2D variably saturated groundwater flow modeling and 3D limit-equilibrium analyses, we simulated the observed hydrologic behaviors and the time evolution of changes in factors of safety. Our measured parameters successfully reproduced pore pressure observations without calibration. We also quantified the mechanical effects of 3D geometry and unsaturated soil suction on stability. Although suction effects appreciably increased the stability of drier sediment, they were dampened (to <10% increase) in wetted sediment. 3D geometry effects from the lateral margins consistently increased factors of safety by >20% in wet or dry sediment. Importantly, both 3D and suction effects enabled more accurate simulation of failure times. Without these effects, failure timing and/or back-calculated shear strengths would be markedly incorrect. Our results indicate that simplistic models could not consistently predict the timing of slope failure given diverse hydrologic pathways. Moreover, high frequency monitoring (with sampling periods < ~60 s) would be required to measure and interpret the effects of rapid hydrologic triggers, such as intense rain bursts.
Statistical analysis of cascading failures in power grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin
2010-12-01
We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators and lines. Our model is quasi-static in the causal, discrete time and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39 and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between average number of removed loads, generators and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
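The sketch below is a deliberately simplified load-redistribution cascade, not the DC power-flow model used in the study; it only illustrates how fluctuating loads and finite line capacities can produce cascades whose size is sensitive to capacity margins. All numbers are synthetic.

```python
import numpy as np

def cascade(loads, capacities):
    """Toy load-redistribution cascade: when a line exceeds its capacity it
    fails and its load is spread evenly over the surviving lines. This is a
    much simpler surrogate for a DC power-flow cascade model."""
    loads = np.array(loads, dtype=float)
    alive = np.ones(loads.size, dtype=bool)
    while True:
        over = alive & (loads > capacities)
        if not over.any():
            return alive
        shed = loads[over].sum()
        alive[over] = False
        if not alive.any():
            return alive
        loads[alive] += shed / alive.sum()   # redistribute the shed load

rng = np.random.default_rng(3)
caps = rng.uniform(1.0, 2.0, size=30)
initial = 0.8 * caps * rng.uniform(0.8, 1.2, size=30)   # fluctuating loads
surviving = cascade(initial, caps)
print("lines surviving:", int(surviving.sum()), "of", caps.size)
```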
Wendelboe Nielsen, Olav; Sajadieh, Ahmad; Ketzel, Matthias; Tjønneland, Anne; Overvad, Kim; Raaschou-Nielsen, Ole
2017-01-01
Background: Although air pollution and road traffic noise have been associated with higher risk of cardiovascular diseases, associations with heart failure have received only little attention. Objectives: We aimed to investigate whether long-term exposure to road traffic noise and nitrogen dioxide (NO2) were associated with incident heart failure. Methods: In a cohort of 57,053 people 50–64 y of age at enrollment in the period 1993–1997, we identified 2,550 cases of first-ever hospital admission for heart failure during a mean follow-up time of 13.4 y. Present and historical residential addresses from 1987 to 2011 were found in national registers, and road traffic noise (Lden) and NO2 were modeled for all addresses. Analyses were done using Cox proportional hazard model. Results: An interquartile range higher 10-y time-weighted mean exposure for Lden and NO2 was associated with incidence rate ratios (IRR) for heart failure of 1.14 (1.08–1.21) and 1.11 (1.07–1.16), respectively, in models adjusted for gender, lifestyle, and socioeconomic status. In models with mutual exposure adjustment, IRRs were 1.08 (1.00–1.16) for Lden and 1.07 (1.01–1.14) for NO2. We found statistically significant modification of the NO2–heart failure association by gender (strongest association among men), baseline hypertension (strongest association among hypertensive), and diabetes (strongest association among diabetics). The same tendencies were seen for noise, but interactions were not statistically significant. Conclusions: Long-term exposure to NO2 and road traffic noise was associated with higher risk of heart failure, mainly among men, in both single- and two-pollutant models. High exposure to both pollutants was associated with highest risk. https://doi.org/10.1289/EHP1272 PMID:28953453
Nonlinear deformation and localized failure of bacterial streamers in creeping flows
Biswas, Ishita; Ghosh, Ranajay; Sadrzadeh, Mohtada; Kumar, Aloke
2016-01-01
We investigate the failure of bacterial floc mediated streamers in a microfluidic device in a creeping flow regime using both experimental observations and analytical modeling. The quantification of streamer deformation and failure behavior is possible due to the use of 200 nm fluorescent polystyrene beads which firmly embed in the extracellular polymeric substance (EPS) and act as tracers. The streamers, which form soon after the commencement of flow, begin to deviate from an apparently quiescent fully formed state in spite of steady background flow and limited mass accretion, indicating significant mechanical nonlinearity. This nonlinear behavior shows distinct phases of deformation with mutually different characteristic times and comes to an end with a distinct localized failure of the streamer far from the walls. We investigate this deformation and failure behavior for two separate bacterial strains and develop a simplified but nonlinear analytical model describing the experimentally observed instability phenomena assuming a necking route to instability. Our model leads to a power law relation between the critical strain at failure and the fluid velocity scale, exhibiting excellent qualitative and quantitative agreement with the experimental rupture behavior. PMID:27558511
Model Based Autonomy for Robust Mars Operations
NASA Technical Reports Server (NTRS)
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or to perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
Clinical models of cardiovascular regulation after weightlessness
NASA Technical Reports Server (NTRS)
Robertson, D.; Jacob, G.; Ertl, A.; Shannon, J.; Mosqueda-Garcia, R.; Robertson, R. M.; Biaggioni, I.
1996-01-01
After several days in microgravity, return to earth is attended by alterations in cardiovascular function. The mechanisms underlying these effects are inadequately understood. Three clinical disorders of autonomic function represent possible models of this abnormal cardiovascular function after spaceflight. They are pure autonomic failure, baroreflex failure, and orthostatic intolerance. In pure autonomic failure, virtually complete loss of sympathetic and parasympathetic function occurs along with profound and immediate orthostatic hypotension. In baroreflex failure, various degrees of debuffering of blood pressure occur. In acute and complete baroreflex failure, there is usually severe hypertension and tachycardia, while with less complete and more chronic baroreflex impairment, orthostatic abnormalities may be more apparent. In orthostatic intolerance, blood pressure fall is minor, but orthostatic symptoms are prominent and tachycardia frequently occurs. Only careful autonomic studies of human subjects in the microgravity environment will permit us to determine which of these models most closely reflects the pathophysiology brought on by a period of time in the microgravity environment.
Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud
2018-01-01
Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement by many organizations. For prioritizing failures, the "risk priority number (RPN)" index is used, largely because of its simplicity; it relies on subjective evaluations of the occurrence, the severity and the detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussions were used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers); then, (2) risk priority criteria were quantified by using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical terminology and the specific scientific fields involved. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184
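A minimal sketch of the classic RPN bookkeeping that the study sets out to improve is shown below; the failure modes and scores are invented for illustration and are not taken from the surgical-ward analysis.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1-10
    occurrence: int     # 1-10
    detectability: int  # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        """Classic risk priority number: S x O x D."""
        return self.severity * self.occurrence * self.detectability

# Hypothetical failure modes for a holding-area-to-OR patient journey;
# the scores are illustrative, not the study's data.
modes = [
    FailureMode("Patient mis-identification", 9, 3, 7),
    FailureMode("Missing consent form", 6, 5, 4),
    FailureMode("Wrong-site marking", 10, 2, 6),
]
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name}: RPN = {fm.rpn}")
```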
Enhanced stability of steep channel beds to mass failure and debris flow initiation
NASA Astrophysics Data System (ADS)
Prancevic, J.; Lamb, M. P.; Ayoub, F.; Venditti, J. G.
2015-12-01
Debris flows dominate bedrock erosion and sediment transport in very steep mountain channels, and are often initiated from failure of channel-bed alluvium during storms. While several theoretical models exist to predict mass failures, few have been tested because observations of in-channel bed failures are extremely limited. To fill this gap in our understanding, we performed laboratory flume experiments to identify the conditions necessary to initiate bed failures in non-cohesive sediment of different sizes (D = 0.7 mm to 15 mm) on steep channel-bed slopes (S = 0.45 to 0.93) and in the presence of water flow. In beds composed of sand, failures occurred under sub-saturated conditions on steep bed slopes (S > 0.5) and under super-saturated conditions at lower slopes. In beds of gravel, however, failures occurred only under super-saturated conditions at all tested slopes, even those approaching the dry angle of repose. Consistent with theoretical models, mass failures under super-saturated conditions initiated along a failure plane approximately one grain-diameter below the bed surface, whereas the failure plane was located near the base of the bed under sub-saturated conditions. However, all experimental beds were more stable than predicted by 1-D infinite-slope stability models. In partially saturated sand, enhanced stability appears to result from suction stress. Enhanced stability in gravel may result from turbulent energy losses in pores or increased granular friction for failures that are shallow with respect to grain size. These grain-size dependent effects are not currently included in stability models for non-cohesive sediment, and they may help to explain better the timing and location of debris flow occurrence.
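For reference, a minimal 1-D infinite-slope factor-of-safety calculation of the kind the experiments were compared against is sketched below; the unit weights, friction angle, and saturation fraction are illustrative, and the experiments described above were systematically more stable than this type of model predicts.

```python
import numpy as np

def infinite_slope_fs(slope_deg, depth, phi_deg, cohesion=0.0,
                      gamma=18.0e3, gamma_w=9.81e3, m=0.0):
    """1-D infinite-slope factor of safety for a planar failure at depth z.

    FS = [c + (gamma*z*cos^2(theta) - u) * tan(phi)] / [gamma*z*sin(theta)*cos(theta)]
    with pore pressure u = m * gamma_w * z * cos^2(theta), where m is the
    saturated fraction of the bed thickness. Units: Pa, N/m^3, m, degrees.
    """
    th, phi = np.radians(slope_deg), np.radians(phi_deg)
    u = m * gamma_w * depth * np.cos(th) ** 2
    resisting = cohesion + (gamma * depth * np.cos(th) ** 2 - u) * np.tan(phi)
    driving = gamma * depth * np.sin(th) * np.cos(th)
    return resisting / driving

# Dry, cohesionless gravel bed on a 40-degree channel slope (illustrative values):
# for c = 0 and m = 0 this reduces to FS = tan(phi) / tan(theta).
print(infinite_slope_fs(slope_deg=40.0, depth=0.1, phi_deg=42.0, m=0.0))
```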
Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy
2013-01-01
Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and an accounting for the inadequacy of a model or models.
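The definition of risk given above (the probability of one or more failures during the link lifetime) has a simple closed form when accounting intervals are treated as independent with a constant per-interval exceedance probability; the sketch below illustrates that assumption, with invented numbers.

```python
# Minimal sketch (not from the presentation): risk of one or more link failures over
# a lifetime of N independent accounting intervals, each with exceedance probability p.
def link_failure_risk(p_exceed_per_interval: float, n_intervals: int) -> float:
    """Probability that attenuation exceeds the threshold in at least one interval."""
    return 1.0 - (1.0 - p_exceed_per_interval) ** n_intervals

# Example: a 0.1% chance per month of exceeding the fade margin, 10-year link lifetime.
print(link_failure_risk(0.001, 120))  # ~0.11
```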
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
under the heading of linear models or linear statistical models.3,4 We have not used this material in this report. Assuming catastrophic failure when... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF... [flattened table fragment: unit i versus failure time Ti, for i = 1, 2, ..., n] ...and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation
Diagnosis of delay-deadline failures in real time discrete event models.
Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha
2007-10-01
In this paper a method for fault detection and diagnosis (FDD) of real time systems has been developed. A modeling framework termed the real time discrete event system (RTDES) model is presented and a mechanism for FDD of the same has been developed. The use of the RTDES framework for FDD is an extension of the works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models is suited for real time systems because of their capability of representing timing faults leading to failures in terms of erroneous delays and deadlines, which FSM-based ones cannot address. The concept of measurement restriction of variables is introduced for RTDES and the consequent equivalence of states and indistinguishability of transitions have been characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and the procedure of constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
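The general two-stage idea (screen with a cheap surrogate, then re-evaluate only samples near the failure boundary with the expensive model) can be illustrated on a toy one-dimensional problem. The sketch below is only that: the real study builds its surrogate with Karhunen-Loève expansion, sliced inverse regression and polynomial chaos, whereas here a plain least-squares polynomial stands in, and `expensive_model`, the threshold and the band width are invented.

```python
# Hedged sketch of a two-stage Monte Carlo estimate of a small failure probability.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):          # stand-in for a CPU-demanding transport model
    return np.exp(0.8 * x) + 0.05 * np.sin(5 * x)

threshold = 6.0                  # "failure" = model output exceeding this value

# Stage 0: fit a cheap surrogate from a small design of expensive runs.
x_design = np.linspace(-3, 3, 15)
coeffs = np.polyfit(x_design, expensive_model(x_design), deg=3)
surrogate = lambda x: np.polyval(coeffs, x)

# Stage 1: surrogate-based Monte Carlo over the input distribution.
x_mc = rng.normal(size=200_000)
y_sur = surrogate(x_mc)

# Stage 2: re-evaluate only samples close to the failure boundary with the
# expensive model, correcting the surrogate's bias there.
band = 0.5
near = np.abs(y_sur - threshold) < band
y_final = y_sur.copy()
y_final[near] = expensive_model(x_mc[near])

p_fail = np.mean(y_final > threshold)
print(f"re-evaluated {near.sum()} of {x_mc.size} samples, P(failure) ~ {p_fail:.2e}")
```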
Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers
NASA Technical Reports Server (NTRS)
Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis. This can lead the engineer to follow several fruitless paths in the diagnostic process before they find the actual failure. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system that performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures based on how likely they are and how long it would take or how much it would cost to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence to help them in their decisions. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment. The techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
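The classic rule for sequencing single-fault inspections is to test in decreasing order of failure probability divided by test cost, which minimizes the expected cost of locating the fault. The sketch below illustrates that rule with invented hypotheses; it is not the FTDOTS implementation itself.

```python
# Hedged sketch: rank candidate failures by probability-to-cost ratio, a classic
# rule for minimizing the expected cost of locating a single fault. The data are
# invented and this only illustrates the idea of an optimal test sequence.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    probability: float   # from the fault tree model, given the symptoms
    cost: float          # time or cost to isolate and repair

hypotheses = [
    Hypothesis("power supply", 0.40, 30.0),
    Hypothesis("sensor board", 0.35, 10.0),
    Hypothesis("cable harness", 0.15, 5.0),
    Hypothesis("controller",   0.10, 60.0),
]

# Investigate in decreasing order of probability / cost.
sequence = sorted(hypotheses, key=lambda h: h.probability / h.cost, reverse=True)

expected_cost, spent = 0.0, 0.0
for h in sequence:
    spent += h.cost
    expected_cost += h.probability * spent   # cost incurred if h is the true fault
    print(f"check {h.name:14s} p={h.probability:.2f} cost={h.cost:5.1f}")
print(f"expected cost of the sequence: {expected_cost:.1f}")
```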
Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.
Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H
2015-09-21
Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, of the dynamic processes of networks. We illustrate the network model by selected examples from biology, and social science.
Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach.
Rausch, M K; Karniadakis, G E; Humphrey, J D
2017-02-01
Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues.
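The key point that connectivity in SPH lives in the neighbor list, so rupture can be represented by editing that list, can be shown with a toy example. The sketch below is illustrative only: the damage field, cutoff radius and threshold are assumptions, and it is not the anisotropic hyperelastic SPH formulation of the paper.

```python
# Hedged sketch: in SPH, a rupture can be modeled by dropping "failed" particles
# from the neighbor lists. Toy 2D example with an invented damage field.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 1.0, size=(200, 2))   # particle positions
damage = rng.uniform(0.0, 1.0, size=200)           # accumulated damage in [0, 1]

h = 0.12                # smoothing length / neighbor cutoff
damage_threshold = 0.95

tree = cKDTree(positions)
neighbor_lists = tree.query_ball_point(positions, r=2.0 * h)

failed = set(np.flatnonzero(damage > damage_threshold))

# Disconnect failed particles: they keep no neighbors and appear in no list.
for i, neigh in enumerate(neighbor_lists):
    if i in failed:
        neighbor_lists[i] = []
    else:
        neighbor_lists[i] = [j for j in neigh if j != i and j not in failed]

active_counts = [len(n) for i, n in enumerate(neighbor_lists) if i not in failed]
print(f"{len(failed)} particles disconnected; "
      f"mean neighbors per active particle: {np.mean(active_counts):.1f}")
```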
Earthquake triggering by transient and static deformations
Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.
1998-01-01
Observational evidence for both static and transient near-field and far-field triggered seismicity is explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study a triggered earthquake is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the earthquake cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history. Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress at the initiation of failure, whereas static loads that are applied sufficiently late raise it. Rate-and-state friction predictions differ markedly from those based on Coulomb failure stress changes (ΔCFS) in which Δt equals the amplitude of the static stress change divided by the background stressing rate. The ΔCFS model assumes a stress failure threshold, while the rate-and-state equations require a slip failure threshold. The complete rate-and-state equations predict larger Δt than the ΔCFS model does for static stress steps at small t0, and smaller Δt than the ΔCFS model for stress steps at large t0. The ΔCFS model predicts nonzero Δt only for transient loads that raise the stress to failure stress levels during the transient. In contrast, the rate-and-state model predicts nonzero Δt for smaller loads, and triggered failure may occur well after the transient is finished. We consider heuristically the effects of triggering on a population of faults, as these effects might be evident in seismicity data. Triggering is manifest as an initial increase in seismicity rate that may be followed by a quiescence or by a return to the background rate. Available seismicity data are insufficient to discriminate whether triggered earthquakes are "new" or clock advanced.
However, if triggering indeed results from advancing the failure time of inevitable earthquakes, then our modeling suggests that a quiescence always follows transient triggering and that the duration of increased seismicity also cannot exceed the duration of a triggering transient load. Quiescence follows static triggering only if the population of available faults is finite.
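For reference, the ΔCFS baseline against which the rate-and-state predictions are compared maps a static stress step directly into a clock advance, Δt = Δτ / (background stressing rate). The minimal sketch below only encodes that baseline relation with invented numbers; it does not reproduce the paper's rate-and-state solutions.

```python
# Hedged sketch of the Coulomb (delta-CFS) clock-advance baseline: a static stress
# step delta_tau advances failure time by delta_tau / tau_dot. Numbers are invented;
# the rate-and-state solutions discussed in the abstract are NOT reproduced here.
def coulomb_clock_advance(delta_tau_mpa: float, stressing_rate_mpa_per_yr: float) -> float:
    """Clock advance (years) for a static Coulomb stress step."""
    return delta_tau_mpa / stressing_rate_mpa_per_yr

# Example: a 0.1 MPa coseismic stress step on a fault loaded at 0.01 MPa/yr.
print(coulomb_clock_advance(0.1, 0.01), "years of clock advance")  # 10.0
```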
Low Fidelity Simulation of a Zero-G Robot
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical freeflying robot designed for use in zero gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
A predictive model for failure properties of thermoset resins
NASA Technical Reports Server (NTRS)
Caruthers, James M.; Bowles, Kenneth J.
1989-01-01
A predictive model for the three-dimensional failure behavior of engineering polymers has been developed in a recent NASA-sponsored research program. This model acknowledges the underlying molecular deformation mechanisms and thus accounts for the effects of different chemical compositions, crosslink density, functionality of the curing agent, etc., on the complete nonlinear stress-strain response including yield. The material parameters required by the model can be determined from test-tube quantities of a new resin in only a few days. Thus, we can obtain a first-order prediction of the applicability of a new resin for an advanced aerospace application without synthesizing the large quantities of material needed for failure testing. This technology will effect order-of-magnitude reductions in the time and expense required to develop new engineering polymers.
Integrating FMEA in a Model-Driven Methodology
NASA Astrophysics Data System (ADS)
Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno
2016-08-01
Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system, by integration into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation and effects from SysML design models, by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.
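The automatic worksheet generation mentioned above can be pictured with a very small example: a list of component failure modes is turned into ranked worksheet rows using the conventional risk priority number (RPN = severity x occurrence x detection). The components and ratings below are invented, and this is not the paper's SysML/FMEA Diagram tooling.

```python
# Hedged sketch: generating a toy FMEA worksheet and ranking rows by RPN.
failure_modes = [
    # component, failure mode, effect, severity, occurrence, detection (1-10 scales)
    ("pump",   "seal leak",      "loss of pressure",    7, 4, 3),
    ("sensor", "stuck-at value", "wrong control input", 8, 3, 6),
    ("valve",  "fails closed",   "flow blocked",        6, 2, 2),
]

worksheet = []
for component, mode, effect, sev, occ, det in failure_modes:
    worksheet.append({
        "component": component, "failure mode": mode, "effect": effect,
        "S": sev, "O": occ, "D": det, "RPN": sev * occ * det,
    })

for row in sorted(worksheet, key=lambda r: r["RPN"], reverse=True):
    print(f'{row["RPN"]:4d}  {row["component"]:8s} {row["failure mode"]:15s} {row["effect"]}')
```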
Regression analysis of informative current status data with the additive hazards model.
Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo
2015-04-01
This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and also there exists a large literature on parametric analysis of informative current status data in the context of tumorgenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth-orbiting laboratory has been in orbit since 1998. Some elements that were launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments on ISS, it is essential for the valuable research on ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life and increases at the end when the design service life ends and the wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators, therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration.
Midway through the design life, when benefits justify additional investments, a supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system, without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago. There are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, by way of cherry-picking candidate components based on failure mode effects analysis, system-level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When the specific failure mechanism being considered and the underlying relationship of that mode to the stresses applied in the test can be correlated by supporting analysis, the time and effort required for conducting life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses that allow direct correlation between the times to failure at a high test temperature and those at the temperatures expected in actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
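The Arrhenius acceleration factor described above has the standard form AF = exp((Ea/k)(1/T_use - 1/T_test)) with temperatures in kelvin. The sketch below computes it; the 0.7 eV activation energy is an illustrative assumption, not a value from the article.

```python
# Hedged sketch of the standard Arrhenius acceleration factor used in accelerated
# life testing. The activation energy below is an assumption for illustration.
import math

BOLTZMANN_EV = 8.617e-5  # eV/K

def acceleration_factor(ea_ev: float, t_use_c: float, t_test_c: float) -> float:
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

af = acceleration_factor(ea_ev=0.7, t_use_c=25.0, t_test_c=85.0)
print(f"1 hour at 85 C corresponds to ~{af:.0f} hours at 25 C for this assumed mechanism")
```

As the abstract notes, the correlation is only valid as long as the elevated temperature does not activate failure mechanisms absent at the use temperature.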
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
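The Brown-Proschan model referenced above can be simulated directly: on each failure the repair is perfect (age reset to zero) with some probability p and minimal (age retained) otherwise. The sketch below uses a Weibull lifetime with shape greater than one, i.e., an increasing failure rate; all parameter values are invented.

```python
# Hedged sketch: simulating the Brown-Proschan imperfect repair model with a
# Weibull (IFR) lifetime. Perfect repair resets age with probability p_perfect;
# otherwise the repair is minimal and the unit keeps its age. Parameters invented.
import numpy as np

rng = np.random.default_rng(42)

def next_failure_age(current_age, shape, scale):
    """Draw the next failure age given survival to current_age (Weibull hazard)."""
    u = rng.uniform()
    return scale * ((current_age / scale) ** shape - np.log(u)) ** (1.0 / shape)

def simulate(p_perfect=0.3, shape=2.5, scale=100.0, horizon=1000.0):
    clock, age, failures = 0.0, 0.0, []
    while True:
        new_age = next_failure_age(age, shape, scale)
        clock += new_age - age
        if clock > horizon:
            return np.array(failures)
        failures.append(clock)
        age = 0.0 if rng.uniform() < p_perfect else new_age

times = simulate()
print(f"{times.size} failures in 1000 time units; mean gap {np.mean(np.diff(times)):.1f}")
```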
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
A Closed Network Queue Model of Underground Coal Mining Production, Failure, and Repair
NASA Technical Reports Server (NTRS)
Lohman, G. M.
1978-01-01
Underground coal mining system production, failures, and repair cycles were mathematically modeled as a closed network of two queues in series. The model was designed to better understand the technological constraints on availability of current underground mining systems, and to develop guidelines for estimating the availability of advanced mining systems and their associated needs for spares as well as production and maintenance personnel. It was found that mine performance is theoretically limited by the maintainability ratio; significant gains in availability appear possible by means of small improvements in the time between failures; the number of crews and sections should be properly balanced for any given maintainability ratio; and main haulage systems closest to the mine mouth require the most attention to reliability.
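The limiting role of the maintainability ratio can be pictured with the simplest possible reduction of the model: a single section alternating between production and repair. With exponential up-times (MTBF) and repair times (MTTR), long-run availability tends to MTBF/(MTBF+MTTR) = 1/(1+ρ), where ρ = MTTR/MTBF. The sketch below checks that by simulation with invented values; it is not the report's two-queue network.

```python
# Hedged sketch: a single production/repair cycle as an alternating renewal process,
# checking availability against 1/(1 + rho) with rho = MTTR/MTBF. Values invented.
import numpy as np

rng = np.random.default_rng(7)
mtbf, mttr = 8.0, 2.0          # hours (invented)
rho = mttr / mtbf

up = rng.exponential(mtbf, size=100_000)
down = rng.exponential(mttr, size=100_000)
availability = up.sum() / (up.sum() + down.sum())

print(f"simulated availability {availability:.3f}  vs  1/(1+rho) = {1/(1+rho):.3f}")
```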
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained with the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
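The core computation, Newton-Raphson maximization of a likelihood with a log-linear failure rate, can be sketched compactly. The example below (in Python rather than FORTRAN) fits an exponential failure-time regression with rate λ_i = exp(x_i'β) to uncensored toy data; censoring, competing risks and time-dependent covariates handled by the original program are omitted.

```python
# Hedged sketch of Newton-Raphson MLE for a log-linear failure rate,
# lambda_i = exp(x_i @ beta), with uncensored exponential failure times.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + one covariate
beta_true = np.array([-1.0, 0.5])
t = rng.exponential(1.0 / np.exp(X @ beta_true))          # simulated failure times

beta = np.zeros(2)
for _ in range(25):                                        # Newton-Raphson iterations
    lam = np.exp(X @ beta)
    grad = X.T @ (1.0 - t * lam)                           # d loglik / d beta
    hess = -(X * (t * lam)[:, None]).T @ X                 # d^2 loglik / d beta^2
    step = np.linalg.solve(hess, grad)
    beta = beta - step
    if np.max(np.abs(step)) < 1e-10:
        break

print("estimated beta:", beta, " true beta:", beta_true)
```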
Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.
2015-01-01
Background Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
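Dempster's rule of combination, used above to fuse evidence from the parallel diagnostic models, has a compact generic form: masses of intersecting focal elements are multiplied and renormalized by the non-conflicting mass. The sketch below illustrates only that combination step with invented hypotheses and masses, not the parity-equation models themselves.

```python
# Hedged sketch of Dempster's rule of combination for two basic probability
# assignments (mass functions) over a small frame of discernment. Data invented.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions whose keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

frame = frozenset({"actuator", "sensor", "healthy"})
m_model1 = {frozenset({"actuator"}): 0.6, frame: 0.4}               # evidence from model 1
m_model2 = {frozenset({"actuator", "sensor"}): 0.7, frame: 0.3}     # evidence from model 2

for focal, mass in dempster_combine(m_model1, m_model2).items():
    print(sorted(focal), round(mass, 3))
```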
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
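The copula sampling step described above can be illustrated with a Gaussian copula linking two exponential failure-time marginals. The sketch below uses Python/SciPy rather than the R/WinBUGS tooling of the paper, and the correlation and failure rates are invented values, not Bayesian estimates from data.

```python
# Hedged sketch: drawing dependent failure times through a Gaussian copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
rho = 0.8                                  # assumed copula correlation
rates = np.array([1 / 500.0, 1 / 650.0])   # assumed failure rates (per hour)

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = stats.norm.cdf(z)                      # uniform marginals with Gaussian dependence
t = stats.expon.ppf(u, scale=1.0 / rates)  # map to exponential failure times

rho_s, _ = stats.spearmanr(t[:, 0], t[:, 1])
print("rank correlation of failure times:", round(rho_s, 3))
print("P(both fail within 100 h):", float(np.mean((t < 100.0).all(axis=1))))
```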
Heterogeneity: The key to forecasting material failure?
NASA Astrophysics Data System (ADS)
Vasseur, J.; Wadsworth, F. B.; Lavallée, Y.; Dingwell, D. B.
2014-12-01
Empirical mechanistic models have been applied to the description of the stress and strain rate upon failure for heterogeneous materials. The behaviour of porous rocks and their analogous two-phase viscoelastic suspensions is particularly well described by such models. Nevertheless, failure cannot yet be predicted, forcing a reliance on other empirical prediction tools such as the Failure Forecast Method (FFM). Measurable, accelerating rates of physical signals (e.g., seismicity and deformation) preceding failure are often used as proxies for damage accumulation in the FFM. Previous studies have already statistically assessed the applicability and performance of the FFM, but none (to the best of our knowledge) has done so in terms of intrinsic material properties. Here we use a rheological standard glass, which has been powdered and then sintered for different times (up to 32 hours) at high temperature (675°C) in order to achieve a sample suite with porosities in the range of 0.10-0.45 gas volume fraction. This sample suite was then subjected to mechanical tests in a uniaxial press at a constant strain rate of 10⁻³ s⁻¹ and a temperature in the region of the glass transition. A dual acoustic emission (AE) rig has been employed to test the success of the FFM in these materials of systematically varying porosity. The pore-emanating crack model describes well the peak stress at failure in the elastic regime for these materials. We show that the FFM predicts failure within 0-15% error at porosities >0.2. However, when porosities are <0.2, the forecast error associated with predicting the failure time increases to >100%. We interpret these results as a function of the low efficiency with which strain energy can be released in the scenario where there are few or no heterogeneities from which cracks can propagate. These observations shed light on questions surrounding the variable efficacy of the FFM applied to active volcanoes. In particular, they provide a systematic demonstration of the fact that a good understanding of the material properties is required. Thus, we wish to emphasize the need for a better coupling of empirical failure forecasting models with mechanical parameters, such as failure criteria for heterogeneous materials, and point to the implications of this for a broad range of material-based disciplines.
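In its simplest and most common form, the FFM fits a Voight-type acceleration of the precursor rate; for exponent 2 the inverse rate falls linearly to zero at the failure time, so a straight-line fit to 1/rate gives the forecast. The sketch below demonstrates that on synthetic AE data; the exponent-2 assumption and all numbers are illustrative, not the paper's measurements.

```python
# Hedged sketch of the simplest Failure Forecast Method fit: extrapolate a linear
# fit of the inverse precursor rate to zero. Synthetic data; exponent 2 assumed.
import numpy as np

t_fail_true = 100.0
t = np.linspace(10.0, 90.0, 40)                        # observation times (s)
rate = 50.0 / (t_fail_true - t)                        # accelerating AE event rate
rate *= np.random.default_rng(5).lognormal(0.0, 0.1, t.size)   # measurement scatter

inv_rate = 1.0 / rate
slope, intercept = np.polyfit(t, inv_rate, 1)          # linear fit to inverse rate
t_fail_forecast = -intercept / slope                   # where the fit crosses zero

print(f"forecast failure time: {t_fail_forecast:.1f} s (true value {t_fail_true} s)")
```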
Influence of enamel preservation on failure rates of porcelain laminate veneers.
Gurel, Galip; Sesma, Newton; Calamita, Marcelo A; Coachman, Christian; Morimoto, Susana
2013-01-01
The purpose of this study was to evaluate the failure rates of porcelain laminate veneers (PLVs) and the influence of clinical parameters on these rates in a retrospective survey of up to 12 years. Five hundred eighty laminate veneers were bonded in 66 patients. The following parameters were analyzed: type of preparation (depth and margin), crown lengthening, presence of restoration, diastema, crowding, discoloration, abrasion, and attrition. Survival was analyzed using the Kaplan-Meier method. Cox regression modeling was used to determine which factors would predict PLV failure. Forty-two veneers (7.2%) failed in 23 patients, and an overall cumulative survival rate of 86% was observed. A statistically significant association was noted between failure and the limits of the prepared tooth surface (margin and depth). The most frequent failure type was fracture (n = 20). The results revealed no significant influence of crown lengthening apically, presence of restoration, diastema, discoloration, abrasion, or attrition on failure rates. Multivariable analysis (Cox regression model) also showed that PLVs bonded to dentin and teeth with preparation margins in dentin were approximately 10 times more likely to fail than PLVs bonded to enamel. Moreover, coronal crown lengthening increased the risk of PLV failure by 2.3 times. A survival rate of 99% was observed for veneers with preparations confined to enamel and 94% for veneers with enamel only at the margins. Laminate veneers have high survival rates when bonded to enamel and provide a safe and predictable treatment option that preserves tooth structure.
Spatio-temporal propagation of cascading overload failures in spatially embedded networks
NASA Astrophysics Data System (ADS)
Zhao, Jichang; Li, Daqing; Sanhedrai, Hillel; Cohen, Reuven; Havlin, Shlomo
2016-01-01
Unlike the direct-contact spread of epidemics, overload failures propagate through hidden functional dependencies. Many studies have focused on the critical conditions and catastrophic consequences of cascading failures. However, to understand the network vulnerability and mitigate the cascading overload failures, the knowledge of how the failures propagate in time and space is essential but still missing. Here we study the spatio-temporal propagation behaviour of cascading overload failures analytically and numerically on spatially embedded networks. The cascading overload failures are found to spread radially from the centre of the initial failure with an approximately constant velocity. The propagation velocity decreases with increasing tolerance, and can be well predicted by our theoretical framework with one single correction for all the tolerance values. This propagation velocity is found to be similar in various model networks and real network structures. Our findings may help to predict the dynamics of cascading overload failures in realistic systems.
Evaluating the best time to intervene acute liver failure in rat models induced by d-galactosamine.
Éboli, Lígia Patrícia de Carvalho Batista; Netto, Alcides Augusto Salzedas; Azevedo, Ramiro Antero de; Lanzoni, Valéria Pereira; Paula, Tatiana Sugayama de; Goldenberg, Alberto; Gonzalez, Adriano Miziara
2016-12-01
To describe an animal model for acute liver failure by intraperitoneal d-galactosamine injections in rats and to define the best time to intervene, based on evaluation of the King's College and Clichy's criteria. Sixty-one Wistar female rats were distributed into three groups: group 1 (11 rats received 1.4 g/kg of d-galactosamine intraperitoneally and were observed until they died); group 2 (44 rats received a dose of 1.4 g/kg of d-galactosamine, and blood and histological samples were collected for analysis at 12, 24, 48, 72 and 120 hours after the injection); and a control group (6 rats). Twelve hours after applying d-galactosamine, AST/ALT, bilirubin, factor V, PT and INR were already altered. The peak was reached at 48 hours. INR > 6.5 was found 12 hours after the injection and factor V < 30% after 24 hours. All the laboratory variables presented statistical differences, except urea (p = 0.758). There were statistical differences among all the histological variables analyzed. King's College and Clichy's criteria were fulfilled 12 hours after the d-galactosamine injection and this time may represent the best time to intervene in this acute liver failure animal model.
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
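A simplified way to compare k-out-of-n configurations, useful only as an illustration of why the reward curves differ, is to take the probability that at least k of n machines are up given an assumed per-machine availability. The toy binomial calculation below is not the thesis's measurement-based reward model.

```python
# Hedged illustration: a binomial proxy for k-out-of-n "reward", i.e. the probability
# that at least k of n machines are simultaneously up. Availability is assumed.
from math import comb

def at_least_k_up(k: int, n: int, availability: float) -> float:
    return sum(comb(n, i) * availability**i * (1 - availability)**(n - i)
               for i in range(k, n + 1))

n, availability = 7, 0.95
for k in range(7, 2, -1):
    print(f"{k}-out-of-{n}: reward proxy = {at_least_k_up(k, n, availability):.4f}")
```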
The evolution of concepts for soil erosion modelling
NASA Astrophysics Data System (ADS)
Kirkby, Mike
2013-04-01
From the earliest models for soil erosion, based on power laws relating sediment discharge or yield to slope length and gradient, the development of the Universal Soil Loss Equation was a natural step, although one that has long continued to hinder the development of better perceptual models for erosion processes. Key stumbling blocks have been: (1) the failure to go through runoff generation as a key intermediary; (2) the failure to separate hydrological and strength parameters of the soil; (3) the failure to treat sediment transport along a slope as a routing problem; and (4) the failure to analyse the nature of the dependence on vegetation. Key advances have been in these directions (among others): (1) improved understanding of the hydrological processes (e.g. infiltration and runoff, sediment entrainment), leading to KINEROS, LISEM, WEPP and PESERA; (2) recognition of selective sediment transport (e.g. transport- or supply-limited removal, grain travel distances), leading e.g. to MAHLERAN; and (3) development of models adapted to particular time/space scales. Some major remaining problems are: (1) the failure to integrate geomorphological and agronomic approaches; (2) tillage erosion - is erosion loss of sediment or lowering of the centre of mass?; and (3) dynamic change during an event, as rills etc. form.
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events - failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.
2013-10-01
Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used to model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%), respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provide a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk stratification, for example based on molecular biomarkers, when the necessary data become available.
Resetting the Clock: The Dynamics of Organizational Change and Failure.
ERIC Educational Resources Information Center
Amburgey, Terry L.; And Others
1993-01-01
When viewed dynamically, organizational change can be both adaptive and disruptive. When viewed over time, the same forces rendering organizations inert also make them more malleable. These ideas are supported by dynamic models of organizational failure and change estimated on a population of 1,011 Finnish newspaper organizations over 193 years. Change…
Cascading Failures as Continuous Phase-Space Transitions
Yang, Yang; Motter, Adilson E.
2017-12-14
In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in a closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.
NASA Astrophysics Data System (ADS)
White, Bradley W.; Tarver, Craig M.
2017-01-01
It has long been known that detonating single crystals of solid explosives have much larger failure diameters than those of heterogeneous charges of the same explosive pressed or cast to 98 - 99% theoretical maximum density (TMD). In 1957, Holland et al. demonstrated that PETN single crystals have failure diameters of about 8 mm, whereas heterogeneous PETN charges have failure diameters of less than 0.5 mm. Recently, Fedorov et al. quantitatively determined nanosecond time resolved detonation reaction zone profiles of single crystals of PETN and HMX by measuring the interface particle velocity histories of the detonating crystals and LiF windows using a PDV system. The measured reaction zone time durations for PETN and HMX single crystal detonations were approximately 100 and 260 nanoseconds, respectively. These experiments provided the necessary data to develop Ignition and Growth (I&G) reactive flow model parameters for the single crystal detonation reaction zones. Using these parameters, the calculated unconfined failure diameter of a PETN single crystal was 7.5 +/- 0.5 mm, close to the 8 mm experimental value. The calculated failure diameter of an unconfined HMX single crystal was 15 +/- 1 mm. The unconfined failure diameter of an HMX single crystal has not yet been determined precisely, but Fedorov et al. detonated 14 mm diameter crystals confined by detonating a HMX-based plastic bonded explosive (PBX) without initially overdriving the HMX crystals.
Verification of the Multi-Axial, Temperature and Time Dependent (MATT) Failure Criterion
NASA Technical Reports Server (NTRS)
Richardson, David E.; Macon, David J.
2005-01-01
An extensive test and analytical effort has been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to characterize the failure behavior of two epoxy adhesives (TIGA 321 and EA946). As part of this effort, a general failure model, the "Multi-Axial, Temperature, and Time Dependent" or MATT failure criterion was developed. In the initial development of this failure criterion, tests were conducted to provide validation of the theory under a wide range of test conditions. The purpose of this paper is to present additional verification of the MATT failure criterion, under new loading conditions for the adhesives TIGA 321 and EA946. In many cases, the loading conditions involve an extrapolation from the conditions under which the material models were originally developed. Testing was conducted using three loading conditions: multi-axial tension, torsional shear, and non-uniform tension in a bondline condition. Tests were conducted at constant and cyclic loading rates ranging over four orders of magnitude. Tests were conducted under environmental conditions of primary interest to the RSRM program. The temperature range was not extreme, but the loading ranges were extreme (varying by four orders of magnitude). It should be noted that the testing was conducted at temperatures below the glass transition temperature of the TIGA 321 adhesive. For the EA946, however, the testing was conducted at temperatures that bracketed the glass transition temperature.
NASA Astrophysics Data System (ADS)
Main, I. G.; Bell, A. F.; Naylor, M.; Atkinson, M.; Filguera, R.; Meredith, P. G.; Brantut, N.
2012-12-01
Accurate prediction of catastrophic brittle failure in rocks and in the Earth presents a significant challenge on theoretical and practical grounds. The governing equations are not known precisely, but are known to produce highly non-linear behavior similar to that of near-critical dynamical systems, with a large and irreducible stochastic component due to material heterogeneity. In a laboratory setting mechanical, hydraulic and rock physical properties are known to change in systematic ways prior to catastrophic failure, often with significant non-Gaussian fluctuations about the mean signal at a given time, for example in the rate of remotely-sensed acoustic emissions. The effectiveness of such signals in real-time forecasting has never been tested before in a controlled laboratory setting, and previous work has often been qualitative in nature, and subject to retrospective selection bias, though it has often been invoked as a basis in forecasting natural hazard events such as volcanoes and earthquakes. Here we describe a collaborative experiment in real-time data assimilation to explore the limits of predictability of rock failure in a best-case scenario. Data are streamed from a remote rock deformation laboratory to a user-friendly portal, where several proposed physical/stochastic models can be analysed in parallel in real time, using a variety of statistical fitting techniques, including least squares regression, maximum likelihood fitting, Markov-chain Monte-Carlo and Bayesian analysis. The results are posted and regularly updated on the web site prior to catastrophic failure, to ensure a true and verifiable prospective test of forecasting power. Preliminary tests on synthetic data with known non-Gaussian statistics show how forecasting power is likely to evolve in the live experiments. In general the predicted failure time does converge on the real failure time, illustrating the bias associated with the 'benefit of hindsight' in retrospective analyses. Inference techniques that account explicitly for non-Gaussian statistics significantly reduce the bias, and increase the reliability and accuracy, of the forecast failure time in prospective mode.
Prediction of L70 lumen maintenance and chromaticity for LEDs using extended Kalman filter models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Davis, Lynn
2013-09-30
Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to “burn”. The LEDs continually degrade and the light output decreases eventually below useful levels causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in a LED at a single time causing lumen depreciation. The underlying TM-21 Model may not capture the failure physics in presence of multiple failure mechanisms. Correlation of lumen maintenance with underlying physics of degradation at system-level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filters (EKF) have been used to develop a 70-percent Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state at each future time has been computed based on the state space at preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. The measured state variable has been related to the underlying damage using physics-based models. Life prediction of L70 life for the LEDs used in SSL luminaires from KF and EKF based models have been compared with the TM-21 model predictions and experimental data.
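A minimal sketch of the kind of state-space tracking described: a linear Kalman filter over a [lumen maintenance, degradation rate] state, extrapolated to the 70% threshold. The transition and noise matrices, the synthetic LM-80-like data, and the linear extrapolation to L70 are illustrative assumptions, not the authors' second-order KF/EKF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 500.0                      # hours between readouts (illustrative)
F = np.array([[1.0, dt],        # state: [lumen maintenance (%), rate (%/h)]
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # only lumen maintenance is measured
Q = np.diag([1e-4, 1e-10])      # process noise (illustrative)
R = np.array([[0.25]])          # measurement noise (illustrative)

def kalman_track(measurements, x0=(100.0, -1e-3)):
    x, P = np.array(x0, float), np.eye(2)
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.atleast_1d(z) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Synthetic LM-80-like data: slow exponential lumen depreciation plus noise.
t = np.arange(1, 21) * dt
z = 100.0 * np.exp(-2e-5 * t) + rng.normal(0, 0.5, size=t.size)

x, _ = kalman_track(z)
lumen, rate = x
hours_to_L70 = (70.0 - lumen) / rate            # linear extrapolation of the state
print(f"estimated L70 life ~ {t[-1] + hours_to_L70:.0f} hours")
```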
Lanfear, David E; Levy, Wayne C; Stehlik, Josef; Estep, Jerry D; Rogers, Joseph G; Shah, Keyur B; Boyle, Andrew J; Chuang, Joyce; Farrar, David J; Starling, Randall C
2017-05-01
Timing of left ventricular assist device (LVAD) implantation in advanced heart failure patients not on inotropes is unclear. Relevant prediction models exist (SHFM [Seattle Heart Failure Model] and HMRS [HeartMate II Risk Score]), but use in this group is not established. ROADMAP (Risk Assessment and Comparative Effectiveness of Left Ventricular Assist Device and Medical Management in Ambulatory Heart Failure Patients) is a prospective, multicenter, nonrandomized study of 200 advanced heart failure patients not on inotropes who met indications for LVAD implantation, comparing the effectiveness of HeartMate II support versus optimal medical management. We compared SHFM-predicted versus observed survival (overall survival and LVAD-free survival) in the optimal medical management arm (n=103) and HMRS-predicted versus observed survival in all LVAD patients (n=111) using Cox modeling, receiver-operator characteristic (ROC) curves, and calibration plots. In the optimal medical management cohort, the SHFM was a significant predictor of survival (hazard ratio=2.98; P <0.001; ROC area under the curve=0.71; P <0.001) but not LVAD-free survival (hazard ratio=1.41; P =0.097; ROC area under the curve=0.56; P =0.314). SHFM showed adequate calibration for survival but overestimated LVAD-free survival. In the LVAD cohort, the HMRS had marginal discrimination at 3 (Cox P =0.23; ROC area under the curve=0.71; P =0.026) and 12 months (Cox P =0.036; ROC area under the curve=0.62; P =0.122), but calibration was poor, underestimating survival across time and risk subgroups. In non-inotrope-dependent advanced heart failure patients receiving optimal medical management, the SHFM was predictive of overall survival but underestimated the risk of clinical worsening and LVAD implantation. Among LVAD patients, the HMRS had marginal discrimination and underestimated survival post-LVAD implantation. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01452802. © 2017 American Heart Association, Inc.
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant. This system was modeled using Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of mnemonic rule and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
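A minimal sketch of the computational steps named here: the Chapman-Kolmogorov system dP/dt = PQ for a small birth-death model with exponential failure and repair rates, integrated with the classical fourth-order Runge-Kutta scheme. The two-subsystem structure and the rates are illustrative, not the urea-plant model.

```python
import numpy as np

# States: 0 = both subsystems up, 1 = subsystem A down, 2 = subsystem B down.
lamA, muA = 0.02, 0.50      # failure / repair rate of A (per hour, illustrative)
lamB, muB = 0.01, 0.25      # failure / repair rate of B

# Generator (transition-rate) matrix Q; rows sum to zero: dP/dt = P @ Q
Q = np.array([
    [-(lamA + lamB),  lamA,  lamB],
    [ muA,           -muA,   0.0 ],
    [ muB,            0.0,  -muB ],
])

def rk4_step(P, h):
    """One classical fourth-order Runge-Kutta step for dP/dt = P @ Q."""
    f = lambda p: p @ Q
    k1 = f(P)
    k2 = f(P + 0.5 * h * k1)
    k3 = f(P + 0.5 * h * k2)
    k4 = f(P + h * k3)
    return P + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

P = np.array([1.0, 0.0, 0.0])       # start with everything working
h, T = 0.1, 2000.0
for _ in range(int(T / h)):
    P = rk4_step(P, h)

print("long-run availability ~", P[0])   # probability of the 'all up' state
```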
Material failure modelling in metals at high strain rates
NASA Astrophysics Data System (ADS)
Panov, Vili
2005-07-01
Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Using stress gauges, which were supported with PMMA blocks on the back of the target plates, stress-time histories have been recorded. After testing, microstructural observations of the softly recovered OFHC Cu spalled specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept of thermal activation of material separation during fracture was adopted as the basic mechanism in developing this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and it was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations has been performed for OFHC Cu. The numerical analysis results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, is able to reproduce the ``pull-back'' stresses of the free surface caused by creation of the internal spalling, and enables one to analyse numerically the spalling over a wide range of impact velocities.
Kuo, Lindsay E; Kaufman, Elinore; Hoffman, Rebecca L; Pascual, Jose L; Martin, Niels D; Kelz, Rachel R; Holena, Daniel N
2017-03-01
Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center's ability to successfully "rescue" patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. All adjudications from a mortality review panel at an academic level I trauma center from 2005-2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47-3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30-66.71) judgment. Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. Copyright © 2016 Elsevier Inc. All rights reserved.
Kuo, Lindsay E.; Kaufman, Elinore; Hoffman, Rebecca L.; Pascual, Jose L.; Martin, Niels D.; Kelz, Rachel R.; Holena, Daniel N.
2018-01-01
Background Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center’s ability to successfully “rescue” patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. Methods All adjudications from a mortality review panel at an academic level I trauma center from 2005–2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Results Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47–3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30–66.71) judgment. Conclusion Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. PMID:27788924
On reliable control system designs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
Forgie, Marie M; Greer, Danielle M; Kram, Jessica J F; Vander Wyst, Kiley B; Salvo, Nicole P; Siddiqui, Danish S
2016-03-01
Foley catheters are used for cervical ripening during induction of labor. Previous studies suggest that use of a stylette (a thin, rigid wire) to guide catheter insertion decreases insertion failure. However, stylette effects on insertion outcomes have been sparsely studied. The purpose of this study was to compare catheter insertion times, patient-assessed pain levels, and insertion failure rates between women who received a digitally placed Foley catheter for cervical ripening with the aid of a stylette and women who received the catheter without a stylette. We conducted a randomized clinical trial of women aged ≥ 18 years who presented for induction of labor. Inclusion criteria were singletons with intact membranes and cephalic presentation. Women received a computer-generated random assignment of a Foley catheter insertion with a stylette (treatment group, n = 62) or without a stylette (control group, n = 61). For all women, a standard insertion technique protocol was used. Three primary outcomes were of interest, including the following: (1) insertion time (total minutes to successful catheter placement), (2) patient-assessed pain level (0-10), and (3) failure rate of the randomly assigned insertion method. Treatment-control differences were first examined using Pearson's test of independence and the Student t test. Per outcome, we also constructed 4 regression models, each including the random effect of physician and fixed effects of stylette use with patient nulliparity, a history of vaginal delivery, cervical dilation at presentation, or postgraduate year of the performing resident physician. Women who received the Foley catheter with the stylette vs without the stylette did not differ by age, race/ethnicity, body mass index, or any of several other characteristics. Regression models revealed that insertion time, patient pain, and insertion failure were unrelated to stylette use, nulliparity, and history of vaginal delivery. However, overall insertion time and failure were significantly influenced by cervical dilation, with insertion time decreasing by 21% (95% confidence interval [CI], 5-34%) and odds of failure decreasing by 71% (odds ratio, 0.29; 95% CI, 0.10-0.86) per 1 cm dilation. Resident postgraduate year also significantly influenced insertion time, with greater time required of physicians with less experience. Mean insertion time was 51% (95% CI, 23-69%) shorter for fourth-year than second-year residents. Statistically nonsignificant but prominent patterns in outcomes were also observed, suggesting stylette use may lengthen the overall insertion procedure but minimize variability in pain levels and decrease insertion failure. The randomized trial suggests that, even after accounting for nulliparity, history of vaginal delivery, cervical dilation, and physician experience, Foley catheter insertions with and without a stylette are equivalent in insertion times, patient pain levels, and failure of catheter placement. Copyright © 2016 Elsevier Inc. All rights reserved.
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Davis, J Lynn
2014-06-24
Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to “burn”. The LEDs continually degrade and the light output decreases eventually below useful levels causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in a LED at a single time causing lumen depreciation. The underlying TM-21 Model may not capture the failure physics in presence of multiple failure mechanisms. Correlation of lumen maintenance with underlying physics of degradation at system-level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filters (EKF) have been used to develop a 70-percent Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state at each future time has been computed based on the state space at preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. Life prediction of L70 life for the LEDs used in SSL luminaires from KF and EKF based models have been compared with the TM-21 model predictions and experimental data.
A fuzzy set approach for reliability calculation of valve controlling electric actuators
NASA Astrophysics Data System (ADS)
Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.
2017-02-01
Oil and gas equipment, and electric actuators in particular, frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters can be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuator failure data in terms of the amount of work completed before failure, instead of failure time. This paper also provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming the Kumaraswamy complementary Weibull geometric distribution as the lifetime (reliability) model for electric actuators.
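A minimal sketch of the fuzzy-parameter idea, using a plain two-parameter Weibull reliability and hazard function (rather than the paper's Kumaraswamy complementary Weibull geometric model) with triangular fuzzy parameters propagated through α-cuts; all numbers are illustrative, and the bounds are taken over the corners of the α-cut box, which is adequate for illustration.

```python
import numpy as np

def alpha_cut(tri, a):
    """Interval [lo, hi] of a triangular fuzzy number (l, m, r) at level a."""
    l, m, r = tri
    return l + a * (m - l), r - a * (r - m)

def weibull_R(w, shape, scale):
    return np.exp(-((w / scale) ** shape))      # reliability

def weibull_h(w, shape, scale):
    return (shape / scale) * (w / scale) ** (shape - 1)   # hazard rate

# Fuzzy parameters expressed as triangular numbers (illustrative values):
shape_tri = (1.6, 1.8, 2.0)
scale_tri = (900.0, 1000.0, 1100.0)   # in 'amount of work' units, e.g. valve strokes

def fuzzy_bounds(func, w, a):
    """Min/max of func over the corners of the α-cut box of (shape, scale)."""
    s_lo, s_hi = alpha_cut(shape_tri, a)
    c_lo, c_hi = alpha_cut(scale_tri, a)
    vals = [func(w, s, c) for s in (s_lo, s_hi) for c in (c_lo, c_hi)]
    return min(vals), max(vals)

w = 500.0                              # work completed, not calendar time
for a in (0.0, 0.5, 1.0):
    print(f"alpha={a}: R(w) in {fuzzy_bounds(weibull_R, w, a)}, "
          f"h(w) in {fuzzy_bounds(weibull_h, w, a)}")
```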
Hendriks, Celine; Drent, Marjolein; De Kleijn, Willemien; Elfferich, Marjon; Wijnen, Petal; De Vries, Jolanda
2018-05-01
Fatigue is a major and disabling problem in sarcoidosis. Knowledge concerning correlates of the development of fatigue and possible interrelationships is lacking. A conceptual model of fatigue was developed and tested. Sarcoidosis outpatients (n = 292) of Maastricht University Medical Center completed questionnaires regarding trait anxiety, depressive symptoms, cognitive failure, dyspnea, social support, and small fiber neuropathy (SFN) at baseline. Fatigue was assessed at 6 and 12 months. Sex, age, and time since diagnosis were taken from medical records. Pathways were estimated by means of path analyses in AMOS. Everyday cognitive failure, depressive symptoms, symptoms suggestive of SFN, and dyspnea were positive predictors of fatigue. Fit indices of the model were good. The model validly explains variation in fatigue. Everyday cognitive failure and depressive symptoms were the most important predictors of fatigue. In addition to physical functioning, cognitive and psychological aspects should be included in the management of sarcoidosis patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Diamond-Smith, Nadia; Moreau, Caroline; Bishai, David
2015-01-01
Despite high rates of contraceptive use in France, over a third of pregnancies are unintended. We built a dynamic micro simulation model which applies data from the French COCON study on method switching, discontinuation, and failure rates to a hypothetical population of 20,000 women, followed for 5 years. We use the model to estimate the adjustment factor needed to make the survey data fit the demographic profile of France, by adjusting for underreporting of contraceptive non-use and abortions. We then test three behavior change scenarios which would aim to reduce unintended pregnancies: decreasing method failure, increasing time spent on effective methods, and increasing switching from less to more effective methods. Our model suggests that decreasing method failure is the most effective strategy for reducing unintended pregnancies, but all scenarios reduced unintended pregnancies by at least 25%. Dynamic micro simulations such as this may be useful for policy makers. PMID:25469928
Diamond-Smith, Nadia G; Moreau, Caroline; Bishai, David M
2014-12-01
Although the rate of contraceptive use in France is high, more than one-third of pregnancies are unintended. We built a dynamic microsimulation model that applies data from the French COCON study on method switching, discontinuation, and failure rates to a hypothetical population of 20,000 women, followed for five years. We use the model to estimate the adjustment factor needed to make the survey data fit the demographic profile of France by adjusting for underreporting of contraceptive nonuse and abortion. We then test three behavior-change scenarios that could reduce unintended pregnancies: decreasing method failure, increasing time using effective methods, and increasing switching from less effective to more effective methods. Our model suggests that decreasing method failure is the most effective means of reducing unintended pregnancies, but we found that all of the scenarios reduced unintended pregnancies by at least 25 percent. Dynamic microsimulations may have great potential in reproductive health research and prove useful for policymakers. © 2014 The Population Council, Inc.
Real-time software failure characterization
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Finelli, George B.
1990-01-01
A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on the modeling of real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.
Vegter, Eline L; Ovchinnikova, Ekaterina S; Silljé, Herman H W; Meems, Laura M G; van der Pol, Atze; van der Velde, A Rogier; Berezikov, Eugene; Voors, Adriaan A; de Boer, Rudolf A; van der Meer, Peter
2017-01-01
We recently identified a set of plasma microRNAs (miRNAs) that are downregulated in patients with heart failure in comparison with control subjects. To better understand their meaning and function, we sought to validate these circulating miRNAs in 3 different well-established rat and mouse heart failure models, and correlated the miRNAs to parameters of cardiac function. The previously identified let-7i-5p, miR-16-5p, miR-18a-5p, miR-26b-5p, miR-27a-3p, miR-30e-5p, miR-199a-3p, miR-223-3p, miR-423-3p, miR-423-5p and miR-652-3p were measured by means of quantitative real time polymerase chain reaction (qRT-PCR) in plasma samples of 8 homozygous TGR(mREN2)27 (Ren2) transgenic rats and 8 (control) Sprague-Dawley rats, 6 mice with angiotensin II-induced heart failure (AngII) and 6 control mice, and 8 mice with ischemic heart failure and 6 controls. Circulating miRNA levels were compared between the heart failure animals and healthy controls. Ren2 rats, AngII mice and mice with ischemic heart failure showed clear signs of heart failure, exemplified by increased left ventricular and lung weights, elevated end-diastolic left ventricular pressures, increased expression of cardiac stress markers and reduced left ventricular ejection fraction. All miRNAs were detectable in plasma from rats and mice. No significant differences were observed between the circulating miRNAs in heart failure animals when compared to the healthy controls (all P>0.05) and no robust associations with cardiac function could be found. The previous observation that miRNAs circulate in lower levels in human patients with heart failure could not be validated in well-established rat and mouse heart failure models. These results question the translation of data on human circulating miRNA levels to experimental models, and vice versa the validity of experimental miRNA data for human heart failure.
Ng'andu, N H
1997-03-30
In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
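A hedged sketch, assuming the Python lifelines library is available (the paper itself compares five published test statistics, not this implementation): a Cox model is fit to simulated data with crossing hazards, and a scaled-Schoenfeld-residual test of proportionality is applied under several transforms of failure time.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n)                     # binary covariate

# Crossing hazards: Weibull shape 1 for x=0, shape 2 for x=1 (non-proportional).
t = np.where(x == 0, rng.weibull(1.0, n), rng.weibull(2.0, n)) * 10.0
event = np.ones(n, dtype=int)                   # all events observed, for simplicity

df = pd.DataFrame({"time": t, "event": event, "x": x})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Test the PH assumption under different transforms of failure time.
for transform in ("rank", "km", "log"):
    res = proportional_hazard_test(cph, df, time_transform=transform)
    print(f"time_transform = {transform}")
    res.print_summary()
```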
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines if the prediction of one series could be improved by incorporating information of the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. Usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
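A minimal sketch of the prediction-error comparison behind nonlinear Granger causality, using an RBF-kernel ridge regression from scikit-learn rather than the authors' radial basis function formulation; the coupled toy series and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n, p = 2000, 3                       # series length, embedding order

# Coupled toy system: y drives x nonlinearly, but not vice versa.
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t-1] + 0.3 * rng.standard_normal()
    x[t] = 0.5 * x[t-1] + 0.8 * np.tanh(y[t-1]) + 0.3 * rng.standard_normal()

def embed(*series):
    """Stack p lagged copies of each series into a predictor matrix."""
    cols = [s[i:len(s) - p + i] for s in series for i in range(p)]
    return np.column_stack(cols)

target = x[p:]
X_own = embed(x)                     # past of x only
X_joint = embed(x, y)                # past of x and past of y

def oos_error(X, z, split=1500):
    model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)
    model.fit(X[:split], z[:split])
    return np.mean((model.predict(X[split:]) - z[split:]) ** 2)

e_own, e_joint = oos_error(X_own, target), oos_error(X_joint, target)
print("Granger index y->x:", np.log(e_own / e_joint))   # > 0 suggests causality
```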
Deng, Bo; Wang, Jin Xin; Hu, Xing Xing; Duan, Peng; Wang, Lin; Li, Yang; Zhu, Qing Lei
2017-08-01
The aim of this study is to determine whether Nkx2.5 transfection of transplanted bone marrow mesenchymal stem cells (MSCs) improves the efficacy of treatment of adriamycin-induced heart failure in a rat model. Nkx2.5 was transfected in MSCs by lentiviral vector transduction. The expressions of Nkx2.5 and cardiac specific genes in MSCs and Nkx2.5 transfected mesenchymal stem cells (MSCs-Nkx2.5) were analyzed with quantitative real-time PCR and Western blot in vitro. Heart failure models of rats were induced by adriamycin and were then randomly divided into 3 groups: injected saline, MSCs or MSCs-Nkx2.5 via the femoral vein respectively. Four weeks after injection, the cardiac function, expressions of cardiac specific gene, fibrosis formation and collagen volume fraction in the myocardium as well as the expressions of GATA4 and MEF2 in rats were analyzed with echocardiography, immunohistochemistry, Masson staining, quantitative real-time PCR and Western blot, respectively. Nkx2.5 enhanced cardiac specific gene expressions including α-MHC, TNI, CKMB, connexin-43 in MSCs-Nkx2.5 in vitro. Both MSCs and MSCs-Nkx2.5 improved cardiac function, promoted the differentiation of transplanted MSCs into cardiomyocyte-like cells, decreased fibrosis formation and collagen volume fraction in the myocardium, as well as increased the expressions of GATA4 and MEF2 in adriamycin-induced rat heart failure models. Moreover, the effect was much more remarkable in MSCs-Nkx2.5 than in MSCs group. This study has found that Nkx2.5 enhances the efficacy of MSCs transplantation in treatment adriamycin-induced heart failure in rats. Nkx2.5 transfected to transplanted MSCs provides a potential effective approach to heart failure. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft
NASA Technical Reports Server (NTRS)
Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas
2001-01-01
This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
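A minimal sketch of the allocation step only: mapping a moment command to effector commands through a pseudo-inverse of an effectiveness matrix, with a failed effector removed from the available set. The matrix, limits, and failure scenario are illustrative and unrelated to the ICE simulation.

```python
import numpy as np

# Rows: roll/pitch/yaw effectiveness; columns: 5 control effectors (illustrative).
B = np.array([
    [0.8, -0.8, 0.1, -0.1, 0.0],
    [0.3,  0.3, 0.6,  0.6, 0.2],
    [0.1, -0.1, 0.4, -0.4, 0.9],
])

def allocate(moment_cmd, available, B=B):
    """Pseudo-inverse allocation over the currently available effectors."""
    Ba = B[:, available]
    u = np.linalg.pinv(Ba) @ moment_cmd          # minimum-norm solution
    cmd = np.zeros(B.shape[1])
    cmd[available] = np.clip(u, -1.0, 1.0)       # respect position limits
    return cmd

moment_cmd = np.array([0.2, 0.5, -0.1])
print("all effectors    :", allocate(moment_cmd, [0, 1, 2, 3, 4]))
# Failure detection/isolation flags effector 2 as failed -> reallocate without it.
print("effector 2 failed:", allocate(moment_cmd, [0, 1, 3, 4]))
```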
NASA Technical Reports Server (NTRS)
2001-01-01
Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.
NASA Astrophysics Data System (ADS)
Main, I. G.; Bell, A. F.; Greenhough, J.; Heap, M. J.; Meredith, P. G.
2010-12-01
The nucleation processes that ultimately lead to earthquakes, volcanic eruptions, rock bursts in mines, and landslides from cliff slopes are likely to be controlled at some scale by brittle failure of the Earth’s crust. In laboratory brittle deformation experiments geophysical signals commonly exhibit an accelerating trend prior to dynamic failure. Similar signals have been observed prior to volcanic eruptions, including volcano-tectonic earthquake event and moment release rates. Despite a large amount of effort in the search, no such statistically robust systematic trend is found prior to natural earthquakes. Here we describe the results of a suite of laboratory tests on Mount Etna Basalt and other rocks to examine the nature of the non-linear scaling from laboratory to field conditions, notably using laboratory ‘creep’ tests to reduce the boundary strain rate to conditions more similar to those in the field. Seismic event rate, seismic moment release rate and rate of porosity change show a classic ‘bathtub’ graph that can be derived from a simple damage model based on separate transient and accelerating sub-critical crack growth mechanisms, resulting from separate processes of negative and positive feedback in the population dynamics. The signals exhibit clear precursors based on formal statistical model tests using maximum likelihood techniques with Poisson errors. After correcting for the finite loading time of the signal, the results show a transient creep rate that decays as a classic Omori law for earthquake aftershocks, and remarkably with an exponent near unity, as commonly observed for natural earthquake sequences. The accelerating trend follows an inverse power law when fitted in retrospect, i.e. with prior knowledge of the failure time. In contrast the strain measured on the sample boundary shows a less obvious but still accelerating signal that is often absent altogether in natural strain data prior to volcanic eruptions. To test the forecasting power of such constitutive rules in prospective mode, we examine the forecast quality of several synthetic trials, by adding representative statistical fluctuations, due to finite real-time sampling effects, to an underlying accelerating trend. Metrics of forecast quality change systematically and dramatically with time. In particular the model accuracy increases, and the forecast bias decreases, as the failure time approaches.
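A minimal sketch of the retrospective accelerating-rate fit described (event rate proportional to (t_f − t)^(−p)), using nonlinear least squares on synthetic Poisson-scattered data; this is illustrative and not the authors' maximum-likelihood treatment.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
tf_true, p_true, k_true = 100.0, 0.8, 5.0        # illustrative 'true' values

# Synthetic event-rate observations with Poisson scatter, stopping short of failure.
t = np.linspace(0.0, 95.0, 60)
rate_true = k_true * (tf_true - t) ** (-p_true)
counts = rng.poisson(rate_true)                  # counts in unit time windows

def accel(t, k, p, tf):
    """Inverse power-law acceleration toward failure time tf."""
    return k * (tf - t) ** (-p)

# Retrospective fit: the failure time is one of the fitted parameters.
popt, _ = curve_fit(accel, t, counts, p0=(1.0, 1.0, 120.0),
                    bounds=([0.0, 0.0, t[-1] + 1e-3], [np.inf, 5.0, 1e4]))
k_hat, p_hat, tf_hat = popt
print(f"fitted exponent p = {p_hat:.2f}, forecast failure time = {tf_hat:.1f}")
```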
rpsftm: An R Package for Rank Preserving Structural Failure Time Models
Allison, Annabel; White, Ian R; Bond, Simon
2018-01-01
Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and has been developed by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history in order to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counter-factual event times (that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0. This is usually the test statistic used in the intention-to-treat analysis, for example, the log rank test statistic. We present an R package that implements the method of rpsftm. PMID:29564164
rpsftm: An R Package for Rank Preserving Structural Failure Time Models.
Allison, Annabel; White, Ian R; Bond, Simon
2017-12-04
Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and has been developed by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history in order to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counter-factual event times (that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0. This is usually the test statistic used in the intention-to-treat analysis, for example, the log rank test statistic. We present an R package that implements the method of rpsftm.
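A minimal sketch of the G-estimation step in Python (the rpsftm package itself is in R): counter-factual times U(ψ) = T_off + exp(ψ)·T_on are formed over a grid of ψ and the log-rank statistic comparing the randomised arms is driven toward zero. The toy trial data and the use of lifelines' log-rank test are illustrative assumptions, not the package's implementation.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 400
arm = rng.binomial(1, 0.5, n)                   # randomised arm (1 = experimental)
psi_true = -0.5                                 # true log acceleration factor

# Toy trial: latent treatment-free times are independent of arm (randomisation);
# time spent on treatment is stretched by exp(-psi) relative to treatment-free time.
U_latent = rng.exponential(1.0, n)
switch = (arm == 0) & (rng.random(n) < 0.4)     # some controls switch onto treatment
switch_time = 0.5 * U_latent                    # switchers switch halfway through

t_on = np.where(arm == 1, np.exp(-psi_true) * U_latent,
       np.where(switch, np.exp(-psi_true) * (U_latent - switch_time), 0.0))
t_off = np.where(arm == 1, 0.0, np.where(switch, switch_time, U_latent))
event = np.ones(n, dtype=int)                   # no censoring, for simplicity

def z_statistic(psi):
    """Log-rank statistic comparing counter-factual times U(psi) between arms."""
    U = t_off + np.exp(psi) * t_on
    res = logrank_test(U[arm == 1], U[arm == 0],
                       event_observed_A=event[arm == 1],
                       event_observed_B=event[arm == 0])
    return res.test_statistic

grid = np.linspace(-2.0, 2.0, 81)
psi_hat = grid[np.argmin([z_statistic(p) for p in grid])]
print("G-estimate of psi:", psi_hat)            # should land near psi_true
```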
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²_indiv or the Kendall's τ at the individual level, and the R²_trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended incorporating utilities for model checking and visualization and data generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as measure of individual level surrogacy using a copula model. Then, the R²_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²_trial. The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixture of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Jackson, Karen E.
1990-01-01
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
Quantile Regression with Censored Data
ERIC Educational Resources Information Center
Lin, Guixian
2009-01-01
The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but more importantly to compare different operational and management options to determine the lowest risk option and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probability of failures of the basic events employed in the PRA model of the ISS. The paper will show how a Bayesian approach was used with different sources of data including the actual ISS on orbit failures to enhance the confidence in results of the PRA. As time progresses and more meaningful data is gathered from on orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems such as propulsion, thermal control, or power generation into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement units (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from the experts in the field. For example, data was gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts and the individual failure of these parts was not accounted for in the PRA model. In other words the failure of a spring within a valve was considered a failure of the valve itself.
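A minimal sketch of the kind of Bayesian update described for a basic-event failure rate: a gamma prior (standing in for industry data) combined with a Poisson count of on-orbit failures, which is conjugate and therefore closed form. All numbers are illustrative, not ISS data.

```python
from scipy import stats

# Prior from industry data on similar components (illustrative numbers):
# Gamma(alpha, beta) prior on the failure rate, beta expressed in operating hours.
alpha_prior, beta_prior = 2.0, 1.0e6

# On-orbit evidence: observed failures and accumulated operating hours (illustrative).
failures_observed = 1
hours_on_orbit = 3.0e5

# Gamma is conjugate to the Poisson likelihood, so the update is closed form.
alpha_post = alpha_prior + failures_observed
beta_post = beta_prior + hours_on_orbit

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate  :", posterior.mean(), "failures/hour")
print("90% credible interval:", posterior.interval(0.90))
```

As more on-orbit failure data accumulate, the posterior tightens, which is the sense in which the basic-event distributions become increasingly accurate over time.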
A Bayesian network approach for modeling local failure in lung cancer
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-03-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, there was no reported significant improvement in their application prospectively. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors with a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively, which comprises clinical and dosimetric variables only. The second dataset was collected prospectively in which in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
Haul truck tire dynamics due to tire condition
NASA Astrophysics Data System (ADS)
Vaghar Anzabi, R.; Nobes, D. S.; Lipsett, M. G.
2012-05-01
Pneumatic tires are costly components on large off-road haul trucks used in surface mining operations. Tires are prone to damage during operation, and these events can lead to injuries to personnel, loss of equipment, and reduced productivity. Damage rates have significant variability, due to operating conditions and a range of tire fault modes. Currently, monitoring of tire condition is done by physical inspection; and the mean time between inspections is often longer than the mean time between incipient failure and functional failure of the tire. Options for new condition monitoring methods include off-board thermal imaging and camera-based optical methods for detecting abnormal deformation and surface features, as well as on-board sensors to detect tire faults during vehicle operation. Physics-based modeling of tire dynamics can provide a good understanding of the tire behavior, and give insight into observability requirements for improved monitoring systems. This paper describes a model to simulate the dynamics of haul truck tires when a fault is present to determine the effects of physical parameter changes that relate to faults. To simulate the dynamics, a lumped mass 'quarter-vehicle' model has been used to determine the response of the system to a road profile when a failure changes the original properties of the tire. The result is a model of tire vertical displacement that can be used to detect a fault, which will be tested under field conditions in time-varying conditions.
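A minimal sketch of a lumped-mass quarter-vehicle model driven by a road profile, with a tire fault represented as a reduction in tire vertical stiffness; the masses, stiffnesses, road input, and fault size are illustrative, not field-calibrated haul truck values.

```python
import numpy as np
from scipy.integrate import solve_ivp

m_s, m_u = 4500.0, 900.0          # sprung / unsprung mass per corner (kg, illustrative)
k_s, c_s = 2.0e6, 4.0e4           # suspension stiffness (N/m) and damping (N·s/m)
k_t_nominal = 6.0e6               # nominal tire vertical stiffness (N/m)

def road(t, v=5.0):
    """Simple sinusoidal haul-road profile seen at vehicle speed v (m/s)."""
    return 0.05 * np.sin(2 * np.pi * v * t / 10.0)

def quarter_car(t, y, k_t):
    z_s, z_u, v_s, v_u = y                       # body / axle displacement and velocity
    f_susp = k_s * (z_u - z_s) + c_s * (v_u - v_s)
    f_tire = k_t * (road(t) - z_u)
    return [v_s, v_u, f_susp / m_s, (f_tire - f_susp) / m_u]

t_eval = np.linspace(0, 20, 4000)
for label, k_t in [("healthy tire", k_t_nominal),
                   ("faulted tire (-30% stiffness)", 0.7 * k_t_nominal)]:
    sol = solve_ivp(quarter_car, (0, 20), [0, 0, 0, 0], args=(k_t,), t_eval=t_eval)
    z_u = sol.y[1]
    print(f"{label}: rms unsprung displacement = {np.sqrt(np.mean(z_u**2)):.4f} m")
```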
Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.
This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from an industry compilation of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional-risk metrics with alternate-risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated-risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.
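A minimal sketch of the core calculation an enhanced risk monitor performs: component failure probabilities that degrade with operating time (here Weibull-shaped, with illustrative parameters) feed a simplified cut-set expression to give a time-dependent core-damage-frequency proxy; the component set and cut sets are illustrative, not the hypothetical AR model.

```python
import numpy as np

# Illustrative components of a simplified cooling function: two pumps and one valve.
# Each entry: (Weibull shape, Weibull scale in hours) for its failure probability.
components = {"pump_A": (1.8, 8.0e4), "pump_B": (1.8, 8.0e4), "valve": (1.2, 2.0e5)}

def p_fail(name, t_hours):
    beta, eta = components[name]
    return 1.0 - np.exp(-((t_hours / eta) ** beta))

def cdf_proxy(t_hours):
    """Simplified cut sets: (pump_A AND pump_B) OR valve leads to core damage."""
    pa, pb, pv = (p_fail(n, t_hours) for n in components)
    both_pumps = pa * pb
    return both_pumps + pv - both_pumps * pv      # exact two-event union

for t in (1.0e4, 5.0e4, 1.0e5):                   # accumulated operating hours
    print(f"t = {t:8.0f} h : CDF proxy = {cdf_proxy(t):.3e}")
```

Comparing how this quantity grows under "repair now" versus "defer maintenance" assumptions is the kind of trade-off the alternate economic-risk metrics are meant to expose.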
Comprehensive Understanding of the Zipingpu Reservoir to the Ms8.0 Wenchuan Earthquake
NASA Astrophysics Data System (ADS)
Cheng, H.; Pang, Y. J.; Zhang, H.; Shi, Y.
2014-12-01
After the Wenchuan earthquake occurred, the question of whether the earthquake was triggered by impoundment of the Zipingpu Reservoir attracted wide attention in the international academic community. In addition to qualitative discussion, many scholars have adopted quantitative methods to calculate the stress changes, but their results differ and they have drawn very different conclusions. Here, we take the dispute among different teams over the quantitative calculations for the Zipingpu Reservoir as a starting point. To identify the key factors influencing the quantitative calculations and to characterize the uncertainties in the numerical simulations, we analyze the factors that may cause the differences. The preliminary results show that the calculation method (analytical or numerical), model dimension (2-D or 3-D), diffusion model, diffusion coefficient, and focal mechanism are the main factors responsible for the differences, especially the diffusion coefficient of the fractured rock mass. The change in Coulomb failure stress at the epicenter of the Wenchuan earthquake obtained from a 2-D model is about 3 times that of a 3-D model. It is not reasonable to consider only the fault permeability (treating the permeability of the rock mass as infinite) or only a homogeneous, isotropic rock-mass permeability (ignoring the fault permeability). Different focal mechanisms can also dramatically affect the computed change in Coulomb failure stress at the epicenter, with differences of 2-7 times, and the differences can reach several hundred times when different diffusion coefficients are selected. Given that existing research places the magnitude of the Coulomb failure stress change at several kPa, we cannot rule out the possibility that the Zipingpu Reservoir triggered the 2008 Wenchuan earthquake. However, because the background stress is not well known and the Coulomb failure stress change is small, we also cannot be sure that there is a connection between the reservoir and the earthquake. In future work, we should constrain the models with field surveys and laboratory experiments, improve the model, and develop high-performance simulations.
Pavlova, Viola; Grimm, Volker; Dietz, Rune; Sonne, Christian; Vorkamp, Katrin; Rigét, Frank F; Letcher, Robert J; Gustavson, Kim; Desforges, Jean-Pierre; Nabe-Nielsen, Jacob
2016-01-01
Polychlorinated biphenyls (PCBs) can cause endocrine disruption, cancer, immunosuppression, or reproductive failure in animals. We used an individual-based model to explore whether and how PCB-associated reproductive failure could affect the dynamics of a hypothetical polar bear (Ursus maritimus) population exposed to PCBs to the same degree as the East Greenland subpopulation. Dose-response data from experimental studies on a surrogate species, the mink (Mustela vison), were used in the absence of similar data for polar bears. Two alternative types of reproductive failure in relation to maternal sum-PCB concentrations were considered: increased abortion rate and increased cub mortality. We found that the quantitative impact of PCB-induced reproductive failure on population growth rate depended largely on the actual type of reproductive failure involved. Critical potencies of the dose-response relationship for decreasing the population growth rate were established for both modeled types of reproductive failure. Comparing the model predictions of the age-dependent trend of sum-PCB concentrations in females with actual field measurements from East Greenland indicated that it was unlikely that PCB exposure caused a high incidence of abortions in the subpopulation. However, on the basis of this analysis, it could not be excluded that PCB exposure contributes to higher cub mortality. Our results highlight the necessity for further research on the possible influence of PCBs on polar bear reproduction regarding their physiological pathway. This includes determining the exact cause of reproductive failure, i.e., in utero exposure versus lactational exposure of offspring; the timing of offspring death; and establishing the most relevant reference metrics for the dose-response relationship.
NASA Technical Reports Server (NTRS)
Vanschalkwyk, Christiaan M.
1992-01-01
We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and the Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the Decision Function. It should be clear from the examples that are presented whether it would be possible to detect the failure of a specific component. We derive the equations for Generalized Parity Relations. Two special cases are treated: namely, Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced order model that includes the first five modes of the mast is compared to a set of parity relations that was identified on a set of input-output data. Both time domain and frequency domain comparisons are made. The effects of the sampling period and model order on the performance of the Residual Generators are also discussed. Failure detection experiments where the sensor set consisted of two gyros and an accelerometer are presented. The effects of model order and sampling frequency are again illustrated. The detection of actuator failures is discussed. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.
Beeler, N.M.; Lockner, D.A.
2003-01-01
We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.
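The low-frequency, threshold-failure regime described above can be illustrated with a toy simulation (not the laboratory analysis itself): a population of faults loaded at a constant rate plus a small oscillation fails preferentially while the oscillatory stress is rising. All parameters below are illustrative.

```python
# Toy Coulomb-threshold model: a population of faults is loaded at a constant
# background rate plus a small sinusoidal stress and each fails when its own
# threshold is first exceeded. Failures concentrate on the rising part of the
# oscillation, a simple analogue of the period >> nucleation-time regime
# described above. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rate, amp, period = 1.0, 0.3, 1.0          # stressing rate, oscillation amplitude/period
thresholds = rng.uniform(50.0, 150.0, 5000)

t = np.linspace(0.0, 200.0, 2_000_000)
stress = rate * t + amp * np.sin(2.0 * np.pi * t / period)

# first time each fault's threshold is exceeded
idx = np.searchsorted(np.maximum.accumulate(stress), thresholds)
fail_t = t[np.clip(idx, 0, t.size - 1)]

rising = np.cos(2.0 * np.pi * fail_t / period) > 0.0
print("fraction of failures on the rising limb:", rising.mean())  # > 0.5
```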
Apparatus for sensor failure detection and correction in a gas turbine engine control system
NASA Technical Reports Server (NTRS)
Spang, H. A., III; Wanger, R. P. (Inventor)
1981-01-01
A gas turbine engine control system maintains a selected level of engine performance despite the failure or abnormal operation of one or more engine parameter sensors. The control system employs a continuously updated engine model which simulates engine performance and generates signals representing real time estimates of the engine parameter sensor signals. The estimate signals are transmitted to a control computational unit which utilizes them in lieu of the actual engine parameter sensor signals to control the operation of the engine. The estimate signals are also compared with the corresponding actual engine parameter sensor signals and the resulting difference signals are utilized to update the engine model. If a particular difference signal exceeds specific tolerance limits, the difference signal is inhibited from updating the model and a sensor failure indication is provided to the engine operator.
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are reserved.
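For context only (this is the standard definition, not quoted from the paper), the classical mean inactivity time of a lifetime T with distribution function F, on which inactivity-time notions are usually built, can be written as:

```latex
% Mean inactivity time: expected elapsed time since failure, given failure by time t
\[
  m(t) \;=\; \mathbb{E}\left[\, t - T \mid T \le t \,\right]
        \;=\; \frac{\int_{0}^{t} F(u)\,\mathrm{d}u}{F(t)}, \qquad F(t) > 0 .
\]
```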
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
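RML syntax is not shown in the abstract; purely as an illustration of the modularization idea, module reliabilities computed separately can be combined according to the system structure. Component names and values below are hypothetical.

```python
# Generic illustration of modular reliability composition (not RML syntax):
# module reliabilities are computed separately and then combined according
# to the system structure. Component names and numbers are hypothetical.
def series(*r):             # all modules must work
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r):           # at least one module must work
    out = 1.0
    for x in r:
        out *= (1.0 - x)
    return 1.0 - out

sensor_bus = parallel(0.990, 0.990)              # two redundant buses
processor = parallel(0.9990, 0.9990, 0.9990)     # triple-redundant processor
actuator = 0.9995

print("system reliability:", series(sensor_bus, processor, actuator))
```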
Seismic precursory patterns before a cliff collapse and critical point phenomena
Amitrano, D.; Grasso, J.-R.; Senfaute, G.
2005-01-01
We analyse the statistical pattern of seismicity before a 1-2 × 10³ m³ chalk cliff collapse on the Normandie ocean shore, Western France. We show that a power-law acceleration of seismicity rate and energy, in both the 40 Hz-1.5 kHz and 2 Hz-10 kHz frequency ranges, is defined over 3 orders of magnitude within 2 hours of the collapse time. Simultaneously, the average size of the seismic events increases toward the time to failure. These in situ results are derived from the only station located within one rupture length of the rock fall rupture plane. They mimic the "critical point"-like behavior recovered from physical and numerical experiments before brittle failures and tertiary creep failures. Our analysis of this first seismic monitoring data set of a cliff collapse suggests that thermodynamic phase transition models for failure may apply to cliff collapse. Copyright 2005 by the American Geophysical Union.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
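The abstract does not give the functional form of the conditional probability; a minimal sketch, assuming a logistic link between hourly failure probability and two of the named inputs (slope and recent rainfall), evaluated cell by cell, might look like:

```python
# Minimal sketch of a time-dependent, cell-by-cell conditional failure
# probability: a logistic function of hillside slope and a moving rainfall
# total. Coefficients and inputs are hypothetical, not the calibrated model.
import numpy as np

def p_failure(slope_deg, rain_6h_mm, b0=-8.0, b_slope=0.12, b_rain=0.05):
    z = b0 + b_slope * slope_deg + b_rain * rain_6h_mm
    return 1.0 / (1.0 + np.exp(-z))

slope = np.array([[20.0, 35.0], [28.0, 42.0]])       # per-cell slope (deg), a tiny "map"
rain_by_hour = [10.0, 25.0, 60.0]                    # antecedent 6-h rainfall (mm)

for hour, rain in enumerate(rain_by_hour):
    print(f"hour {hour}:")
    print(p_failure(slope, rain))                     # probability map for that hour
```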
Stochastic Model of Clogging in a Microfluidic Cell Sorter
NASA Astrophysics Data System (ADS)
Fai, Thomas; Rycroft, Chris
2016-11-01
Microfluidic devices for sorting cells by deformability show promise for various medical purposes, e.g. detecting sickle cell anemia and circulating tumor cells. One class of such devices consists of a two-dimensional array of narrow channels, each column containing several identical channels in parallel. Cells are driven through the device by an applied pressure or flow rate. Such devices allow many cells to be sorted simultaneously, but cells eventually clog individual channels and change the device properties in an unpredictable manner. In this talk, we propose a stochastic model for the failure of such microfluidic devices by clogging and present preliminary theoretical and computational results. The model can be recast as an ODE that exhibits finite-time blow-up under certain conditions. The failure time distribution is investigated analytically in certain limiting cases, and more realistic versions of the model are solved by computer simulation.
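A minimal stochastic sketch of the clogging mechanism (not the authors' model): under a fixed total flow rate, each open channel clogs with a hazard proportional to the flux it carries, so the survivors carry ever more flux as the device fails. The channel count, flow rate, and rate constant are assumptions.

```python
# Minimal stochastic sketch (not the published model): N identical channels in
# parallel under a fixed total flow rate Q. Each open channel clogs with a
# hazard proportional to the flux it carries, so as channels fail the survivors
# carry more flux; per-channel flux diverges at complete failure.
import numpy as np

rng = np.random.default_rng(1)
N, Q, k = 64, 1.0, 0.5           # channels, total flow, clogging-rate constant (assumed)

n_open, t = N, 0.0
history = [(t, n_open, Q / n_open)]
while n_open > 0:
    per_channel_flux = Q / n_open
    total_rate = k * per_channel_flux * n_open      # sum of per-channel hazards
    t += rng.exponential(1.0 / total_rate)           # time to next clogging event
    n_open -= 1
    flux = Q / n_open if n_open else float("inf")
    history.append((t, n_open, flux))

for t_i, n_i, q_i in history[::16] + [history[-1]]:
    print(f"t = {t_i:6.2f}  open channels = {n_i:3d}  per-channel flux = {q_i:6.3f}")
```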
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
Selecting statistical model and optimum maintenance policy: a case study of hydraulic pump.
Ruhi, S; Karim, M R
2016-01-01
A proper maintenance policy can play a vital role in the effective investigation of product reliability. Every engineered object, such as a product, plant, or infrastructure, needs preventive and corrective maintenance. In this paper we look at a real case study. It deals with the maintenance of hydraulic pumps used in excavators by a mining company. We obtain the data that the owner had collected and carry out an analysis, building models for pump failures. The data consist of both failure and censored lifetimes of the hydraulic pump. Different competitive mixture models are applied to analyze a set of maintenance data of a hydraulic pump. Various characteristics of the mixture models, such as the cumulative distribution function, reliability function, mean time to failure, etc., are estimated to assess the reliability of the pump. The Akaike Information Criterion, adjusted Anderson-Darling test statistic, Kolmogorov-Smirnov test statistic, and root mean square error are considered to select the suitable models among a set of competitive models. The maximum likelihood estimation method via the EM algorithm is applied mainly for estimating the parameters of the models and reliability-related quantities. In this study, it is found that a threefold mixture model (Weibull-Normal-Exponential) fits the hydraulic pump failure data set well. This paper also illustrates how a suitable statistical model can be applied to estimate the optimum maintenance period at a minimum cost for a hydraulic pump.
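The mixture/EM machinery of the paper is not reproduced here, but the model-selection step can be illustrated on synthetic, uncensored failure ages by comparing single-component candidate distributions with the Akaike Information Criterion:

```python
# Sketch of distribution selection by AIC for failure-time data (single-component
# candidates only; the paper's threefold mixtures and EM fitting are not
# reproduced here). Data are synthetic, and censoring is ignored for brevity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
failures = rng.weibull(1.8, 200) * 1500.0      # synthetic failure ages (hours)

candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(failures, floc=0.0)               # ML fit, location fixed at 0
    loglik = np.sum(dist.logpdf(failures, *params))
    k = len(params) - 1                                  # location was not estimated
    aic = 2 * k - 2 * loglik
    print(f"{name:12s} AIC = {aic:10.1f}")
```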
Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks
NASA Astrophysics Data System (ADS)
Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo
2014-12-01
Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have been investigated separately for a long time. But in the real world, these two dynamics may sometimes interact with each other. In this paper, we explore a model that combines the SIR epidemic spreading model with a local load sharing cascading failure model. In this model, there exists a critical value of the tolerance parameter above which an epidemic with high infection probability can spread out and infect a fraction of the network. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off an abundance of paths and blocks the spreading of the epidemic locally. When the tolerance parameter is larger than the critical value, the epidemic spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method on uncorrelated configuration model (UCM) scale-free networks.
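A sketch of the cascading-failure half only (local load sharing governed by a tolerance parameter alpha); the coupled SIR spreading and the critical-value estimator are omitted, and taking the initial node load to be the degree is an assumption made for illustration.

```python
# Sketch of a local load sharing cascade with tolerance parameter alpha
# (the coupled SIR dynamics of the paper are omitted). Initial node load is
# taken as the node degree, which is an assumption, not the paper's definition.
import networkx as nx

def cascade(G, seed, alpha):
    load = {v: float(G.degree(v)) for v in G}
    capacity = {v: (1.0 + alpha) * load[v] for v in G}
    failed = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for v in frontier:
            alive = [u for u in G.neighbors(v) if u not in failed]
            if not alive:
                continue
            share = load[v] / len(alive)          # local load sharing
            for u in alive:
                load[u] += share
                if load[u] > capacity[u]:
                    failed.add(u)
                    nxt.append(u)
        frontier = nxt
    return len(failed)

G = nx.barabasi_albert_graph(2000, 3, seed=0)
for alpha in (0.1, 0.3, 0.6):
    print(f"alpha = {alpha:.1f}  failed nodes = {cascade(G, seed=0, alpha=alpha)}")
```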
A Nonlinear Viscoelastic Model for Ceramics at High Temperatures
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.
2002-01-01
High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
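The element-deletion idea can be sketched without the viscoelastic constitutive model or the ABAQUS subroutine: each element draws a failure strain from a lognormal distribution and is removed once the (here, uniformly ramped) strain exceeds it. The distribution parameters and the 20% rupture criterion are assumptions for illustration.

```python
# Sketch of the random-failure-strain idea: each element draws a failure strain
# from a lognormal distribution and is "deleted" once its strain exceeds it.
# The constitutive model and ABAQUS implementation are not reproduced; strain
# is simply ramped uniformly here for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_elements = 1000
median_strain, sigma_log = 0.02, 0.3                     # assumed lognormal parameters
failure_strain = rng.lognormal(np.log(median_strain), sigma_log, n_elements)

applied = np.linspace(0.0, 0.05, 200)                    # ramped volumetric strain
surviving = [(failure_strain > eps).sum() for eps in applied]

# crude "rupture" criterion for the sketch: specimen fails once 20% of elements are gone
rupture_idx = next(i for i, n in enumerate(surviving) if n < 0.8 * n_elements)
print("strain at simulated rupture:", applied[rupture_idx])
```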
Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim
2016-12-01
Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and resulting delay in treatment and transport may have negative effects on the patient. Therefore, our study aims to perform a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables on peripheral venous cannulations in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under the curve (AUC>0.6) of their respective receiver operating characteristic curve. A total of 762 intravenous cannulations were enrolled. In all, 22% of punctures failed on the first attempt and 13% of punctures exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent model of four factors (failure model factors plus vein visibility with tourniquet) for the exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. Of the numerous factors subjectively perceived as possibly exerting influences on cannulation, only the universal - not exclusive to emergency care - factors of lighting, vein visibility and palpability proved to be valid predictors of cannulation failure and exceedance of a 2 min threshold.
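A sketch of the modelling approach on synthetic data (hypothetical effect sizes, not the study's coefficients): a three-factor logistic regression for cannulation failure scored by the area under the ROC curve.

```python
# Sketch of the modelling approach only: a three-factor logistic regression for
# cannulation failure, evaluated by the area under the ROC curve. The data are
# synthetic and the effect sizes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 762
vein_visible = rng.integers(0, 2, n)      # vein visibility without tourniquet (0/1)
vein_palpable = rng.integers(0, 2, n)     # vein palpability with tourniquet (0/1)
poor_light = rng.integers(0, 2, n)        # insufficient ambient lighting (0/1)

logit = -1.8 + 1.0 * (1 - vein_visible) + 1.2 * (1 - vein_palpable) + 0.8 * poor_light
failure = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([vein_visible, vein_palpable, poor_light])
model = LogisticRegression().fit(X, failure)
print("AUC:", roc_auc_score(failure, model.predict_proba(X)[:, 1]))
```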
NASA Technical Reports Server (NTRS)
Waas, A.; Babcock, C., Jr.
1986-01-01
A series of experiments was carried out to determine the mechanism of failure in compressively loaded laminated plates with a circular cutout. Real-time holographic interferometry and photomicrography are used to observe the progression of failure. These observations, together with post-experiment plate sectioning and deplying for interior damage observation, provide useful information for modelling the failure process. It is revealed that the failure is initiated as a localised instability in the 0° layers at the hole surface. With increasing load, extensive delamination cracking is observed. The progression of failure is by growth of these delaminations, induced by delamination buckling. Upon reaching a critical state, catastrophic failure of the plate is observed. The levels of applied load and the rate at which these events occur depend on the plate stacking sequence.
The Potential of Micro Electro Mechanical Systems and Nanotechnology for the U.S. Army
2001-05-01
Quantitative Structure Activity Relationship (QSAR) model. The QSAR model calculates the proper composition of the polymer-carbon black matrix... example, the BEI Gyrochip Model QRS11 from Systron Donner Inertial Division has a startup time of less than 1 second, a Mean Time Between Failure (MTBF... modeling from many equations per atom to a few lines of code. This approach is amenable to parallel processing. Nevertheless, their programs require
NASA Astrophysics Data System (ADS)
Nasrum, A.; Pasaribu, U. S.; Husniah, H.
2016-02-01
This paper deals with a maintenance service contract for a dump truck sold with a two-dimensional warranty. We consider a situation where an agent offers two maintenance contract options and the owner of the equipment has to select the optimal one: either the OEM carries out all repairs and preventive maintenance activities (option one), or the OEM only carries out failure repairs while the customer undertakes preventive maintenance in-house (option two). As the number of preventive and corrective maintenance actions that occur in the servicing contract region strongly influences the value of the contract, we have to determine the optimal time between preventive maintenance actions that minimizes the repair cost in the contract region. Moreover, we also study the maintenance service contract considering the reduction of the intensity function after preventive maintenance, from both the owner's and the OEM's points of view. In this paper, we use a Weibull intensity function to model a product with increasing failure intensity. We use a non-cooperative game formulation to determine the optimal price structure (i.e., the contract price and repair cost) for the OEM and the owner. A numerical example derived from the model shows that if the owner chooses option one, the owner obtains a higher profit than under option two. This result agrees with earlier work that uses the accelerated failure time (AFT) approach for the failure modeling, while here we model the failure of the dump truck without the use of the AFT.
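A minimal sketch of the interval-selection step under a Weibull (power-law) failure intensity, assuming for illustration that preventive maintenance restores the unit to as-good-as-new and that failures in between are minimally repaired; the cost figures and Weibull parameters are hypothetical, not the paper's calibrated values.

```python
# Sketch of choosing a preventive-maintenance interval under a Weibull
# (power-law) failure intensity. PM is assumed to restore the unit to
# as-good-as-new; failures between PMs are minimally repaired. All numbers
# are hypothetical.
import numpy as np

beta, eta = 2.2, 4000.0          # Weibull shape (>1: increasing intensity), scale (hours)
c_pm, c_cm = 500.0, 4000.0       # cost of one PM action vs one corrective repair

def cost_rate(tau):
    expected_failures = (tau / eta) ** beta     # integral of the Weibull intensity over [0, tau]
    return (c_pm + c_cm * expected_failures) / tau

taus = np.linspace(100.0, 10000.0, 2000)
rates = np.array([cost_rate(t) for t in taus])
best = taus[rates.argmin()]
print(f"cost-minimising PM interval ~ {best:.0f} h, cost rate {rates.min():.3f} per hour")
```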
Advanced ceramic coating development for industrial/utility gas turbine applications
NASA Technical Reports Server (NTRS)
Andersson, C. A.; Lau, S. K.; Bratton, R. J.; Lee, S. Y.; Rieke, K. L.; Allen, J.; Munson, K. E.
1982-01-01
The effects of ceramic coatings on the lifetimes of metal turbine components and on the performance of a utility turbine, as well as of the turbine operational cycle on the ceramic coatings, were determined. When operating the turbine under conditions of constant cooling flow, the first-row blades run 55 K cooler and, as a result, have 10 times the creep rupture life, 10 times the low cycle fatigue life, and twice the corrosion life, with only slight decreases in both specific power and efficiency. When operating the turbine at constant metal temperature and reduced cooling flow, both specific power and efficiency increase, with no change in component lifetime. The most severe thermal transient of the turbine causes the coating bond stresses to approach 60% of the bond strengths. Ceramic coating failures were studied. Analytic models based on fracture mechanics theories, combined with measured properties, quantitatively assessed both single and multiple thermal cycle failures, which allowed the prediction of coating lifetime. Qualitative models for corrosion failures are also presented.
Stability in a fiber bundle model: Existence of strong links and the effect of disorder
NASA Astrophysics Data System (ADS)
Roy, Subhadeep
2018-05-01
The present paper deals with a fiber bundle model which consists of a fraction α of infinitely strong fibers. The inclusion of such an unbreakable fraction has been shown in earlier studies to affect the failure process, especially around a critical value αc. The present work has a twofold purpose: (i) a study of failure abruptness, mainly the brittle to quasibrittle transition point, with varying α, and (ii) the variation of αc as we change the strength of the disorder introduced in the model. The brittle to quasibrittle transition is confirmed from the failure abruptness. On the other hand, αc is obtained from the failure abruptness as well as the statistics of avalanches. It is observed that the brittle to quasibrittle transition point scales to lower values, suggesting more quasi-brittle-like continuous failure when α is increased. At the same time, the bundle becomes stronger as there are larger numbers of strong links to support the external stress. High α in a highly disordered bundle leads to an ideal situation where the bundle strength, as well as the predictability of the failure process, is very high. Also, the critical fraction αc, required to make the model deviate from the conventional results, increases with decreasing strength of disorder. The analytical expression for αc shows good agreement with the numerical results. Finally, the findings in the paper are compared with previous results and real-life applications of composite materials.
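A sketch of an equal-load-sharing bundle with a fraction α of unbreakable fibers, using the size of the largest avalanche as a crude measure of failure abruptness. The uniform threshold distribution and global load sharing are assumptions for illustration and need not match the paper's setup.

```python
# Sketch of an equal-load-sharing fiber bundle with a fraction alpha of
# unbreakable fibers under quasi-static loading. Breakable-fiber thresholds
# are uniform on [0, 1]; the largest avalanche is used as a crude abruptness
# measure. Numbers and load-sharing rule are illustrative.
import numpy as np

def largest_avalanche(n=20000, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    thresholds = rng.random(n)
    thresholds[rng.random(n) < alpha] = np.inf      # unbreakable fraction
    order = np.sort(thresholds[np.isfinite(thresholds)])
    intact = n
    avalanches = []
    force = 0.0
    for th in order:
        needed = th * intact          # external force needed to break this fiber
        if needed > force:            # load must increase: a new avalanche starts
            force = needed
            avalanches.append(1)
        else:                         # fiber breaks under the same external force
            avalanches[-1] += 1
        intact -= 1
    return max(avalanches)

for alpha in (0.0, 0.05, 0.2):
    print(f"alpha = {alpha:.2f}  largest avalanche = {largest_avalanche(alpha=alpha)}")
```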
Problems encountered with conventional fiber-reinforced composites
NASA Technical Reports Server (NTRS)
Landel, R. F.
1981-01-01
Preparational, computational, and operational problems associated with fiber-reinforced composites (FRC) are reviewed. Initial preparation of FRCs is shown to involve consideration of the type of prepreg, the setting time, cure conditions and cycles, and cure temperatures. The effects of the choice of bonding agents, the fiber transfer length, and individual fiber responses to bonding agents are noted to have an impact on fiber strength, moisture uptake, and fatigue resistance. The deformation prior to failure and the failure region are modeled through mini-, micro-, and macromechanics formulations employing a stiffness matrix, a failure criterion, or fracture mechanics. The detection, evaluation, and repair of defects comprise the operational domain, and it is stressed that no good repair techniques exist for FRCs.
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience, was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving on past methods based only on a fixed driving time. Copyright © 2016 Elsevier Ltd. All rights reserved.
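A hand-rolled sketch of a Weibull accelerated failure time fit on synthetic data with a single covariate; exp(coefficient) is read as the ratio of times to drowsiness. The data, the covariate, and the absence of censoring are assumptions made for illustration.

```python
# Sketch of an accelerated failure time (AFT) fit with a Weibull baseline,
# written out for transparency: log(T) = b0 + b1*x + sigma*W, where W follows
# a standard minimum-extreme-value distribution. Data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 300
night = rng.integers(0, 2, n)                           # 1 = night-time drive (synthetic)
w_min = -rng.gumbel(0.0, 1.0, n)                        # minimum-extreme-value errors
t_obs = np.exp(4.0 - 0.4 * night + 0.3 * w_min)         # synthetic times to drowsiness
event = np.ones(n, dtype=bool)                          # no censoring in this sketch

def negloglik(p):
    b0, b1, log_sigma = p
    sigma = np.exp(log_sigma)
    z = (np.log(t_obs) - b0 - b1 * night) / sigma
    # event: log density of log T; censored: log survival
    ll = np.where(event, z - np.exp(z) - np.log(sigma), -np.exp(z))
    return -ll.sum()

fit = minimize(negloglik, x0=[4.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, log_sigma = fit.x
print("time ratio for night vs day driving:", np.exp(b1))   # < 1: drowsy sooner at night
```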
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2018-01-01
Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.
Solid-state lighting life prediction using extended Kalman filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Davis, Lynn
2013-07-16
Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. The U.S. Department of Energy has made a long term commitment to advance the efficiency, understanding, and development of solid-state lighting (SSL) and is making a strong push for the acceptance and use of SSL products to reduce overall energy consumption attributable to lighting. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to “burn”. The LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of SSL luminaires from LM-80 test data. The TM-21 model uses an Arrhenius Equation with an Activation Energy, Pre-decay factor and Decay Rates. Several failure mechanisms may be active in a luminaire at a single time, causing lumen depreciation. The underlying TM-21 Arrhenius Model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at the system level is needed. In this paper, a Kalman Filter and Extended Kalman Filters have been used to develop a 70% Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. This model can be used to calculate acceleration factors, evaluate failure-probability and identify ALT methodologies for reducing test time. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state has been described in state space form using the measurement of the feature vector, velocity of feature vector change and the acceleration of the feature vector change. System state at each future time has been computed based on the state space at preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. The measured state variable has been related to the underlying damage using physics-based models. Predictions of L70 life for the LEDs used in SSL luminaires from the KF- and EKF-based models have been compared with the TM-21 model predictions and experimental data.
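A minimal sketch of the second-order (level/velocity/acceleration) Kalman filter idea applied to a synthetic lumen-maintenance series, with a simple polynomial extrapolation of the filtered state to the 70% threshold. The decay curve, noise covariances, and extrapolation rule are assumptions, not the TM-21 procedure or the paper's calibrated models.

```python
# Sketch: a second-order Kalman filter tracks normalized lumen maintenance and
# the filtered state is extrapolated to the 70% threshold (L70). Synthetic
# data; all covariances and the decay curve are assumed for illustration.
import numpy as np

dt = 500.0                                     # hours between LM-80-style readings
F = np.array([[1, dt, 0.5 * dt**2],            # constant-acceleration state transition
              [0, 1,  dt],
              [0, 0,  1]])
H = np.array([[1.0, 0.0, 0.0]])                # only the lumen level is measured
Q = 1e-10 * np.eye(3)                          # process noise (assumed)
R = np.array([[1e-4]])                         # measurement noise (assumed)

rng = np.random.default_rng(6)
hours = np.arange(0.0, 10000.0 + dt, dt)
truth = np.exp(-3.5e-5 * hours)                # synthetic lumen maintenance curve
meas = truth + rng.normal(0.0, 0.01, hours.size)

x = np.array([1.0, 0.0, 0.0])
P = np.eye(3) * 1e-2
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q              # predict
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P                # update

# extrapolate the filtered state until the 70% lumen maintenance threshold
t, level, vel, acc = hours[-1], *x
while level > 0.70 and t < 1e6:
    t += dt
    level += vel * dt + 0.5 * acc * dt**2
    vel += acc * dt
print(f"estimated L70 life ~ {t:.0f} hours")
```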
Vart, Priya; Matsushita, Kunihiro; Rawlings, Andreea M; Selvin, Elizabeth; Crews, Deidra C; Ndumele, Chiadi E; Ballantyne, Christie M; Heiss, Gerardo; Kucharska-Newton, Anna; Szklo, Moyses; Coresh, Josef
2018-02-01
Compared with coronary heart disease and stroke, the association between SES and the risk of heart failure is less well understood. In 12,646 participants of the Atherosclerosis Risk in Communities Study cohort free of heart failure history at baseline (1987-1989), the association of income, educational attainment, and area deprivation index with subsequent heart failure-related hospitalization or death was examined while accounting for cardiovascular disease risk factors and healthcare access. Because SES may affect threshold of identifying heart failure and admitting for heart failure management, secondarily the association between SES and N-terminal pro-b-type natriuretic peptide (NT-proBNP) levels, a marker reflecting cardiac overload, was investigated. Analysis was conducted in 2016. During a median follow-up of 24.3 years, a total of 2,249 participants developed heart failure. In a demographically adjusted model, the lowest-SES group had 2.2- to 2.5-fold higher risk of heart failure compared with the highest SES group for income, education, and area deprivation. With further adjustment for time-varying cardiovascular disease risk factors and healthcare access, these associations were attenuated but remained statistically significant (e.g., hazard ratio=1.92, 95% CI=1.69, 2.19 for the lowest versus highest income), with no racial interaction (p>0.05 for all SES measures). Similarly, compared with high SES, low SES was associated with both higher baseline level of NT-proBNP in a multivariable adjusted model (15% higher, p<0.001) and increase over time (~1% greater per year, p=0.023). SES was associated with clinical heart failure as well as NT-proBNP levels inversely and independently of traditional cardiovascular disease factors and healthcare access. Copyright © 2018 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Functional Fault Model Development Process to Support Design Analysis and Operational Assessment
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.
2016-01-01
A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.
Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system
NASA Astrophysics Data System (ADS)
Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses only the measurements of the control signals and the aircraft states. It doesn’t require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.
NASA Technical Reports Server (NTRS)
1976-01-01
Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time that the failure occurred, thus allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...
2017-10-26
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.
Dynamic one-dimensional modeling of secondary settling tanks and design impacts of sizing decisions.
Li, Ben; Stenstrom, Michael K
2014-03-01
As one of the most significant components in the activated sludge process (ASP), secondary settling tanks (SSTs) can be investigated with mathematical models to optimize design and operation. This paper takes a new look at the one-dimensional (1-D) SST model by analyzing and considering the impacts of numerical problems, especially the process robustness. An improved SST model with Yee-Roe-Davis technique as the PDE solver is proposed and compared with the widely used Takács model to show its improvement in numerical solution quality. The improved and Takács models are coupled with a bioreactor model to reevaluate ASP design basis and several popular control strategies for economic plausibility, contaminant removal efficiency and system robustness. The time-to-failure due to rising sludge blanket during overloading, as a key robustness indicator, is analyzed to demonstrate the differences caused by numerical issues in SST models. The calculated results indicate that the Takács model significantly underestimates time to failure, thus leading to a conservative design. Copyright © 2013 Elsevier Ltd. All rights reserved.
Weberndörfer, Vanessa; Nyffenegger, Tobias; Russi, Ian; Brinkert, Miriam; Berte, Benjamin; Toggweiler, Stefan; Kobza, Richard
2018-05-01
Early lead failure has recently been reported in ICD patients with Linox SD leads. We aimed to compare the long-term performance of the successor lead model, the Linox Smart SD, with other contemporary high-voltage leads. All patients receiving high-voltage leads at our center between November 2009 and May 2017 were retrospectively analyzed. Lead failure was defined as the occurrence of one or more of the following: non-physiological high-rate episodes, low- or high-voltage impedance anomalies, undersensing, or non-capture. In total, 220 patients were included (Linox Smart SD, n = 113; contemporary lead, n = 107). During a median follow-up of 3.8 years (IQR 1.6-5.9 years), a total of 16 (14 in the Linox Smart SD and 2 in the contemporary group) lead failures occurred, mostly due to non-physiological high-rate sensing or impedance abnormalities. Lead failure incidence rates per 100 person-years were 2.9 (95% CI 1.7-4.9) and 0.6 (95% CI 0.1-2.3) for Linox Smart SD and contemporary leads, respectively. Kaplan-Meier estimates of 5-year lead failure rates were 14.0% (95% CI 8.1-23.6%) and 1.3% (95% CI 0.2-8.9%), respectively (log-rank p = 0.028). Implantation of a Linox Smart SD lead increased the risk of lead failure with a hazard ratio (HR) of 4.53 (95% CI 1.03-19.95, p = 0.046) and 4.44 (95% CI 1.00-19.77, p = 0.05) in uni- and multivariable Cox models. The new Linox Smart SD lead model was associated with high failure rates and should be monitored closely to detect early signs of lead failure.
Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2016-06-01
Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, however preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying process step as time efficient as possible, hereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part in the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
Allegrini, P; Balocchi, R; Chillemi, S; Grigolini, P; Hamilton, P; Maestri, R; Palatella, L; Raffaelli, G
2003-06-01
We analyze RR heartbeat sequences with a dynamic model that satisfactorily reproduces both the long- and the short-time statistical properties of heart beating. These properties are expressed quantitatively by means of two significant parameters: the scaling δ, concerning the asymptotic effects of long-range correlation, and the quantity 1 − π, establishing the amount of uncorrelated fluctuations. We find a correlation between the position in the (δ, π) phase space of patients with congestive heart failure and their mortality risk.
Both high and low HbA1c predict incident heart failure in type 2 diabetes mellitus.
Parry, Helen M; Deshmukh, Harshal; Levin, Daniel; Van Zuydam, Natalie; Elder, Douglas H J; Morris, Andrew D; Struthers, Allan D; Palmer, Colin N A; Doney, Alex S F; Lang, Chim C
2015-03-01
Type 2 diabetes mellitus is an independent risk factor for heart failure development, but the relationship between incident heart failure and antecedent glycemia has not been evaluated. The Genetics of Diabetes Audit and Research in Tayside study holds data for 8683 individuals with type 2 diabetes mellitus. Dispensed prescribing, hospital admission data, and echocardiography reports were linked to extract incident heart failure cases from December 1998 to August 2011. All available HbA1c measures until heart failure development or end of study were used to model HbA1c time-dependently. Individuals were observed from study enrolment until heart failure development or end of study. Proportional hazards regression calculated heart failure development risk associated with specific HbA1c ranges, accounting for comorbidities associated with heart failure, including blood pressure, body mass index, and coronary artery disease. Seven hundred and one individuals with type 2 diabetes mellitus (8%) developed heart failure during follow-up (mean 5.5 ± 2.8 years). Time-updated analysis with longitudinal HbA1c showed that both HbA1c <6% (hazard ratio =1.60; 95% confidence interval, 1.38-1.86; P value <0.0001) and HbA1c >10% (hazard ratio =1.80; 95% confidence interval, 1.60-2.16; P value <0.0001) were independently associated with the risk of heart failure. Both high and low HbA1c predicted heart failure development in our cohort, forming a U-shaped relationship. © 2015 American Heart Association, Inc.
Declining Risk of Sudden Death in Heart Failure.
Shen, Li; Jhund, Pardeep S; Petrie, Mark C; Claggett, Brian L; Barlera, Simona; Cleland, John G F; Dargie, Henry J; Granger, Christopher B; Kjekshus, John; Køber, Lars; Latini, Roberto; Maggioni, Aldo P; Packer, Milton; Pitt, Bertram; Solomon, Scott D; Swedberg, Karl; Tavazzi, Luigi; Wikstrand, John; Zannad, Faiez; Zile, Michael R; McMurray, John J V
2017-07-06
The risk of sudden death has changed over time among patients with symptomatic heart failure and reduced ejection fraction with the sequential introduction of medications including angiotensin-converting-enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, and mineralocorticoid-receptor antagonists. We sought to examine this trend in detail. We analyzed data from 40,195 patients who had heart failure with reduced ejection fraction and were enrolled in any of 12 clinical trials spanning the period from 1995 through 2014. Patients who had an implantable cardioverter-defibrillator at the time of trial enrollment were excluded. Weighted multivariable regression was used to examine trends in rates of sudden death over time. Adjusted hazard ratios for sudden death in each trial group were calculated with the use of Cox regression models. The cumulative incidence rates of sudden death were assessed at different time points after randomization and according to the length of time between the diagnosis of heart failure and randomization. Sudden death was reported in 3583 patients. Such patients were older and were more often male, with an ischemic cause of heart failure and worse cardiac function, than those in whom sudden death did not occur. There was a 44% decline in the rate of sudden death across the trials (P=0.03). The cumulative incidence of sudden death at 90 days after randomization was 2.4% in the earliest trial and 1.0% in the most recent trial. The rate of sudden death was not higher among patients with a recent diagnosis of heart failure than among those with a longer-standing diagnosis. Rates of sudden death declined substantially over time among ambulatory patients with heart failure with reduced ejection fraction who were enrolled in clinical trials, a finding that is consistent with a cumulative benefit of evidence-based medications on this cause of death. (Funded by the China Scholarship Council and the University of Glasgow.).
The Remote Detection of Incipient Catastrophic Failure in Large Landslides
NASA Astrophysics Data System (ADS)
Petley, D.; Bulmer, M. H.; Murphy, W.; Mantovani, F.
2001-12-01
Landslide movement is commonly associated with brittle failure and ductile deformation. Kilburn and Petley (2001) proposed that cracking in landslides occurs due to downslope stress acting on the deforming horizon. If it is assumed that a given crack event breaks a fixed length of unbroken rock or soil, the rate of cracking becomes equivalent to the number of crack events per unit time. Where crack growth (not nucleation) is occurring, the inverse rate of displacement changes linearly with time. Failure can be assumed to be the time at which displacement rates become infinitely large. Thus, for a slope heading towards catastrophic failure due to the development of a failure plane, this relationship would be linear, with the point at which failure will occur being the time when the line intercepts the x-axis. Increasing rates of deformation associated with ductile processes of crack nucleation would yield a curve with a negative gradient asymptotic to the x-axis. This hypothesis is being examined. In the 1960 movement of the Vaiont slide, Italy, although the rate of movement was accelerating, the plot of 1/deformation against time shows that it was increasing towards a steady state deformation. This movement has been associated with a low accumulated strain ductile phase of movement. In the 1963 movement event, the trend is linear. This was associated with a brittle phase of movement. A plot of 1/deformation against time for movement of the debris flow portion of the Tessina landslide (1998) shows a curve with a negative gradient asymptotic to the x-axis. This indicates that the debris flow moved as a result of ductile deformation processes. Plots of movement data for the Black Ven landslide over 1999 and 2001 also show curves that correlate with known deformation and catastrophic phases. The model results suggest there is a definable deformation pattern that is diagnostic of landslides approaching catastrophic failure. This pattern can be differentiated from landslides that are undergoing ductile deformation and those that are suffering crack nucleation.
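The inverse-rate construction described above reduces to a straight-line fit whose x-axis intercept is the forecast failure time; a sketch on synthetic accelerating-creep data:

```python
# Sketch of the inverse-rate extrapolation described above: fit a straight line
# to 1/displacement-rate versus time and read the forecast failure time off the
# x-axis intercept. The displacement series is synthetic (accelerating creep).
import numpy as np

t = np.linspace(0.0, 90.0, 200)                     # days of monitoring
t_f = 100.0                                          # "true" failure time of the synthetic data
rate = 1.0 / (t_f - t)                               # accelerating displacement rate
inv_rate = 1.0 / rate + np.random.default_rng(7).normal(0.0, 2.0, t.size)   # noisy 1/rate

slope, intercept = np.polyfit(t, inv_rate, 1)
forecast = -intercept / slope                        # x-axis intercept, where 1/rate -> 0
print(f"forecast failure time ~ day {forecast:.1f} (true value {t_f})")
```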
Medication possession ratio predicts antiretroviral regimens persistence in Peru.
Salinas, Jorge L; Alave, Jorge L; Westfall, Andrew O; Paz, Jorge; Moran, Fiorella; Carbajal-Gonzalez, Danny; Callacondo, David; Avalos, Odalie; Rodriguez, Martin; Gotuzzo, Eduardo; Echevarria, Juan; Willig, James H
2013-01-01
In developing nations, the use of operational parameters (OPs) in the prediction of clinical care represents a missed opportunity to enhance the care process. We modeled the impact of multiple measurements of antiretroviral treatment (ART) adherence on antiretroviral treatment outcomes in Peru. This was a retrospective cohort study including ART-naïve, non-pregnant adults initiating therapy at Hospital Nacional Cayetano Heredia, Lima, Peru (2006-2010). Three OPs were defined: 1) Medication possession ratio (MPR): days with antiretrovirals dispensed/days on first-line therapy; 2) Laboratory monitoring constancy (LMC): proportion of 6-month intervals with ≥1 viral load or CD4 reported; 3) Clinic visit constancy (CVC): proportion of 6-month intervals with ≥1 clinic visit. Three multi-variable Cox proportional hazard (PH) models (one per OP) were fit for (1) time of first-line ART persistence and (2) time to second-line virologic failure. All models were adjusted for socio-demographic, clinical and laboratory variables. 856 patients were included in the first-line persistence analyses; median age was 35.6 years [29.4-42.9], and most were male (624; 73%). In multivariable PH models, MPR (per 10% increase HR=0.66; 95%CI=0.61-0.71) and LMC (per 10% increase 0.83; 0.71-0.96) were associated with prolonged time on first-line therapies. Among 79 individuals included in time to second-line virologic failure analyses, MPR was the only OP independently associated with prolonged time to second-line virologic failure (per 10% increase 0.88; 0.77-0.99). The capture and utilization of program-level parameters such as MPR can provide valuable insight into patient-level treatment outcomes.
Aspirin Does Not Increase Heart Failure Events in Heart Failure Patients: From the WARCEF Trial.
Teerlink, John R; Qian, Min; Bello, Natalie A; Freudenberger, Ronald S; Levin, Bruce; Di Tullio, Marco R; Graham, Susan; Mann, Douglas L; Sacco, Ralph L; Mohr, J P; Lip, Gregory Y H; Labovitz, Arthur J; Lee, Seitetz C; Ponikowski, Piotr; Lok, Dirk J; Anker, Stefan D; Thompson, John L P; Homma, Shunichi
2017-08-01
The aim of this study was to determine whether aspirin increases heart failure (HF) hospitalization or death in patients with HF with reduced ejection fraction receiving an angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB). Because of its cyclooxygenase inhibiting properties, aspirin has been postulated to increase HF events in patients treated with ACE inhibitors or ARBs. However, no large randomized trial has addressed the clinical relevance of this issue. We compared aspirin and warfarin for HF events (hospitalization, death, or both) in the 2,305 patients enrolled in the WARCEF (Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction) trial (98.6% on ACE inhibitor or ARB treatment), using conventional Cox models for time to first event (489 events). In addition, to examine multiple HF hospitalizations, we used 2 extended Cox models, a conditional model and a total time marginal model, in time to recurrent event analyses (1,078 events). After adjustment for baseline covariates, aspirin- and warfarin-treated patients did not differ in time to first HF event (adjusted hazard ratio: 0.87; 95% confidence interval: 0.72 to 1.04; p = 0.117) or first hospitalization alone (adjusted hazard ratio: 0.88; 95% confidence interval: 0.73 to 1.06; p = 0.168). The extended Cox models also found no significant differences in all HF events or in HF hospitalizations alone after adjustment for covariates. Among patients with HF with reduced ejection fraction in the WARCEF trial, there was no significant difference in risk of HF events between the aspirin and warfarin-treated patients. (Warfarin Versus Aspirin in Reduced Cardiac Ejection Fraction trial [WARCEF]; NCT00041938). Copyright © 2017 American College of Cardiology Foundation. All rights reserved.
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
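The sketch below illustrates the SIMEX simulation/extrapolation idea for classical error in the event time, using a one-covariate Cox partial likelihood coded directly in Python (NumPy/SciPy) with no censoring; it is a toy stand-in for the authors' method, and all parameter values are invented.

```python
# Minimal SIMEX sketch for classical error in the event time with a one-covariate
# Cox model (no censoring). This illustrates the simulation/extrapolation idea only;
# it is not the authors' implementation, and all parameter values are made up.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def cox_beta(t, x):
    """One-covariate Cox partial-likelihood estimate (no censoring, no ties)."""
    order = np.argsort(t)
    xs = x[order]
    def negloglik(beta):
        risk = np.exp(beta * xs)
        denom = np.cumsum(risk[::-1])[::-1]   # risk-set sums at each event time
        return -(beta * xs - np.log(denom)).sum()
    return minimize_scalar(negloglik, bounds=(-5, 5), method="bounded").x

n, beta_true, tau = 1000, 0.7, 0.6
x = rng.normal(size=n)
t_true = rng.exponential(scale=np.exp(-beta_true * x))        # PH model, baseline rate 1
log_t_obs = np.log(t_true) + rng.normal(scale=tau, size=n)    # error in the outcome

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = []
for lam in lambdas:                                           # simulation step
    reps = [cox_beta(np.exp(log_t_obs + rng.normal(scale=np.sqrt(lam) * tau, size=n)), x)
            for _ in range(20)]
    means.append(np.mean(reps))

coef = np.polyfit(lambdas, means, 2)                          # extrapolation step
print(f"naive: {means[0]:.3f}  SIMEX (lambda=-1): {np.polyval(coef, -1.0):.3f}  true: {beta_true}")
```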
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
Zhong, W; Zhang, Y; Zhang, M-Z; Huang, X-H; Li, Y; Li, R; Liu, Q-W
2018-06-01
The primary objective of this study was to compare the pharmacokinetics of dexmedetomidine in patients with end-stage renal failure and secondary hyperparathyroidism with those in normal individuals. Fifteen patients with end-stage renal failure and secondary hyperparathyroidism (Renal-failure Group) and 8 patients with normal renal and parathyroid gland function (Control Group) received intravenous 0.6 μg/kg dexmedetomidine for 10 minutes before anaesthesia induction. Arterial blood samples for plasma dexmedetomidine concentration analysis were drawn at regular intervals after the infusion was stopped. The pharmacokinetics were analysed using a nonlinear mixed-effect model with NONMEM software. The statistical significance of covariates was examined using the objective function (-2 log likelihood). In the forward inclusion and backward deletion, covariates (age, weight, sex, height, lean body mass [LBM], body surface area [BSA], body mass index [BMI], plasma albumin and grouping factor [renal failure or not]) were tested for significant effects on pharmacokinetic parameters. The validity of our population model was also evaluated using bootstrap simulations. The dexmedetomidine concentration-time curves fitted best with the principles of a two-compartmental pharmacokinetic model. No covariate of systemic clearance further improved the model. The final pharmacokinetic parameter values were as follows: V1 = 60.6 L, V2 = 222 L, Cl1 = 0.825 L/min and Cl2 = 4.48 L/min. There was no influence of age, weight, sex, height, LBM, BSA, BMI, plasma albumin and grouping factor (renal failure or not) on pharmacokinetic parameters. Although the plasma albumin concentrations (35.46 ± 4.13 vs 44.10 ± 1.12 mmol/L, respectively, P < .05) and dosage of propofol were significantly lower in the Renal-failure Group than in the Control Group (81.68 ± 18.08 vs 63.07 ± 13.45 μg/kg/min, respectively, P < .05), there were no differences in the context-sensitive half-life and the revival time of anaesthesia between the 2 groups. The pharmacokinetics of dexmedetomidine were best described by a two-compartment model in our study. The pharmacokinetic parameters of dexmedetomidine in patients with end-stage renal failure and hyperparathyroidism were similar to those in patients with normal renal function. Further studies of dexmedetomidine pharmacokinetics are recommended to optimize its clinical use. © 2017 John Wiley & Sons Ltd.
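For readers unfamiliar with the structure of such a model, the following Python/SciPy sketch forward-simulates a two-compartment intravenous-infusion model using the typical values reported above; interpreting Cl2 as the inter-compartmental clearance and assuming a 60-kg patient for the 0.6 μg/kg, 10-minute infusion are assumptions made here, not statements from the study.

```python
# Forward simulation of a two-compartment IV-infusion model using the typical
# parameter values reported above (V1 = 60.6 L, V2 = 222 L, Cl1 = 0.825 L/min,
# Cl2 = 4.48 L/min). Interpreting Cl2 as the inter-compartmental clearance and
# assuming a 60-kg patient for the 0.6 ug/kg, 10-min infusion are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

V1, V2, CL, Q = 60.6, 222.0, 0.825, 4.48       # litres, L/min
dose_rate = 0.6 * 60.0 / 10.0                  # ug/min during the 10-min infusion

def rhs(t, a):
    a1, a2 = a                                  # drug amounts (ug) in central/peripheral
    r_in = dose_rate if t < 10.0 else 0.0
    c1, c2 = a1 / V1, a2 / V2
    da1 = r_in - CL * c1 - Q * c1 + Q * c2
    da2 = Q * c1 - Q * c2
    return [da1, da2]

sol = solve_ivp(rhs, (0.0, 480.0), [0.0, 0.0], t_eval=np.linspace(0, 480, 97),
                max_step=1.0)                   # small steps so the infusion cutoff is resolved
conc = sol.y[0] / V1                            # central concentration, ug/L (= ng/mL)
print(f"peak concentration ~ {conc.max():.3f} ng/mL at t = {sol.t[conc.argmax()]:.0f} min")
```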
Reconfigurable Control Design with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.
2007-01-01
The viewgraphs present background information on reconfigurable control design, the design methods used in the paper, control failure survivability results, and time histories from tests. Topics examined include control reconfiguration, general information about adaptive controllers, model reference adaptive control (MRAC), the utility of neural networks, radial basis function (RBF) neural network outputs, neurons, and results of investigations of failures.
Modelling of Rainfall Induced Landslides in Puerto Rico
NASA Astrophysics Data System (ADS)
Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.
2010-12-01
We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to tRIBS-VEGGIE's framework, after analyzing several failure criteria to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: At what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly however, when applying the EFDM, high level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
Score tests for independence in semiparametric competing risks models.
Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul
2009-12-01
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
Cena, Tiziana; Musetti, Claudio; Quaglia, Marco; Magnani, Corrado; Stratta, Piero; Bagnardi, Vincenzo; Cantaluppi, Vincenzo
2016-10-01
The aim of this study was to evaluate the association between cancer occurrence and risk of graft failure in kidney transplant recipients. From November 1998 to November 2013, 672 adult patients received their first kidney transplant from a deceased donor and had a minimum follow-up of 6 months. During a median follow-up of 4.7 years (3523 patient-years), 47 patients developed a nonmelanoma skin cancer (NMSC) and 40 a noncutaneous malignancy (NCM). A total of 59 graft failures were observed. The failure rate was 6 per 100 patient-years (pt-yr) after NCM versus 1.5 per 100 pt-yr in patients without NCM. In a time-dependent multivariable model, the occurrence of NCM appeared to be associated with failure (HR = 3.27; 95% CI = 1.44-7.44). The effect of NCM on the cause-specific graft failure was different (P = 0.002) when considering events due to chronic rejection (HR = 0.55) versus other causes (HR = 15.59). The reduction of the immunosuppression after NCM was not associated with a greater risk of graft failure. In conclusion, our data suggest that post-transplant NCM may be a strong risk factor for graft failure, particularly for causes other than chronic rejection. © 2016 Steunstichting ESOT.
Semicompeting risks in aging research: methods, issues and needs
Varadhan, Ravi; Xue, Qian-Li; Bandeen-Roche, Karen
2015-01-01
A semicompeting risks problem involves two types of events: a nonterminal and a terminal event (death). Typically, the nonterminal event is the focus of the study, but the terminal event can preclude the occurrence of the nonterminal event. Semicompeting risks are ubiquitous in studies of aging. Examples of semicompeting risk dyads include: dementia and death, frailty syndrome and death, disability and death, and nursing home placement and death. Semicompeting risk models can be divided into two broad classes: models based only on observable quantities (class O) and those based on potential (latent) failure times (class L). The classical illness-death model belongs to class O. This model is a special case of the multistate models, which has been an active area of methodology development. During the past decade and a half, there has also been a flurry of methodological activity on semicompeting risks based on latent failure times (L models). These advances notwithstanding, the semicompeting risks methodology has not penetrated biomedical research, in general, and gerontological research, in particular. Some possible reasons for this lack of uptake are: the methods are relatively new and sophisticated, conceptual problems associated with potential failure time models are difficult to overcome, paucity of expository articles aimed at educating practitioners, and non-availability of readily usable software. The main goals of this review article are: (i) to describe the major types of semicompeting risks problems arising in aging research, (ii) to provide a brief survey of the semicompeting risks methods, (iii) to suggest appropriate methods for addressing the problems in aging research, (iv) to highlight areas where more work is needed, and (v) to suggest ways to facilitate the uptake of the semicompeting risks methodology by the broader biomedical research community. PMID:24729136
Theory and Modeling of Liquid Explosive Detonation
NASA Astrophysics Data System (ADS)
Tarver, Craig M.; Urtiew, Paul A.
2010-10-01
The current understanding of the detonation reaction zones of liquid explosives is discussed in this article. The physical and chemical processes that precede and follow exothermic chemical reaction within the detonation reaction zone are discussed within the framework of the nonequilibrium Zeldovich-von Neumann-Doring (NEZND) theory of self-sustaining detonation. Nonequilibrium chemical and physical processes cause finite time duration induction zones before exothermic chemical energy release occurs. This separation between the leading shock wave front and the chemical energy release needed to sustain it results in shock wave amplification and the subsequent formation of complex three-dimensional cellular structures in all liquid detonation waves. To develop a practical Zeldovich-von Neumann-Doring (ZND) reactive flow model for liquid detonation, experimental data on reaction zone structure, confined failure diameter, unconfined failure diameter, and failure wave velocity in the Dremin-Trofimov test for detonating nitromethane are calculated using the ignition and growth reactive flow model.
Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry
2004-06-01
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
NASA Technical Reports Server (NTRS)
Humphreys, E. A.
1981-01-01
A computerized, analytical methodology was developed to study damage accumulation during low velocity lateral impact of layered composite plates. The impact event was modeled as perfectly plastic with complete momentum transfer to the plate structure. A transient dynamic finite element approach was selected to predict the displacement time response of the plate structure. Composite ply and interlaminar stresses were computed at selected time intervals and subsequently evaluated to predict layer and interlaminar damage. The effects of damage on elemental stiffness were then incorporated back into the analysis for subsequent time steps. Damage predicted included fiber failure, matrix ply failure and interlaminar delamination.
Altstein, L.; Li, G.
2012-01-01
This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
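A highly simplified analogue of this kind of stochastic creep-rupture simulation is sketched below in Python/NumPy: an equal-load-sharing fiber bundle in which each surviving fiber fails at a rate that grows with its stress, with load redistributed after every failure. It is a toy stand-in for the finite element chain-of-bundles model, with arbitrary parameter values.

```python
# Simplified Monte Carlo sketch of creep-rupture in a fiber bundle with equal load
# sharing: each surviving fiber fails at a stochastic rate proportional to
# (local stress)**rho, and the load is redistributed to survivors after each failure.
# This is a toy stand-in for the paper's finite element chain-of-bundles model.
import numpy as np

rng = np.random.default_rng(2)

def time_to_failure(n_fibers=200, total_load=200.0, rho=4.0, rate0=1e-3, dt=1.0):
    alive = np.ones(n_fibers, dtype=bool)
    t = 0.0
    while alive.any():
        stress = total_load / alive.sum()                 # equal load sharing
        p_fail = 1.0 - np.exp(-rate0 * stress**rho * dt)  # per-fiber failure prob. in dt
        alive &= rng.random(n_fibers) > p_fail
        t += dt
    return t

samples = np.array([time_to_failure() for _ in range(100)])
print(f"median time to rupture: {np.median(samples):.0f}  "
      f"(5th-95th pct: {np.percentile(samples, 5):.0f}-{np.percentile(samples, 95):.0f})")
```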
Reliability Evaluation of Computer Systems
1979-04-01
detection mechanisms. The model provided values for the system availability, mean time before failure (MTBF), and the proportion of time that the system... Stanford University Computer Science 311 (also Electrical Engineering 482), Advanced Computer Organization: graduate course in computer architecture.
Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone
2018-04-01
Background: Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and risk of failure. Objective: Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov Chain Monte Carlo framework. For this purpose, we monitored the prostate-specific antigens in prostate cancer patients treated with external beam radiation therapy. In the presence of rising prostate-specific antigens after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the prostate-specific antigens concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and prostate-specific antigens trajectory on the risk of failure. Methods: We used a two-stage model within a Bayesian framework to assess the role of the prostate-specific antigens profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled prostate-specific antigens using a hierarchical piecewise linear trajectory with a random changepoint. Residual prostate-specific antigens variability was expressed as a function of prostate-specific antigens concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the prostate-specific antigens nadir and timing and prostate-specific antigens slopes before and after the nadir as provided by the longitudinal process. Results: We showed positive associations between an increased prostate-specific antigens nadir, an earlier changepoint and a steeper post-nadir slope with an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the prostate-specific antigens trajectory was not accounted for in the survival model. Conclusion: Our modeling strategy was particularly flexible and accounted for multiple complex features of longitudinal and survival data, including the presence of a random changepoint and a time-dependent covariate.
Correlated seed failure as an environmental veto to synchronize reproduction of masting plants.
Bogdziewicz, Michał; Steele, Michael A; Marino, Shealyn; Crone, Elizabeth E
2018-07-01
Variable, synchronized seed production, called masting, is a widespread reproductive strategy in plants. Resource dynamics, pollination success, and, as described here, environmental veto are possible proximate mechanisms driving masting. We explored the environmental veto hypothesis, which assumes that reproductive synchrony is driven by external factors preventing reproduction in some years, by extending the resource budget model of masting with correlated reproductive failure. We ran this model across its parameter space to explore how key parameters interact to drive seeding dynamics. Next, we parameterized the model based on 16 yr of seed production data for populations of red (Quercus rubra) and white (Quercus alba) oaks. We used these empirical models to simulate seeding dynamics, and compared simulated time series with patterns observed in the field. Simulations showed that resource dynamics and reproduction failure can produce masting even in the absence of pollen coupling. In concordance with this, in both oaks, among-year variation in resource gain and correlated reproductive failure were necessary and sufficient to reproduce masting, whereas pollen coupling, although present, was not necessary. Reproductive failure caused by environmental veto may drive large-scale synchronization without density-dependent pollen limitation. Reproduction-inhibiting weather events are prevalent in ecosystems, making described mechanisms likely to operate in many systems. © 2018 The Authors New Phytologist © 2018 New Phytologist Trust.
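The Python/NumPy toy below sketches the core mechanism being described: individual resource budgets plus a population-wide veto that cancels reproduction in some years. The thresholds, gains and veto probability are arbitrary placeholders rather than the values fitted to the oak data.

```python
# Toy resource budget simulation with a correlated environmental veto: each plant
# accumulates resources, reproduces when reserves exceed a threshold, and in "veto"
# years (shared across the population) reproduction fails and reserves carry over.
# Thresholds, gains and the veto probability are arbitrary placeholders, not the
# values fitted to the oak data in the study.
import numpy as np

rng = np.random.default_rng(3)
n_plants, n_years, threshold, p_veto = 100, 200, 3.0, 0.3

reserves = rng.uniform(0, threshold, n_plants)
crops = np.zeros((n_years, n_plants))
for yr in range(n_years):
    reserves += rng.gamma(shape=2.0, scale=0.5, size=n_plants)   # annual resource gain
    veto = rng.random() < p_veto                                 # population-wide failure
    ready = reserves > threshold
    if not veto:
        crops[yr, ready] = reserves[ready]                       # seed crop ~ reserves spent
        reserves[ready] = 0.0

pop_crop = crops.sum(axis=1)
cv = pop_crop.std() / pop_crop.mean()
sync = np.corrcoef(crops.T)[np.triu_indices(n_plants, k=1)].mean()
print(f"population-level CV of seed crop: {cv:.2f}, mean pairwise synchrony: {sync:.2f}")
```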
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm conceptualized as a synthetic rainfall hyetograph with a Gaussian shape described by the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
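The following Python/NumPy/SciPy sketch shows the basic FORM machinery (the HL-RF iteration in standard normal space, checked against crude Monte Carlo) on a made-up analytical limit-state function standing in for the hydrodynamic surcharge criterion; the transformation of the rainstorm-parameter distributions to standard normal space is assumed to have been done already.

```python
# Minimal first-order reliability method (FORM) sketch using the HL-RF iteration
# in standard normal space, checked against crude Monte Carlo. The limit-state
# function g(u) below is a made-up analytical stand-in for the hydrodynamic
# surcharge criterion, with the variable transformation assumed already done.
import numpy as np
from scipy.stats import norm

def g(u):
    depth, peak = 20.0 + 6.0 * u[0], 15.0 + 5.0 * u[1]   # linear surrogates in standard normal space
    return 55.0 - (0.8 * depth + 1.5 * peak)             # failure when g < 0

def grad(u, h=1e-6):
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                                      # HL-RF fixed-point iteration
    gu, dgu = g(u), grad(u)
    u_new = (dgu @ u - gu) / (dgu @ dgu) * dgu
    if np.linalg.norm(u_new - u) < 1e-8:
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"FORM: beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.4f}")

rng = np.random.default_rng(4)
samples = rng.normal(size=(200_000, 2))
print(f"Monte Carlo Pf ~ {(np.apply_along_axis(g, 1, samples) < 0).mean():.4f}")
```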
High-Tensile Strength Tape Versus High-Tensile Strength Suture: A Biomechanical Study.
Gnandt, Ryan J; Smith, Jennifer L; Nguyen-Ta, Kim; McDonald, Lucas; LeClere, Lance E
2016-02-01
To determine which suture design, high-tensile strength tape or high-tensile strength suture, performed better at securing human tissue across 4 selected suture techniques commonly used in tendinous repair, by comparing the total load at failure measured during a fixed-rate longitudinal single load to failure using a biomechanical testing machine. Matched sets of tendon specimens with bony attachments were dissected from 15 human cadaveric lower extremities in a manner allowing for direct comparison testing. With the use of selected techniques (simple Mason-Allen in the patellar tendon specimens, whip stitch in the quadriceps tendon specimens, and Krackow stitch in the Achilles tendon specimens), 1 sample of each set was sutured with a 2-mm braided, nonabsorbable, high-tensile strength tape and the other with a No. 2 braided, nonabsorbable, high-tensile strength suture. A total of 120 specimens were tested. Each model was loaded to failure at a fixed longitudinal traction rate of 100 mm/min. The maximum load and failure method were recorded. In the whip stitch and the Krackow-stitch models, the high-tensile strength tape had a significantly greater mean load at failure with a difference of 181 N (P = .001) and 94 N (P = .015) respectively. No significant difference was found in the Mason-Allen and simple stitch models. Pull-through remained the most common method of failure at an overall rate of 56.7% (suture = 55%; tape = 58.3%). In biomechanical testing during a single load to failure, high-tensile strength tape performs more favorably than high-tensile strength suture, with a greater mean load to failure, in both the whip- and Krackow-stitch models. Although suture pull-through remains the most common method of failure, high-tensile strength tape requires a significantly greater load to pull-through in the whip-stitch and Krackow-stitch models. The biomechanical data obtained in the current study indicate that high-tensile strength tape may provide better repair strength compared with high-tensile strength suture at time-zero simulated testing. Published by Elsevier Inc.
A two-stage model of fracture of rocks
Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.
1996-01-01
In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure is expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.
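A toy numerical illustration of the two stages, written in Python/NumPy, is given below: spatially uncorrelated cracks are added at random until some cluster of closely spaced cracks reaches a critical size. The interaction radius and critical cluster size are arbitrary illustrative choices, not values from the model.

```python
# Toy illustration of the two-stage idea: stage one adds spatially uncorrelated
# cracks at random; stage two is flagged when some cluster of closely spaced cracks
# reaches a critical size. The interaction radius and critical cluster size are
# arbitrary choices for illustration only.
import numpy as np

rng = np.random.default_rng(5)
radius, critical_size, domain = 0.05, 8, 1.0

points = np.empty((0, 2))
while True:
    points = np.vstack([points, rng.uniform(0, domain, size=(1, 2))])   # one new crack
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    max_cluster = (d < radius).sum(axis=1).max()    # largest neighbourhood crack count
    if max_cluster >= critical_size:
        break

print(f"critical cluster reached after {len(points)} uncorrelated crack events")
```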
Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin
2017-08-04
In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.
Analysis of EDZ Development of Columnar Jointed Rock Mass in the Baihetan Diversion Tunnel
NASA Astrophysics Data System (ADS)
Hao, Xian-Jie; Feng, Xia-Ting; Yang, Cheng-Xiang; Jiang, Quan; Li, Shao-Jun
2016-04-01
Due to the time dependency of crack propagation, columnar jointed rock masses exhibit marked time-dependent behaviour. In this study, in situ measurements, scanning electron microscopy (SEM), a back-analysis method and numerical simulations are used to study the time-dependent development of the excavation damaged zone (EDZ) around underground diversion tunnels in a columnar jointed rock mass. Through in situ measurements of crack propagation and EDZ development, their extent is seen to have increased over time, despite the fact that the advancing face has passed. Similar to creep behaviour, the time-dependent EDZ development curve also consists of three stages: a deceleration stage, a stabilization stage, and an acceleration stage. A corresponding constitutive model of columnar jointed rock mass considering time-dependent behaviour is proposed. The time-dependent degradation coefficients of the roughness coefficient and residual friction angle in the Barton-Bandis strength criterion are taken into account. An intelligent back-analysis method is adopted to obtain the unknown time-dependent degradation coefficients for the proposed constitutive model. The numerical modelling results are in good agreement with the measured EDZ. In addition, the failure pattern simulated by this time-dependent constitutive model is consistent with that observed by SEM and in situ observation, indicating that this model can accurately simulate the failure pattern and time-dependent EDZ development of columnar joints. Moreover, the effects of the support system provided and the in situ stress on the time-dependent coefficients are studied. Finally, the long-term stability analysis of diversion tunnels excavated in columnar jointed rock masses is performed.
Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2008-01-01
Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
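By way of illustration, the Python sketch below evaluates two of the initiation criteria named above, maximum stress and the Tsai-Wu polynomial, for an in-plane lamina stress state; the strength values are generic, and the interaction term uses the common default F12 = -0.5*sqrt(F11*F22), both assumptions rather than values from the paper.

```python
# Sketch of two of the ply-level failure-initiation checks named above for an
# in-plane (plane-stress) lamina state: maximum stress and the Tsai-Wu polynomial.
# Strength values are illustrative, and the interaction term uses the common
# default F12 = -0.5*sqrt(F11*F22); both are assumptions, not values from the paper.

def max_stress_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Largest ratio of stress to the corresponding allowable (>= 1 means failure)."""
    return max(s1 / Xt if s1 >= 0 else -s1 / Xc,
               s2 / Yt if s2 >= 0 else -s2 / Yc,
               abs(t12) / S)

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Value of the Tsai-Wu polynomial (>= 1 means failure). Xc, Yc are magnitudes."""
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * (F11 * F22) ** 0.5
    return F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2 + F66 * t12**2 + 2 * F12 * s1 * s2

# Example ply stresses (MPa) against illustrative carbon/epoxy strengths (MPa)
stresses = dict(s1=900.0, s2=30.0, t12=40.0)
strengths = dict(Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
print("max stress index:", round(max_stress_index(**stresses, **strengths), 2))
print("Tsai-Wu index:   ", round(tsai_wu_index(**stresses, **strengths), 2))
```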
NASA Astrophysics Data System (ADS)
Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.
2006-05-01
The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate these failures, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires IMS radionuclide systems be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring and predicting failures of the RASA two (2) days into the future. The predictions, using time series methods, are input to an expert decision system, called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79). Future work shall include the incorporation of other RASA systems, as well as their environmental conditions that play a significant role in performance. Similarly, SHADES currently accommodates specific component behaviors at this one site. Future work shall also include important environmental variables that play an important part in the prediction algorithms.
NASA Astrophysics Data System (ADS)
Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.
2018-05-01
In this article, we study numerically the dynamic stability of the rate, state, temperature, and pore pressure friction (RSTPF) model at a rock interface using a standard spring-mass sliding system. This friction model is essentially a modified form of the previously studied rate, state, and temperature friction (RSTF) model. The RSTPF takes into account the role of thermal pressurization, including dilatancy and permeability of the pore fluid due to shear heating at the slip interface. The linear stability analysis shows that the critical stiffness, at which sliding changes from stable to unstable or vice versa, increases with the coefficient of thermal pressurization. Critical stiffness, on the other hand, remains constant for small values of either the dilatancy factor or the hydraulic diffusivity, but decreases as their values are increased beyond approximately 10^-4 (dilatancy factor) and 10^-9 m2 s^-1 (hydraulic diffusivity). Moreover, steady-state friction is independent of the coefficient of thermal pressurization, hydraulic diffusivity, and dilatancy factor. The proposed model is also used for predicting the time of failure of a creeping interface of a rock slope under constant gravitational force. It is observed that the time of failure decreases with increases in the coefficient of thermal pressurization and hydraulic diffusivity, but the dilatancy factor delays the failure of the rock fault under the condition of heat accumulation at the creeping interface. Moreover, the stiffness of the rock mass also stabilizes the failure process of the interface, as the strain energy due to the gravitational force accumulates in the rock mass before it transfers to the sliding interface. Practical implications of the present study are also discussed.
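For orientation, the Python/SciPy sketch below integrates the baseline quasi-static spring-slider with Dieterich (aging-law) rate-and-state friction, i.e. without the thermal-pressurization, dilatancy or pore-pressure terms added in the RSTPF model; parameter values are generic laboratory-scale choices.

```python
# Quasi-static spring-slider with Dieterich (aging-law) rate-and-state friction.
# This is the baseline RSF system only; the paper's thermal-pressurization,
# dilatancy and pore-pressure terms are not included, and the parameter values
# are generic laboratory-scale choices for illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, Dc = 0.010, 0.015, 1e-5     # direct effect, evolution effect, critical slip distance (m)
sigma, Vl = 5e6, 1e-6             # normal stress (Pa), load-point velocity (m/s)
k_crit = sigma * (b - a) / Dc     # linearized critical stiffness (Pa/m)
k = 1.1 * k_crit                  # stiffer than critical -> perturbations decay

def rhs(t, y):
    logV, theta = y
    V = np.exp(logV)
    dtheta = 1.0 - V * theta / Dc                                   # Dieterich (aging) law
    dlogV = (k * (Vl - V) - sigma * b * dtheta / theta) / (a * sigma)
    return [dlogV, dtheta]

y0 = [np.log(0.5 * Vl), Dc / Vl]  # velocity perturbed below steady state
sol = solve_ivp(rhs, (0.0, 2000.0), y0, max_step=1.0, rtol=1e-8, atol=1e-12)
V = np.exp(sol.y[0])
print(f"k/k_crit = {k / k_crit:.2f}: slip rate settles near {V[-1]:.2e} m/s "
      f"(peak {V.max():.2e} m/s); set k below k_crit to see growing oscillations")
```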
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
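A minimal illustration of cut-set quantification and Birnbaum importance ranking, coded in Python for a toy fault tree, is given below; it conveys the ranking idea only and is not the IMPORTANCE code itself.

```python
# Small sketch of cut-set quantification and Birnbaum importance for a toy fault
# tree. Top-event probability is evaluated exactly from the structure function by
# enumeration (fine for a handful of basic events); this illustrates the ranking
# idea only and is not the IMPORTANCE code described above.
from itertools import product

p = {"A": 0.01, "B": 0.02, "C": 0.05, "D": 0.03}           # basic-event probabilities
cut_sets = [{"A", "B"}, {"C"}, {"B", "D"}]                 # minimal cut sets of the toy tree

def top(state):                                             # structure function
    return any(all(state[e] for e in cs) for cs in cut_sets)

def top_probability(p):
    events = list(p)
    prob = 0.0
    for outcome in product([0, 1], repeat=len(events)):     # enumerate all event states
        state = dict(zip(events, outcome))
        w = 1.0
        for e, x in state.items():
            w *= p[e] if x else 1.0 - p[e]
        if top(state):
            prob += w
    return prob

print(f"top-event probability: {top_probability(p):.5f}")
for e in p:                                                 # Birnbaum importance: dP_top/dp_e
    hi = top_probability({**p, e: 1.0})
    lo = top_probability({**p, e: 0.0})
    print(f"Birnbaum importance of {e}: {hi - lo:.4f}")
```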
Application of Generative Topographic Mapping to Gear Failures Monitoring
NASA Astrophysics Data System (ADS)
Liao, Guanglan; Li, Weihua; Shi, Tielin; Rao, Raj B. K. N.
2002-07-01
The Generative Topographic Mapping (GTM) model is introduced as a probabilistic reformulation of the self-organizing map and has already been used in a variety of applications. This paper presents a study of the GTM in industrial gear failure monitoring. Vibration signals are analyzed using the GTM model, and the results show that gear feature data sets can be projected into a two-dimensional space and clustered in different areas according to their conditions, which makes it possible to clearly classify and identify a gear working condition with a cracked or broken tooth as distinct from the normal condition. By tracing the image points in the two-dimensional space, the variation of gear working conditions can be observed visually; therefore, the occurrence and developing trend of gear failures can be monitored in time.
Akosah, Kwame O; Schaper, Ana M; Haus, Lindsay M; Mathiason, Michelle A; Barnhart, Sharon I; McHugh, Vicki L
2005-06-01
The purpose of our current study was to determine whether our disease-management model was associated with long-term survival benefits. A secondary objective was to determine whether program involvement was associated with medication maintenance and reduced hospitalization over time compared to usual care management of heart failure. A retrospective chart review was conducted in patients who had been hospitalized for congestive heart failure between April 1999 and March 31, 2000, and had been discharged from the hospital for follow-up in the Heart Failure Clinic vs usual care. An integrated health-care center serving a tristate area. Patients (n = 101) were followed up for 4 years after their index hospitalization for congestive heart failure. The patients followed up in the Heart Failure Clinic comprised group 1 (n = 38), and the patients receiving usual care made up group 2 (n = 63). The mean (+/- SD) age of the patients in group 1 was 68 +/- 16 years compared to 76 +/- 11 years for the patients in group 2 (p = 0.002). The patients in group 1 were more likely to have renal failure (p = 0.035), a lower left ventricular ejection fraction (p = 0.005), and hypotension at baseline (p = 0.002). At year 2, more patients in group 1 were maintained by therapy with angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs) [p = 0.036]. The survival rate over 4 years was better for group 1. Univariate Cox proportional hazard ratios revealed that age, not receiving ACEIs or ARBs, and renal disease or cancer at baseline were associated with mortality. When controlling for these variables in a multivariate Cox proportional hazards model, survival differences between groups remained significant (p = 0.021). Subjects in group 2 were 2.4 times more likely to die over the 4-year period than those in group 1. Our study demonstrated that, after controlling for baseline variables, patients participating in a heart failure clinic enjoyed improved survival.
Moore, Michael G; Deschler, Daniel G
2007-04-01
To evaluate the effect of clopidogrel on the rate of thrombosis in a rat model for venous microvascular failure. Forty rats were treated with clopidogrel or saline control via gastric gavage in a randomized, blinded fashion. After allowing for absorption and activation, each femoral vein was isolated and a venous "tuck" procedure was performed. The bleeding time and vessel patency were subsequently evaluated. The rate of vessel thrombosis was decreased in the clopidogrel-treated group compared to controls (7.9% vs 31.4%, P < 0.025). The bleeding time was longer in the clopidogrel-treated group compared to controls (250 +/- 100 seconds vs 173 +/- 59 seconds, P < 0.015). Clopidogrel decreased the rate of thrombosis in the rat model for venous microvascular failure. The use of clopidogrel may reduce the rate of venous thrombosis after free tissue transfer and may be indicated in select patients.
Recent and future warm extreme events and high-mountain slope stability.
Huggel, C; Salzmann, N; Allen, S; Caplan-Auerbach, J; Fischer, L; Haeberli, W; Larsen, C; Schneider, D; Wessels, R
2010-05-28
The number of large slope failures in some high-mountain regions such as the European Alps has increased during the past two to three decades. There is concern that recent climate change is driving this increase in slope failures, thus possibly further exacerbating the hazard in the future. Although the effects of a gradual temperature rise on glaciers and permafrost have been extensively studied, the impacts of short-term, unusually warm temperature increases on slope stability in high mountains remain largely unexplored. We describe several large slope failures in rock and ice in recent years in Alaska, New Zealand and the European Alps, and analyse weather patterns in the days and weeks before the failures. Although we did not find one general temperature pattern, all the failures were preceded by unusually warm periods; some happened immediately after temperatures suddenly dropped to freezing. We assessed the frequency of warm extremes in the future by analysing eight regional climate models from the recently completed European Union programme ENSEMBLES for the central Swiss Alps. The models show an increase in the frequency of high-temperature events for the period 2001-2050 compared with a 1951-2000 reference period. Warm events lasting 5, 10 and 30 days are projected to increase by about 1.5-4 times by 2050 and in some models by up to 10 times. Warm extremes can trigger large landslides in temperature-sensitive high mountains by enhancing the production of water by melt of snow and ice, and by rapid thaw. Although these processes reduce slope strength, they must be considered within the local geological, glaciological and topographic context of a slope.
NASA Astrophysics Data System (ADS)
Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.
2016-12-01
Shallow landslides and ensuing debris flows can be triggered by diverse hydrologic phenomena such as groundwater inflow, prolonged moderate-intensity precipitation, or bursts of high-intensity precipitation. However, hazard assessments typically rely on simplistic hydrologic models that disregard this diversity. We used the USGS debris-flow flume to conduct controlled, field-scale slope failure experiments designed to investigate the effects of diverse hydrologic pathways, as well as the effects of 3D landslide geometries and suction stresses in unsaturated soil. Using overhead sprinklers or groundwater injectors on the flume bed, we induced failures in 6 m3 (0.65-m thick and 2-m wide) prisms of loamy sand on a 31° slope. We used 50 sensors to monitor soil deformation, variably saturated pore pressures, and moisture changes. We also determined shear strength, hydraulic conductivity, and unsaturated moisture retention characteristics from ancillary tests. The three hydrologic scenarios noted above led to different behaviors. Groundwater injection and prolonged infiltration created differing soil moisture patterns. Intense sprinkling bursts caused rapid failure without development of widespread positive pore pressures. We simulated these observed differences numerically by coupling 2D variably saturated groundwater flow modeling and 3D limit-equilibrium analysis. We also simulated the time evolution of changes in factors of safety, and quantified the mechanical effects of 3D geometry and unsaturated soil suction on stability. When much of the soil became relatively wet, effects of 3D geometry and soil suction produced slight increases (10-20%) in factors of safety. Suction effects were more pronounced with drier soils. Our results indicate that simplistic models cannot consistently predict the timing of slope failure, and that high frequency monitoring (with sampling periods < 60 s) is needed to measure and interpret the effects of rapid hydrologic triggers.
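As a one-dimensional stand-in for the coupled flow and 3D limit-equilibrium analysis described above, the Python/NumPy sketch below evaluates an infinite-slope factor of safety with a suction-stress term; the slope angle and soil depth echo the experiment, but the strength and unit-weight values are assumed for illustration.

```python
# Minimal infinite-slope factor-of-safety sketch with a suction-stress term for
# unsaturated soil, used here as a 1-D stand-in for the coupled flow / 3-D
# limit-equilibrium analysis described above. Strength and unit-weight values are
# illustrative assumptions, not the measured flume soil properties.
import numpy as np

def factor_of_safety(slope_deg, depth, gamma, c, phi_deg, pore_pressure):
    """Infinite-slope FS; pore_pressure > 0 is positive water pressure,
    pore_pressure < 0 represents suction stress that adds strength."""
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    normal = gamma * depth * np.cos(beta) ** 2        # total normal stress on the slip plane (Pa)
    shear = gamma * depth * np.sin(beta) * np.cos(beta)
    return (c + (normal - pore_pressure) * np.tan(phi)) / shear

# 0.65-m-thick soil prism on a 31-degree slope, loamy-sand-like strength values (assumed)
for u in (-5000.0, 0.0, 2000.0):                      # suction, neutral, positive pressure
    fs = factor_of_safety(31.0, 0.65, 19000.0, 500.0, 36.0, u)
    print(f"pore pressure {u:8.0f} Pa -> FS = {fs:.2f}")
```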
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, A K; Weese, R K; Adrzejewski, W J
Accelerated aging tests play an important role in assessing the lifetime of manufactured products. There are two basic approaches to lifetime qualification. One tests a product to failure over a range of accelerated conditions to calibrate a model, which is then used to calculate the failure time for conditions of use. A second approach is to test a component to a lifetime-equivalent dose (thermal or radiation) to see if it still functions to specification. Both methods have their advantages and limitations. A disadvantage of the 2nd method is that one does not know how close one is to incipient failure. This limitation can be mitigated by testing to some higher level of dose as a safety margin, but having a predictive model of failure via the 1st approach provides an additional measure of confidence. Even so, proper calibration of a failure model is non-trivial, and the extrapolated failure predictions are only as good as the model and the quality of the calibration. This paper outlines results for predicting the potential failure point of a system involving a mixture of two energetic materials, HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate). Global chemical kinetic models for the two materials individually and as a mixture are developed and calibrated from a variety of experiments. These include traditional thermal analysis experiments run on time scales from hours to a couple days, detonator aging experiments with exposures up to 50 months, and sealed-tube aging experiments for up to 5 years. Decomposition kinetics are determined for HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate) separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggests a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 C. Even with this acceleration, however, it would take ~10,000 years to achieve 10% decomposition at ~30 C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
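The order-of-magnitude extrapolation quoted above can be reproduced with a simple Arrhenius scaling, sketched below in Python/NumPy using the ~165 kJ/mol apparent activation energy; the one-day anchor at 120 C is a placeholder assumption chosen only to land near the ~10^4-year figure, not a measured value from the study.

```python
# Arrhenius extrapolation sketch: scale a time-to-10%-decomposition measured at an
# elevated temperature down to near-ambient using the ~165 kJ/mol apparent activation
# energy quoted above. The one-day anchor at 120 C is a placeholder assumption chosen
# only to illustrate the ~10^4-year order of magnitude, not a value from the study.
import numpy as np

R = 8.314                                  # J/(mol K)
Ea = 165e3                                 # J/mol, apparent activation energy for HMX decomposition
t_ref_days, T_ref = 1.0, 120.0 + 273.15    # placeholder: 10% decomposition in 1 day at 120 C

def time_to_10pct(T_celsius):
    T = T_celsius + 273.15
    return t_ref_days * np.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

for T in (120.0, 100.0, 60.0, 30.0):
    days = time_to_10pct(T)
    print(f"{T:5.0f} C : {days:12.3g} days  (~{days / 365.25:9.3g} years)")
```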
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Croke, J. C.; Grove, J. R.
2012-04-01
Non-linearity in physical systems provides a conceptual framework to explain complex patterns and form that are derived from complex internal dynamics rather than external forcings, and can be used to inform modeling and improve landscape management. One process that has been investigated previously to explore the existence of a self-organised critical system (SOC) in river systems at the basin scale is bank failure. Spatial trends in bank failure have been previously quantified to determine if the distribution of bank failures at the basin scale exhibits the necessary power-law magnitude/frequency distributions. More commonly, bank failures are investigated at a small scale using several cross-sections with strong emphasis on local-scale factors such as bank height, cohesion and hydraulic properties. Advancing our understanding of non-linearity in such processes, however, requires many more studies where both the spatial and temporal measurements of the process can be used to investigate the existence or otherwise of non-linearity and self-organised criticality. This study presents measurements of bank failure throughout the Lockyer catchment in southeast Queensland, Australia, which experienced an extreme flood event in January 2011 resulting in the loss of human lives and geomorphic channel change. The most dominant form of fluvial adjustment consisted of changes in channel geometry and notably widespread bank failures, which were readily identifiable as 'scalloped' shaped failure scarps. The spatial extents of these were mapped using a high-resolution LiDAR-derived digital elevation model and were verified by field surveys and air photos. Pre-flood event LiDAR coverage for the catchment also existed, allowing direct comparison of the magnitude and frequency of bank failures from both pre- and post-flood time periods. Data were collected and analysed within a GIS framework and investigated for power-law relationships. Bank failures appeared random and occurred throughout the basin, but plots of magnitude and frequency did display power-law scaling of failures. In addition, there was a lack of site-specific correlations between bank failure and other factors such as channel width, bank height and stream power. The data are used here to discuss the existence of SOC in fluvial systems and the relative role of local and basin-wide processes in influencing their distribution in space and time.
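For readers wanting to reproduce this kind of scaling check, the Python/NumPy sketch below draws synthetic failure-scar areas from a Pareto distribution and recovers the exponent with the standard continuous maximum-likelihood estimator; real analyses should also estimate the lower cutoff and test goodness of fit.

```python
# Minimal check for power-law magnitude/frequency scaling: draw synthetic "failure
# scar areas" from a Pareto distribution and recover the exponent with the standard
# continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(x / x_min)).
# Real analyses should also test x_min and goodness of fit (e.g. Clauset et al.).
import numpy as np

rng = np.random.default_rng(6)
alpha_true, x_min, n = 2.3, 10.0, 2000                 # exponent, lower cutoff (m^2), sample size
areas = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling

alpha_hat = 1.0 + n / np.sum(np.log(areas / x_min))
se = (alpha_hat - 1.0) / np.sqrt(n)                    # asymptotic standard error
print(f"estimated exponent: {alpha_hat:.3f} +/- {se:.3f} (true {alpha_true})")
```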
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
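The redescending M-estimation idea behind the modal estimator can be sketched with Tukey bisquare weights and an iteratively reweighted location estimate: channels whose scaled residuals exceed the cutoff receive zero weight, so a block of failed sensors stops influencing the estimate. This is only a one-dimensional illustration with synthetic readings, not the X-56A estimator or its concentration steps:

```python
import numpy as np

def tukey_bisquare_weights(residuals, scale, c=4.685):
    """Tukey bisquare weight: (1 - (u/c)^2)^2 for |u| < c, else 0, where u is
    the scaled residual. c = 4.685 gives ~95% Gaussian efficiency."""
    u = residuals / (c * scale)
    w = (1.0 - u**2) ** 2
    w[np.abs(u) >= 1.0] = 0.0
    return w

def robust_location(x, n_iter=20):
    """Iteratively reweighted location estimate with a MAD-based scale."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    for _ in range(n_iter):
        scale = 1.4826 * np.median(np.abs(x - mu)) + 1e-12  # MAD -> sigma
        w = tukey_bisquare_weights(x - mu, scale)
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(1)
readings = rng.normal(1.0, 0.05, 1530)       # healthy sensor channels (synthetic)
readings[:230] = rng.normal(25.0, 5.0, 230)  # 230 simulated worst-case failures
print(robust_location(readings))             # stays near 1.0 despite the failures
```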
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome-underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
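A minimal sketch of the kind of data-generating model referred to above: Weibull-distributed survival times written in AFT form (log time linear in the treatment indicator plus a scaled log-exponential error) with administrative censoring. The coefficients, error scale, and censoring time are assumptions for illustration; fitting with PHREG or LIFEREG is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_aft_weibull(n, b0=1.0, b1=0.5, sigma=0.8, censor_time=20.0):
    """Generate survival data under a Weibull accelerated failure time model:
    log T = b0 + b1*x + sigma*log(E), E ~ Exp(1), i.e. T is Weibull with shape
    1/sigma and scale exp(b0 + b1*x). Administrative censoring at censor_time.
    All parameter values here are illustrative assumptions."""
    x = rng.integers(0, 2, size=n)            # treatment indicator
    e = rng.exponential(1.0, size=n)
    t = np.exp(b0 + b1 * x) * e ** sigma      # true event times
    time = np.minimum(t, censor_time)         # observed times
    event = (t <= censor_time).astype(int)    # 1 = event, 0 = censored
    return x, time, event

x, time, event = simulate_aft_weibull(500)
print(f"censoring rate: {1 - event.mean():.2f}")
# Under AFT, b1 is directly interpretable: exp(b1) multiplies survival time.
print(f"true time ratio exp(b1) = {np.exp(0.5):.2f}")
```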
Real-Time Detection of Infusion Site Failures in a Closed-Loop Artificial Pancreas.
Howsmon, Daniel P; Baysal, Nihat; Buckingham, Bruce A; Forlenza, Gregory P; Ly, Trang T; Maahs, David M; Marcal, Tatiana; Towers, Lindsey; Mauritzen, Eric; Deshpande, Sunil; Huyett, Lauren M; Pinsker, Jordan E; Gondhalekar, Ravi; Doyle, Francis J; Dassau, Eyal; Hahn, Juergen; Bequette, B Wayne
2018-05-01
As evidence emerges that artificial pancreas systems improve clinical outcomes for patients with type 1 diabetes, the burden of this disease will hopefully begin to be alleviated for many patients and caregivers. However, reliance on automated insulin delivery potentially means patients will be slower to act when devices stop functioning appropriately. One such scenario involves an insulin infusion site failure, where the insulin that is recorded as delivered fails to affect the patient's glucose as expected. Alerting patients to these events in real time would potentially reduce hyperglycemia and ketosis associated with infusion site failures. An infusion site failure detection algorithm was deployed in a randomized crossover study with artificial pancreas and sensor-augmented pump arms in an outpatient setting. Each arm lasted two weeks. Nineteen participants wore infusion sets for up to 7 days. Clinicians contacted patients to confirm infusion site failures detected by the algorithm and instructed on set replacement if failure was confirmed. In real time and under zone model predictive control, the infusion site failure detection algorithm achieved a sensitivity of 88.0% (n = 25) while issuing only 0.22 false positives per day, compared with a sensitivity of 73.3% (n = 15) and 0.27 false positives per day in the SAP arm (as indicated by retrospective analysis). No association between intervention strategy and duration of infusion sets was observed ( P = .58). As patient burden is reduced by each generation of advanced diabetes technology, fault detection algorithms will help ensure that patients are alerted when they need to manually intervene. Clinical Trial Identifier: www.clinicaltrials.gov,NCT02773875.
Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; ...
2015-11-13
Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, in particular for high fidelity modeling. Computational costs and validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computation models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty; however, the actual effects on final peak clad temperature in a reactor transient may be small, and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk-informed framework. Low fidelity modeling with large uncertainty may be considered adequate if the uncertainty is considered acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts’ knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately, DPRA methods introduce issues associated with combinatorial explosion of states. This study presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to the safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. Finally, in order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.
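A toy illustration of the branch-and-bound idea on an event tree: partial paths whose probability bound falls below a threshold are pruned rather than expanded, so only the higher-probability failure sequences survive. The branching probabilities, depth, and threshold are arbitrary assumptions, and the LENDIT/S2R2 bounding functions of the study are not represented:

```python
# Toy dynamic event tree: at each of `depth` branch points a component either
# works (prob 1 - p_fail) or fails (prob p_fail). Branch-and-bound prunes any
# partial path whose probability upper bound drops below `prune_threshold`,
# so only the higher-probability failure sequences are expanded.
# Probabilities and depth are illustrative assumptions, not from the study.

def expand_event_tree(p_fail=0.1, depth=4, prune_threshold=1e-3):
    kept, pruned = [], 0
    stack = [((), 1.0)]                 # (sequence of outcomes, path probability)
    while stack:
        path, prob = stack.pop()
        if prob < prune_threshold:      # bound: path probability only shrinks deeper
            pruned += 1
            continue
        if len(path) == depth:
            kept.append((path, prob))
            continue
        stack.append((path + ("ok",), prob * (1.0 - p_fail)))
        stack.append((path + ("fail",), prob * p_fail))
    return kept, pruned

branches, n_pruned = expand_event_tree()
worst = max(branches, key=lambda b: b[0].count("fail"))
print(f"kept {len(branches)} branches, pruned {n_pruned} partial paths")
print("highest-failure-count branch kept:", worst)
```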
Antoniou, Tony; Szadkowski, Leah; Walmsley, Sharon; Cooper, Curtis; Burchell, Ann N; Bayoumi, Ahmed M; Montaner, Julio S G; Loutfy, Mona; Klein, Marina B; Machouf, Nima; Tsoukas, Christos; Wong, Alexander; Hogg, Robert S; Raboud, Janet
2017-04-11
Atazanavir/ritonavir and darunavir/ritonavir are common protease inhibitor-based regimens for treating patients with HIV. Studies comparing these drugs in clinical practice are lacking. We conducted a retrospective cohort study of antiretroviral naïve participants in the Canadian Observational Cohort (CANOC) collaboration initiating atazanavir/ritonavir- or darunavir/ritonavir-based treatment. We used separate Fine and Gray competing risk regression models to compare times to regimen failure (composite of virologic failure or discontinuation for any reason). Additional endpoints included virologic failure, discontinuation due to virologic failure, discontinuation for other reasons, and virologic suppression. We studied 222 patients treated with darunavir/ritonavir and 1791 patients treated with atazanavir/ritonavir. Following multivariable adjustment, there was no difference between darunavir/ritonavir and atazanavir-ritonavir in the risk of regimen failure (adjusted hazard ratio 0.76, 95% CI 0.56 to 1.03) Darunavir/ritonavir-treated patients were at lower risk of virologic failure relative to atazanavir/ritonavir treated patients (aHR 0.50, 95% CI 0.28 to 0.91), findings driven largely by high rates of virologic failure among atazanavir/ritonavir-treated patients in the province of British Columbia. Of 108 discontinuations due to virologic failure, all occurred in patients starting atazanavir/ritonavir. There was no difference between regimens in time to discontinuation for reasons other than virologic failure (aHR 0.93; 95% CI 0.65 to 1.33) or virologic suppression (aHR 0.99, 95% CI 0.82 to 1.21). The risk of regimen failure was similar between patients treated with darunavir/ritonavir and atazanavir/ritonavir. Although darunavir/ritonavir was associated with a lower risk of virologic failure relative to atazanavir/ritonavir, this difference varied substantially by Canadian province and likely reflects regional variation in prescribing practices and patient characteristics.
Rogers, Jennifer K; Pocock, Stuart J; McMurray, John J V; Granger, Christopher B; Michelson, Eric L; Östergren, Jan; Pfeffer, Marc A; Solomon, Scott D; Swedberg, Karl; Yusuf, Salim
2014-01-01
Heart failure is characterized by recurrent hospitalizations, but often only the first event is considered in clinical trial reports. In chronic diseases, such as heart failure, analysing all events gives a more complete picture of treatment benefit. We describe methods of analysing repeat hospitalizations, and illustrate their value in one major trial. The Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity (CHARM)-Preserved study compared candesartan with placebo in 3023 patients with heart failure and preserved systolic function. The heart failure hospitalization rates were 12.5 and 8.9 per 100 patient-years in the placebo and candesartan groups, respectively. The repeat hospitalizations were analysed using the Andersen-Gill, Poisson, and negative binomial methods. Death was incorporated into analyses by treating it as an additional event. The win ratio method and a method that jointly models hospitalizations and mortality were also considered. Using repeat events gave larger treatment benefits than time to first event analysis. The negative binomial method for the composite of recurrent heart failure hospitalizations and cardiovascular death gave a rate ratio of 0.75 [95% confidence interval (CI) 0.62-0.91, P = 0.003], whereas the hazard ratio for time to first heart failure hospitalization or cardiovascular death was 0.86 (95% CI 0.74-1.00, P = 0.050). In patients with preserved EF, candesartan reduces the rate of admissions for worsening heart failure, to a greater extent than apparent from analysing only first hospitalizations. Recurrent events should be routinely incorporated into the analysis of future clinical trials in heart failure. © 2013 The Authors. European Journal of Heart Failure © 2013 European Society of Cardiology.
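The rate comparison quoted above can be illustrated with a crude person-years rate ratio and a Poisson-based interval. This is a simplification of the methods in the paper (which use negative binomial and joint models to handle repeat events within patients), and the patient-year exposure below is a hypothetical figure, not the CHARM-Preserved denominator:

```python
import math

def poisson_rate_ratio(events_a, pyears_a, events_b, pyears_b, z=1.96):
    """Crude rate ratio (arm A vs arm B) with a Wald CI on the log scale,
    treating event counts as Poisson. This ignores within-patient correlation
    of repeat events, which is why negative binomial or joint models are
    preferred for recurrent hospitalizations."""
    rr = (events_a / pyears_a) / (events_b / pyears_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(+z * se_log)
    return rr, lo, hi

# Hypothetical exposure of 4000 patient-years per arm (assumption, not CHARM data),
# combined with the quoted rates of 8.9 and 12.5 events per 100 patient-years:
rr, lo, hi = poisson_rate_ratio(events_a=int(8.9 * 40), pyears_a=4000,
                                events_b=int(12.5 * 40), pyears_b=4000)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```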
Three-Dimensional High Fidelity Progressive Failure Damage Modeling of NCF Composites
NASA Technical Reports Server (NTRS)
Aitharaju, Venkat; Aashat, Satvir; Kia, Hamid G.; Satyanarayana, Arunkumar; Bogert, Philip B.
2017-01-01
Performance prediction of off-axis laminates is of significant interest in designing composite structures for energy absorption. Phenomenological models available in most of the commercial programs, where the fiber and resin properties are smeared, are very efficient for large scale structural analysis, but lack the ability to model the complex nonlinear behavior of the resin and fail to capture the complex load transfer mechanisms between the fiber and the resin matrix. On the other hand, high fidelity mesoscale models, where the fiber tows and matrix regions are explicitly modeled, have the ability to account for the complex behavior in each of the constituents of the composite. However, creating a finite element model of a larger scale composite component could be very time consuming and computationally very expensive. In the present study, a three-dimensional mesoscale model of non-crimp composite laminates was developed for various laminate schemes. The resin material was modeled as an elastic-plastic material with nonlinear hardening. The fiber tows were modeled with an orthotropic material model with brittle failure. In parallel, new stress based failure criteria combined with several damage evolution laws for matrix stresses were proposed for a phenomenological model. The results from both the mesoscale and phenomenological models were compared with the experiments for a variety of off-axis laminates.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliabilty growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
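A minimal sketch of the exponential order statistic idea: each latent fault has its own independent exponential detection time, and the observed failure times are those latent times in sorted order. The per-fault rates below are arbitrary assumptions; equal rates would recover the Jelinski-Moranda special case:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_eos_failure_times(rates):
    """Simulate one realization of an Exponential Order Statistic model:
    each latent fault i has an independent exponential detection time with
    its own rate; the observed failure times are those times sorted."""
    latent = rng.exponential(1.0 / np.asarray(rates, dtype=float))
    return np.sort(latent)

# A geometric decay of per-fault rates (assumed purely for illustration)
# mimics faults of decreasing detectability.
rates = 0.05 * 0.9 ** np.arange(30)
print(simulate_eos_failure_times(rates)[:5])   # first five observed failure times
```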
Modelling of a Francis Turbine Runner Fatigue Failure Process Caused by Fluid-Structure Interaction
NASA Astrophysics Data System (ADS)
Lyutov, A.; Kryukov, A.; Cherny, S.; Chirkov, D.; Salienko, A.; Skorospelov, V.; Turuk, P.
2016-11-01
In the present paper considered is the problem of the numerical simulation of Francis turbine runner fatigue failure caused by fluid-structure interaction. The unsteady 3D flow is modeled simultaneously in the spiral chamber, each wicket gate and runner channels and in the draft tube using the Euler equations. Based on the unsteady runner loadings at each time step stresses in the whole runner are calculated using the elastic equilibrium equations solved with boundary element method. Set of static stress-strain states provides quasi-dynamics of runner cyclic loading. It is assumed that equivalent stresses in the runner are below the critical value after which irreversible plastic processes happen in the runner material. Therefore runner is subjected to the fatigue damage caused by high-cycle fatigue, in which the loads are generally low compared with the limit stress of the material. As a consequence, the stress state around the crack front can be fully characterized by linear elastic fracture mechanics. The place of runner cracking is determined as a point with maximal amplitude of stress oscillations. Stress pulsations amplitude is used to estimate the number of cycles until the moment of fatigue failure, number of loading cycles and oscillation frequency are used to calculate runner service time. Example of the real Francis runner which has encountered premature fatigue failure as a result of incorrect durability estimation is used to verify the developed numerical model.
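The last step described above (stress-oscillation amplitude to cycles to failure to service time) can be illustrated with a generic Basquin-type S-N relation. The constants and the example amplitude and frequency are placeholders, not properties of any runner steel or the turbine in the study:

```python
def cycles_to_failure(stress_amplitude_mpa, C=1.0e15, m=3.0):
    """Basquin-type S-N estimate N = C * (stress amplitude)^(-m).
    C and m are placeholder high-cycle-fatigue constants, not values for any
    particular runner material."""
    return C * stress_amplitude_mpa ** (-m)

def service_time_years(stress_amplitude_mpa, oscillation_hz):
    """Convert the estimated number of loading cycles to service time using
    the oscillation frequency, as described in the abstract."""
    n_cycles = cycles_to_failure(stress_amplitude_mpa)
    seconds = n_cycles / oscillation_hz
    return seconds / (3600 * 24 * 365.25)

# e.g. a 40 MPa stress oscillation at ~100 Hz (both numbers are illustrative):
print(f"{service_time_years(40.0, 100.0):.1f} years")
```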
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ku, Ja Hyeon; Kim, Myong; Jeong, Chang Wook
2014-08-01
Purpose: To evaluate the predictive accuracy and general applicability of the locoregional failure model in a different cohort of patients treated with radical cystectomy. Methods and Materials: A total of 398 patients were included in the analysis. Death and isolated distant metastasis were considered competing events, and patients without any events were censored at the time of last follow-up. The model included the 3 variables pT classification, the number of lymph nodes identified, and margin status, as follows: low risk (≤pT2), intermediate risk (≥pT3 with ≥10 nodes removed and negative margins), and high risk (≥pT3 with <10 nodes removed or positive margins). Results: The bootstrap-corrected concordance index of the model 5 years after radical cystectomy was 66.2%. When the risk stratification was applied to the validation cohort, the 5-year locoregional failure estimates were 8.3%, 21.2%, and 46.3% for the low-risk, intermediate-risk, and high-risk groups, respectively. The risk of locoregional failure differed significantly between the low-risk and intermediate-risk groups (subhazard ratio [SHR], 2.63; 95% confidence interval [CI], 1.35-5.11; P<.001) and between the low-risk and high-risk groups (SHR, 4.28; 95% CI, 2.17-8.45; P<.001). Although decision curves were appropriately affected by the incidence of the competing risk, decisions about the value of the models are not likely to be affected because the model remains of value over a wide range of threshold probabilities. Conclusions: The model is not completely accurate, but it demonstrates a modest level of discrimination, adequate calibration, and meaningful net benefit gain for prediction of locoregional failure after radical cystectomy.
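The three-variable risk grouping quoted in the abstract maps directly onto a small classification helper; the thresholds below are exactly those stated (≤pT2 low; ≥pT3 with ≥10 nodes and negative margins intermediate; otherwise high), while the function name and interface are of course illustrative:

```python
def locoregional_risk_group(pt_stage: int, nodes_examined: int, positive_margin: bool) -> str:
    """Risk grouping for locoregional failure after radical cystectomy as
    described in the abstract: pT stage (2 for <=pT2, 3 or 4 for >=pT3),
    number of lymph nodes identified, and surgical margin status."""
    if pt_stage <= 2:
        return "low"
    if nodes_examined >= 10 and not positive_margin:
        return "intermediate"
    return "high"

# Reported 5-year locoregional failure estimates per group in the validation cohort:
failure_rate = {"low": 0.083, "intermediate": 0.212, "high": 0.463}
group = locoregional_risk_group(pt_stage=3, nodes_examined=7, positive_margin=False)
print(group, failure_rate[group])   # -> high 0.463
```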
Kozai, Takashi D. Y.; Catt, Kasey; Li, Xia; Gugel, Zhannetta V.; Olafsson, Valur T.; Vazquez, Alberto L.; Cui, X. Tracy
2014-01-01
Penetrating intracortical electrode arrays that record brain activity longitudinally are powerful tools for basic neuroscience research and emerging clinical applications. However, regardless of the technology used, signals recorded by these electrodes degrade over time. The failure mechanisms of these electrodes are understood to be a complex combination of the biological reactive tissue response and material failure of the device over time. While mechanical mismatch between the brain tissue and implanted neural electrodes have been studied as a source of chronic inflammation and performance degradation, the electrode failure caused by mechanical mismatch between different material properties and different structural components within a device have remained poorly characterized. Using Finite Element Model (FEM) we simulate the mechanical strain on a planar silicon electrode. The results presented here demonstrate that mechanical mismatch between iridium and silicon leads to concentrated strain along the border of the two materials. This strain is further focused on small protrusions such as the electrical traces in planar silicon electrodes. These findings are confirmed with chronic in vivo data (133–189 days) in mice by correlating a combination of single-unit electrophysiology, evoked multi-unit recordings, electrochemical impedance spectroscopy, and scanning electron microscopy from traces and electrode sites with our modeling data. Several modes of mechanical failure of chronically implanted planar silicon electrodes are found that result in degradation and/or loss of recording. These findings highlight the importance of strains and material properties of various subcomponents within an electrode array. PMID:25453935
Failure Analysis of Nonvolatile Residue (NVR) Analyzer Model SP-1000
NASA Technical Reports Server (NTRS)
Potter, Joseph C.
2011-01-01
National Aeronautics and Space Administration (NASA) subcontractor Wiltech contacted the NASA Electrical Lab (NE-L) and requested a failure analysis of a Solvent Purity Meter; model SP-IOOO produced by the VerTis Instrument Company. The meter, used to measure the contaminate in a solvent to determine the relative contamination on spacecraft flight hardware and ground servicing equipment, had been inoperable and in storage for an unknown amount of time. NE-L was asked to troubleshoot the unit and make a determination on what may be required to make the unit operational. Through the use of general troubleshooting processes and the review of a unit in service at the time of analysis, the unit was found to be repairable but would need the replacement of multiple components.
User-Defined Material Model for Progressive Failure Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)
2006-01-01
An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
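A minimal sketch of one failure-initiation check and the ply-discounting degradation it triggers, for a single ply in its material frame. Only the maximum-stress criterion is shown, the allowables and knockdown factor are placeholders, and this is not the UMAT described in the paper:

```python
# Placeholder ply strengths in MPa (Xt, Xc, Yt, Yc, S); not from any real material.
ALLOWABLES = dict(Xt=2000.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=80.0)

def max_stress_indices(s11, s22, s12, a=ALLOWABLES):
    """Maximum-stress failure indices for a ply in its material frame.
    An index >= 1 flags failure initiation in that mode."""
    fiber = s11 / a["Xt"] if s11 >= 0 else -s11 / a["Xc"]
    matrix = s22 / a["Yt"] if s22 >= 0 else -s22 / a["Yc"]
    shear = abs(s12) / a["S"]
    return {"fiber": fiber, "matrix": matrix, "shear": shear}

def discount_ply(stiffness, indices, knockdown=0.01):
    """Ply-discounting degradation: once a mode has failed, the associated
    moduli are reduced to a small fraction of their original values."""
    E1, E2, G12 = stiffness
    if indices["fiber"] >= 1.0:
        E1 *= knockdown
    if indices["matrix"] >= 1.0 or indices["shear"] >= 1.0:
        E2 *= knockdown
        G12 *= knockdown
    return E1, E2, G12

idx = max_stress_indices(s11=800.0, s22=60.0, s12=30.0)
print(idx)
print(discount_ply((150e3, 9e3, 5e3), idx))   # matrix mode failed -> E2, G12 degraded
```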
Using volcanic tremor for eruption forecasting at White Island volcano (Whakaari), New Zealand
NASA Astrophysics Data System (ADS)
Chardot, Lauriane; Jolly, Arthur D.; Kennedy, Ben M.; Fournier, Nicolas; Sherburn, Steven
2015-09-01
Eruption forecasting is a challenging task because of the inherent complexity of volcanic systems. Despite remarkable efforts to develop complex models in order to explain volcanic processes prior to eruptions, the material Failure Forecast Method (FFM) is one of the very few techniques that can provide a forecast time for an eruption. However, the method requires testing and automation before being used as a real-time eruption forecasting tool at a volcano. We developed an automatic algorithm to issue forecasts from volcanic tremor increase episodes recorded by Real-time Seismic Amplitude Measurement (RSAM) at one station and optimised this algorithm for the period August 2011-January 2014 which comprises the recent unrest period at White Island volcano (Whakaari), New Zealand. A detailed residual analysis was paramount to select the most appropriate model explaining the RSAM time evolutions. In a hindsight simulation, four out of the five small eruptions reported during this period occurred within a failure window forecast by our optimised algorithm and the probability of an eruption on a day within a failure window was 0.21, which is 37 times higher than the probability of having an eruption on any day during the same period (0.0057). Moreover, the forecasts were issued prior to the eruptions by a few hours which is important from an emergency management point of view. Whereas the RSAM time evolutions preceding these four eruptions have a similar goodness-of-fit with the FFM, their spectral characteristics are different. The duration-amplitude distributions of the precursory tremor episodes support the hypothesis that several processes were likely occurring prior to these eruptions. We propose that slow rock failure and fluid flow processes are plausible candidates for the tremor source of these episodes. This hindsight exercise can be useful for future real-time implementation of the FFM at White Island. A similar methodology could also be tested at other volcanoes even if only a limited network is available.
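The linearized form of the material Failure Forecast Method can be sketched directly: fit a straight line to the inverse rate versus time and take its zero crossing as the forecast failure time (exact when the FFM exponent is 2). The data below are synthetic; the residual analysis, window optimisation, and failure-window probabilities of the actual algorithm are not reproduced:

```python
import numpy as np

def ffm_forecast(times, rates):
    """Linearized material Failure Forecast Method: fit a straight line to the
    inverse rate (1/rate) versus time and return its zero crossing, which is
    the forecast failure time when the FFM exponent equals 2."""
    inv = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(times, inv, 1)
    return -intercept / slope

# Synthetic accelerating precursor: rate ~ 1/(t_f - t) with t_f = 10 days (assumed)
t = np.linspace(0.0, 8.0, 50)
rate = 1.0 / (10.0 - t) + np.random.default_rng(4).normal(0, 0.002, t.size)
print(f"forecast failure time: day {ffm_forecast(t, rate):.1f}")   # close to day 10
```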
NASA Astrophysics Data System (ADS)
Zaccaria, V.; Tucker, D.; Traverso, A.
2016-09-01
Solid oxide fuel cells are characterized by very high efficiency, low emission levels, and large fuel flexibility. Unfortunately, their elevated costs and relatively short lifetimes reduce the economic feasibility of these technologies at the present time. Several mechanisms contribute to degrading fuel cell performance over time, and the study of these degradation modes and potential mitigation actions is critical to ensuring the durability of fuel cells and their long-term stability. In this work, localized degradation of a solid oxide fuel cell is modeled in real time and its effects on various cell parameters are analyzed. Profile distributions of overpotential, temperature, heat generation, and temperature gradients in the stack are investigated during degradation. Several causes of failure could arise in the fuel cell if no proper control actions are applied. A local analysis of critical parameters is conducted to show where the issues lie and how they could be mitigated in order to extend the life of the cell.
Generalized Accelerated Failure Time Spatial Frailty Model for Arbitrarily Censored Data
Zhou, Haiming; Hanson, Timothy; Zhang, Jiajia
2017-01-01
Flexible incorporation of both geographical patterning and risk effects in cancer survival models is becoming increasingly important, due in part to the recent availability of large cancer registries. Most spatial survival models stochastically order survival curves from different subpopulations. However, it is common for survival curves from two subpopulations to cross in epidemiological cancer studies and thus interpretable standard survival models can not be used without some modification. Common fixes are the inclusion of time-varying regression effects in the proportional hazards model or fully non-parametric modeling, either of which destroys any easy interpretability from the fitted model. To address this issue, we develop a generalized accelerated failure time model which allows stratification on continuous or categorical covariates, as well as providing per-variable tests for whether stratification is necessary via novel approximate Bayes factors. The model is interpretable in terms of how median survival changes and is able to capture crossing survival curves in the presence of spatial correlation. A detailed Markov chain Monte Carlo algorithm is presented for posterior inference and a freely available function frailtyGAFT is provided to fit the model in the R package spBayesSurv. We apply our approach to a subset of the prostate cancer data gathered for Louisiana by the Surveillance, Epidemiology, and End Results program of the National Cancer Institute. PMID:26993982
Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang
Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
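The classical risk priority number underlying this kind of ranking is simply occurrence × severity × detectability. The scores below are placeholders, not the clinic-specific values derived from the nine linac-years of QA records or the planning-study severity modeling:

```python
# Hypothetical FMEA scores (1-10 scales) for a few TG-142 tests; the numbers
# are placeholders, not the clinic-specific values derived in the study.
tests = {
    "output":                         dict(occurrence=7, severity=5, detectability=3),
    "lasers":                         dict(occurrence=6, severity=4, detectability=3),
    "imaging vs treatment isocenter": dict(occurrence=3, severity=8, detectability=4),
    "optical distance indicator":     dict(occurrence=2, severity=3, detectability=5),
}

def risk_priority_number(s):
    return s["occurrence"] * s["severity"] * s["detectability"]

for name, scores in sorted(tests.items(), key=lambda kv: -risk_priority_number(kv[1])):
    print(f"{name:32s} RPN = {risk_priority_number(scores)}")
# A higher RPN suggests more frequent QA testing of that item.
```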
Syndromic surveillance for health information system failures: a feasibility study.
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-05-01
To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
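A minimal stand-in for the monitoring machinery described above: a Shewhart-style lower control limit computed from a rolling baseline window, flagging days when a monitored index drops too far. The daily counts and the simulated failure are synthetic:

```python
import numpy as np

def control_chart_alerts(counts, baseline_days=30, n_sigma=3.0):
    """Flag days on which a monitored index (e.g. total records created) drops
    below mean - n_sigma*std of the preceding baseline window."""
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for day in range(baseline_days, counts.size):
        base = counts[day - baseline_days:day]
        lower = base.mean() - n_sigma * base.std(ddof=1)
        if counts[day] < lower:
            alerts.append(day)
    return alerts

rng = np.random.default_rng(5)
records = rng.normal(5000, 120, 60)     # synthetic daily record counts
records[45:47] *= 0.65                  # simulated 35% record-loss failure on days 45-46
print(control_chart_alerts(records))    # expected to include days 45 and 46
```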
Long-term strength and damage accumulation in laminates
NASA Astrophysics Data System (ADS)
Dzenis, Yuris A.; Joshi, Shiv P.
1993-04-01
A modified version of the probabilistic model developed by the authors for damage evolution analysis of laminates subjected to random loading is utilized to predict the long-term strength of laminates. The model assumes that each ply in a laminate consists of a large number of mesovolumes. Probabilistic variation functions for mesovolume stiffnesses as well as strengths are used in the analysis. Stochastic strains are calculated using the lamination theory and random function theory. Deterioration of ply stiffnesses is calculated on the basis of the probabilities of mesovolume failures using the theory of excursions of a random process beyond the limits. Long-term strength and damage accumulation in a Kevlar/epoxy laminate under tension and complex in-plane loading are investigated. Effects of the mean level and stochastic deviation of loading on damage evolution and time-to-failure of the laminate are discussed. Long-term cumulative damage at the time of the final failure at low loading levels is greater than at high loading levels. The effect of the deviation in loading is more pronounced at lower mean loading levels.
BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY
2010-01-01
This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627
Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause
NASA Astrophysics Data System (ADS)
Yang, Y. P.; Mohr, W. C.
2015-11-01
Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses including a heat transfer analysis and a creep-fatigue analysis were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line. Typically, the roll has a short life, <1 year, which has been a problem for a long time. The failure occurred in the weld joining an end bell to a roll shell and resulted in the complete 360° separation of the end bell from the roll shell. The heat transfer analysis was conducted to predict the temperature history of the roll by modeling heat convection from hot air inside the furnace. The creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a creep mechanism. The difference in material properties between the filler metal and the base metal is the root cause of the roll failure, inducing higher creep strain and stress at the interface between the weld and the HAZ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.
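Topic (iv), the sampling-based approximation, can be sketched as a plain Monte Carlo loop: sample the aleatory failure time and the aleatory property trajectory, evaluate the property at the failure time, and form the empirical CDF. The distributional forms below are assumptions chosen only to show the procedure, not those in the report:

```python
import numpy as np

rng = np.random.default_rng(6)

def empirical_cdf_property_at_failure(n_samples=100_000):
    """Sampling-based approximation of the CDF of a link property value at the
    time of link failure. Both the failure-time distribution and the property
    trajectory below are assumed forms used only to illustrate the procedure."""
    # Aleatory failure time, e.g. Weibull-distributed (assumption):
    t_fail = rng.weibull(2.0, n_samples) * 10.0
    # Aleatory property trajectory p(t) = p0 + r*t with random p0 and rate r (assumption):
    p0 = rng.normal(100.0, 5.0, n_samples)
    r = rng.uniform(1.0, 3.0, n_samples)
    p_at_failure = p0 + r * t_fail
    values = np.sort(p_at_failure)
    probs = np.arange(1, n_samples + 1) / n_samples
    return values, probs

values, probs = empirical_cdf_property_at_failure()
# e.g. the median property value at link failure:
print(values[np.searchsorted(probs, 0.5)])
```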
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2007-01-01
Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther
2017-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
Sundaram, Vinay; Choi, Gina; Jeon, Christie Y; Ayoub, Walid S; Nissen, Nicholas N; Klein, Andrew S; Tran, Tram T
2015-05-01
Primary sclerosing cholangitis (PSC) patients suffer from comorbidities unaccounted for by the model for end-stage liver disease scoring system and may benefit from the increased donor organ pool provided by donation after cardiac death (DCD) liver transplantation. However, the impact of DCD transplantation on PSC graft outcomes is unknown. We studied 41,018 patients using the United Network for Organ Sharing database from 2002 through 2012. Kaplan-Meier analysis and Cox regression were used to evaluate graft survival and risk factors for graft failure, respectively. The PSC patients receiving DCD livers (n=75) showed greater overall graft failure (37.3% vs. 20.4%, P = 0.001), graft failure from biliary complications (47.4% vs. 13.9%, P = 0.002), and shorter graft survival time (P = 0.003), compared to PSC patients receiving donation after brain death organs (n=1592). Among DCD transplants (n=1943), PSC and non-PSC patients showed similar prevalence of graft failure and graft survival time, though a trend existed toward increased biliary-induced graft failure among PSC patients (47.4 vs. 26.4%, P = 0.063). Cox modeling demonstrated that PSC patients have a positive graft survival advantage compared to non-PSC patients (hazard ratio [HR]=0.72, P < 0.001), whereas DCD transplantation increased risk of graft failure (HR = 1.28, P < 0.001). Furthermore, the interaction between DCD transplant and PSC was significant (HR = 1.76, P = 0.015), indicating that use of DCD organs impacts graft survival more in PSC than non-PSC patients. Donation after cardiac death liver transplantation leads to significantly worse outcomes in PSC. We recommend cautious use of DCD transplantation in this population.
New early warning system for gravity-driven ruptures based on codetection of acoustic signal
NASA Astrophysics Data System (ADS)
Faillettaz, J.
2016-12-01
Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging due to the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the characteristics of acoustic emission attenuation is proposed. This new method capitalizes on codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Event codetection is considered a surrogate for large event size, with more frequent codetected events (i.e., events detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model based on a fiber bundle model, considering signal attenuation and hypothetical arrays of sensors, confirms the early-warning potential of codetection principles. Results suggest that although statistical properties of attenuated signal amplitude could lead to misleading results, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at the laboratory scale. The applicability of such a simple and cheap early warning system is now being investigated at a larger scale (hillslope). First results of such a pilot field experiment are presented and analysed.
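The codetection surrogate can be illustrated by matching detection times on a reference sensor against the other sensors within a short coincidence window; events with at least one match are the codetected ones. The timestamps and window length are synthetic assumptions:

```python
import numpy as np

def codetected_on_reference(ref_times, other_sensor_times, window_s=0.1):
    """Return the detection times on a reference sensor that are matched by a
    detection on at least one other sensor within +/- window_s (a simple
    codetection surrogate for 'large' events)."""
    matched = []
    for t in ref_times:
        for other in other_sensor_times:
            k = np.searchsorted(other, t)
            neighbors = other[max(k - 1, 0):k + 1]
            if neighbors.size and np.min(np.abs(neighbors - t)) <= window_s:
                matched.append(t)
                break
    return np.asarray(matched)

rng = np.random.default_rng(7)
ref = np.sort(rng.uniform(0, 3600, 200))                  # synthetic detections over 1 hour
others = [np.sort(rng.uniform(0, 3600, 200)) for _ in range(2)]
print(codetected_on_reference(ref, others).size)          # number of codetected events
```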
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time and finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
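The arithmetic in the passage above, where each halving of the residual failure rate roughly doubles the expected test time needed to observe the next correctable cause, can be tabulated directly. The failure rates are placeholders:

```python
# Expected time to first observe a failure cause with rate lam is 1/lam (its MTBF).
# Fixing successively rarer causes therefore costs roughly double the test time
# for each halving of the rate, as in the text. Rates below are placeholders.
failure_rates_per_1000h = [2.0, 1.0, 0.5, 0.25]   # failures per 1000 hours, per cause

cumulative_test_hours = 0.0
for lam in failure_rates_per_1000h:
    mtbf_hours = 1000.0 / lam
    cumulative_test_hours += mtbf_hours
    residual_rate = sum(r for r in failure_rates_per_1000h if r < lam)
    print(f"fix cause at {lam}/1000h after ~{mtbf_hours:.0f} h "
          f"(cumulative ~{cumulative_test_hours:.0f} h), "
          f"residual rate {residual_rate}/1000h")
```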
2009-06-01
...to floating point, to multi-level logic. Self-aware computation can be distinguished from existing computational models which are... systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self-... servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores...
Failure Time Analysis of Office System Use.
ERIC Educational Resources Information Center
Cooper, Michael D.
1991-01-01
Develops mathematical models to characterize the probability of continued use of an integrated office automation system and tests these models on longitudinal data collected from 210 individuals using the IBM Professional Office System (PROFS) at the University of California at Berkeley. Analyses using survival functions and proportional hazard…
Modeling the biomechanical and injury response of human liver parenchyma under tensile loading.
Untaroiu, Costin D; Lu, Yuan-Chiao; Siripurapu, Sundeep K; Kemper, Andrew R
2015-01-01
The rapid advancement in computational power has made human finite element (FE) models one of the most efficient tools for assessing the risk of abdominal injuries in a crash event. In this study, specimen-specific FE models were employed to quantify material and failure properties of human liver parenchyma using a FE optimization approach. Uniaxial tensile tests were performed on 34 parenchyma coupon specimens prepared from two fresh human livers. Each specimen was tested to failure at one of four loading rates (0.01 s⁻¹, 0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹) to investigate the effects of rate dependency on the biomechanical and failure response of liver parenchyma. Each test was simulated by prescribing the end displacements of specimen-specific FE models based on the corresponding test data. The parameters of a first-order Ogden material model were identified for each specimen by a FE optimization approach while simulating the pre-tear loading region. The mean material model parameters were then determined for each loading rate from the characteristic averages of the stress-strain curves, and a stochastic optimization approach was utilized to determine the standard deviations of the material model parameters. A hyperelastic material model using a tabulated formulation for rate effects showed good predictions in terms of tensile material properties of human liver parenchyma. Furthermore, the tissue tearing was numerically simulated using a cohesive zone modeling (CZM) approach. A layer of cohesive elements was added at the failure location, and the CZM parameters were identified by fitting the post-tear force-time history recorded in each test. The results show that the proposed approach is able to capture both the biomechanical and failure response, and accurately model the overall force-deflection response of liver parenchyma over a large range of tensile loading rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
Accelerated failure time models provide a useful statistical framework for aging research.
Swindell, William R
2009-03-01
Survivorship experiments play a central role in aging research and are performed to evaluate whether interventions alter the rate of aging and increase lifespan. The accelerated failure time (AFT) model is seldom used to analyze survivorship data, but offers a potentially useful statistical approach that is based upon the survival curve rather than the hazard function. In this study, AFT models were used to analyze data from 16 survivorship experiments that evaluated the effects of one or more genetic manipulations on mouse lifespan. Most genetic manipulations were found to have a multiplicative effect on survivorship that is independent of age and well-characterized by the AFT model "deceleration factor". AFT model deceleration factors also provided a more intuitive measure of treatment effect than the hazard ratio, and were robust to departures from modeling assumptions. Age-dependent treatment effects, when present, were investigated using quantile regression modeling. These results provide an informative and quantitative summary of survivorship data associated with currently known long-lived mouse models. In addition, from the standpoint of aging research, these statistical approaches have appealing properties and provide valuable tools for the analysis of survivorship data.
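The multiplicative, age-independent effect summarized by the deceleration factor can be checked crudely without any model fitting: under a pure AFT effect, the ratio of treated to control lifespan at matched survival quantiles is roughly constant and equals the factor. The lifespans below are synthetic:

```python
import numpy as np

def quantile_ratio_deceleration(control, treated, qs=(0.25, 0.5, 0.75)):
    """Crude check of an AFT-type effect: the ratio of treated to control
    lifespan at matched survival quantiles. Under a pure AFT effect this ratio
    is roughly constant across quantiles and equals the deceleration factor."""
    c = np.quantile(control, qs)
    t = np.quantile(treated, qs)
    return t / c

rng = np.random.default_rng(8)
control = rng.weibull(3.0, 300) * 800.0          # synthetic control lifespans (days)
treated = 1.25 * rng.weibull(3.0, 300) * 800.0   # assumed 25% multiplicative life extension
print(quantile_ratio_deceleration(control, treated))   # ratios near 1.25 at each quantile
```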
Real-time estimation of ionospheric delay using GPS measurements
NASA Astrophysics Data System (ADS)
Lin, Lao-Sheng
1997-12-01
When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual- frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
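The dispersive first-order relation underlying dual-frequency TEC estimation can be written in a few lines. Differential code biases, multipath, and cycle slips, which the thesis's algorithms are designed to handle, are ignored here, and the example pseudorange difference is an illustrative value:

```python
F1 = 1575.42e6   # GPS L1 frequency, Hz
F2 = 1227.60e6   # GPS L2 frequency, Hz

def slant_tec_from_pseudoranges(p1_m, p2_m):
    """First-order slant TEC (electrons/m^2) from dual-frequency pseudoranges:
    P2 - P1 = 40.3 * TEC * (1/f2^2 - 1/f1^2). Differential code biases and
    multipath are ignored in this sketch."""
    return (p2_m - p1_m) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))

# Example: a 5 m L1/L2 pseudorange difference (illustrative value)
tec = slant_tec_from_pseudoranges(p1_m=22_000_000.0, p2_m=22_000_005.0)
print(f"{tec / 1e16:.1f} TECU")   # 1 TECU = 1e16 electrons/m^2
```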
Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad
2018-06-01
This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on system behavior. This real-time simulation considers the strength of the AC system to which the link is connected relative to the capacity of the DC link. The results obtained are validated by means of the RT-LAB platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter to recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Eck, M.; Mukunda, M.
The proliferation of space vehicle launch sites and the projected utilization of these facilities portends an increase in the number of on-pad, ascent, and on-orbit solid-rocket motor (SRM) casings and liquid-rocket tanks which will randomly fail or will fail from range destruct actions. Beyond the obvious safety implications, these failures may have serious resource implications for mission system and facility planners. SRM-casing failures and liquid-rocket tankage failures result in the generation of large, high velocity fragments which may be serious threats to the safety of launch support personnel if proper bunkers and exclusion areas are not provided. In addition, these fragments may be indirect threats to the general public's safety if they encounter hazardous spacecraft payloads which have not been designed to withstand shrapnel of this caliber. They may also become threats to other spacecraft if, by failing on-orbit, they add to the ever increasing space-junk collision cross-section. Most prior attempts to assess the velocity of fragments from failed SRM casings have simply assigned the available chamber impulse to available casing and fuel mass and solved the resulting momentum balance for velocity. This method may predict a fragment velocity which is high or low by a factor of two depending on the ratio of fuel to casing mass extant at the time of failure. Recognizing the limitations of existing methods, the authors devised an analytical approach which properly partitions the available impulse to each major system-mass component. This approach uses the Physics International developed PISCES code to couple the forces generated by an Eulerian modeled gas flow field to a Lagrangian modeled fuel and casing system. The details of a predictive analytical modeling process as well as the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed in this paper. Methods for applying similar modeling techniques to liquid-tankage-over-pressure failures are also discussed. These methods have been calibrated against observed SRM ascent failures and on-orbit tankage failures. Casing-quadrant sized fragments with velocities exceeding 100 m/s resulted from Titan 34D-SRM range destruct actions at 10 s mission elapsed time (MET). Casing-quadrant sized fragments with velocities of approx. 200 m/s resulted from STS-SRM range destruct actions at 110 s MET. Similar sized fragments for Ariane third stage and Delta second stage tankage were predicted to have maximum velocities of 260 and 480 m/s respectively. Good agreement was found between the predictions and observations for five specific events and it was concluded that the methods developed have good potential for use in predicting the fragmentation process of a number of generically similar casing and tankage systems.
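For context, the sketch below shows the simple momentum-balance estimate that the paper identifies as potentially off by a factor of two: all available chamber impulse is assigned to the combined casing and fuel mass. The numbers and function name are illustrative assumptions, not the paper's PISCES-based impulse partitioning.

```python
# Minimal sketch (assumption: illustrative numbers and names, not the paper's PISCES model).
# The simple momentum-balance estimate assigns all available chamber impulse to the
# casing-plus-fuel mass; the paper's point is that this can be off by roughly a factor
# of two depending on the fuel-to-casing mass ratio at the time of failure.

def naive_fragment_velocity(impulse_ns: float, casing_mass_kg: float, fuel_mass_kg: float) -> float:
    """Fragment velocity [m/s] from total impulse [N*s] and the masses it is assigned to."""
    return impulse_ns / (casing_mass_kg + fuel_mass_kg)

# Early in burn (much fuel remaining) vs. late in burn (little fuel remaining):
print(naive_fragment_velocity(5.0e7, 50_000.0, 450_000.0))  # ~100 m/s
print(naive_fragment_velocity(5.0e7, 50_000.0, 50_000.0))   # ~500 m/s
```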
Lee, James S; O'Dochartaigh, Domhnall; MacKenzie, Mark; Hudson, Darren; Couperthwaite, Stephanie; Villa-Roel, Cristina; Rowe, Brian H
2015-06-01
Non-invasive positive pressure ventilation (NIPPV) is used to treat severe acute respiratory distress. Prehospital NIPPV has been associated with a reduction in both in-hospital mortality and the need for invasive ventilation. The authors of this study examined factors associated with NIPPV failure and evaluated the impact of NIPPV on scene times in a critical care helicopter Emergency Medical Service (HEMS). Non-invasive positive pressure ventilation failure was defined as the need for airway intervention or alternative means of ventilatory support. A retrospective chart review of consecutive patients where NIPPV was completed in a critical care HEMS was conducted. Factors associated with NIPPV failure in univariate analyses and from published literature were included in a multivariable, logistic regression model. From a total of 44 patients, NIPPV failed in 14 (32%); a Glasgow Coma Scale (GCS)<15 at HEMS arrival was associated independently with NIPPV failure (adjusted odds ratio 13.9; 95% CI, 2.4-80.3; P=.003). Mean scene times were significantly longer in patients who failed NIPPV when compared with patients in whom NIPPV was successful (95 minutes vs 51 minutes; 39.4 minutes longer; 95% CI, 16.2-62.5; P=.001). Patients with a decreased level of consciousness were more likely to fail NIPPV. Furthermore, patients who failed NIPPV had significantly longer scene times. The benefits of NIPPV should be balanced against risks of long scene times by HEMS providers. Knowing risk factors of NIPPV failure could assist HEMS providers to make the safest decision for patients on whether to initiate NIPPV or proceed directly to endotracheal intubation prior to transport.
Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.
2013-01-01
In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
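A minimal sketch of the standard tag-life adjustment idea described above: observed "survival" to a detection site is the product of fish survival and the probability that the tag is still functioning, so dividing by an estimate of tag survival (the tag-life curve averaged over travel times) recovers fish survival. The Weibull tag-life curve, travel times, and function names are illustrative assumptions, not the study's mark-recapture model.

```python
import numpy as np

# Minimal sketch (assumption: illustrative, not the authors' mark-recapture code).
# Observed "survival" to a downstream site confounds fish survival with tag survival:
#   S_observed = S_fish * P(tag alive at detection)
# so S_fish = S_observed / P_tag, where P_tag averages the tag-life survival
# function over the travel-time distribution to that site.

def tag_survival_prob(tag_life_curve, travel_times_days):
    """P(tag alive at detection): mean of the tag-life survival function over travel times."""
    return float(np.mean([tag_life_curve(t) for t in travel_times_days]))

def adjusted_fish_survival(s_observed, tag_life_curve, travel_times_days):
    p_tag = tag_survival_prob(tag_life_curve, travel_times_days)
    return s_observed / p_tag

# Example: a Weibull tag-life curve and hypothetical travel times (days).
tag_life = lambda t: np.exp(-(t / 20.0) ** 3.0)
travel = [8.0, 10.0, 12.0, 15.0, 18.0]
print(adjusted_fish_survival(0.10, tag_life, travel))
```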
Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model
NASA Technical Reports Server (NTRS)
Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.
2006-01-01
An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.
NASA Astrophysics Data System (ADS)
Kosztowny, Cyrus Joseph Robert
Use of carbon fiber textiles in complex manufacturing methods creates new implementations of structural components by increasing performance, lowering manufacturing costs, and making composites overall more attractive across industry. Advantages of textile composites include high area output, ease of handling during the manufacturing process, lower production costs per material used resulting from automation, and streamlined post-manufacturing assembly, because significantly more complex geometries such as stiffened shell structures can be manufactured with fewer pieces. One significant challenge with using stiffened composite structures is stiffener separation under compression. Under axial compression loading, catastrophic structural failure due to stiffeners separating from the shell skin has frequently been observed. Characterizing stiffener separation behavior is often costly computationally and experimentally. The objectives of this research are to demonstrate that unitized stiffened textile composite panels can be manufactured to produce quality test specimens, that existing characterization techniques applied to state-of-the-art high-performance composites provide valuable information in modeling such structures, that the unitized structure concept successfully removes stiffener separation as a primary structural failure mode, and that modeling textile material failure modes is sufficient to accurately capture postbuckling and final failure responses of the stiffened structures. The stiffened panels in this study have taken the integrally stiffened concept to an extent such that the stiffeners and skin are manufactured at the same time, as one single piece, and from the same composite textile layers. Stiffener separation is shown to be removed as a primary structural failure mode for unitized stiffened composite textile panels loaded under axial compression well into the postbuckling regime. Instead of stiffener separation, a material damage and failure model effectively captures the local post-peak material response by incorporating a mesoscale model, using a multiscaling framework with a smeared crack element-based failure model in the macroscale stiffened panel. Material damage behavior is characterized by simple experimental tests and incorporated into the post-peak stiffness degradation law in the smeared crack implementation. Computational modeling results are in overall excellent agreement with the experimental responses.
NASA Astrophysics Data System (ADS)
Wolter, Andrea; Stead, Doug; Clague, John J.
2014-02-01
The 1963 Vajont Slide in northeast Italy is an important engineering and geological event. Although the landslide has been extensively studied, new insights can be derived by applying modern techniques such as remote sensing and numerical modelling. This paper presents the first digital terrestrial photogrammetric analyses of the failure scar, landslide deposits, and the area surrounding the failure, with a focus on the scar. We processed photogrammetric models to produce discontinuity stereonets, residual maps and profiles, and slope and aspect maps, all of which provide information on the failure scar morphology. Our analyses enabled the creation of a preliminary semi-quantitative morphologic classification of the Vajont failure scar based on the large-scale tectonic folds and step-paths that define it. The analyses and morphologic classification have implications for the kinematics, dynamics, and mechanism of the slide. Metre- and decametre-scale features affected the initiation, direction, and displacement rate of sliding. The most complexly folded and stepped areas occur close to the intersection of orthogonal synclinal features related to the Dinaric and Neoalpine deformation events. Our analyses also highlight, for the first time, the evolution of the Vajont failure scar from 1963 to the present.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predictions of bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the most simple linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
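As an illustration of what a simple linear damage accumulation rule looks like, the sketch below integrates damage as elapsed time divided by the constant-stress time to failure along a stress history, predicting failure when the sum reaches one. The power-law time-to-failure function and all parameter values are assumptions for illustration, not the paper's fitted epoxy model.

```python
import numpy as np

# Minimal sketch (assumption: the power-law time-to-failure function and its
# parameters are illustrative, not the paper's fitted epoxy model).
# Linear damage accumulation: failure is predicted when
#   D(t) = integral_0^t dt' / t_f(sigma(t')) reaches 1,
# where t_f(sigma) is the time to failure under a constant stress sigma.

def time_to_failure(sigma_mpa, a=1.0e20, n=8.0):
    """Constant-stress time to failure [s], here a power law fit to short-term data."""
    return a * sigma_mpa ** (-n)

def damage(stress_history_mpa, dt_s):
    """Accumulated linear damage for a sampled stress history."""
    return float(np.sum(dt_s / time_to_failure(np.asarray(stress_history_mpa))))

# Example: a slowly relaxing residual-stress history sampled once per day for 3 years.
days = np.arange(3 * 365)
stress = 20.0 * np.exp(-days / 800.0) + 10.0   # MPa, illustrative
print(damage(stress, dt_s=86400.0))            # failure predicted if this reaches 1.0
```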
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.
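A minimal sketch of the distinction the article exploits: in a Markov model the exit hazard from a state is constant, while in a semi-Markov model it may depend on time since entry into that state. The states, hazards, and Weibull sojourn used below are hypothetical, not the article's heart-failure model.

```python
import random

# Minimal sketch (assumption: states, rates and the Weibull sojourn are hypothetical,
# not the article's heart-failure model). In a Markov model the exit rate from a state
# is constant; in a semi-Markov model it may depend on time since entry into the state.

def weibull_hazard(t_in_state, scale=12.0, shape=1.6):
    """Exit hazard that rises with time since state entry (semi-Markov feature)."""
    return (shape / scale) * (t_in_state / scale) ** (shape - 1.0)

def simulate(semi_markov=True, horizon=120.0, dt=0.1, seed=1):
    random.seed(seed)
    t, t_entry, state = 0.0, 0.0, "stable"
    while t < horizon and state != "dead":
        if state == "stable":
            h = weibull_hazard(t - t_entry) if semi_markov else 1.0 / 12.0
            if random.random() < h * dt:          # transition stable -> hospitalized
                state, t_entry = "hospitalized", t
        elif state == "hospitalized":
            if random.random() < 0.05 * dt:       # constant discharge/death hazard
                state, t_entry = random.choice(["stable", "dead"]), t
        t += dt
    return t, state

print(simulate(semi_markov=True))
print(simulate(semi_markov=False))
```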
Postglacial rebound and fault instability in Fennoscandia
NASA Astrophysics Data System (ADS)
Wu, Patrick; Johnston, Paul; Lambeck, Kurt
1999-12-01
The best available rebound model is used to investigate the role that postglacial rebound plays in triggering seismicity in Fennoscandia. The salient features of the model include tectonic stress due to spreading at the North Atlantic Ridge, overburden pressure, gravitationally self-consistent ocean loading, and the realistic deglaciation history and compressible earth model which best fits the sea-level and ice data in Fennoscandia. The model predicts the spatio-temporal evolution of the state of stress, the magnitude of fault instability, the timing of the onset of this instability, and the mode of failure of lateglacial and postglacial seismicity. The consistency of the predictions with the observations suggests that postglacial rebound is probably the cause of the large postglacial thrust faults observed in Fennoscandia. The model also predicts a uniform stress field and instability in central Fennoscandia for the present, with thrust faulting as the predicted mode of failure. However, the lack of spatial correlation of the present seismicity with the region of uplift, and the existence of strike-slip and normal modes of current seismicity are inconsistent with this model. Further unmodelled factors such as the presence of high-angle faults in the central region of uplift along the Baltic coast would be required in order to explain the pattern of seismicity today in terms of postglacial rebound stress. The sensitivity of the model predictions to the effects of compressibility, tectonic stress, viscosity and ice model is also investigated. For sites outside the ice margin, it is found that the mode of failure is sensitive to the presence of tectonic stress and that the onset timing is also dependent on compressibility. For sites within the ice margin, the effect of Earth rheology is shown to be small. However, ice load history is shown to have larger effects on the onset time of earthquakes and the magnitude of fault instability.
Delayed Diagnosis and Treatment among Children with Autism Who Experience Adversity
ERIC Educational Resources Information Center
Berg, Kristin L.; Acharya, Kruti; Shiu, Cheng-Shi; Msall, Michael E.
2018-01-01
The effects of family adverse childhood experiences (ACEs) on timing of ASD diagnoses and receipt of therapies were measured using data from the 2011-2012 National Survey of Children's Health. Parametric accelerated failure time models estimated the relationship between family ACEs and both timing of ASD diagnosis and receipt of therapies among US…
Canonical failure modes of real-time control systems: insights from cognitive theory
NASA Astrophysics Data System (ADS)
Wallace, Rodrick
2016-04-01
Newly developed necessary conditions statistical models from cognitive theory are applied to generalisation of the data-rate theorem for real-time control systems. Rather than graceful degradation under stress, automatons and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures', in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
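A minimal sketch of the surrogate idea only: fit a Gaussian process to a few evaluations of the limit-state function, then estimate the failure probability by cheap Monte Carlo on the surrogate. The toy limit-state function, the input distribution, and the use of scikit-learn are assumptions of this illustration; the paper's Bayesian experimental design for choosing batches of new sampling points is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Minimal sketch (assumption: a toy limit-state function and plain Monte Carlo on the
# surrogate; the paper's Bayesian experimental design step is not reproduced here).

def limit_state(x):
    """Toy 'expensive' model: failure where g(x) < 0."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(40, 2))        # pretend these are costly runs
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Failure probability under a standard-normal input distribution, via the surrogate.
X_mc = rng.standard_normal(size=(200_000, 2))
g_hat = gp.predict(X_mc)
print("estimated P(failure) =", float(np.mean(g_hat < 0.0)))
```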
Efficient Meshfree Large Deformation Simulation of Rainfall Induced Soil Slope Failure
NASA Astrophysics Data System (ADS)
Wang, Dongdong; Li, Ling
2010-05-01
An efficient Lagrangian Galerkin meshfree framework is presented for large deformation simulation of rainfall-induced soil slope failure. Detailed coupled soil-rainfall seepage equations are given for the proposed formulation. This nonlinear meshfree formulation features the Lagrangian stabilized conforming nodal integration method, in which the low-cost nature of the nodal integration approach is retained while numerical stability is maintained. The initiation and evolution of progressive failure in the soil slope is modeled by the coupled constitutive equations of isotropic damage and Drucker-Prager pressure-dependent plasticity. The gradient smoothing in the stabilized conforming integration also serves as a non-local regularization of material instability, and consequently the present method is capable of effectively capturing shear band failure. The efficacy of the present method is demonstrated by simulating the rainfall-induced failure of two typical soil slopes.
Molecular Dynamics Modeling of PPTA Crystals in Aramid Fibers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercer, Brian Scott
2016-05-19
In this work, molecular dynamics modeling is used to study the mechanical properties of PPTA crystallites, which are the fundamental microstructural building blocks of polymer aramid fibers such as Kevlar. Particular focus is given to constant strain rate axial loading simulations of PPTA crystallites, which is motivated by the rate-dependent mechanical properties observed in some experiments with aramid fibers. In order to accommodate the covalent bond rupture that occurs in loading a crystallite to failure, the reactive bond order force field ReaxFF is employed to conduct the simulations. Two major topics are addressed: The first is the general behavior of PPTA crystallites under strain rate loading. Constant strain rate loading simulations of crystalline PPTA reveal that the crystal failure strain increases with increasing strain rate, while the modulus is not affected by the strain rate. Increasing temperature lowers both the modulus and the failure strain. The simulations also identify the C-N bond connecting the aromatic rings as the weakest primary bond along the backbone of the PPTA chain. The effect of chain-end defects on PPTA micromechanics is explored, and it is found that the presence of a chain-end defect transfers load to the adjacent chains in the hydrogen-bonded sheet in which the defect resides, but does not influence the behavior of any other chains in the crystal. Chain-end defects are found to lower the strength of the crystal when clustered together, inducing bond failure via stress concentrations arising from the load transfer to bonds in adjacent chains near the defect site. The second topic addressed is the nature of primary and secondary bond failure in crystalline PPTA. Failure of both types of bonds is found to be stochastic in nature and driven by thermal fluctuations of the bonds within the crystal. A model is proposed which uses reliability theory to model bonds under constant strain rate loading as components with time-dependent failure rate functions. The model is shown to work well for predicting the onset of primary backbone bond failure, as well as the onset of secondary bond failure via chain slippage for the case of isolated non-interacting chain-end defects.
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica
2015-01-01
Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual-automated arm X expected-unexpected failure X monitoring-failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty, and salience that were self-rated by participants, to predict allocation. An un-weighted model based on attributes of difficulty, interest, and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
Dynamic Modeling of ALS Systems
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.
Determinants of performance failure in the nursing home industry.
Zinn, Jacqueline; Mor, Vincent; Feng, Zhanlian; Intrator, Orna
2009-03-01
This study investigates the determinants of performance failure in U.S. nursing homes. The sample consisted of 91,168 surveys from 10,901 facilities included in the Online Survey Certification and Reporting system from 1996 to 2005. Failed performance was defined as termination from the Medicare and Medicaid programs. Determinants of performance failure were identified as core structural change (ownership change), peripheral change (related diversification), prior financial and quality of care performance, size and environmental shock (Medicaid case mix reimbursement and prospective payment system introduction). Additional control variables that could contribute to the likelihood of performance failure were included in a cross-sectional time series generalized estimating equation logistic regression model. Our results support the contention, derived from structural inertia theory, that where in an organization's structure change occurs determines whether it is adaptive or disruptive. In addition, while poor prior financial and quality performance and the introduction of case mix reimbursement increases the risk of failure, larger size is protective, decreasing the likelihood of performance failure.
The unifying role of dissipative action in the dynamic failure of solids
Grady, Dennis
2015-05-19
Dissipative action, the product of dissipation energy and transport time, is fundamental to the dynamic failure of solids. Invariance of the dissipative action underlies the fourth-power nature of structured shock waves observed in selected solid metals and compounds. Dynamic failure through shock compaction, tensile spall and adiabatic shear is also governed by a constancy of the dissipative action. This commonality underlying the various modes of dynamic failure is described and leads to deeper insights into failure of solids in the intense shock wave event. These insights are in turn leading to a better understanding of the shock deformation processes underlying the fourth-power law. Experimental results and material models encompassing the dynamic failure of solids are explored for the purpose of demonstrating commonalities leading to invariance of the dissipative action. As a result, calculations are extended to aluminum and uranium metals with the intent of predicting micro-scale energetics and spatial scales in the structured shock wave.
Agiasotelli, Danai; Alexopoulou, Alexandra; Vasilieva, Larisa; Kalpakou, Georgia; Papadaki, Sotiria; Dourakis, Spyros P
2016-05-01
Acute-on-chronic liver failure (ACLF) is defined as an acute deterioration of liver disease with high mortality in patients with cirrhosis. The early mortality in ACLF is associated with organ failure and high leukocyte count. The time needed to reverse this condition and the factors affecting mortality after the early 30-day-period were evaluated. One hundred and ninety-seven consecutive patients with cirrhosis were included. Patients were prospectively followed up for 180 days. ACLF was diagnosed in 54.8% of the patients. Infection was the most common precipitating event in patients with ACLF. On multivariate analysis, only the neutrophil/leukocyte ratio and Chronic Liver Failure Consortium Organ Failure (CLIF-C OF) score were associated with mortality. Hazard ratios for mortality of patients with ACLF compared with those without at different time end-points post-enrollment revealed that the relative risk of death in the ACLF group was 8.54 during the first 30-day period and declined to 1.94 during the second period of observation. The time varying effect of neutrophil/leukocyte ratio and CLIF-C score was negative (1% and 18% decline in the hazard ratio per month) while that of Model for End-Stage Liver Disease (MELD) was positive (3% increase in the hazard ratio per month). The condition of ACLF was reversible in patients who survived. During the 30-180-day period following the acute event, the probability of death in ACLF became gradually similar to the non-ACLF group. The impact of inflammatory response and organ failure on survival is powerful during the first 30-day period and weakens thereafter while that of MELD increases. © 2015 The Japan Society of Hepatology.
Tam-Tham, Helen; Quinn, Robert R; Weaver, Robert G; Zhang, Jianguo; Ravani, Pietro; Liu, Ping; Thomas, Chandra; King-Shier, Kathryn; Fruetel, Karen; James, Matt T; Manns, Braden J; Tonelli, Marcello; Murtagh, Fliss E M; Hemmelgarn, Brenda R
2018-05-23
Comparisons of survival between dialysis and nondialysis care for older adults with kidney failure have been limited to those managed by nephrologists, and are vulnerable to lead-time and immortal-time biases. We therefore compared time to all-cause mortality among older adults with kidney failure treated vs. not treated with chronic dialysis. Our retrospective cohort study used linked administrative and laboratory data to identify adults aged 65 years or more in Alberta, Canada, with kidney failure (2002-2012), defined by two or more consecutive outpatient estimated glomerular filtration rates of less than 10 mL/min/1.73 m², spanning 90 or more days. We used marginal structural Cox models to assess the association between receipt of dialysis and all-cause mortality, allowing control for both time-varying and baseline confounders. Overall, 838 patients met inclusion criteria (mean age 79.1 years; 48.6% male; mean estimated glomerular filtration rate 7.8 mL/min/1.73 m²). Dialysis treatment (vs. no dialysis) was associated with a significantly lower risk of death for the first three years of follow-up (hazard ratio 0.59 [95% confidence interval 0.46-0.77]), but not thereafter (1.22 [0.69-2.17]). However, dialysis was associated with a significantly higher risk of hospitalization (1.40 [1.16-1.69]). Thus, among older adults with kidney failure, treatment with dialysis was associated with longer survival up to three years after reaching kidney failure, though with a higher risk of hospital admissions. These findings may assist shared decision-making about treatment of kidney failure. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
Electrocardiogram classification using delay differential equations
NASA Astrophysics Data System (ADS)
Lainscsek, Claudia; Sejnowski, Terrence J.
2013-06-01
Time series analysis with nonlinear delay differential equations (DDEs) reveals nonlinear as well as spectral properties of the underlying dynamical system. Here, global DDE models were used to analyze 5 min data segments of electrocardiographic (ECG) recordings in order to capture distinguishing features for different heart conditions such as normal heart beat, congestive heart failure, and atrial fibrillation. The number of terms and delays in the model as well as the order of nonlinearity of the model have to be selected that are the most discriminative. The DDE model form that best separates the three classes of data was chosen by exhaustive search up to third order polynomials. Such an approach can provide deep insight into the nature of the data since linear terms of a DDE correspond to the main time-scales in the signal and the nonlinear terms in the DDE are related to nonlinear couplings between the harmonic signal parts. The DDEs were able to detect atrial fibrillation with an accuracy of 72%, congestive heart failure with an accuracy of 88%, and normal heart beat with an accuracy of 97% from 5 min of ECG, a much shorter time interval than required to achieve comparable performance with other methods.
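A minimal sketch of fitting a low-order polynomial DDE by least squares on delayed copies of a signal, of the general form dx/dt = a1*x(t-tau1) + a2*x(t-tau2) + a3*x(t-tau1)*x(t-tau2); the fitted coefficients can then serve as classification features. The delays, the three-term model form, and the synthetic signal are illustrative assumptions, not the model form selected by the exhaustive search in the paper.

```python
import numpy as np

# Minimal sketch (assumption: the delays, the three-term model form and the synthetic
# signal are illustrative; the paper selects the best form by exhaustive search).
# Fit   dx/dt ~ a1*x(t-tau1) + a2*x(t-tau2) + a3*x(t-tau1)*x(t-tau2)
# by least squares on delayed copies of a uniformly sampled signal.

def fit_dde(x, dt, tau1, tau2):
    d1, d2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    lag = max(d1, d2)
    x1, x2 = x[lag - d1:len(x) - d1], x[lag - d2:len(x) - d2]
    dxdt = np.gradient(x, dt)[lag:]
    A = np.column_stack([x1, x2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
    return coeffs  # (a1, a2, a3): features usable for classification

# Synthetic test signal standing in for a 5 min ECG segment.
t = np.arange(0.0, 10.0, 0.004)
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.6 * t)
print(fit_dde(x, dt=0.004, tau1=0.02, tau2=0.06))
```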
NASA Astrophysics Data System (ADS)
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics focussed on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on survival time. However, the assumption of constant hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, the accelerated failure time (AFT) models do not assume constant hazards in the survival data as in the PH model. The AFT models, moreover, can be used as an alternative to the PH model if the constant hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. The analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate model among the considered models. Results of the best fitted model (log-normal AFT model) showed that covariates such as women's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
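A minimal sketch of the model-comparison step using the Python lifelines package (an assumption of this illustration; the paper does not state its software). The file name 'fbi.csv' and its column names are placeholders; the exponential AFT model is the Weibull AFT with its shape parameter fixed to 1, so only Weibull and log-normal fits are shown.

```python
import pandas as pd
from lifelines import WeibullAFTFitter, LogNormalAFTFitter

# Minimal sketch (assumption: 'fbi.csv', its column names and the covariates are
# placeholders; the exponential AFT is the Weibull AFT with shape fixed to 1).
# Compare parametric AFT models by AIC, as in the paper's model selection step.

df = pd.read_csv("fbi.csv")   # columns: duration, event, education, wealth_index, ...

models = {
    "weibull": WeibullAFTFitter(),
    "log-normal": LogNormalAFTFitter(),
}
for name, aft in models.items():
    aft.fit(df, duration_col="duration", event_col="event")
    print(name, "AIC =", aft.AIC_)

best = min(models, key=lambda k: models[k].AIC_)
print("best-fitting AFT model by AIC:", best)
# The fitted coefficients (aft.summary) act multiplicatively on survival time,
# so exp(coef) > 1 means the covariate lengthens the first birth interval.
```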
Numerical investigations of rib fracture failure models in different dynamic loading conditions.
Wang, Fang; Yang, Jikuang; Miller, Karol; Li, Guibing; Joldes, Grand R; Doyle, Barry; Wittek, Adam
2016-01-01
Rib fracture is one of the most common thoracic injuries in vehicle traffic accidents that can result in fatalities associated with seriously injured internal organs. A failure model is critical when modelling rib fracture to predict such injuries. Different rib failure models have been proposed in prediction of thorax injuries. However, the biofidelity of the fracture failure models when varying the loading conditions and the effects of a rib fracture failure model on prediction of thoracic injuries have been studied only to a limited extent. Therefore, this study aimed to investigate the effects of three rib failure models on prediction of thoracic injuries using a previously validated finite element model of the human thorax. The performance and biofidelity of each rib failure model were first evaluated by modelling rib responses to different loading conditions in two experimental configurations: (1) the three-point bending on the specimen taken from rib and (2) the anterior-posterior dynamic loading to an entire bony part of the rib. Furthermore, the simulation of the rib failure behaviour in the frontal impact to an entire thorax was conducted at varying velocities and the effects of the failure models were analysed with respect to the severity of rib cage damages. Simulation results demonstrated that the responses of the thorax model are similar to the general trends of the rib fracture responses reported in the experimental literature. However, they also indicated that the accuracy of the rib fracture prediction using a given failure model varies for different loading conditions.
Lloyd, Tom; Buck, Harleah; Foy, Andrew; Black, Sara; Pinter, Antony; Pogash, Rosanne; Eismann, Bobby; Balaban, Eric; Chan, John; Kunselman, Allen; Smyth, Joshua; Boehmer, John
2017-05-01
The Penn State Heart Assistant, a web-based, tablet computer-accessed, secure application was developed to conduct a proof of concept test, targeting patient self-care activities of heart failure patients including daily medication adherence, weight monitoring, and aerobic activity. Patients (n = 12) used the tablet computer-accessed program for 30 days-recording their information and viewing a short educational video. Linear random coefficient models assessed the relationship between weight and time and exercise and time. Good medication adherence (66% reporting taking 75% of prescribed medications) was reported. Group compliance over 30 days for weight and exercise was 84 percent. No persistent weight gain over 30 days, and some indication of weight loss (slope of weight vs time was negative (-0.17; p value = 0.002)), as well as increased exercise (slope of exercise vs time was positive (0.08; p value = 0.04)) was observed. This study suggests that mobile technology is feasible, acceptable, and has potential for cost-effective opportunities to manage heart failure patients safely at home.
Chronic Heart Failure Follow-up Management Based on Agent Technology.
Mohammadzadeh, Niloofar; Safdari, Reza
2015-10-01
Monitoring heart failure patients through continuous assessment of signs and symptoms by information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. The proposed multi-agent system has the ability to learn and thus improve itself. Implementation of this model with more and various interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making.
NASA Technical Reports Server (NTRS)
Eck, Marshall; Mukunda, Meera
1988-01-01
A calculational method is described which provides a powerful tool for predicting solid rocket motor (SRM) casing and liquid rocket tankage fragmentation response. The approach properly partitions the available impulse to each major system-mass component. It uses the Pisces code developed by Physics International to couple the forces generated by an Eulerian-modeled gas flow field to a Lagrangian-modeled fuel and casing system. The details of the predictive analytical modeling process and the development of normalized relations for momentum partition as a function of SRM burn time and initial geometry are discussed. Methods for applying similar modeling techniques to liquid-tankage-overpressure failures are also discussed. Good agreement between predictions and observations are obtained for five specific events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godwin, Aaron
The scope will be limited to analyzing the effect of the EFC within the system and how one improperly installed coupling affects the rest of the HPFL system. The discussion will include normal operations, impaired flow, and service interruptions. Normal operations are defined as two-way flow to buildings. Impaired operations are defined as a building that only has one-way flow being provided to the building. Service interruptions will be when a building does not have water available to it. The project will look at the following aspects of the reliability of the HPFL system: mean time to failure (MTTF) of EFCs, mean time between failures (MTBF), series system models, and parallel system models. These calculations will then be used to discuss the reliability of the system when one of the couplings fails. Compare the reliability of two-way feeds versus one-way feeds.
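A minimal sketch of the reliability bookkeeping described above: exponential failures give MTTF = 1/lambda, a series arrangement fails if any element fails, and a redundant two-way feed is a parallel arrangement of series paths. The failure rate and the two-coupling layout are placeholders, not the HPFL system data.

```python
import math

# Minimal sketch (assumption: failure rates and the two-coupling layout are
# placeholders, not the HPFL data). For exponentially distributed failures,
# MTTF = 1/lambda; a series system works only if every element works, and a
# parallel system (redundant two-way feed) works if at least one path works.

def reliability_exponential(failure_rate_per_yr, t_yr):
    return math.exp(-failure_rate_per_yr * t_yr)

def series(reliabilities):
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(reliabilities):
    out = 1.0
    for r in reliabilities:
        out *= (1.0 - r)
    return 1.0 - out

lam = 0.02                         # EFC failures per year (placeholder)
print("MTTF [yr]:", 1.0 / lam)
r = reliability_exponential(lam, t_yr=10.0)
print("one-way feed (2 couplings in series):", series([r, r]))
print("two-way feed (2 independent paths):  ", parallel([series([r, r]), series([r, r])]))
```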
González, L A; McGwin, G; Durán, S; Pons-Estel, G J; Apte, M; Vilá, L M; Reveille, J D; Alarcón, G S
2008-08-01
To examine the predictors of time to premature gonadal failure (PGF) in patients with systemic lupus erythematosus from LUMINA, a multiethnic US cohort. PGF was defined according to the SLICC Damage Index (SDI). Factors associated with time to PGF occurrence were examined by univariable and multivariable Cox proportional hazards regression analyses: three models according to cyclophosphamide use, at T0 (model 1), over time (model 2) and the total number of intravenous pulses (model 3). Thirty-seven of 316 women (11.7%) developed PGF (19 Texan-Hispanics, 14 African-Americans, four Caucasians and no Puerto Rican-Hispanics). By multivariable analyses, older age at T0 (hazards ratio (HR) = 1.10-1.14; 95% CI 1.02-1.05 to 1.19-1.23) and disease activity (Systemic Lupus Activity Measure-Revised) in all models (HR = 1.22-1.24; 95% CI 1.10-1.12 to 1.35-1.37), Texan-Hispanic ethnicity in models 2 and 3 (HR = 4.06-5.07; 95% CI 1.03-1.25 to 15.94-20.47) and cyclophosphamide use in models 1 and 3 (1-6 pulses) (HR = 4.01-4.65; 95% CI 1.55-1.68 to 9.56-13.94) were predictors of a shorter time to PGF. Disease activity and Texan-Hispanic ethnicity emerged as predictors of a shorter time to PGF while the associations with cyclophosphamide use and older age were confirmed. Furthermore, cyclophosphamide induction therapy emerged as an important determinant of PGF.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
NASA Astrophysics Data System (ADS)
Peng, M.; Zhang, L. M.
2013-02-01
Tangjiashan landslide dam, which was triggered by the Ms = 8.0 Wenchuan earthquake in 2008 in China, threatened 1.2 million people downstream of the dam. All people in Beichuan Town 3.5 km downstream of the dam and 197 thousand people in Mianyang City 85 km downstream of the dam were evacuated 10 days before the breaching of the dam. Making such an important decision under uncertainty was difficult. This paper applied a dynamic decision-making framework for dam-break emergency management (DYDEM) to support rational decision-making in the emergency management of the Tangjiashan landslide dam. Three stages are identified with different levels of hydrological, geological and socio-economic information along the timeline of the landslide dam failure event. The probability of dam failure is taken as a time series. The dam breaching parameters are predicted with a set of empirical models in stage 1, when no soil property information is known, and a physical model in stages 2 and 3, when knowledge of soil properties has been obtained. The flood routing downstream of the dam in these three stages is analyzed to evaluate the population at risk (PAR). The flood consequences, including evacuation costs, flood damage and monetized loss of life, are evaluated as functions of warning time using a human risk analysis model based on Bayesian networks. Finally, dynamic decision analysis is conducted to find the optimal time to evacuate the population at risk with minimum total loss in each of these three stages.
Common Cause Failure Modeling: Aerospace Versus Nuclear
NASA Technical Reports Server (NTRS)
Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer
2010-01-01
Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from whence this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.
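For orientation, the sketch below uses the simple beta-factor idea, a simpler relative of the NUREG/CR-5485 parametric models discussed in the paper: a component's failure probability is split into an independent part and a common-cause part that takes out all redundant trains together. The numbers are placeholders.

```python
# Minimal sketch (assumption: placeholder numbers; the beta-factor model shown here is
# a simpler relative of the NUREG/CR-5485 parametric models discussed in the paper).
# A component's total failure probability Q_t is split into an independent part and a
# common-cause part that fails all redundant trains together:
#   Q_CCF = beta * Q_t,   Q_ind = (1 - beta) * Q_t

def two_train_system_failure(q_total: float, beta: float) -> float:
    """P(both redundant trains fail) under the beta-factor model."""
    q_ind = (1.0 - beta) * q_total
    q_ccf = beta * q_total
    return q_ind ** 2 + q_ccf          # both fail independently, or one shared cause

q_t = 1.0e-3
print("beta = 0.0 (independent only):", two_train_system_failure(q_t, 0.0))
print("beta = 0.1 (illustrative):    ", two_train_system_failure(q_t, 0.1))
# Even a modest beta dominates the redundant-system failure probability, which is
# why the choice of generic factors matters when modeling aerospace systems.
```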
Early laparotomy wound failure as the mechanism for incisional hernia formation
Xing, Liyu; Culbertson, Eric J.; Wen, Yuan; Franz, Michael G.
2015-01-01
Background Incisional hernia is the most common complication of abdominal surgery leading to reoperation. In the United States, 200,000 incisional hernia repairs are performed annually, often with significant morbidity. Obesity is increasing the risk of laparotomy wound failure. Methods We used a validated animal model of incisional hernia formation. We intentionally induced laparotomy wound failure in otherwise normal adult, male Sprague-Dawley rats. Radio-opaque, metal surgical clips served as markers for the use of x-ray images to follow the progress of laparotomy wound failure. We confirmed radiographic findings of the time course for mechanical laparotomy wound failure by necropsy. Results Noninvasive radiographic imaging predicts early laparotomy wound failure and incisional hernia formation. We confirmed both transverse and craniocaudad migration of radio-opaque markers at necropsy after 28 d that was uniformly associated with the clinical development of incisional hernias. Conclusions Early laparotomy wound failure is a primary mechanism for incisional hernia formation. A noninvasive radiographic method for studying laparotomy wound healing may help design clinical trials to prevent and treat this common general surgical complication. PMID:23036516
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
Forecasting the brittle failure of heterogeneous, porous geomaterials
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian; Heap, Michael; Main, Ian; Lavallée, Yan; Dingwell, Donald
2017-04-01
Heterogeneity develops in magmas during ascent and is dominated by the development of crystal and, importantly, bubble populations or pore-network clusters which grow, interact, localize, coalesce, outgas and resorb. Pore-scale heterogeneity is also ubiquitous in sedimentary basin fill during diagenesis. As a first step, we construct numerical simulations in 3D in which randomly generated heterogeneous and polydisperse spheres are placed in volumes and permitted to overlap with one another, designed to represent the random growth and interaction of bubbles in a liquid volume. We use these simulated geometries to show that statistical predictions of the inter-bubble lengthscales and evolving bubble surface area or cluster densities can be made based on fundamental percolation theory. As a second step, we take a range of well constrained random heterogeneous rock samples, including sandstones, andesites, synthetic partially sintered glass bead samples, and intact glass samples, and subject them to a variety of stress loading conditions at a range of temperatures until failure. We record in real time the evolution of the number of acoustic events that precede failure and show that in all scenarios the acoustic event rate accelerates toward failure, consistent with previous findings. Applying tools designed to forecast the failure time based on these precursory signals, we constrain the absolute error on the forecast time. We find that for all sample types, the error associated with an accurate forecast of failure scales non-linearly with the lengthscale between the pore clusters in the material. Moreover, using a simple micromechanical model for the deformation of porous elastic bodies, we show that the ratio of the equilibrium sub-critical crack length emanating from the pore clusters to the inter-pore lengthscale provides a scaling for the error on forecast accuracy. Thus for the first time we provide a potential quantitative correction for forecasting the failure of porous brittle solids that make up the Earth's crust.
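A minimal sketch of one common precursor-based forecasting tool, the inverse-rate version of the failure forecast method, offered here as an assumption about the kind of "tools designed to forecast the failure time" mentioned above rather than the authors' exact procedure: if the event rate accelerates hyperbolically, its inverse decays roughly linearly and extrapolates to zero at the failure time.

```python
import numpy as np

# Minimal sketch (assumption: synthetic data and the classic inverse-rate version of the
# failure forecast method; the authors' exact forecasting tools are not reproduced).
# If the precursory event rate accelerates hyperbolically toward failure, the inverse
# rate 1/R decays roughly linearly in time and its zero-crossing estimates t_failure.

def forecast_failure_time(times, rates):
    inv = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(np.asarray(times, dtype=float), inv, 1)
    return -intercept / slope            # time at which 1/R extrapolates to zero

# Synthetic acoustic-emission rates accelerating toward t_f = 100 s.
t_f_true = 100.0
t = np.arange(10.0, 90.0, 5.0)
rates = 50.0 / (t_f_true - t)            # events per second, hyperbolic acceleration
print("forecast t_f =", forecast_failure_time(t, rates), "s (true 100 s)")
```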
Application of the health belief model in promotion of self-care in heart failure patients.
Baghianimoghadam, Mohammad Hosein; Shogafard, Golamreza; Sanati, Hamid Reza; Baghianimoghadam, Behnam; Mazloomy, Seyed Saeed; Askarshahi, Mohsen
2013-01-01
Heart failure (HF) is a condition in which a problem with the structure or function of the heart impairs its ability to supply sufficient blood flow to meet the body's needs. In developing countries, around 2% of adults suffer from heart failure, but in people over the age of 65, this rate increases to 6-10%. In Iran, around 3.3% of adults suffer from heart failure. The Health Belief Model (HBM) is one of the most widely used theoretical frameworks in public health. This was a cohort experimental study in which education, as the intervention factor, was presented to the case group. 180 heart failure patients were randomly selected from patients who were referred to the Shahid Rajaee center of Heart Research in Tehran and allocated to two groups (90 patients in the case group and 90 in the control group). The HBM was used to compare health behaviors. The questionnaire included 69 questions. All data were collected before and 2 months after the intervention. About 38% of participants did not know what heart failure is, and 43% did not know that using salt is not suitable for them. More than 40% of participants had never weighed themselves. There were significant differences between the mean scores of the variables (perceived susceptibility, perceived threat, knowledge, perceived benefits, perceived severity, self-efficacy, perceived barriers, cues to action, self-behavior) in the case and control groups after the intervention, differences that were not significant before it. Based on our study and also many other studies, the HBM has the potential to be used as a tool to establish educational programs for individuals and communities. Therefore, this model can be used effectively to prevent different diseases and their complications, including heart failure. © 2013 Tehran University of Medical Sciences. All rights reserved.
2016-09-01
Failure; MTBCF: Mean Time Between Critical Failure; MIRV: Multiple Independently-targetable Reentry Vehicle; MK6LE: MK6 Guidance System Life Extension ... programs were the MK54 Lightweight Torpedo program, a Raytheon Radar program, and the Life Extension of the MK6 Guidance System (MK6LE) of the ... activities throughout the later life-cycle phases. MBSE allowed the programs to manage the evolution of simulation capabilities, as well as to assess the
Unifying role of dissipative action in the dynamic failure of solids
NASA Astrophysics Data System (ADS)
Grady, Dennis E.
2015-04-01
A fourth-power law underlying the steady shock-wave structure and solid viscosity of condensed material has been observed for a wide range of metals and non-metals. The fourth-power law relates the steady-wave Hugoniot pressure to the fourth power of the strain rate during passage of the material through the structured shock wave. Preceding the fourth-power law was the observation in a shock transition that the product of the shock dissipation energy and the shock transition time is a constant independent of the shock pressure amplitude. Invariance of this energy-time product implies the fourth-power law. This property of the shock transition in solids was initially identified as a shock invariant. More recently, it has been referred to as the dissipative action, although no relationship to the accepted definitions of action in mechanics has been demonstrated. This same invariant property has application to a wider range of transient failure phenomena in solids. Invariance of this dissipation action has application to spall fracture, failure through adiabatic shear, shock compaction of granular media, and perhaps others. Through models of the failure processes, a clearer picture of the physics underlying the observed invariance is emerging. These insights in turn are leading to a better understanding of the shock deformation processes underlying the fourth-power law. Experimental results and material models encompassing the dynamic failure of solids are explored for the purpose of demonstrating commonalities leading to invariance of the dissipation action. Calculations are extended to aluminum and uranium metals with the intent of predicting micro-scale dynamics and spatial structure in the steady shock wave.
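A compact statement of the two relations described above may help; this is a sketch in generic notation chosen here (not symbols taken from the paper), with Hugoniot pressure P, strain rate, shock dissipation energy, and transition time:

```latex
% Fourth-power law and the invariant "dissipative action" (notation illustrative)
\begin{align}
  P &\propto \dot{\varepsilon}^{\,4}
     && \text{(steady-wave Hugoniot pressure vs. strain rate)} \\
  \mathcal{A} &= E_d \, t_s = \text{const.}
     && \text{(dissipation energy} \times \text{transition time, independent of shock amplitude)}
\end{align}
```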
Generic Sensor Failure Modeling for Cooperative Systems.
Jäger, Georg; Zug, Sebastian; Casimiro, António
2018-03-20
The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
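As an illustration of the general idea of propagating parameter uncertainty through an analytical failure model to obtain a failure-probability estimate, here is a minimal Monte Carlo sketch; the Paris-law crack-growth model and every parameter value are placeholders, not the PFA models or data themselves.

```python
import numpy as np

# Toy Paris-law fatigue crack growth: da/dN = C * (dK)^m, dK = S * sqrt(pi * a),
# with dK in MPa*sqrt(m) and C in (m/cycle)/(MPa*sqrt(m))^m. Uncertain inputs are
# sampled; "failure" means the crack reaches a critical size within the service life.
rng = np.random.default_rng(1)
n_samples = 50_000
service_cycles = 2.0e5
a0 = 1.0e-3                                   # initial crack size [m]
a_crit = 2.0e-2                               # critical crack size [m]

C = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=n_samples)  # Paris coefficient
m = rng.normal(3.0, 0.1, size=n_samples)                          # Paris exponent
S = rng.normal(120.0, 10.0, size=n_samples)                       # stress range [MPa]

# Closed-form integration of the Paris law for constant-amplitude loading (m != 2).
exponent = 1.0 - m / 2.0
N_f = (a_crit**exponent - a0**exponent) / (C * (S * np.sqrt(np.pi))**m * exponent)

p_fail = np.mean(N_f < service_cycles)
print(f"estimated probability of failure within service life: {p_fail:.4f}")
```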
Implications of Secondary Aftershocks for Failure Processes
NASA Astrophysics Data System (ADS)
Gross, S. J.
2001-12-01
When a seismic sequence with more than one mainshock or an unusually large aftershock occurs, there is a compound aftershock sequence. The secondary aftershocks need not have exactly the same decay as the primary sequence, with the differences having implications for the failure process. When the stress step from the secondary mainshock is positive but not large enough to cause immediate failure of all the remaining primary aftershocks, failure processes which involve accelerating slip will produce secondary aftershocks that decay more rapidly than primary aftershocks. This is because the primary aftershocks are an accelerated version of the background seismicity, and secondary aftershocks are an accelerated version of the primary aftershocks. Real stress perturbations may be negative, and heterogeneities in mainshock stress fields mean that the real world situation is quite complicated. I will first describe and verify my picture of secondary aftershock decay with reference to a simple numerical model of slipping faults which obeys rate and state dependent friction and lacks stress heterogeneity. With such a model, it is possible to generate secondary aftershock sequences with perturbed decay patterns, quantify those patterns, and develop an analysis technique capable of correcting for the effect in real data. The secondary aftershocks are defined in terms of frequency-linearized time $s(T)$, which is equal to the number of primary aftershocks expected by a time $T$, $s(T) \equiv \int_{0}^{T} n(t)\, dt$, where the start time $t=0$ is the time of the primary aftershock, and the primary aftershock decay function $n(t)$ is extrapolated forward to the times of the secondary aftershocks. In the absence of secondary sequences the function $s(T)$ re-scales the time so that approximately one event occurs per new time unit; the aftershock sequence is gone. If this rescaling is applied in the presence of a secondary sequence, the secondary sequence is shaped like a primary aftershock sequence, and can be fit by the same modeling techniques applied to simple sequences. The later part of the presentation will concern the decay of Hector Mine aftershocks as influenced by the Landers aftershocks. Although attempts to predict the abundance of Hector aftershocks based on stress overlap analysis are not very successful, the analysis does do a good job fitting the decay of secondary sequences.
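A small numerical illustration of the frequency-linearized time defined above, assuming a modified Omori decay n(t) = K/(t+c)^p for the primary sequence (a standard choice made here for illustration; the abstract does not specify the rate function, and the parameters are invented):

```python
import numpy as np
from scipy.integrate import quad

# Primary aftershock rate: modified Omori law (parameters illustrative).
K, c, p = 200.0, 0.05, 1.1                      # events/day, days, decay exponent

def n(t):
    """Primary aftershock rate at time t (days after the primary-sequence origin)."""
    return K / (t + c) ** p

def s(T):
    """Frequency-linearized time: expected number of primary aftershocks by time T."""
    value, _ = quad(n, 0.0, T)
    return value

# Re-scale the occurrence times of a secondary event list into s-time.
event_times_days = np.array([0.3, 1.2, 5.0, 12.0, 40.0])
s_times = np.array([s(T) for T in event_times_days])
print(np.round(s_times, 1))
```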
Reliability analysis of C-130 turboprop engine components using artificial neural network
NASA Astrophysics Data System (ADS)
Qattan, Nizar A.
In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron networks) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as the learning rule. The MATLAB package is used to build and design code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures that required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures that require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation artificial neural network model agrees more closely with the radial basis neural network model and with the actual field data than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
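A minimal sketch of the Weibull part of such an analysis is shown below, assuming synthetic times-between-failure in place of the operational field data; the ANN models are not reproduced here, and the distribution parameters are illustrative only.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic times-between-failure (operating hours); in the study these would be
# the recorded turbine failure data. Parameters below are illustrative only.
ttf = weibull_min.rvs(c=1.4, scale=800.0, size=300, random_state=2)

# Fit a two-parameter Weibull (location fixed at zero) by maximum likelihood.
shape, loc, scale = weibull_min.fit(ttf, floc=0)

# Failure (hazard) rate h(t) = f(t) / R(t), evaluated on a grid of operating hours.
t = np.linspace(50.0, 2000.0, 5)
hazard = weibull_min.pdf(t, shape, loc, scale) / weibull_min.sf(t, shape, loc, scale)

print(f"fitted shape={shape:.2f}, scale={scale:.0f} h")
print("hazard rate [failures/h]:", np.round(hazard, 5))
```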
Ng, Kenney; Steinhubl, Steven R; deFilippi, Christopher; Dey, Sanjoy; Stewart, Walter F
2016-11-01
Using electronic health records data to predict events and onset of diseases is increasingly common. Relatively little is known, however, about the tradeoffs between data requirements and model utility. We examined the performance of machine learning models trained to detect prediagnostic heart failure in primary care patients using longitudinal electronic health records data. Model performance was assessed in relation to data requirements defined by the prediction window length (time before clinical diagnosis), the observation window length (duration of observation before prediction window), the number of different data domains (data diversity), the number of patient records in the training data set (data quantity), and the density of patient encounters (data density). A total of 1684 incident heart failure cases and 13 525 sex, age-category, and clinic matched controls were used for modeling. Model performance improved as (1) the prediction window length decreased, especially when <2 years; (2) the observation window length increased but then leveled off after 2 years; (3) the training data set size increased but then leveled off after 4000 patients; (4) more diverse data types were used, but, in order, the combination of diagnosis, medication order, and hospitalization data was most important; and (5) data were confined to patients who had ≥10 phone or face-to-face encounters in 2 years. These empirical findings suggest possible guidelines for the minimum amount and type of data needed to train effective disease onset predictive models using longitudinal electronic health records data. © 2016 American Heart Association, Inc.
Computational Modeling System for Deformation and Failure in Polycrystalline Metals
2009-03-29
FIB/EHSD 3.3 The Voronoi Cell FEM for Micromechanical Modeling 3.4 VCFEM for Microstructural Damage Modeling 3.5 Adaptive Multiscale Simulations...accurate and efficient image-based micromechanical finite element model, for crystal plasticity and damage, incorporating real morphological and...topology with evolving strain localization and damage. (v) Development of multi-scaling algorithms in the time domain for compression and localization in
NASA Astrophysics Data System (ADS)
Song, Di; Kang, Guozheng; Kan, Qianhua; Yu, Chao; Zhang, Chuanzeng
2015-08-01
Based on the experimental observations of the uniaxial low-cycle stress fatigue failure of super-elastic NiTi shape memory alloy microtubes (Song et al 2015 Smart Mater. Struct. 24 075004) and a new definition of a damage variable corresponding to the variation of accumulated dissipation energy, a phenomenological damage model is proposed to describe the damage evolution of the NiTi microtubes during cyclic loading. Then, with a failure criterion of Dc = 1, the fatigue lives of the NiTi microtubes are predicted by the damage-based model; the predicted lives are in good agreement with the experimental ones, and all of the points fall within an error band of a factor of 1.5.
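One common way to write such an energy-based damage variable is sketched below; the specific functional form used by the authors is not reproduced in the abstract, so treat this purely as an illustration of the idea:

```latex
% Energy-based damage variable (illustrative form, not the authors' exact definition):
% accumulated dissipation energy up to cycle N, normalized by its value at failure.
\begin{equation}
  D(N) \;=\; \frac{\sum_{i=1}^{N} \Delta W_{d,i}}{\sum_{i=1}^{N_f} \Delta W_{d,i}},
  \qquad \text{failure when } D = D_c = 1 .
\end{equation}
```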
Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2008-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
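Step 3 above (graph analysis for paths from hazard sources to vulnerable entities) can be illustrated with a small directed-graph sketch; the node names and edges are invented for illustration, and the real toolchain is not shown.

```python
import networkx as nx

# Toy system-software interaction graph; nodes and edges are illustrative only.
G = nx.DiGraph()
G.add_edges_from([
    ("thruster_valve_fault", "propulsion_controller"),   # hazard source -> software
    ("propulsion_controller", "abort_sequencer"),
    ("abort_sequencer", "crew_module_separation"),        # vulnerable function
    ("sensor_dropout", "abort_sequencer"),
])

hazard_sources = ["thruster_valve_fault", "sensor_dropout"]
vulnerable = ["crew_module_separation"]

# Enumerate possible propagation paths from each hazard source to each vulnerable entity.
for src in hazard_sources:
    for tgt in vulnerable:
        if nx.has_path(G, src, tgt):
            for path in nx.all_simple_paths(G, src, tgt):
                print(" -> ".join(path))
```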
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective: To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods: A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results: In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions: Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
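A minimal sketch of the statistical-process-control idea for one monitored index (total records created per hour) follows; the three-sigma Shewhart limits and the synthetic data are simple placeholders for the paper's time-series models.

```python
import numpy as np

# Hourly counts of laboratory records created; synthetic data with a simulated
# 24-hour failure window in which ~20% of records are silently lost.
rng = np.random.default_rng(3)
baseline = rng.poisson(lam=400, size=24 * 14)            # two weeks of normal operation
failure = rng.poisson(lam=400 * 0.8, size=24)            # one day with 20% data loss
observed = np.concatenate([baseline, failure])

# Shewhart-style control limits estimated from the baseline period only.
mu, sigma = baseline.mean(), baseline.std(ddof=1)
lower, upper = mu - 3 * sigma, mu + 3 * sigma

alarms = np.where((observed < lower) | (observed > upper))[0]
print(f"control limits: [{lower:.0f}, {upper:.0f}] records/hour")
print(f"alarm raised at hours: {alarms}")
```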
Foster, Tim; Willetts, Juliet; Lane, Mike; Thomson, Patrick; Katuva, Jacob; Hope, Rob
2018-06-01
An improved understanding of failure risks for water supplies in rural sub-Saharan Africa will be critical to achieving the global goal of safe water for all by 2030. In the absence of longitudinal biophysical and operational data, investigations into water point failure risk factors have to date been limited to cross-sectional research designs. This retrospective cohort study applies survival analysis to identify factors that predict failure risks for handpumps installed on boreholes along the south coast of Kenya from the 1980s. The analysis is based on a unique dataset linking attributes of >300 water points at the time of installation with their operational lifespan over the following decades. Cox proportional hazards and accelerated failure time models suggest water point failure risks are higher and lifespans are shorter when water supplied is more saline, static water level is deeper, and groundwater is pumped from an unconsolidated sand aquifer. The risk of failure also appears to grow as distance to spare part suppliers increases. To bolster the sustainability of rural water services and ensure no community is left behind, post-construction support mechanisms will need to mitigate heterogeneous environmental and geographical challenges. Further studies are needed to better understand the causal pathways that underlie these risk factors in order to inform policies and practices that ensure water services are sustained even where unfavourable conditions prevail. Copyright © 2018 Elsevier B.V. All rights reserved.
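A minimal sketch of the survival-analysis step using the lifelines library is given below; the covariates and the data frame are invented stand-ins for the water-point attributes described in the study, and the simulated effect sizes are arbitrary.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic water-point records: covariates are stand-ins for the study's attributes.
rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "salinity_mS_cm": rng.gamma(2.0, 1.0, n),
    "static_water_level_m": rng.uniform(5, 60, n),
    "distance_to_spares_km": rng.uniform(1, 80, n),
    "sand_aquifer": rng.integers(0, 2, n),
})
# Synthetic lifespans with risk loosely increasing in each covariate (illustrative only).
risk = (0.05 * df["salinity_mS_cm"] + 0.01 * df["static_water_level_m"]
        + 0.005 * df["distance_to_spares_km"] + 0.3 * df["sand_aquifer"])
df["years"] = rng.exponential(1.0 / np.exp(risk - 2.5))
df["failed"] = (df["years"] < 25).astype(int)            # right-censor at 25 years
df.loc[df["failed"] == 0, "years"] = 25.0

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="failed")
cph.print_summary()                                      # hazard ratios per covariate
```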
Dilles, Ann; Heymans, Valerie; Martin, Sandra; Droogné, Walter; Denhaerynck, Kris; De Geest, Sabina
2011-09-01
Education, coaching and guidance of patients are important components of heart failure management. The aim of this study was to compare a computer assisted learning (CAL) program with standard education (brochures and oral information from nurses) on knowledge and self-care in hospitalized heart failure patients. Satisfaction with the CAL program was also assessed in the intervention group. A quasi-experimental design was used, with a convenience sample of in-hospital heart failure patients. Knowledge and self-care were measured using the Dutch Heart Failure Knowledge Scale and the European Heart Failure Self-care Behaviour Scale at hospital admission, at discharge and after a 3-month follow-up. Satisfaction with the CAL program was assessed at hospital discharge using a satisfaction questionnaire. Within and between groups, changes in knowledge and self-care over time were tested using a mixed regression model. Of 65 heart failure patients screened, 37 were included in the study: 21 in the CAL group and 16 in the usual care group. No significant differences in knowledge (p=0.65) or self-care (p=0.40) could be found between groups. However, both variables improved significantly over time in each study group (p<0.0001). Both educational strategies increased knowledge and improved self-care. The design did not allow the effects of standard education (usual care) to be isolated from those of CAL. Economic and clinical outcomes of both methods should be evaluated in further research. Copyright © 2010. Published by Elsevier B.V.
Anderson, Kelley M
2014-01-01
Heart failure is a clinical syndrome that incurs a high prevalence, mortality, morbidity, and economic burden in our society. Patients with heart failure may experience hospitalization because of an acute exacerbation of their condition. Recurrent hospitalizations soon after discharge are an unfortunate occurrence in this patient population. The purpose of this study was to explore the clinical and diagnostic characteristics of individuals hospitalized with a primary diagnosis of heart failure at the time of discharge and to compare the association of these indicators in individuals who did and did not experience a heart failure hospitalization within 60 days of the index stay. The study is a descriptive, correlational, quantitative study using a retrospective review of 134 individuals discharged with a primary diagnosis of heart failure. Records were reviewed for sociodemographic characteristics, health histories, clinical assessment findings, and diagnostic information. Significant predictors of 60-day heart failure readmissions were dyspnea (β = 0.579), crackles (β = 1.688), and assistance with activities of daily living (β = 2.328), independent of age, gender, and multiple other factors. By using hierarchical logistic regression, a model was derived that demonstrated the ability to correctly classify 77.4% of the cohort, 78.2% of those who did have a readmission (sensitivity of the prediction), and 76.7% of the subjects in whom the predicted event, readmission, did not occur (specificity of the prediction). Hospitalizations for heart failure are markers of clinical instability. Future events after hospitalization are common in this patient population, and this study provides a novel understanding of clinical characteristics at the time of discharge that are associated with future outcomes, specifically 60-day heart failure readmissions. A consideration of these characteristics provides an additional perspective to guide clinical decision making and the evaluation of discharge readiness.
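A minimal sketch of a readmission classifier in the same spirit is shown below, using ordinary logistic regression rather than the study's hierarchical model; the three predictors mirror those reported as significant, but the synthetic data and coefficients are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic discharge records with the three predictors reported as significant.
rng = np.random.default_rng(5)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),     # dyspnea at discharge (0/1)
    rng.integers(0, 2, n),     # crackles on auscultation (0/1)
    rng.integers(0, 2, n),     # needs assistance with activities of daily living (0/1)
])
# Illustrative outcome: readmission probability rises with each finding.
logit = -1.5 + 0.6 * X[:, 0] + 1.7 * X[:, 1] + 2.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```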
Finite Element Modeling of the Behavior of Armor Materials Under High Strain Rates and Large Strains
NASA Astrophysics Data System (ADS)
Polyzois, Ioannis
For years high strength steels and alloys have been widely used by the military for making armor plates. Advances in technology have led to the development of materials with improved resistance to penetration and deformation. Until recently, the behavior of these materials under high strain rates and large strains has been primarily based on laboratory testing using the Split Hopkinson Pressure Bar apparatus. With the advent of sophisticated computer programs, computer modeling and finite element simulations are being developed to predict the deformation behavior of these metals for a variety of conditions similar to those experienced during combat. In the present investigation, a modified direct impact Split Hopkinson Pressure Bar apparatus was modeled using the finite element software ABAQUS 6.8 for the purpose of simulating high strain rate compression of specimens of three armor materials: maraging steel 300, high hardness armor (HHA), and aluminum alloy 5083. These armor materials, provided by the Canadian Department of National Defence, were tested at the University of Manitoba by others. In this study, the empirical Johnson-Cook visco-plastic and damage models were used to simulate the deformation behavior obtained experimentally. A series of stress-time plots at various projectile impact momenta were produced and verified by comparison with experimental data. The impact momentum parameter was chosen rather than projectile velocity to normalize the initial conditions for each simulation. Phenomena such as the formation of adiabatic shear bands caused by deformation at high strains and strain rates were investigated through simulations. It was found that the Johnson-Cook model can accurately simulate the behavior of body-centered cubic (BCC) metals such as steels. The maximum shear stress was calculated for each simulation at various impact momenta. The finite element model showed that shear failure first occurred in the center of the cylindrical specimen and propagated outwards diagonally towards the front and back edges forming an hourglass pattern. This pattern matched the failure behavior of specimens tested experimentally, which also exhibited failure through the formation of adiabatic shear bands. Adiabatic shear bands are known to lead to a complete shear failure. Both mechanical and thermal mechanisms contribute to the formation of shear bands. However, the finite element simulations did not show the effects of temperature rise within the material, a phenomenon which is known to contribute to thermal instabilities, whereby strain hardening effects are outweighed by thermal softening effects and adiabatic shear bands begin to form. In the simulations, the purely mechanical maximum shear stress failure, nucleating from the center of the specimens, was used as an indicator of the time at which these shear bands begin to form. The time and compressive stress at the moment of thermal instability in experimental results which have shown to form adiabatic shear bands matched closely those at which shear failure was first observed in the simulations. Although versatile in modeling BCC behavior, the Johnson-Cook model did not show the correct stress response in face-centered cubic (FCC) metals, such as aluminum 5083, where the effects of strain rate and temperature depend on strain. Similar observations have been reported in the literature. In the Johnson-Cook model, the temperature, strain rate, and strain parameters are independent of each other. To this end, a more physically based model grounded in dislocation mechanics, namely the Feng and Bassim constitutive model, would be more appropriate.
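For reference, the multiplicative structure responsible for that independence is the standard Johnson-Cook flow-stress form, written here with the usual symbols; the calibrated material constants for the three armor materials are not reproduced.

```latex
% Johnson-Cook flow stress: strain-hardening, strain-rate, and thermal terms multiply,
% so the three effects enter independently of one another.
\begin{equation}
  \sigma \;=\; \bigl(A + B\,\varepsilon_p^{\,n}\bigr)
               \bigl(1 + C \ln \dot{\varepsilon}^{*}\bigr)
               \bigl(1 - T^{*m}\bigr),
  \qquad
  \dot{\varepsilon}^{*} = \frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0},
  \quad
  T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}} .
\end{equation}
```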
NASA Astrophysics Data System (ADS)
Brideau, Marc-André; Yan, Ming; Stead, Doug
2009-01-01
Rock slope failures are frequently controlled by a complex combination of discontinuities that facilitate kinematic release. These discontinuities are often associated with discrete folds, faults, and shear zones, and/or related tectonic damage. The authors, through detailed case studies, illustrate the importance of considering the influence of tectonic structures not only on three-dimensional kinematic release but also in the reduction of rock mass properties due to induced damage. The case studies selected reflect a wide range of rock mass conditions. In addition to active rock slope failures they include two major historic failures, the Hope Slide, which occurred in British Columbia in 1965, and the Randa rockslides, which occurred in Switzerland in 1991. Detailed engineering geological mapping combined with rock testing, GIS data analysis and, for selected cases, numerical modelling, have shown that specific rock slope failure mechanisms may be conveniently related to rock mass classifications such as the Geological Strength Index (GSI). The importance of brittle intact rock fracture in association with pre-existing rock mass damage is emphasized through a consideration of the processes involved in the progressive, time-dependent development not only of through-going failure surfaces but also lateral and rear-release mechanisms. Preliminary modelling data are presented to illustrate the importance of intact rock fracture and step-path failure mechanisms; and the results are discussed with reference to selected field observations. The authors emphasize the importance of considering all forms of pre-existing rock mass damage when assessing potential or operative failure mechanisms. It is suggested that a rock slope rock mass damage assessment can provide an improved understanding of the potential failure mode, the likely hazard presented, and appropriate methods of both analysis and remedial treatment.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
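A minimal numerical sketch of the quantification described in the claim is given below, under the simplifying assumption that the residual risk at time t is the probability of reaching the failure limit times the probability that the mitigation has not yet acted; all curves and numbers are invented.

```python
import numpy as np

# Time grid (e.g., seconds of engine operation); all numbers are illustrative.
t = np.linspace(0.0, 10.0, 101)

# Probability the failure mode reaches its failure limit by time t (assumed Weibull CDF).
p_fail = 1.0 - np.exp(-(t / 6.0) ** 2.5)

# Probability the mitigation (detection + shutdown) has acted by time t (assumed logistic).
p_mitigate = 1.0 / (1.0 + np.exp(-(t - 3.0) / 0.5))

# Residual risk with mitigation, and the risk reduction relative to no mitigation.
risk_unmitigated = p_fail
risk_mitigated = p_fail * (1.0 - p_mitigate)
risk_reduction = risk_unmitigated - risk_mitigated

print(f"risk reduction at end of window: {risk_reduction[-1]:.3f}")
```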
An evaluation of a real-time fault diagnosis expert system for aircraft applications
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.
1987-01-01
A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.
NASA Technical Reports Server (NTRS)
Tamayo, Tak Chai
1987-01-01
The quality of software is not only vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wear-out and costly redundancy, thus making traditional statistical analysis unsuitable for evaluating software reliability. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on the failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
Simulations of carbon fiber composite delamination tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kay, G
2007-10-25
Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.
NASA Astrophysics Data System (ADS)
Yonten, Karma
As a multi-phase material, soil exhibits highly nonlinear, anisotropic, and inelastic behavior. While it may be impractical for one constitutive model to address all features of the soil behavior, one can identify the essential aspects of the soil's stress-strain-strength response for a particular class of problems and develop a suitable constitutive model that captures those aspects. Here, attention is given to two important features of the soil stress-strain-strength behavior: anisotropy and post-failure response. An anisotropic soil plasticity model is implemented to investigate the significance of initial and induced anisotropy on the response of geo-structures founded on cohesive soils. The model is shown to produce realistic responses for a variety of over-consolidation ratios. Moreover, the performance of the model is assessed in a boundary value problem in which a cohesive soil is subjected to the weight of a newly constructed soil embankment. The significance of incorporating anisotropy is clearly demonstrated by comparing the results of the simulation using the model with those obtained by using an isotropic plasticity model. To investigate the post-failure response of soils, the issue of strain localization in geo-structures is considered. Post-failure analysis of geo-structures using numerical techniques such as mesh-based or mesh-free methods is often faced with convergence issues which may, at times, lead to incorrect failure mechanisms. This is due to the fact that the majority of existing constitutive models are formulated within the framework of classical continuum mechanics, which leads to ill-posed governing equations at the onset of localization. To overcome this challenge, a critical state two-surface plasticity model is extended to incorporate the micro-structural mechanisms that become significant within the shear band. The extended model is implemented to study the strain localization of granular soils in drained and undrained conditions. It is demonstrated that the extended model is capable of capturing salient features of soil behavior in pre- and post-failure regimes. The effects of soil particle size, initial density and confining pressure on the thickness and orientation of the shear band are investigated and compared with the observed behavior of soils.
Häggblom, Amanda; Santacatterina, Michele; Neogi, Ujjwal; Gisslen, Magnus; Hejdeman, Bo; Flamholc, Leo; Sönnerborg, Anders
2017-01-01
Switch from first line antiretroviral therapy (ART) to second-line ART is common in clinical practice. However, there is limited knowledge of the extent to which different reasons for therapy switch are associated with differences in long-term consequences and sustainability of the second line ART. Data from 869 patients with 14601 clinical visits between 1999 and 2014 were derived from the national cohort database. Reason for therapy switch and viral load (VL) levels at first-line ART failure were compared with regard to outcome of second line ART. Using the Laplace regression model we analyzed the median, 10th, 20th, 30th and 40th percentile of time to viral failure (VF). Most patients (n = 495; 57.0%) switched from first-line to second-line ART without VF. Patients switching due to detectable VL with (n = 124; 14.2%) or without drug resistance mutations (DRM) (n = 250; 28.8%) experienced VF to their second line regimen sooner (median time, years: 3.43 (95% CI 2.90-3.96) and 3.20 (95% 2.65-3.75), respectively) compared with those who switched without VF (4.53 years). Furthermore, the level of VL at first-line ART failure had a significant impact on failure of second-line ART starting after 2.5 years of second-line ART. In the context of life-long therapy, a median time on second line ART of 4.53 years for these patients is short. To prolong time on second-line ART, further studies are needed on the reasons for therapy changes. Additionally, patients with a high VL at first-line VF should be monitored more frequently during the period after the therapy switch.
High Speed Dynamics in Brittle Materials
NASA Astrophysics Data System (ADS)
Hiermaier, Stefan
2015-06-01
Brittle Materials under High Speed and Shock loading provide a continuous challenge in experimental physics, analysis and numerical modelling, and consequently for engineering design. The dependence of damage and fracture processes on material-inherent length and time scales, the influence of defects, rate-dependent material properties and inertia effects on different scales make their understanding a true multi-scale problem. In addition, it is not uncommon that materials show a transition from ductile to brittle behavior when the loading rate is increased. A particular case is spallation, a brittle tensile failure induced by the interaction of stress waves leading to a sudden change from compressive to tensile loading states that can be invoked in various materials. This contribution highlights typical phenomena occurring when brittle materials are exposed to high loading rates in applications such as blast and impact on protective structures, or meteorite impact on geological materials. A short review on experimental methods that are used for dynamic characterization of brittle materials will be given. A close interaction of experimental analysis and numerical simulation has turned out to be very helpful in analyzing experimental results. For this purpose, adequate numerical methods are required. Cohesive zone models are one possible method for the analysis of brittle failure as long as some degree of tension is present. Their recent successful application for meso-mechanical simulations of concrete in Hopkinson-type spallation tests provides new insight into the dynamic failure process. Failure under compressive loading is a particular challenge for numerical simulations as it involves crushing of material which in turn influences stress states in other parts of a structure. On a continuum scale, it can be modeled using more or less complex plasticity models combined with failure surfaces, as will be demonstrated for ceramics. Models which take microstructural cracking directly into account may provide a more physics-based approach for compressive failure in the future.
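A minimal sketch of the bilinear cohesive traction-separation law often used in such cohesive-zone simulations follows; the strength, fracture energy, and stiffness values are placeholders rather than calibrated material data.

```python
import numpy as np

def bilinear_cohesive_traction(delta, t_max=30.0e6, G_c=100.0, K0=1.0e14):
    """Bilinear cohesive law: traction [Pa] vs. opening displacement delta [m].

    t_max: cohesive strength [Pa], G_c: fracture energy [J/m^2], K0: initial stiffness [Pa/m].
    Values are illustrative placeholders, not calibrated material data.
    """
    delta_0 = t_max / K0              # opening at damage onset
    delta_f = 2.0 * G_c / t_max       # opening at complete failure (area under curve = G_c)
    delta = np.asarray(delta, dtype=float)
    rising = K0 * delta
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    traction = np.where(delta <= delta_0, rising, softening)
    return np.clip(traction, 0.0, None)   # zero traction once fully failed

openings = np.linspace(0.0, 1.0e-5, 6)
print(np.round(bilinear_cohesive_traction(openings) / 1e6, 2), "MPa")
```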
Robustness and Vulnerability of Networks with Dynamical Dependency Groups.
Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi
2016-11-28
The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.
Meisel, Adam F; Henninger, Heath B; Barber, F Alan; Getelman, Mark H
2017-05-01
The purpose of this study was to evaluate the time zero cyclic and failure loading properties of a linked single-row rotator cuff repair compared with a standard simple suture single-row repair using triple-loaded suture anchors. Eighteen human cadaveric shoulders from 9 matched pairs were dissected, and full-thickness supraspinatus tears were created. The tendon cross-sectional area was recorded. In each pair, one side was repaired with a linked single-row construct and the other with a simple suture single-row construct, both using 2 triple-loaded suture anchors. After preloading, specimens were cycled to 1 MPa of effective stress at 1 Hz for 500 cycles, and gap formation was recorded with a digital video system. Samples were then loaded to failure, and modes of failure were recorded. There was no statistical difference in peak gap formation between the control and linked constructs (3.6 ± 0.9 mm and 3.6 ± 1.2 mm, respectively; P = .697). Both constructs averaged below a 5-mm cyclic failure threshold. There was no statistical difference in ultimate load to failure between the control and linked repair (511.1 ± 139.0 N and 561.2 ± 131.8 N, respectively; P = .164), and both groups reached failure at loads similar to previous studies. Constructs failed predominantly via tissue tearing parallel to the medial suture line. The linked repair performed similarly to the simple single-row repair. Both constructs demonstrated high ultimate load to failure and good resistance to gap formation with cyclic loading, validating the time zero strength of both constructs in a human cadaveric model. The linked repair provided equivalent resistance to gap formation and failure loads compared with simple suture single-row repairs with triple-loaded suture anchors. This suggests that the linked repair is a simplified rip-stop configuration using the existing suture that may perform similarly to current rotator cuff repair techniques. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Beaulieu, Mélanie L; Wojtys, Edward M; Ashton-Miller, James A
2015-09-01
A reduced range of hip internal rotation is associated with increased peak anterior cruciate ligament (ACL) strain and risk for injury. It is unknown, however, whether limiting the available range of internal femoral rotation increases the susceptibility of the ACL to fatigue failure. Risk of ACL failure is significantly greater in female knee specimens with a limited range of internal femoral rotation, smaller femoral-ACL attachment angle, and smaller tibial eminence volume during repeated in vitro simulated single-leg pivot landings. Controlled laboratory study. A custom-built testing apparatus was used to simulate repeated single-leg pivot landings with a 4×-body weight impulsive load that induces knee compression, knee flexion, and internal tibial torque in 32 paired human knee specimens from 8 male and 8 female donors. These test loads were applied to each pair of specimens, in one knee with limited internal femoral rotation and in the contralateral knee with femoral rotation resisted by 2 springs to simulate the active hip rotator muscles' resistance to stretch. The landings were repeated until ACL failure occurred or until a minimum of 100 trials were executed. The angle at which the ACL originates from the femur and the tibial eminence volume were measured on magnetic resonance images. The final Cox regression model (P = .024) revealed that range of internal femoral rotation and sex of donor were significant factors in determining risk of ACL fatigue failure. The specimens with limited range of internal femoral rotation had a failure risk 17.1 times higher than did the specimens with free rotation (P = .016). The female knee specimens had a risk of ACL failure 26.9 times higher than the male specimens (P = .055). Limiting the range of internal femoral rotation during repetitive pivot landings increases the risk of an ACL fatigue failure in comparison with free rotation in a cadaveric model. Screening for restricted internal rotation at the hip in ACL injury prevention programs as well as in individuals with ACL injuries and/or reconstructions is warranted. © 2015 The Author(s).
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
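A small simulation sketch of that model: failure times generated as order statistics of independent exponentials with different rates. The per-fault detection rates below are arbitrary illustrations, and the "remaining intensity" line is only a crude indication of how current reliability might be summarized, not the paper's inference procedure.

```python
import numpy as np

# Each residual fault i has its own detection rate lambda_i; observed failure times
# are the sorted (order-statistic) values of the independent exponential draws.
rng = np.random.default_rng(6)
fault_rates = np.array([2.0, 1.3, 0.9, 0.5, 0.2, 0.05])    # per unit test time (illustrative)

raw_times = rng.exponential(1.0 / fault_rates)              # one draw per residual fault
failure_times = np.sort(raw_times)                          # what the tester observes
print("observed failure times:", np.round(failure_times, 2))

# After k observed failures, the remaining (undetected) faults govern future reliability;
# as a crude illustration, assume the k highest-rate faults were the ones found.
k = 3
remaining_rate = np.sort(fault_rates)[: len(fault_rates) - k].sum()
print(f"rough current failure intensity after {k} fixes: {remaining_rate:.2f}")
```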
The influence of microstructure on the probability of early failure in aluminum-based interconnects
NASA Astrophysics Data System (ADS)
Dwyer, V. M.
2004-09-01
For electromigration in short aluminum interconnects terminated by tungsten vias, the well-known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure, where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4-0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
NASA Astrophysics Data System (ADS)
Mbaya, Timmy
Embedded Aerospace Systems have to perform safety and mission critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems substantially rely on complex software interfacing with hardware in real-time; any faults in software or hardware, or their interaction could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an Arithmetic Circuit, which is used for on-line monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated---and often contradictory---diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning is performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C (Guidance, Navigation, and Control) system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.
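A minimal, pure-Python sketch of the kind of posterior computation such a health-management reasoner performs is shown below (a two-node network with one software-fault hypothesis and one monitored sensor); the probabilities are invented, and no arithmetic-circuit compilation is shown.

```python
# Tiny diagnostic Bayesian network: Fault -> SensorAlarm (all probabilities invented).
p_fault = 0.01                      # prior probability of a software fault
p_alarm_given_fault = 0.95          # monitor fires when the fault is present
p_alarm_given_ok = 0.02             # false-alarm rate of the monitor

def posterior_fault(alarm_observed: bool) -> float:
    """P(fault | sensor observation) by direct enumeration (Bayes' rule)."""
    if alarm_observed:
        num = p_alarm_given_fault * p_fault
        den = num + p_alarm_given_ok * (1.0 - p_fault)
    else:
        num = (1.0 - p_alarm_given_fault) * p_fault
        den = num + (1.0 - p_alarm_given_ok) * (1.0 - p_fault)
    return num / den

print(f"P(fault | alarm)    = {posterior_fault(True):.3f}")
print(f"P(fault | no alarm) = {posterior_fault(False):.5f}")
```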
Dynamic Structural Fault Detection and Identification
NASA Technical Reports Server (NTRS)
Smith, Timothy; Reichenbach, Eric; Urnes, James M.
2009-01-01
Aircraft structures are designed to guarantee safety of flight in some required operational envelope. When the aircraft becomes structurally impaired, safety of flight may not be guaranteed within that previously safe operational envelope. In this case the safe operational envelope must be redefined in-flight and a means to prevent excursion from this new envelope must be implemented. A specific structural failure mode that may result in a reduced safe operating envelope, the exceedance of which could lead to catastrophic structural failure of the aircraft, will be addressed. The goal of the DFEAP program is the detection of this failure mode coupled with flight controls adaptation to limit critical loads in the damaged aircraft structure. The DFEAP program is working with an F/A-18 aircraft model. The composite wing skins are bonded to metallic spars in the wing substructure. Over time, it is possible that this bonding can deteriorate due to fatigue. In this case, the ability of the wing spar to transfer loading between the wing skins is reduced. This failure mode can translate to a reduced allowable compressive strain on the wing skin and could lead to catastrophic wing buckling if load limiting of the wing structure is not applied. The DFEAP program will make use of a simplified wing strain model for the healthy aircraft. The outputs of this model will be compared in real-time to onboard strain measurements at several locations on the aircraft wing. A damage condition is declared at a given location when the strain measurements differ sufficiently from the strain model. Parameter identification of the damaged structure wing strain parameters will be employed to provide load limiting control adaptation for the aircraft. This paper will discuss the simplified strain models used in the implementation and their interaction with the strain sensor measurements. Also discussed will be the damage detection and identification schemes employed and the means by which the damaged aircraft parameters will be used to provide load limiting that keeps the aircraft within the safe operational envelope.
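A minimal sketch of the residual-based detection step described above (compare measured wing-skin strain against a reference strain model and flag damage when the residual persistently exceeds a threshold); the strain model, threshold, persistence rule, and data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Reference strain model for a healthy wing: strain proportional to load factor (invented).
load_factor = np.linspace(1.0, 4.0, 200)                  # g's during a maneuver
strain_model = 850.0 * load_factor                        # microstrain, healthy prediction

# "Measured" strain: healthy for the first half, then a 25% higher response
# (e.g., a degraded spar-to-skin bond), plus sensor noise.
strain_meas = strain_model.copy()
strain_meas[100:] *= 1.25
strain_meas += rng.normal(0.0, 30.0, strain_meas.size)

# Declare damage when the residual exceeds the threshold for several consecutive samples
# (the window is shorter at the start of the record).
residual = np.abs(strain_meas - strain_model)
threshold_ue, persistence = 150.0, 5
exceed = residual > threshold_ue
damage_declared = np.array([exceed[max(0, i - persistence + 1): i + 1].all()
                            for i in range(exceed.size)])
first = np.argmax(damage_declared) if damage_declared.any() else None
print(f"damage declared at sample index: {first}")
```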
A Sensitivity Analysis of Triggers and Mechanisms of Mass Movements in Fjords
NASA Astrophysics Data System (ADS)
Overeem, I.; Lintern, G.; Hill, P.
2016-12-01
Fjords are characterized by rapid sedimentation as they typically drain glaciated river catchments with high seasonal discharges and large sediment evacuation rates. For this reason, fjords commonly experience submarine mass movements; failures of the steep delta front that trigger tsunamis, and turbidity currents or debris flows. Repeat high-resolution bathymetric surveys, and in-situ process measurements collected in fjords in British Columbia, Canada, indicate that mass movements occur many times per year in some fjords and are more rare and of larger magnitude in other fjords. We ask whether these differences can be attributed to river discharge characteristics or to grainsize characteristics of the delivered sediment. To test our ideas, we couple a climate-driven river sediment transport model, HydroTrend, and a marine sedimentation model, Sedflux2D, to explore the triggers of submarine failures and mechanisms of subsequent turbidity and debris flows. HydroTrend calculates water and suspended sediment transport on a daily basis based on catchment characteristics, glaciated area, lakes and temperature and precipitation regime. Sedflux uses the generated river time-series to simulate delta plumes, failures and mass movements with separate process models. Model uncertainty and parameter sensitivity are assessed using Dakota Tools, which allows for a systematic exploration of the effects of river basin characteristics and climate scenarios on occurrence of hyperpycnal events, delta front sedimentation rate, submarine pore pressure, failure frequency and size, and run-out distances. Preliminary simulation results point to the importance of proglacial lakes and lakes abundance in the river basin, which has profound implications for event-based sediment delivery to the delta apex. Discharge-sediment rating curves can be highly variable based on these parameters. Distinction of turbidity currents and debris flows was found to be most sensitive to both earthquake frequency and delta front grainsize. As a first step we compare these model experiments against field data from the Squamish River and Delta in Howe Sound, BC.
Modelling river bank retreat by combining fluvial erosion, seepage and mass failure
NASA Astrophysics Data System (ADS)
Dapporto, S.; Rinaldi, M.
2003-04-01
Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been the subject of extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully and is used to: a) test the potential, and discuss the advantages and limitations, of this type of methodology for modelling bank retreat; b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. Modelling of the riverbank retreat includes, for each step, the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by the limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing the lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. The lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
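The fluvial erosion component above uses an excess shear stress relation calibrated against measured retreat; a minimal sketch of that kind of rule follows, with illustrative coefficient values rather than those calibrated for the Sieve River site:

```python
def lateral_erosion_rate(tau, tau_c, k_d, a=1.0):
    """Excess shear stress rule: rate = k_d * (tau - tau_c)**a when tau > tau_c, otherwise 0.

    tau   : applied boundary shear stress (Pa)
    tau_c : critical shear stress for entrainment of the bank material (Pa)
    k_d   : erodibility coefficient, calibrated against measured bank retreat (assumed units m/s per Pa**a)
    a     : exponent, taken here as 1 (assumption)
    """
    excess = tau - tau_c
    return k_d * excess**a if excess > 0.0 else 0.0

# Illustrative values only: 12 Pa applied stress against a 5 Pa entrainment threshold
print(lateral_erosion_rate(tau=12.0, tau_c=5.0, k_d=1e-6))  # lateral retreat rate in m/s
```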
A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data
Yue, Meng; Toto, Tami; Jensen, Michael P.; ...
2017-05-18
Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.
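As a generic illustration of how real-time outage reports can update a weather-based prediction, the sketch below uses a conjugate Poisson-Gamma update; this is a simplified stand-in with invented numbers, not the BOP algorithm developed in the study:

```python
def update_outage_estimate(prior_mean, prior_strength, reported_outages, reporting_fraction):
    """Combine a failure-rate-model prediction with partially reported outages.

    A Gamma(prior_strength, prior_strength / prior_mean) prior on the expected outage count
    is updated with reported outages, assumed to be a Poisson sample covering only a fraction
    of the area so far (all names, priors, and numbers here are illustrative assumptions).
    """
    alpha = prior_strength + reported_outages
    beta = prior_strength / prior_mean + reporting_fraction
    return alpha / beta  # posterior mean outage count for the area

# FRM predicts 20 outages in a small area; 6 outages already reported from roughly 30% of it
print(update_outage_estimate(prior_mean=20.0, prior_strength=4.0,
                             reported_outages=6, reporting_fraction=0.3))
```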
Damage Propagation Modeling for Aircraft Engine Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil
2008-01-01
This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermodynamic simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant, and the failure criterion is reached when the health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
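A minimal sketch of the health-index construction described above; the margin trajectories and coefficients below are invented for illustration, not the engine simulation's actual margins:

```python
import numpy as np

def health_index(margins):
    """Health index = minimum of the superimposed operational margins, clipped at zero."""
    return max(0.0, min(margins))

# Exponentially worsening flow and efficiency margins for one module (illustrative parameters)
cycles = np.arange(0, 400)
efficiency_margin = 1.0 - 0.002 * np.exp(0.02 * cycles)
flow_margin = 1.0 - 0.0015 * np.exp(0.018 * cycles)
hi = [health_index([e, f]) for e, f in zip(efficiency_margin, flow_margin)]
failure_cycle = next(i for i, h in enumerate(hi) if h == 0.0)  # failure criterion: health index reaches zero
print(failure_cycle)
```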
Linking Seismicity at Depth to the Mechanics of a Lava Dome Failure - a Forecasting Approach
NASA Astrophysics Data System (ADS)
Salvage, R. O.; Neuberg, J. W.; Murphy, W.
2014-12-01
Soufriere Hills volcano (SHV), Montserrat, has been in a state of ongoing unrest since 1995. Prior to eruptions, an increase in the number of seismic events has been observed. We use the Material Failure Law (MFL) (Voight, 1988) to investigate how an accelerating number of low frequency seismic events is related to the timing of a large scale dome collapse in June 1997. We show that although the forecasted timing of a dome collapse may coincide with the known timing, the accuracy of the application of the MFL to the data is poor. Using a cross correlation technique, we show how characterising seismicity into similar waveform "families" allows us to focus on a single process at depth and improve the reliability of our forecast. A number of families are investigated to assess their relative importance. We show that, although the forecasted timing of dome collapse ranges within several hours of the known timing of collapse, each of the families produces a better forecast in terms of fit to the seismic acceleration data than when using all low frequency seismicity. In addition, we investigate the stability of such families between major dome collapses (1997 and 2003), assessing their potential for use in real-time forecasting. Initial application of Grey's Incidence Analysis suggests that a key parameter influencing the potential for large scale slumping on the dome of SHV is the rate of low frequency seismicity associated with magma movement and dome growth. We undertook numerical modelling of an andesitic dome with a hydrothermally altered layer down to 800 m. The geometry of the dome is based on SHV prior to the collapse of 2003. We show that a critical instability is reached once slope angles exceed 25°, corresponding to a summit height of just over 1100 m a.s.l. The geometry of failure is in close agreement with the identified failure plane, suggesting that the input mechanical properties are broadly consistent with reality. We are therefore able to compare different failure geometries based on edifice geomorphology and determine a Factor of Safety associated with such scenarios. This modelling would be extremely useful in a holistic forecasting approach within a volcanic environment. Reference: Voight, B. (1988). A method for prediction of volcanic eruptions. Nature, 332: 125-130.
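The Material Failure Law cited above (Voight, 1988) is commonly applied through the inverse-rate method, in which the reciprocal of the accelerating event rate is extrapolated to zero to estimate the failure time; the sketch below illustrates the linearized case with synthetic data, not the SHV catalogue or the fitting procedure used in the study:

```python
import numpy as np

def forecast_failure_time(times, rates):
    """Inverse-rate forecast: fit a line to 1/rate versus time and return its zero crossing.

    This is the linearized (alpha = 2) case of Voight's relation, for which the inverse
    event rate decays linearly towards zero as failure is approached.
    """
    inv_rate = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(times, inv_rate, 1)
    return -intercept / slope  # time at which 1/rate extrapolates to zero

# Synthetic accelerating low-frequency event rates (events/hour) sampled hourly
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
r = np.array([2.0, 2.5, 3.3, 5.0, 10.0])
print(forecast_failure_time(t, r))  # forecast failure time, same units as t
```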
Modelling the failure behaviour of wind turbines
NASA Astrophysics Data System (ADS)
Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.
2016-09-01
Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, first the failure models of existing offshore O&M tools are presented to show the state of the art, and the strengths and weaknesses of the respective models are briefly discussed. Then a conceptual framework for modelling different failure mechanisms of wind turbines is presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as a reference. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.
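One simple way to represent the combination of random ('break') and wear-driven failure mechanisms mentioned above is a hazard rate with a constant term plus a Weibull wear-out term; the sketch below is a generic illustration with invented parameter values, not the Fraunhofer IWES failure function:

```python
def rotor_blade_hazard(t_years, lam_random=0.05, beta=3.0, eta=25.0):
    """Failure rate (per year) at turbine age t: constant random term plus Weibull wear-out term."""
    wear = (beta / eta) * (t_years / eta) ** (beta - 1.0)
    return lam_random + wear

# Compare against a constant failure rate, as in the article's reference case
for age in (1, 10, 20):
    print(age, round(rotor_blade_hazard(age), 4), "vs constant", 0.05)
```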
NASA Technical Reports Server (NTRS)
Kaufman, Howard
1998-01-01
Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general, these have consisted of theoretical studies, simulation experiments, and in some cases actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However, many of the proposed procedures, although quite attractive, need further analytical and experimental studies for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply, adequately fast, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all possible anticipated faults and the corresponding dynamical equations of motion. Unless some sort of explicit on-line parameter identification is included, the computational demands could be excessive. This suggests the need for some form of adaptive control, either by itself as the prime procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance to the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise and at the same time capable of tracking abrupt changes. If, however, both failure detection and system parameter identification turn out to be too time-consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant. This compensator must be robustly designed such that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward only around the plant, a nonzero (but bounded) error will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator also in the reference model. Design of such a compensator should not be too difficult a problem since for flight control it is generally possible to feed back all the system states.
Psychological distress and in vitro fertilization outcome
Pasch, Lauri A.; Gregorich, Steven E.; Katz, Patricia K.; Millstein, Susan G.; Nachtigall, Robert D.; Bleil, Maria E.; Adler, Nancy E.
2016-01-01
Objective: To examine whether psychological distress predicts IVF treatment outcome as well as whether IVF treatment outcome predicts subsequent psychological distress. Design: Prospective cohort study over an 18-month period. Setting: Five community and academic fertility practices. Patients: Two hundred and two women who initiated their first IVF cycle. Interventions: Women completed interviews and questionnaires at baseline and at 4, 10, and 18 months follow-up. Main Outcome Measures: IVF cycle outcome and psychological distress. Results: Using a binary logistic model including covariates (woman’s age, ethnicity, income, education, parity, duration of infertility, and time interval), pre-treatment depression and anxiety were not significant predictors of the outcome of the first IVF cycle. Using linear regression models including covariates (woman’s age, income, education, parity, duration of infertility, assessment point, time since last treatment cycle, and pre-IVF depression or anxiety), experiencing failed IVF was associated with higher post-IVF depression and anxiety. Conclusions: IVF failure predicts subsequent psychological distress, but pre-IVF psychological distress does not predict IVF failure. Instead of focusing efforts on psychological interventions specifically aimed at improving the chance of pregnancy, these findings suggest that attention be paid to helping patients prepare for and cope with treatment and treatment failure. PMID:22698636
Psychological distress and in vitro fertilization outcome.
Pasch, Lauri A; Gregorich, Steven E; Katz, Patricia K; Millstein, Susan G; Nachtigall, Robert D; Bleil, Maria E; Adler, Nancy E
2012-08-01
To examine whether psychological distress predicts IVF treatment outcome as well as whether IVF treatment outcome predicts subsequent psychological distress. Prospective cohort study over an 18-month period. Five community and academic fertility practices. Two hundred two women who initiated their first IVF cycle. Women completed interviews and questionnaires at baseline and at 4, 10, and 18 months' follow-up. IVF cycle outcome and psychological distress. In a binary logistic model including covariates (woman's age, ethnicity, income, education, parity, duration of infertility, and time interval), pretreatment depression and anxiety were not significant predictors of the outcome of the first IVF cycle. In linear regression models including covariates (woman's age, income, education, parity, duration of infertility, assessment point, time since last treatment cycle, and pre-IVF depression or anxiety), experiencing failed IVF was associated with higher post-IVF depression and anxiety. IVF failure predicts subsequent psychological distress, but pre-IVF psychological distress does not predict IVF failure. Instead of focusing efforts on psychological interventions specifically aimed at improving the chance of pregnancy, these findings suggest that attention be paid to helping patients prepare for and cope with treatment and treatment failure. Copyright © 2012 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Qi
Rock failure process is a complex phenomenon that involves elastic and plastic deformation, microscopic cracking, macroscopic fracturing, and frictional slipping of fractures. Understanding this complex behaviour has been the focus of a significant amount of research. In this work, the combined finite-discrete element method (FDEM) was first employed to study (1) the influence of rock discontinuities on hydraulic fracturing and associated seismicity and (2) the influence of in-situ stress on seismic behaviour. Simulated seismic events were analyzed using post-processing tools including frequency-magnitude distribution (b-value), spatial fractal dimension (D-value), seismic rate, and fracture clustering. These simulations demonstrated that at the local scale, fractures tended to propagate following the rock mass discontinuities, while at the reservoir scale they developed in the direction parallel to the maximum in-situ stress. Moreover, the seismic signature (i.e., b-value, D-value, and seismic rate) can help to distinguish different phases of the failure process. The FDEM modelling technique and developed analysis tools were then coupled with laboratory experiments to further investigate the different phases of the progressive rock failure process. Firstly, a uniaxial compression experiment, monitored using a time-lapse ultrasonic tomography method, was carried out and reproduced by the numerical model. Using this combination of technologies, the entire deformation and failure processes were studied at macroscopic and microscopic scales. The results not only illustrated the rock failure and seismic behaviours at different stress levels, but also suggested several precursory behaviours indicating the catastrophic failure of the rock. Secondly, rotary shear experiments were conducted using a newly developed rock physics experimental apparatus (ERDμ-T) that was paired with X-ray micro-computed tomography (μCT). This combination of technologies has significant advantages over conventional rotary shear experiments since it allowed for the direct observation of how two rough surfaces interact and deform without perturbing the experimental conditions. Some intriguing observations were made pertaining to key areas of the study of fault evolution, making possible a more comprehensive interpretation of the frictional sliding behaviour. Lastly, a carefully calibrated FDEM model that was built based on the rotary experiment was utilized to investigate facets that the experiment was not able to resolve, for example, the time-continuous stress condition and the seismic activity on the shear surface. The model reproduced the mechanical behaviour observed in the laboratory experiment, shedding light on the understanding of fault evolution.
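The frequency-magnitude (b-value) analysis mentioned above can be illustrated with the commonly used maximum-likelihood estimator b = log10(e) / (mean(M) - Mc); the catalogue below is synthetic, not the FDEM-simulated seismicity:

```python
import math

def b_value(magnitudes, completeness_mag):
    """Maximum-likelihood b-value for events at or above the completeness magnitude Mc."""
    m = [x for x in magnitudes if x >= completeness_mag]
    return math.log10(math.e) / (sum(m) / len(m) - completeness_mag)

# Synthetic catalogue of small (acoustic-emission-scale) event magnitudes
catalogue = [-3.2, -3.0, -2.9, -3.5, -2.8, -3.1, -3.4, -2.7, -3.3, -3.0]
print(round(b_value(catalogue, completeness_mag=-3.5), 2))
```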
The economics of satellite retrieval
NASA Technical Reports Server (NTRS)
Price, Kent M.; Greenberg, Joel S.
1988-01-01
The economics of space operations with and without the Space Station have been studied in terms of the financial performance of a typical communications-satellite business venture. A stochastic Monte-Carlo communications-satellite business model is employed which includes factors such as satellite configuration, random and wearout failures, reliability of launch and space operations, stand-down time resulting from failures, and insurance by operation. Financial performance impacts have been evaluated in terms of the magnitude of investment, net present value, and return on investment.
Dempsey, David; Kelkar, Sharad; Davatzes, Nick; Hickman, Stephen H.; Moos, Daniel
2015-01-01
Creation of an Enhanced Geothermal System relies on stimulation of fracture permeability through self-propping shear failure that creates a complex fracture network with high surface area for efficient heat transfer. In 2010, shear stimulation was carried out in well 27-15 at Desert Peak geothermal field, Nevada, by injecting cold water at pressure less than the minimum principal stress. An order-of-magnitude improvement in well injectivity was recorded. Here, we describe a numerical model that accounts for injection-induced stress changes and permeability enhancement during this stimulation. In a two-part study, we use the coupled thermo-hydrological-mechanical simulator FEHM to: (i) construct a wellbore model for non-steady bottom-hole temperature and pressure conditions during the injection, and (ii) apply these pressures and temperatures as a source term in a numerical model of the stimulation. In this model, a Mohr-Coulomb failure criterion and empirical fracture permeability is developed to describe permeability evolution of the fractured rock. The numerical model is calibrated using laboratory measurements of material properties on representative core samples and wellhead records of injection pressure and mass flow during the shear stimulation. The model captures both the absence of stimulation at low wellhead pressure (WHP ≤1.7 and ≤2.4 MPa) as well as the timing and magnitude of injectivity rise at medium WHP (3.1 MPa). Results indicate that thermoelastic effects near the wellbore and the associated non-local stresses further from the well combine to propagate a failure front away from the injection well. Elevated WHP promotes failure, increases the injection rate, and cools the wellbore; however, as the overpressure drops off with distance, thermal and non-local stresses play an ongoing role in promoting shear failure at increasing distance from the well.
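A minimal sketch of the kind of Mohr-Coulomb shear-failure check used in such models, with pore pressure reducing the effective normal stress; the friction, cohesion, and stress values are illustrative, not the calibrated Desert Peak properties:

```python
def coulomb_shear_failure(shear_stress, normal_stress, pore_pressure,
                          friction_coeff=0.6, cohesion=0.0):
    """Return True if shear stress reaches the Mohr-Coulomb strength on a fracture plane.

    Effective normal stress = total normal stress - pore pressure (compression positive, MPa).
    """
    effective_normal = normal_stress - pore_pressure
    strength = cohesion + friction_coeff * effective_normal
    return shear_stress >= strength

# A fracture carrying 11 MPa shear under 35 MPa normal stress
print(coulomb_shear_failure(11.0, 35.0, 15.0))  # False before injection
print(coulomb_shear_failure(11.0, 35.0, 22.0))  # True once injection raises pore pressure
```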
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther
2017-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
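For reference, the classical plane-stress Tsai-Wu criterion on which the generalized yield function mentioned above is based can be written as a failure index; the sketch uses generic strength values, not the tabulated input of the model described in the paper:

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; a value of 1 indicates the failure envelope.

    Xt, Xc, Yt, Yc are tensile/compressive strength magnitudes in the fiber (1) and
    transverse (2) directions, and S is the in-plane shear strength.
    """
    F1, F2 = 1.0 / Xt - 1.0 / Xc, 1.0 / Yt - 1.0 / Yc
    F11, F22, F66 = 1.0 / (Xt * Xc), 1.0 / (Yt * Yc), 1.0 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # a common assumption for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2.0 * F12 * s1 * s2)

# Illustrative carbon/epoxy strengths (MPa) and a candidate in-plane stress state
print(tsai_wu_index(s1=1200.0, s2=20.0, t12=40.0,
                    Xt=2000.0, Xc=1500.0, Yt=50.0, Yc=200.0, S=90.0))
```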
Chiu, Yuan-Shyi Peter; Sung, Peng-Cheng; Chiu, Singa Wang; Chou, Chung-Li
2015-01-01
This study uses mathematical modeling to examine a multi-product economic manufacturing quantity (EMQ) model with an enhanced end items issuing policy and rework failures. We assume that a multi-product EMQ model randomly generates nonconforming items. All of the defective items are reworked, but a certain portion fails and becomes scrap. When the rework process ends and the entire lot of each product is quality assured, a cost-reducing n + 1 end items issuing policy is used to transport the finished items of each product. As a result, a closed-form optimal production cycle time is obtained. A numerical example demonstrates the practical usage of our result and confirms significant savings in stock holding and overall production costs as compared to those of a prior work (Chiu et al. in J Sci Ind Res India, 72:435-440, 2013) in the literature.
Reliability Growth of Tactical Coolers at CMC Electronics Cincinnati: 1/5-Watt Cooler Test Report
NASA Astrophysics Data System (ADS)
Kuo, D. T.; Lody, T. D.
2004-06-01
CMC Electronics Cincinnati (CMC) is conducting a reliability growth program to extend the life of tactical Stirling-cycle cryocoolers. The continuous product improvement processes consist of testing production coolers to failure, determining the root cause, incorporating improvements and verification. The most recent life data for the 1/5-Watt Cooler (Model B512B) is presented with a discussion of leading root causes and potential improvements. The mean time to failure (MTTF) life of the coolers was found to be 22,552 hours with the root cause of failure attributed to the accumulation of methane and carbon dioxide in the cooler and the wear of the piston.
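For context, MTTF figures of this kind typically follow the usual reliability bookkeeping of cumulative operating hours divided by observed failures under an exponential (constant-hazard) assumption; whether the report computed its value exactly this way is not stated, and the numbers below are invented:

```python
def mttf_estimate(total_operating_hours, n_failures):
    """Point estimate of mean time to failure assuming an exponential time-to-failure model."""
    return total_operating_hours / n_failures

# Hypothetical fleet test: coolers accumulating 120,000 h with 5 failures observed
print(mttf_estimate(total_operating_hours=120000.0, n_failures=5))  # 24000.0 h
```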
Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons
NASA Technical Reports Server (NTRS)
Arunkumar, Satyanarayana; Przekop, Adam
2010-01-01
Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS™/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.
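A schematic of the instantaneous complete stress reduction (COSTR) idea described above, reduced to a toy ply-discount loop; the failure check and knock-down below are placeholders, whereas the actual VUMAT operates on the full three-dimensional stress state inside ABAQUS/Explicit:

```python
def progressive_failure_step(ply_stresses, ply_strengths, ply_failed):
    """One load-increment update: any ply whose stress reaches its allowable is marked failed
    and its stress is instantaneously reduced to zero (COSTR-style complete stress reduction)."""
    updated = []
    for i, (stress, allowable) in enumerate(zip(ply_stresses, ply_strengths)):
        if ply_failed[i] or abs(stress) >= allowable:
            ply_failed[i] = True
            updated.append(0.0)  # complete stress reduction at the failed material point
        else:
            updated.append(stress)
    return updated, ply_failed

stresses = [450.0, 900.0, 300.0]   # MPa, illustrative ply stresses near the hole
strengths = [600.0, 850.0, 600.0]  # MPa, illustrative ply allowables
print(progressive_failure_step(stresses, strengths, [False, False, False]))
```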
Wang, He; Lu, Shi-Chun; He, Lei; Dong, Jia-Hong
2018-02-01
Liver failure remains the most common complication and cause of death after hepatectomy, and continues to be a challenge for doctors. The t test and χ2 test were used for single-factor analysis of data-related variables, and the results were then introduced into the model for multiple-factor logistic regression analysis. Pearson correlation analysis was performed for related postoperative indexes, and a diagnostic evaluation was performed using the receiver operating characteristic (ROC) curves of postoperative indexes. Differences in age, body mass index (BMI), portal vein hypertension, bile duct cancer, total bilirubin, alkaline phosphatase (ALP), gamma-glutamyl transpeptidase (GGT), operation time, cumulative portal vein occlusion time, intraoperative blood volume, residual liver volume (RLV)/entire liver volume, ascites volume at postoperative day (POD) 3, supplemental albumin amount at POD3, hospitalization time after operation, and the prothrombin activity (PTA) were statistically significant. Furthermore, there were significant differences in total bilirubin and the supplemental albumin amount at POD3. ROC analysis of the average PTA, albumin amounts, ascites volume at POD3, and their combined diagnosis was performed, which had diagnostic value for postoperative liver failure (area under the curve (AUC): 0.895, AUC: 0.798, AUC: 0.775, and AUC: 0.903). Preoperative total bilirubin level and the supplemental albumin amount at POD3 were independent risk factors. PTA can be used as an index of postoperative liver failure, and the combined diagnosis of the indexes can improve the early prediction of postoperative liver failure.
de Moura Xavier, José Carlos; de Andrade Azevedo, Irany; de Sousa Junior, Wilson Cabral; Nishikawa, Augusto
2013-02-01
Atmospheric pollutant monitoring constitutes a primordial activity in public policies concerning air quality. In São Paulo State, Brazil, the São Paulo State Environment Company (CETESB) maintains an automatic network which continuously monitors CO, SO2, NOx, O3, and particulate matter concentrations in the air. The monitoring process accuracy is a fundamental condition for the actions to be taken by CETESB. As one of the support systems, a preventive maintenance program for the different analyzers used is part of the data quality strategy. Knowledge of the behavior of analyzer failure times could help optimize the program. To achieve this goal, the failure times of an ozone analyzer, considered a repairable system, were modeled by means of the nonhomogeneous Poisson process. The rate of occurrence of failures (ROCOF) was estimated for the intervals 0-70,800 h and 0-88,320 h, in which six and seven failures were observed, respectively. The results showed that the ROCOF estimate is influenced by the choice of the observation period, t0 = 70,800 h and t7 = 88,320 h in the cases analyzed. Identification of preventive maintenance actions, mainly when parts replacement occurs in the last interval of observation, is highlighted, justifying the alteration in the behavior of the inter-arrival times. The performance of a follow-up on each analyzer is recommended in order to record the impact of the performed preventive maintenance program on the enhancement of its useful life.
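For a power-law NHPP, a common parametric choice for repairable-system failure data (the abstract does not state which NHPP form was fitted), the ROCOF and its time-truncated maximum-likelihood estimates have a simple closed form; the failure times below are invented, not the analyzer's record:

```python
import math

def power_law_nhpp_mle(failure_times, T):
    """Time-truncated MLE for a power-law NHPP observed on [0, T].

    ROCOF: lambda(t) = (beta / eta) * (t / eta)**(beta - 1)
    beta_hat = n / sum(ln(T / t_i)),  eta_hat = T / n**(1 / beta_hat)
    """
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    eta = T / n ** (1.0 / beta)
    return beta, eta

def rocof(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Invented failure times (hours) within an 88,320 h observation window
times = [9500.0, 21000.0, 37000.0, 52000.0, 66000.0, 74000.0, 86000.0]
beta_hat, eta_hat = power_law_nhpp_mle(times, T=88320.0)
print(round(beta_hat, 2), rocof(88320.0, beta_hat, eta_hat))
```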
Kawanishi, D T; Song, S; Furman, S; Parsonnet, V; Pioger, G; Petitot, J C; Godin, J F
1996-11-01
Formal Monitoring of Performance is Still Needed. In order to detect trends in the number of device or component failures that have occurred among permanent pacemaker systems since the 1970s, we reviewed the data of the five largest pacemaker manufacturers from the Bilitch Registry of permanent pacemaker pulse generators, the Stimarec failure registry, the general accounting office summaries of the United States Veterans Administration (VA) Registry of Pacemaker Leads, and the Implantable Lead Registry, from the Cleveland Clinic Lead registry, and the recalls and safety alerts issued by the United States Food and Drug Administration (FDA) over the last 20 years. The definition of failure followed the criterion, or criteria, developed within each registry and differed significantly between the registries. The 20-year period between 1976 and 1995 was divided into 5-year quartiles (QT): QT 1 = 1976-1980; QT2 = 1981-1985; QT3 = 1986-1990; and QT4 = 1991-1995. For pulse generators, the number of models with failures in each quartile in the Bilitch Registry were: QT 1 = 9; QT 2 = 11; QT3 = 17; QT4 = 13. In Stimarec, the number of units reported as having reached a dangerous condition were: QT1 = 710; QT2 = 212; QT3 = 114; QT4 = 310. From the FDA reports, the number of units included in recalls or safety alerts were: QT3 = 6,085; QT4 = 135,766. For permanent pacemaker leads, the numbers of failed or dangerous leads recorded in Stimarec were: QT3 = 16; QT4 = 32. In the VA Registry, the number of models having a below average survival was 2/92 (2.7%). In the Implantable Lead Registry, the number of models having a below average survival was 3/21 (14%). In the Cleveland Clinic series, 6/13 (46%) of lead models were recognized to have some failure involving the conductor, insulation, or connector. In the FDA reports, the number of leads involved in either recall or safety alert were: QT3 = 20,354; QT4 = 332,105. For programmers, the number of units involved either in a recall or safety alert were: QT3 = 11,124; QT4 = 3,528. In all of these series, each of the five largest manufacturers had some models or units involved in each time period. This review of programs has revealed: 1. The incidence of failures, recalls, or safety alerts did not decline over time; and 2. Despite changes in technology, formal monitoring of pacemaker systems is still warranted.
NASA Astrophysics Data System (ADS)
Sexton, E.; Thomas, A.; Delbridge, B. G.
2017-12-01
Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities in aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of these aftershock sequences. In strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers in strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults which have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El-Mayor Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence Model (ETAS). In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \sum_{i:t_i<t} K_i/(t - t_i + c)^p, the standard ETAS form in which the background rate λ_0 is augmented by Omori-law contributions from each prior event i, with productivity K_i scaling exponentially with that event's magnitude.
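The Coulomb failure stress change referred to above is conventionally computed from the shear and normal stress changes resolved on receiver faults; a minimal sketch follows, with a generic effective friction value and the convention that unclamping (tension) is positive:

```python
def coulomb_stress_change(d_shear, d_normal, effective_friction=0.4):
    """Delta CFS = d_tau + mu' * d_sigma_n, with d_sigma_n positive for unclamping.

    Positive values promote failure on the receiver fault; negative values inhibit it.
    """
    return d_shear + effective_friction * d_normal

# Releasing stepover (illustrative): +0.2 MPa shear load transfer and +0.5 MPa unclamping
print(coulomb_stress_change(0.2, 0.5))    # positive: failure promoted
# Restraining bend (illustrative): -0.1 MPa shear change and -0.6 MPa clamping
print(coulomb_stress_change(-0.1, -0.6))  # negative: failure inhibited
```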
A heart failure initiative to reduce the length of stay and readmission rates.
White, Sabrina Marie; Hill, Alethea
2014-01-01
The purpose of this pilot was to improve multidisciplinary coordination of care and patient education and foster self-management behaviors. The primary and secondary outcomes achieved from this pilot were to decrease the 30-day readmission rate and heart failure length of stay. The primary practice site was an inpatient medical-surgical nursing unit. The length of stay decreased from 6.05% to 4.42% for heart failure diagnostic-related group 291 as a result of utilizing the model. The length of stay decreased from 3.9% to 3.09%, which was also less than the national rate of 3.8036% for diagnostic-related group 292. In addition, the readmission rate decreased from 23.1% prior to January 2013 to 12.9%. Implementation of standards of care coordination can decrease length of stay, readmission rate, and improve self-management. Implementation of evidence-based heart failure guidelines, improved interdisciplinary coordination of care, patient education, self-management skills, and transitional care at the time of discharge improved overall heart failure outcome measures. Utilizing the longitudinal model of care to transition patients to home aided in evaluating social support, resource allocation and utilization, access to care postdischarge, and interdisciplinary coordination of care. The collaboration between disciplines improved continuity of care, patient compliance to their discharge regimen, and adequate discharge follow-up.
Machine learning for the New York City power grid.
Rudin, Cynthia; Waltz, David; Anderson, Roger N; Boulanger, Albert; Salleb-Aouissi, Ansaf; Chow, Maggie; Dutta, Haimonti; Gross, Philip N; Huang, Bert; Ierome, Steve; Isaac, Delfina F; Kressner, Arthur; Passonneau, Rebecca J; Radeva, Axinia; Wu, Leon
2012-02-01
Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce 1) feeder failure rankings, 2) cable, joint, terminator, and transformer rankings, 3) feeder Mean Time Between Failure (MTBF) estimates, and 4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City’s electrical grid.
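As a generic illustration of the ranking step, and not the authors' actual pipeline or feature set, a supervised model can be trained on historical component features and used to order components by predicted failure risk (feature names and data below are invented; requires scikit-learn):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical features per component: age_years, past_outages, load_factor
X_hist = rng.random((500, 3)) * [40.0, 5.0, 1.0]
y_hist = (0.03 * X_hist[:, 0] + 0.4 * X_hist[:, 1]
          + rng.normal(0.0, 0.5, 500) > 1.5).astype(int)  # synthetic failure labels

model = GradientBoostingClassifier().fit(X_hist, y_hist)

X_today = rng.random((10, 3)) * [40.0, 5.0, 1.0]
risk = model.predict_proba(X_today)[:, 1]
ranking = np.argsort(-risk)  # components ordered from highest to lowest predicted risk
print(ranking, np.round(risk[ranking], 3))
```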
Friction of hard surfaces and its application in earthquakes and rock slope stability
NASA Astrophysics Data System (ADS)
Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.
2018-05-01
In this article, we discuss friction models for hard surfaces and their applications in the earth sciences. The rate and state friction (RSF) model, which is basically a modified form of the classical Amontons-Coulomb friction laws, is widely used for explaining crustal earthquakes and rock slope failures. The RSF model has been further modified by considering the role of temperature at the sliding interface, known as the rate, state and temperature friction (RSTF) model. Further, if the pore pressure is also taken into account, then it is termed the rate, state, temperature and pore pressure friction (RSTPF) model. All the RSF models predict a critical stiffness as well as a critical velocity at which the sliding behavior becomes stable/unstable. The friction models are also used for predicting the time of failure of a rock mass on an inclined plane. Finally, the limitations and possibilities of the proposed friction models are also highlighted.
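The rate and state friction relation discussed above is commonly written with the Dieterich ('aging') state evolution law; the sketch below shows the friction coefficient, the state evolution, and the critical stiffness it implies, with typical laboratory-scale parameter values used purely for illustration:

```python
import math

def rsf_friction(v, theta, mu0=0.6, a=0.010, b=0.015, v_ref=1e-6, d_c=1e-5):
    """Rate-and-state friction: mu = mu0 + a*ln(V/V_ref) + b*ln(V_ref*theta/Dc)."""
    return mu0 + a * math.log(v / v_ref) + b * math.log(v_ref * theta / d_c)

def aging_law_dtheta_dt(v, theta, d_c=1e-5):
    """Dieterich aging law: dtheta/dt = 1 - V*theta/Dc."""
    return 1.0 - v * theta / d_c

def critical_stiffness(sigma_n, a=0.010, b=0.015, d_c=1e-5):
    """k_c = (b - a)*sigma_n/Dc (per unit fault area); stiffness below k_c gives unstable, stick-slip sliding."""
    return (b - a) * sigma_n / d_c

# At steady state theta_ss = Dc/V, so mu_ss = mu0 + (a - b)*ln(V/V_ref): velocity weakening when b > a
v = 1e-5
theta_ss = 1e-5 / v
print(round(rsf_friction(v, theta_ss), 4), critical_stiffness(sigma_n=5e6), "Pa/m")
```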
Instantaneous and controllable integer ambiguity resolution: review and an alternative approach
NASA Astrophysics Data System (ADS)
Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong
2015-11-01
In high-precision applications of Global Navigation Satellite Systems (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Due to the correlation between ambiguities, it is impossible to control the failure rate according to an analytical formula. Hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, because of the characteristics of Monte Carlo sampling and look-up tables, a large amount of time is consumed if sufficient GNSS scenarios are included in the creation of the look-up table. This restricts the fixed failure rate approach to being a post-processing approach if a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. Besides this, the method of creating a look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is independently determined based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas, and the stronger the GNSS model, the less time is consumed; (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments are implemented. The results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with the ratio test IA based on a look-up table, the difference test IA and IA least squares based on the iCON approach have higher success rates and better controllability of failure rates most of the time.
Penza, Veronica; Du, Xiaofei; Stoyanov, Danail; Forgione, Antonello; Mattos, Leonardo S; De Momi, Elena
2018-04-01
Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications can still affect the outcome of the procedure, such as intra-operative bleeding. One of the causes is attributed to accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent these kinds of injuries would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long term robustness, (ii) adaptation to environment/object variation and (iii) real time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Area, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework to combine an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) SA out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur and (viii) smoke. A Bayesian inference-based approach is used to detect the failure of the tracker, based on online context information. A Model Update Strategy (MUpS) is also proposed to improve the SA re-detection after failures, taking into account the changes of appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets, representing ex-vivo organs and in-vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time (≃ 4 min, containing the aforementioned events) with high precision (0.7) and recall (0.8) values, and with a recovery time after a failure between 1 and 8 frames in the worst case. Copyright © 2017 Elsevier B.V. All rights reserved.
FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Pack, G.
1994-01-01
The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraphs models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. 
Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math coprocessor or coprocessor emulator and a color monitor (or one with 256 gray scale capability). A minimum of 4Mb of free RAM is highly recommended. The UNIX version of FEAT includes both FEAT v3.6 for the Macintosh and XFEAT. XFEAT is written in C-language for Sun series workstations running SunOS, SGI workstations running IRIX, DECstations running ULTRIX, and Intergraph workstations running CLIX version 6. It requires the MIT X Window System, Version 11 Revision 4, with OSF/Motif 1.1.3, and 16Mb of RAM. The standard distribution medium for FEAT 3.6 (Macintosh version) is a set of three 3.5 inch Macintosh format diskettes. The standard distribution package for the UNIX version includes the three FEAT 3.6 Macintosh diskettes plus a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format which contains XFEAT. Alternate distribution media and formats for XFEAT are available upon request. FEAT has been under development since 1990. Both FEAT v3.6 for the Macintosh and XFEAT v3.5 were released in 1993.
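The core digraph operation FEAT performs, propagating a set of postulated failure events through directed dependencies, can be sketched as a simple graph traversal; this toy model ignores FEAT's transitive-closure preprocessing and phantom gates, and the component names are invented:

```python
from collections import deque

def propagate_failures(edges, failed):
    """Return every node reachable from the initially failed set in a failure digraph.

    edges: dict mapping a node to the nodes whose failure it causes (directed edges)
    failed: iterable of initially failed nodes (the user-selected failure events)
    """
    affected = set(failed)
    queue = deque(failed)
    while queue:
        node = queue.popleft()
        for downstream in edges.get(node, ()):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# Toy digraph: a pump failure takes out the coolant loop, which takes out avionics cooling
edges = {"pump_A": ["coolant_loop"], "coolant_loop": ["avionics_cooling"], "bus_B": ["avionics_cooling"]}
print(propagate_failures(edges, ["pump_A"]))
```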
Stress and Reliability Analysis of a Metal-Ceramic Dental Crown
NASA Technical Reports Server (NTRS)
Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.
1996-01-01
Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had a 300 N load applied perpendicular to one cusp, a load of 300 N applied at 30 degrees from the perpendicular load case, directed toward the center, and a 600 N vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
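The Weibull statistics underlying CARES/LIFE rest on the two-parameter form below; the sketch evaluates the simple uniform-stress failure probability with invented parameters, whereas CARES/LIFE itself integrates such statistics over the component's stress field:

```python
import math

def weibull_failure_probability(stress, scale_sigma0, modulus_m):
    """Two-parameter Weibull: P_f = 1 - exp(-(sigma/sigma_0)**m) for a uniformly stressed element."""
    return 1.0 - math.exp(-((stress / scale_sigma0) ** modulus_m))

# Illustrative porcelain-like parameters: Weibull modulus 10, characteristic strength 90 MPa
for stress in (40.0, 70.0, 90.0):
    print(stress, round(weibull_failure_probability(stress, scale_sigma0=90.0, modulus_m=10.0), 4))
```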
NASA Astrophysics Data System (ADS)
Jezequel, T.; Auzoux, Q.; Le Boulch, D.; Bono, M.; Andrieu, E.; Blanc, C.; Chabretou, V.; Mozzani, N.; Rautenberg, M.
2018-02-01
During accidental power transient conditions with Pellet Cladding Interaction (PCI), the synergistic effect of the stress and strain imposed on the cladding by thermal expansion of the fuel, and corrosion by iodine released as a fission product, may lead to cladding failure by Stress Corrosion Cracking (SCC). In this study, internal pressure tests were conducted on unirradiated cold-worked stress-relieved Zircaloy-4 cladding tubes in an iodine vapor environment. The goal was to investigate the influence of loading type (constant pressure tests, constant circumferential strain rate tests, or constant circumferential strain tests) and test temperature (320, 350, or 380 °C) on iodine-induced stress corrosion cracking (I-SCC). The experimental results obtained with different loading types were consistent with each other. The apparent threshold hoop stress for I-SCC was found to be independent of the test temperature. SEM micrographs of the tested samples showed many pits distributed over the inner surface, which tended to coalesce into large pits in which a microcrack could initiate. A model for the time-to-failure of a cladding tube was developed using finite element simulations of the viscoplastic mechanical behavior of the material and a modified Kachanov's damage growth model. The times-to-failure predicted by this model are consistent with the experimental data.
Williams, Brent A; Agarwal, Shikhar
2018-02-23
Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 and C<0.65 within 3 and beyond 12 months from baseline, respectively. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.
Goldstein, Benjamin A; Thomas, Laine; Zaroff, Jonathan G; Nguyen, John; Menza, Rebecca; Khush, Kiran K
2016-07-01
Over the past two decades, there have been increasingly long waiting times for heart transplantation. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine the risk that conservative donor heart acceptance practices confer in terms of increasing the risk of failure among patients awaiting transplantation. We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan-Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time. Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the greatest risk of failure is during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% (1.01, 1.20) increase in the odds of failure within 1 year after transplantation. Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.
Tenofovir in second-line ART in Zambia and South Africa: Collaborative analysis of cohort studies
Wandeler, Gilles; Keiser, Olivia; Mulenga, Lloyd; Hoffmann, Christopher J; Wood, Robin; Chaweza, Thom; Brennan, Alana; Prozesky, Hans; Garone, Daniela; Giddy, Janet; Chimbetete, Cleophas; Boulle, Andrew; Egger, Matthias
2012-01-01
Objectives Tenofovir (TDF) is increasingly used in second-line antiretroviral treatment (ART) in sub-Saharan Africa. We compared outcomes of second-line ART containing and not containing TDF in cohort studies from Zambia and the Republic of South Africa (RSA). Methods Patients aged ≥ 16 years starting protease inhibitor-based second-line ART in Zambia (1 cohort) and RSA (5 cohorts) were included. We compared mortality, immunological failure (all cohorts) and virological failure (RSA only) between patients receiving and not receiving TDF. Competing risk models and Cox models adjusted for age, sex, CD4 count, time on first-line ART and calendar year were used to analyse mortality and treatment failure, respectively. Hazard ratios (HRs) were combined in fixed-effects meta-analysis. Findings 1,687 patients from Zambia and 1,556 patients from RSA, including 1,350 (80.0%) and 206 (13.2%) patients starting TDF, were followed over 4,471 person-years. Patients on TDF were more likely to have started second-line ART in recent years, and had slightly higher baseline CD4 counts than patients not on TDF. Overall 127 patients died, 532 were lost to follow-up and 240 patients developed immunological failure. In RSA 94 patients had virologic failure. Combined HRs comparing tenofovir with other regimens were 0.60 (95% CI 0.41–0.87) for immunologic failure and 0.63 (0.38–1.05) for mortality. The HR for virologic failure in RSA was 0.28 (0.09–0.90). Conclusions In this observational study patients on TDF-containing second-line ART were less likely to develop treatment failure than patients on other regimens. TDF seems to be an effective component of second-line ART in southern Africa. PMID:22743595
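As a minimal sketch of the pooling step (fixed-effects, inverse-variance combination of cohort-specific hazard ratios on the log scale), the snippet below uses placeholder estimates rather than the cohort results reported above.

```python
import numpy as np

def fixed_effect_pool(hrs, ci_los, ci_his):
    """Inverse-variance fixed-effect pooling of hazard ratios.
    Standard errors are recovered from 95% confidence intervals on the log scale."""
    log_hr = np.log(hrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se))

# Hypothetical cohort-level estimates (HR, lower CI, upper CI), not the study's values:
print(fixed_effect_pool(np.array([0.55, 0.70]),
                        np.array([0.35, 0.42]),
                        np.array([0.86, 1.17])))
```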
Zhang, Lijun; Jia, Xiaofang; Peng, Xia; Ou, Qiang; Zhang, Zhengguo; Qiu, Chao; Yao, Yamin; Shen, Fang; Yang, Hua; Ma, Fang; Wang, Jiefei; Yuan, Zhenghong
2010-10-01
This paper presents a liquid chromatography (LC)/mass spectrometry (MS)-based metabonomic platform that combined the discovery of differential metabolites through principal component analysis (PCA) with verification by selective multiple reaction monitoring (MRM). These methods were applied to analyze plasma samples from liver disease patients and healthy donors. LC-MS raw data (about 1000 compounds), from the plasma of liver failure patients (n = 26) and healthy controls (n = 16), were analyzed through the PCA method, and a pattern recognition profile that differed significantly between liver failure patients and healthy controls (P < 0.05) was established. The profile was verified in 165 clinical subjects. The specificity and sensitivity of this model in predicting liver failure were 94.3 and 100.0%, respectively. The differential ions with m/z of 414.5, 432.0, 520.5, and 775.0 were verified by MRM in 40 clinical samples to be consistent with the PCA results, and rat model experiments showed that they were not caused by the medicines taken by the patients. The compound with m/z of 520.5 was identified as 1-Linoleoylglycerophosphocholine or 1-Linoleoylphosphatidylcholine through exact mass measurements performed using Ion Trap-Time-of-Flight MS and a METLIN Metabolite Database search. This is the first study to integrate metabonomic profiling with MRM-based relative quantification of differential peaks in a large number of clinical samples. Thereafter, a rat model was used to exclude drug effects on the abundance of differential ion peaks. 1-Linoleoylglycerophosphocholine or 1-Linoleoylphosphatidylcholine, a potential biomarker, was identified. The LC/MS-based metabonomic platform could be a powerful tool for the metabonomic screening of plasma biomarkers.
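A minimal sketch of the unsupervised step (PCA on a log-transformed, standardized feature-intensity matrix) is shown below, with randomly generated intensities standing in for the LC-MS peak table; it illustrates the workflow only and is not the study's processing pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(42, 1000))   # 42 samples x ~1000 features
labels = np.array([1] * 26 + [0] * 16)                    # 1 = liver failure, 0 = control

# Log-transform, standardize each feature, then project onto the leading components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(np.log(X)))

# Samples that separate along the leading components point to candidate ions
# for follow-up with targeted MRM quantification.
print(scores[labels == 1].mean(axis=0), scores[labels == 0].mean(axis=0))
```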
Implementation of a Helicopter Flight Simulator with Individual Blade Control
NASA Astrophysics Data System (ADS)
Zinchiak, Andrew G.
2011-12-01
Nearly all modern helicopters are designed with a swashplate-based system for control of the main rotor blades. However, the swashplate-based approach does not provide the level of redundancy necessary to cope with abnormal actuator conditions. For example, if an actuator fails (becomes locked) on the main rotor, the cyclic inputs are consequently fixed and the helicopter may become stuck in a flight maneuver. This can obviously be seen as a catastrophic failure, and would likely lead to a crash. These types of failures can be overcome with the application of individual blade control (IBC). IBC is achieved using the blade pitch control method, which provides complete authority of the aerodynamic characteristics of each rotor blade at any given time by replacing the normally rigid pitch links between the swashplate and the pitch horn of the blade with hydraulic or electronic actuators. Thus, IBC can provide the redundancy necessary for subsystem failure accommodation. In this research effort, a simulation environment is developed to investigate the potential of the IBC main rotor configuration for fault-tolerant control. To examine the applications of IBC to failure scenarios and fault-tolerant controls, a conventional, swashplate-based linear model is first developed for hover and forward flight scenarios based on the UH-60 Black Hawk helicopter. The linear modeling techniques for the swashplate-based helicopter are then adapted and expanded to include IBC. Using these modified techniques, an IBC based mathematical model of the UH-60 helicopter is developed for the purposes of simulation and analysis. The methodology can be used to model and implement a different aircraft if geometric, gravimetric, and general aerodynamic data are available. Without the kinetic restrictions of the swashplate, the IBC model effectively decouples the cyclic control inputs between different blades. Simulations of the IBC model prove that the primary control functions can be manually reconfigured after local actuator failures are initiated, thus preventing a catastrophic failure or crash. Furthermore, this simulator promises to be a useful tool for the design, testing, and analysis of fault-tolerant control laws.
Stegmaier, Petra; Drendel, Vanessa; Mo, Xiaokui; Ling, Stella; Fabian, Denise; Manring, Isabel; Jilg, Cordula A.; Schultze-Seemann, Wolfgang; McNulty, Maureen; Zynger, Debra L.; Martin, Douglas; White, Julia; Werner, Martin; Grosu, Anca L.; Chakravarti, Arnab
2015-01-01
Purpose To develop a microRNA (miRNA)-based model for predicting, in prostate cancer patients, 1) time to biochemical recurrence after radical prostatectomy and 2) biochemical recurrence after salvage radiation therapy following documented biochemical disease progression post-radical prostatectomy. Methods Forty-three patients who had undergone salvage radiation therapy following biochemical failure after radical prostatectomy with greater than 4 years of follow-up data were identified. Formalin-fixed, paraffin-embedded tissue blocks were collected for all patients and total RNA was isolated from 1-mm cores enriched for tumor (>70%). Eight hundred miRNAs were analyzed simultaneously using the nCounter human miRNA v2 assay (NanoString Technologies; Seattle, WA). Univariate and multivariate Cox proportional hazards regression models as well as receiver operating characteristics were used to identify statistically significant miRNAs that were predictive of biochemical recurrence. Results Eighty-eight miRNAs were identified to be significantly (p<0.05) associated with biochemical failure post-prostatectomy by multivariate analysis and clustered into two groups that correlated with early (≤ 36 months) versus late recurrence (>36 months). Nine miRNAs were identified to be significantly (p<0.05) associated by multivariate analysis with biochemical failure after salvage radiation therapy. A new predictive model for biochemical recurrence after salvage radiation therapy was developed; this model consisted of miR-4516 and miR-601 together with Gleason score and lymph node status. The area under the ROC curve (AUC) was improved to 0.83 compared to that of 0.66 for Gleason score and lymph node status alone. Conclusion miRNA signatures can distinguish patients who fail soon after radical prostatectomy versus late failures, giving insight into which patients may need adjuvant therapy. Notably, two novel miRNAs (miR-4516 and miR-601) were identified that significantly improve prediction of biochemical failure post-salvage radiation therapy compared to clinico-histopathological factors, supporting the use of miRNAs within clinically used predictive models. Both findings warrant further validation studies. PMID:25760964
Cardiovascular risks associated with abacavir and tenofovir exposure in HIV-infected persons.
Choi, Andy I; Vittinghoff, Eric; Deeks, Steven G; Weekley, Cristin C; Li, Yongmei; Shlipak, Michael G
2011-06-19
Abacavir use has been associated with cardiovascular risk, but it is unknown whether this association may be partly explained by patients with kidney disease being preferentially treated with abacavir to avoid tenofovir. Our objective was to compare associations of abacavir and tenofovir with cardiovascular risks in HIV-infected veterans. Cohort study of 10 931 HIV-infected patients initiating antiretroviral therapy in the Veterans Health Administration from 1997 to 2007, using proportional hazards survival regression. Primary predictors were exposure to abacavir or tenofovir within the past 6 months, compared with no exposure to these drugs, respectively. Outcomes were time to first atherosclerotic cardiovascular event, defined as coronary, cerebrovascular, or peripheral arterial disease; and time to incident heart failure. Over 60 588 person-years of observation, there were 501 cardiovascular and 194 heart failure events. Age-standardized event rates among abacavir and tenofovir users were 12.5 versus 8.2 per 1000 person-years for cardiovascular disease, and 3.9 and 3.7 per 1000 person-years for heart failure, respectively. In multivariate-adjusted models, including time-updated measurements of kidney function, recent abacavir use was significantly associated with incident cardiovascular disease [hazard ratio 1.48, 95% confidence interval (CI) 1.08-2.04]; the association was similar but nonsignificant for heart failure (1.45, 0.85-2.47). In contrast, recent tenofovir use was significantly associated with heart failure (1.82, 1.02-3.24), but not with cardiovascular events (0.78, 0.52-1.16). Recent abacavir exposure was independently associated with increased risk for cardiovascular events. We also observed an association between recent tenofovir exposure and heart failure, which needs to be confirmed in future studies.
Cardiovascular risks associated with abacavir and tenofovir exposure in HIV-infected persons
Choi, Andy I.; Vittinghoff, Eric; Deeks, Steven G.; Weekley, Cristin C.; Li, Yongmei; Shlipak, Michael G.
2014-01-01
Objective Abacavir use has been associated with cardiovascular risk, but it is unknown whether this association may be partly explained by patients with kidney disease being preferentially treated with abacavir to avoid tenofovir. Our objective was to compare associations of abacavir and tenofovir with cardiovascular risks in HIV-infected veterans. Design Cohort study of 10 931 HIV-infected patients initiating antiretroviral therapy in the Veterans Health Administration from 1997 to 2007, using proportional hazards survival regression. Methods Primary predictors were exposure to abacavir or tenofovir within the past 6 months, compared with no exposure to these drugs, respectively. Outcomes were time to first atherosclerotic cardiovascular event, defined as coronary, cerebrovascular, or peripheral arterial disease; and time to incident heart failure. Results Over 60 588 person-years of observation, there were 501 cardiovascular and 194 heart failure events. Age-standardized event rates among abacavir and tenofovir users were 12.5 versus 8.2 per 1000 person-years for cardiovascular disease, and 3.9 and 3.7 per 1000 person-years for heart failure, respectively. In multivariate-adjusted models, including time-updated measurements of kidney function, recent abacavir use was significantly associated with incident cardiovascular disease [hazard ratio 1.48, 95% confidence interval (CI) 1.08–2.04]; the association was similar but nonsignificant for heart failure (1.45, 0.85–2.47). In contrast, recent tenofovir use was significantly associated with heart failure (1.82, 1.02–3.24), but not with cardiovascular events (0.78, 0.52–1.16). Conclusion Recent abacavir exposure was independently associated with increased risk for cardiovascular events. We also observed an association between recent tenofovir exposure and heart failure, which needs to be confirmed in future studies. PMID:21516027
Gotsman, Israel; Ezra, Orly; Hirsh Raccah, Bruria; Admon, Dan; Lotan, Chaim; Dekeyser Ganz, Freda
2017-08-01
Many patients with heart failure need anticoagulants, including warfarin. Good control is particularly challenging in heart failure patients, with <60% of international normalized ratio (INR) measurements in the therapeutic range, thereby increasing the risk of complications. This study aimed to evaluate the effect of a patient-specific tailored intervention on anticoagulation control in patients with heart failure. Patients with heart failure taking warfarin therapy (n = 145) were randomized to either standard care or a 1-time intervention assessing potential risk factors for lability of INR, in which they received patient-specific instructions. Time in therapeutic range (TTR) using Rosendaal's linear model was assessed 3 months before and after the intervention. The patient-tailored intervention significantly increased anticoagulation control. The median TTR levels before intervention were suboptimal in the interventional and control groups (53% vs 45%, P = .14). After intervention the median TTR increased significantly in the interventional group compared with the control group (80% [interquartile range, 62%-93%] vs 44% [29%-61%], P <.0001). The intervention resulted in a significant improvement in the interventional group before versus after intervention (53% vs 80%, P <.0001) but not in the control group (45% vs 44%, P = .95). The percentage of patients with a TTR ≥60%, considered therapeutic, was substantially higher in the interventional group: 79% versus 25% (P <.0001). The INR variability (standard deviation of each patient's INR measurements) decreased significantly in the interventional group, from 0.53 to 0.32 (P <.0001) after intervention but not in the control group. Patient-specific tailored intervention significantly improves anticoagulation therapy in patients with heart failure. Copyright © 2017 Elsevier Inc. All rights reserved.
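Rosendaal's method assumes the INR changes linearly between consecutive measurements and counts the fraction of interpolated person-time spent inside the therapeutic range. The sketch below is a generic implementation with a 2.0-3.0 target range; the measurement dates and INR values are invented, and this is not the study's code.

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.
    `days` are measurement dates (in days), `inrs` the matching INR values."""
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
            continue
        # Fraction of the segment for which the interpolated INR lies in range.
        lo_t = (low - i0) / (i1 - i0)
        hi_t = (high - i0) / (i1 - i0)
        t_enter, t_exit = sorted((lo_t, hi_t))
        in_range += span * max(0.0, min(1.0, t_exit) - max(0.0, t_enter))
    return in_range / total if total else float("nan")

# Example: four INR measurements over 8 weeks.
print(rosendaal_ttr([0, 14, 28, 56], [1.8, 2.4, 3.4, 2.6]))
```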
French, David; Noroozi, Mehdi; Shariati, Batoul; Larjava, Hannu
2016-01-01
The aim of this retrospective study was to investigate whether self-reported allergy to penicillin may contribute to a higher rate of postsurgical infection and implant failure. This retrospective, non-interventional, open cohort study reports on implant survival and infection complications of 5,576 implants placed in private practice by one periodontist, and includes 4,132 implants that were followed for at least 1 year. Logistic regression was applied to examine the relationship between self-reported allergy to penicillin and implant survival, while controlling for potential confounders such as smoking, implant site, bone augmentation, loading protocol, immediate implantation, and bone level at baseline. The cumulative survival rate (CSR) was calculated according to the life table method and the Cox proportional hazard model was fitted to the data. Of the 5,106 implants placed in patients taking penicillin, 0.8% failed, while 2.1% of the 470 implants placed in patients with self-reported allergy to penicillin failed (P = .002). Odds of failure for implants placed in penicillin-allergic patients were 3.1 times higher than in non-allergic patients. For immediate implant placement, penicillin-allergic patients had a failure rate 10 times higher than the non-allergic cohort. The proportion of implant failures occurring within 6 months of implantation was 80% in the penicillin-allergic group versus 54% in the non-allergic group. Of the 48 implant sites showing postoperative infection, penicillin-allergic patients had an infection rate of 3.4% (n = 16/470) versus 0.6% in the non-allergic group (n = 32/5,106) (P < .05). Self-reported penicillin allergy was associated with a higher rate of infection, and primarily affected early implant failure.
Onoya, Dorina; Sineke, Tembeka; Brennan, Alana T.; Long, Lawrence; Fox, Matthew P.
2017-01-01
Objectives: We assessed the association between the timing of pregnancy with the risk of postpartum virologic failure and loss from HIV care in South Africa. Design: This is a retrospective cohort study of 6306 HIV-positive women aged 15–49 at antiretroviral therapy (ART) initiation, initiated on ART between January 2004 and December 2013 in Johannesburg, South Africa. Methods: The incidence of virologic failure (two consecutive viral load measurements of >1000 copies/ml) and loss to follow-up (>3 months late for a visit) during 24 months postpartum were assessed using Cox proportional hazards modelling. Results: The rate of postpartum virologic failure was higher following an incident pregnancy on ART [adjusted hazard ratio 1.8, 95% confidence interval (CI): 1.1–2.7] than among women who initiated ART during pregnancy. This difference was sustained among women with CD4+ cell count less than 350 cells/μl at delivery (adjusted hazard ratio 1.8, 95% CI: 1.1–3.0). Predictors of postpartum virologic failure were being viremic, longer time on ART, being 25 years old or younger, low CD4+ cell count and anaemia at delivery, as well as initiating ART on a stavudine- or abacavir-containing regimen. There was no difference in postpartum loss to follow-up rates between the incident pregnancies group (hazard ratio 0.9, 95% CI: 0.7–1.1) and those who initiated ART in pregnancy. Conclusion: The risk of virologic failure remains high among postpartum women, particularly those who conceive on ART. The results highlight the need to provide adequate support for HIV-positive women with fertility intention after ART initiation and to strengthen monitoring and retention efforts for postpartum women to sustain the benefits of ART. PMID:28463877
Onoya, Dorina; Sineke, Tembeka; Brennan, Alana T; Long, Lawrence; Fox, Matthew P
2017-07-17
We assessed the association between the timing of pregnancy with the risk of postpartum virologic failure and loss from HIV care in South Africa. This is a retrospective cohort study of 6306 HIV-positive women aged 15-49 at antiretroviral therapy (ART) initiation, initiated on ART between January 2004 and December 2013 in Johannesburg, South Africa. The incidence of virologic failure (two consecutive viral load measurements of >1000 copies/ml) and loss to follow-up (>3 months late for a visit) during 24 months postpartum were assessed using Cox proportional hazards modelling. The rate of postpartum virologic failure was higher following an incident pregnancy on ART [adjusted hazard ratio 1.8, 95% confidence interval (CI): 1.1-2.7] than among women who initiated ART during pregnancy. This difference was sustained among women with CD4 cell count less than 350 cells/μl at delivery (adjusted hazard ratio 1.8, 95% CI: 1.1-3.0). Predictors of postpartum virologic failure were being viremic, longer time on ART, being 25 years old or younger, low CD4 cell count and anaemia at delivery, as well as initiating ART on a stavudine- or abacavir-containing regimen. There was no difference in postpartum loss to follow-up rates between the incident pregnancies group (hazard ratio 0.9, 95% CI: 0.7-1.1) and those who initiated ART in pregnancy. The risk of virologic failure remains high among postpartum women, particularly those who conceive on ART. The results highlight the need to provide adequate support for HIV-positive women with fertility intention after ART initiation and to strengthen monitoring and retention efforts for postpartum women to sustain the benefits of ART.
Kantor, Rami; Smeaton, Laura; Vardhanabhuti, Saran; Hudelson, Sarah E.; Wallis, Carol L.; Tripathy, Srikanth; Morgado, Mariza G.; Saravanan, Shanmugham; Balakrishnan, Pachamuthu; Reitsma, Marissa; Hart, Stephen; Mellors, John W.; Halvas, Elias; Grinsztejn, Beatriz; Hosseinipour, Mina C.; Kumwenda, Johnstone; La Rosa, Alberto; Lalloo, Umesh G.; Lama, Javier R.; Rassool, Mohammed; Santos, Breno R.; Supparatpinyo, Khuanchai; Hakim, James; Flanigan, Timothy; Kumarasamy, Nagalingeswaran; Campbell, Thomas B.; Eshleman, Susan H.
2015-01-01
Background. Evaluation of pretreatment HIV genotyping is needed globally to guide treatment programs. We examined the association of pretreatment (baseline) drug resistance and subtype with virologic failure in a multinational, randomized clinical trial that evaluated 3 antiretroviral treatment (ART) regimens and included resource-limited setting sites. Methods. Pol genotyping was performed in a nested case-cohort study including 270 randomly sampled participants (subcohort), and 218 additional participants failing ART (case group). Failure was defined as confirmed viral load (VL) >1000 copies/mL. Cox proportional hazards models estimated resistance–failure association. Results. In the representative subcohort (261/270 participants with genotypes; 44% women; median age, 35 years; median CD4 cell count, 151 cells/µL; median VL, 5.0 log10 copies/mL; 58% non-B subtypes), baseline resistance occurred in 4.2%, evenly distributed among treatment arms and subtypes. In the subcohort and case groups combined (466/488 participants with genotypes), used to examine the association between resistance and treatment failure, baseline resistance occurred in 7.1% (9.4% with failure, 4.3% without). Baseline resistance was significantly associated with shorter time to virologic failure (hazard ratio [HR], 2.03; P = .035), and after adjusting for sex, treatment arm, sex–treatment arm interaction, pretreatment CD4 cell count, baseline VL, and subtype, was still independently associated (HR, 2.1; P = .05). Compared with subtype B, subtype C infection was associated with higher failure risk (HR, 1.57; 95% confidence interval [CI], 1.04–2.35), whereas non-B/C subtype infection was associated with longer time to failure (HR, 0.47; 95% CI, .22–.98). Conclusions. In this global clinical trial, pretreatment resistance and HIV-1 subtype were independently associated with virologic failure. Pretreatment genotyping should be considered whenever feasible. Clinical Trials Registration. NCT00084136. PMID:25681380
Liang, Hongxia; Huang, Ke; Su, Teng; Li, Zhenhua; Hu, Shiqi; Dinh, Phuong-Uyen; Wrona, Emily A; Shao, Chen; Qiao, Li; Vandergriff, Adam C; Hensley, M Taylor; Cores, Jhon; Allen, Tyler; Zhang, Hongyu; Zeng, Qinglei; Xing, Jiyuan; Freytes, Donald O; Shen, Deliang; Yu, Zujiang; Cheng, Ke
2018-06-26
Acute liver failure is a critical condition characterized by global hepatocyte death and often requires liver transplantation. Such treatment is largely limited by donor organ shortage. Stem cell therapy offers a promising option to patients with acute liver failure. Yet, therapeutic efficacy and feasibility are hindered by the delivery route and the storage instability of live cell products. We fabricated a nanoparticle that carries the beneficial regenerative factors from mesenchymal stem cells and further coated it with red blood cell membranes to increase blood stability. Unlike uncoated nanoparticles, these particles promote liver cell proliferation in vitro and show lower internalization by macrophage cells. After intravenous delivery, these artificial stem cell analogs are able to remain in the liver and mitigate carbon tetrachloride-induced liver failure in a mouse model, as gauged by histology and liver function tests. Our technology provides an innovative and off-the-shelf strategy to treat liver failure.
Manias, Elizabeth; Geddes, Fiona; Watson, Bernadette; Jones, Dorothy; Della, Phillip
2015-01-01
In the emergency department, communication failures occur in clinical handover due to the urgent, changing and unpredictable nature of care provision. We present a case report of a female patient who was assaulted, and identify how various factors interacted to produce communication failures at multiple clinical handovers, leading to a poor patient outcome. Several handovers created many communication failures at diverse time points. The bedside medical handover produced misunderstandings during the verbal exchange of information between emergency department consultants and junior doctors, and there was miscommunication involving plastic registrars. There was a failure to adequately inform the general practitioner and the patient about follow-up care after discharge. Deficiencies in communication also occurred when conveying changes in an investigative report. Communication could be improved by dividing the handover between a quiet room and the bedside, ensuring multiple sources of information are used and encouraging role-modelling behaviours for junior clinicians.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
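A minimal sketch of a system-level chi-squared test of this kind is shown below: the normalized sum of squared pressure residuals is compared against a chi-squared threshold for the number of ports. The 25-port layout, noise level, and simulated residuals are assumptions for illustration and do not reproduce the HI-FADS algorithm.

```python
import numpy as np
from scipy.stats import chi2

def chi_squared_fault_flag(residuals, sigma, alpha=1e-3):
    """Flag a system-level fault when the normalized residual sum of squares
    exceeds the chi-squared threshold for the number of ports."""
    stat = np.sum((residuals / sigma) ** 2)
    threshold = chi2.ppf(1.0 - alpha, df=residuals.size)
    return stat > threshold, stat, threshold

rng = np.random.default_rng(1)
sigma = 15.0                                  # assumed port noise level, Pa
healthy = rng.normal(0.0, sigma, size=25)     # 25-port residual vector, no fault
failed = healthy.copy()
failed[7] += 400.0                            # bias injected on one port
print(chi_squared_fault_flag(healthy, sigma)[0], chi_squared_fault_flag(failed, sigma)[0])
```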
Statistical analysis of lithium iron sulfide status cell cycle life and failure mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, E.C.; Battles, J.E.; Miller, W.E.
1983-08-01
A statistical model for the cycle-life testing of electrochemical cells was developed and verified experimentally. The Weibull distribution was selected to predict the end of life for a cell, based on a 20 percent loss of initial stabilized capacity or a decrease to less than 95 percent coulombic efficiency. Groups of 12 or more Li-alloy/FeS cells were cycled to determine the mean time to failure (MTTF) and also to identify the failure modes. The cells were all full-size electric vehicle cells with 150-350 A-hr capacity. The Weibull shape factors were determined and verified in predicting the number of cell failures in two 10-cell modules. The short-circuit failures in the cells with BN-felt and MgO powder separators were found to be caused by the formation of Li-Al protrusions that penetrated the BN-felt separators, and by the extrusion of active material at the edge of the electrodes.
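A minimal version of a Weibull end-of-life analysis can be written with scipy: fit a two-parameter Weibull distribution to observed cycles-to-failure and report the implied mean cycles to failure. The cycle counts below are invented, and the sketch ignores censored cells, which a full analysis of this kind would have to handle.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

# Hypothetical cycles-to-failure for a group of cells (placeholder data).
cycles_to_failure = np.array([310, 420, 480, 525, 560, 610, 640, 700, 760, 830, 905, 990])

# Two-parameter Weibull fit (location fixed at zero).
shape, _, scale = weibull_min.fit(cycles_to_failure, floc=0)
mttf = scale * gamma(1.0 + 1.0 / shape)   # mean of the Weibull distribution
print(f"shape={shape:.2f}  scale={scale:.0f}  MTTF={mttf:.0f} cycles")
```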
Grams, Morgan E; Sang, Yingying; Ballew, Shoshana H; Carrero, Juan Jesus; Djurdjev, Ognjenka; Heerspink, Hiddo J L; Ho, Kevin; Ito, Sadayoshi; Marks, Angharad; Naimark, David; Nash, Danielle M; Navaneethan, Sankar D; Sarnak, Mark; Stengel, Benedicte; Visseren, Frank L J; Wang, Angela Yee-Moon; Köttgen, Anna; Levey, Andrew S; Woodward, Mark; Eckardt, Kai-Uwe; Hemmelgarn, Brenda; Coresh, Josef
2018-06-01
Patients with chronic kidney disease and severely decreased glomerular filtration rate (GFR) are at high risk for kidney failure, cardiovascular disease (CVD) and death. Accurate estimates of risk and timing of these clinical outcomes could guide patient counseling and therapy. Therefore, we developed models using data of 264,296 individuals in 30 countries participating in the international Chronic Kidney Disease Prognosis Consortium with estimated GFR (eGFR) below 30 ml/min/1.73 m². Median participant eGFR and urine albumin-to-creatinine ratio were 24 ml/min/1.73 m² and 168 mg/g, respectively. Using competing-risk regression, random-effect meta-analysis, and Markov processes with Monte Carlo simulations, we developed two- and four-year models of the probability and timing of kidney failure requiring kidney replacement therapy (KRT), a non-fatal CVD event, and death according to age, sex, race, eGFR, albumin-to-creatinine ratio, systolic blood pressure, smoking status, diabetes mellitus, and history of CVD. Hypothetically applied to a 60-year-old white male with a history of CVD, a systolic blood pressure of 140 mmHg, an eGFR of 25 ml/min/1.73 m² and a urine albumin-to-creatinine ratio of 1000 mg/g, the four-year model predicted a 17% chance of survival after KRT, a 17% chance of survival after a CVD event, a 4% chance of survival after both, and a 28% chance of death (9% as a first event, and 19% after another CVD event or KRT). Risk predictions for KRT showed good overall agreement with the published kidney failure risk equation, and both models were well calibrated with observed risk. Thus, commonly-measured clinical characteristics can predict the timing and occurrence of clinical outcomes in patients with severely decreased GFR. Copyright © 2018 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.
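The Markov/Monte Carlo component can be illustrated with a toy discrete-time simulation of time to first event for a single risk profile. The monthly transition probabilities below are arbitrary placeholders, not the consortium's fitted rates, and real multistate models would allow sequences of events rather than stopping at the first.

```python
import numpy as np

rng = np.random.default_rng(2)
states = ["CKD", "KRT", "CVD", "Death"]
# Illustrative monthly probabilities of: no event, start of KRT, non-fatal CVD, death.
monthly_p = np.array([0.975, 0.012, 0.008, 0.005])

def first_event_within(months=48, n=100_000):
    """Monte Carlo estimate of the probability that each event occurs first
    within the horizon, for one hypothetical patient profile."""
    draws = rng.choice(4, size=(n, months), p=monthly_p)
    has_event = (draws > 0).any(axis=1)
    first = draws[np.arange(n), np.argmax(draws > 0, axis=1)]
    return {s: float(np.mean(has_event & (first == i)))
            for i, s in enumerate(states) if i > 0}

print(first_event_within())   # e.g. four-year first-event probabilities
```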
Bleske, Barry E; Zineh, Issam; Hwang, Hyun Seok; Welder, Gregory J; Ghannam, Michael M J; Boluyt, Marvin O
2007-12-01
Hawthorn extract (Crataegus sp.), a botanical complementary and alternative medicine, is often used to treat heart failure. The mechanism(s) by which hawthorn extract may treat heart failure is unknown but may theoretically include immunological effects. Therefore, the purpose of this study is to determine the effect of hawthorn extract on the immunomodulatory response in a pressure overload model of heart failure. A total of 62 male Sprague-Dawley rats were randomized to either aortic constriction + vehicle (AC; n=15), aortic constriction + hawthorn 1.3 mg/kg (HL, n=17), aortic constriction + hawthorn 13 mg/kg (HM, n=15), or aortic constriction + hawthorn 130 mg/kg (HH, n=15). Six months after the surgical procedure, animals were sacrificed and plasma samples obtained for the measurement of the following immunomodulatory markers: interleukin (IL)-1β, IL-2, IL-6, and IL-10; and leptin. The mortality rate following 6 months of aortic constriction was 40% in the AC group compared to 41%, 60%, and 53% for the HL, HM, and HH groups respectively (P>0.05 compared to AC). Aortic constriction produced a similar increase in the left ventricle/body weight ratio for all groups. Hawthorn extract had no effect on the immunomodulatory markers measured in this study, although there appeared to be a trend suggesting suppression of IL-2 plasma concentrations. In this animal model of heart failure, hawthorn extract failed to significantly affect the immunomodulatory response characterized after 6 months of pressure overload, at a time when approximately 50% mortality was exhibited. Mechanisms other than immunological may better define hawthorn's effect in treating heart failure.
Physical Exercise and Patients with Chronic Renal Failure: A Meta-Analysis.
Qiu, Zhenzhen; Zheng, Kai; Zhang, Haoxiang; Feng, Ji; Wang, Lizhi; Zhou, Hao
2017-01-01
Chronic renal failure is a severe clinical problem with a significant socioeconomic impact worldwide, and hemodialysis is an important way to maintain patients' health state, but it seems difficult to achieve improvement in a short time. Considering this, the aim of our research is to update and evaluate the effects of exercise on the health of patients with chronic renal failure. Databases were searched for relevant studies in English or Chinese, and the association between physical exercise and the health state of patients with chronic renal failure was investigated. A random-effects model was used to compare physical function and capacity in exercise and control groups. Exercise is helpful in ameliorating blood pressure in patients with renal failure and significantly reduces VO2 in patients with renal failure. The results of subgroup analyses show that, among patients aged >50 years, physical activity can significantly reduce blood pressure in patients with renal failure. An activity program containing warm-up, strength, and aerobic exercises has benefits for blood pressure among sick people and improves their maximal oxygen consumption level. These can help patients in physical function and aerobic capacity and may give them further benefits.
Physical Exercise and Patients with Chronic Renal Failure: A Meta-Analysis
Qiu, Zhenzhen; Zheng, Kai; Zhang, Haoxiang; Feng, Ji; Wang, Lizhi
2017-01-01
Chronic renal failure is a severe clinical problem with a significant socioeconomic impact worldwide, and hemodialysis is an important way to maintain patients' health state, but it seems difficult to achieve improvement in a short time. Considering this, the aim of our research is to update and evaluate the effects of exercise on the health of patients with chronic renal failure. Databases were searched for relevant studies in English or Chinese, and the association between physical exercise and the health state of patients with chronic renal failure was investigated. A random-effects model was used to compare physical function and capacity in exercise and control groups. Exercise is helpful in ameliorating blood pressure in patients with renal failure and significantly reduces VO2 in patients with renal failure. The results of subgroup analyses show that, among patients aged >50 years, physical activity can significantly reduce blood pressure in patients with renal failure. An activity program containing warm-up, strength, and aerobic exercises has benefits for blood pressure among sick people and improves their maximal oxygen consumption level. These can help patients in physical function and aerobic capacity and may give them further benefits. PMID:28316986
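The pooling step (a random-effects model over study-level effects) can be sketched with the DerSimonian-Laird estimator; the per-trial mean differences and standard errors below are invented placeholders, not values from the meta-analysis.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooling of study-level effect sizes (e.g. mean differences
    in systolic blood pressure) using the DerSimonian-Laird tau^2 estimator."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = 1.0 / (ses ** 2 + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical per-trial mean differences in blood pressure (mmHg) and their SEs.
print(dersimonian_laird([-6.0, -3.5, -8.2, -1.0], [2.1, 1.8, 3.0, 2.5]))
```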
Comparison between four dissimilar solar panel configurations
NASA Astrophysics Data System (ADS)
Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.
2017-12-01
Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and comparisons of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was carried out using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure and the steady-state availability were derived, and a cost-benefit analysis was performed for the comparison. A ranking method was used to determine the optimal configuration of the systems. Analytical and numerical solutions for system availability and mean time to system failure were obtained, and configuration I was found to be the optimal configuration.
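As a minimal sketch of this kind of Markov (Chapman-Kolmogorov) calculation, the snippet below computes the mean time to system failure and the steady-state availability for a generic two-unit parallel configuration with a single repair crew; the failure and repair rates are arbitrary and are not taken from the study.

```python
import numpy as np

lam, mu = 0.001, 0.1   # illustrative failure and repair rates (per hour)

# States for two identical units in parallel: 0 = both up, 1 = one up, 2 = both down.
Q = np.array([[-2 * lam,      2 * lam,  0.0],
              [      mu, -(lam + mu),   lam],
              [     0.0,          mu,   -mu]])   # single repair crew

# Mean time to system failure: restrict the generator to the transient states {0, 1}
# and solve -Q_T m = 1 (fundamental-matrix result of the Chapman-Kolmogorov equations).
QT = Q[:2, :2]
mtsf = np.linalg.solve(-QT, np.ones(2))[0]

# Steady-state availability: stationary distribution pi with pi Q = 0, sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
print(f"MTSF = {mtsf:.0f} h, availability = {pi[0] + pi[1]:.6f}")
```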
Track inspection planning and risk measurement analysis.
DOT National Transportation Integrated Search
2014-11-01
This project models track inspection operations on a railroad network and discusses how the inspection results can be used to measure the risk of failure on the tracks. In particular, the inspection times of the tracks, inspection frequency of the ...
A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft
NASA Technical Reports Server (NTRS)
Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.
Tsunamis caused by submarine slope failures along western Great Bahama Bank
Schnyder, Jara S.D.; Eberli, Gregor P.; Kirby, James T.; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-01-01
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with terminal landslide velocity of 20 ms−1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 and 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events. PMID:27811961
Tsunamis caused by submarine slope failures along western Great Bahama Bank
NASA Astrophysics Data System (ADS)
Schnyder, Jara S. D.; Eberli, Gregor P.; Kirby, James T.; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-11-01
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with terminal landslide velocity of 20 ms-1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 and 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events.
Tsunamis caused by submarine slope failures along western Great Bahama Bank.
Schnyder, Jara S D; Eberli, Gregor P; Kirby, James T; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul
2016-11-04
Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with terminal landslide velocity of 20 ms-1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 and 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events.
Using Rollback Avoidance to Mitigate Failures in Next-Generation Extreme-Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Scott N.
2016-05-01
High-performance computing (HPC) systems enable scientists to numerically model complex phenomena in many important physical systems. The next major milestone in the development of HPC systems is the construction of the first supercomputer capable of executing more than an exaflop, 10^18 floating point operations per second. On systems of this scale, failures will occur much more frequently than on current systems. As a result, resilience is a key obstacle to building next-generation extreme-scale systems. Coordinated checkpointing is currently the most widely-used mechanism for handling failures on HPC systems. Although coordinated checkpointing remains effective on current systems, increasing the scale of today's systems to build next-generation systems will increase the cost of fault tolerance as more and more time is taken away from the application to protect against or recover from failure. Rollback avoidance techniques seek to mitigate the cost of checkpoint/restart by allowing an application to continue its execution rather than rolling back to an earlier checkpoint when failures occur. These techniques include failure prediction and preventive migration, replicated computation, fault-tolerant algorithms, and software-based memory fault correction. In this thesis, we examine how rollback avoidance techniques can be used to address failures on extreme-scale systems. Using a combination of analytic modeling and simulation, we evaluate the potential impact of rollback avoidance on these systems. We then present a novel rollback avoidance technique that exploits similarities in application memory. Finally, we examine the feasibility of using this technique to protect against memory faults in kernel memory.
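To make the checkpoint-overhead trade-off concrete, the sketch below uses Young's classic first-order approximation for the optimal checkpoint interval, tau ≈ sqrt(2·delta·MTBF), together with a rough estimate of the machine-time fraction lost to checkpoints plus expected rework. This is a standard analytic model offered for context only, not the rollback-avoidance technique developed in the thesis, and the checkpoint cost and MTBF values are illustrative.

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order optimal checkpoint interval: tau ~ sqrt(2 * delta * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def wasted_fraction(interval_s, checkpoint_cost_s, mtbf_s):
    """Rough first-order fraction of machine time lost to checkpoint overhead
    plus the expected rework after a failure (half an interval plus one restart)."""
    checkpoint_overhead = checkpoint_cost_s / interval_s
    expected_rework = (interval_s / 2.0 + checkpoint_cost_s) / mtbf_s
    return checkpoint_overhead + expected_rework

# Illustrative extreme-scale numbers: 5-minute checkpoints, 4-hour system MTBF.
tau = young_interval(300.0, 4 * 3600.0)
print(f"interval ~ {tau / 60:.1f} min, "
      f"wasted fraction ~ {wasted_fraction(tau, 300.0, 4 * 3600.0):.2%}")
```

Even with an optimal interval, a sizable fraction of machine time is lost at these parameter values, which is the motivation the thesis gives for avoiding rollback altogether.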
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures is tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
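A minimal Monte Carlo sketch of this kind of simulation, for a single repairable block with exponential failure and repair distributions, is shown below. It is a generic illustration rather than ETARA itself, and the MTBF/MTTR values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_block(mission_h=8760.0, mtbf_h=1200.0, mttr_h=48.0):
    """One Monte Carlo history of a single repairable block with exponential
    failure and repair times; returns the uptime fraction and number of failures."""
    t, up_time, failures = 0.0, 0.0, 0
    while t < mission_h:
        run = rng.exponential(mtbf_h)            # time to next failure
        up_time += min(run, mission_h - t)
        t += run
        if t >= mission_h:
            break
        failures += 1
        t += rng.exponential(mttr_h)             # repair (downtime) interval
    return up_time / mission_h, failures

results = np.array([simulate_block() for _ in range(5000)])
print(f"mean availability = {results[:, 0].mean():.3f}, "
      f"mean failures/yr = {results[:, 1].mean():.2f}")
```

With these rates the simulated availability should be close to the analytic steady-state value MTBF/(MTBF+MTTR) ≈ 0.962, which is a useful sanity check on the sampling.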
Chronic Heart Failure Follow-up Management Based on Agent Technology
Safdari, Reza
2015-01-01
Objectives Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. Methods This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Results Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. Conclusions The proposed multi-agent system has the ability to learn and thus improve itself. Implementation of this model with more and various interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making. PMID:26618038
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event's risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
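For context, the simplest parametric special case of this family of models (the beta-factor model) splits a component's total failure probability into an independent part and a common-cause part that fails the whole group. The sketch below shows only that textbook special case with made-up numbers; it is not the Basic Parameter Model in full, nor the Bayesian-network framework proposed in the paper.

```python
def beta_factor_split(q_total, beta):
    """Beta-factor treatment of common-cause failure: a fraction `beta` of a
    component's total failure probability is attributed to a common cause that
    fails the whole group, and the remainder to independent failure."""
    q_ccf = beta * q_total
    q_ind = (1.0 - beta) * q_total
    return q_ind, q_ccf

# Two redundant pumps, each with total failure probability 1e-3 and beta = 0.05
# (illustrative values only):
q_ind, q_ccf = beta_factor_split(1e-3, 0.05)
print("P(both fail) ~", q_ind ** 2 + q_ccf)   # independent pair plus common cause
```

Even in this toy case the common-cause term dominates the pair-failure probability, which is why leaving such probabilities at their baseline values after an observed failure can understate an event's risk significance.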
NASA Astrophysics Data System (ADS)
Steger, Stefan; Schmaltz, Elmar; Glade, Thomas
2017-04-01
Empirical landslide susceptibility maps spatially depict the areas where future slope failures are likely due to specific environmental conditions. The underlying statistical models are based on the assumption that future landsliding is likely to occur under similar circumstances (e.g. topographic conditions, lithology, land cover) as past slope failures. This principle is operationalized by applying a supervised classification approach (e.g. a regression model with a binary response: landslide presence/absence) that enables discrimination between conditions that favored past landslide occurrences and the circumstances typical for landslide absences. The derived empirical relation is then transferred to each spatial unit of an area. The literature reveals that the specific topographic conditions representative of landslide presences are frequently extracted from derivatives of digital terrain models at locations where past landslides were mapped. The underlying morphology-based landslide identification is possible because the topography at a specific locality usually changes after landslide occurrence (e.g. hummocky surface, concave and steep scarp). In a strict sense, this implies that topographic predictors used within conventional statistical landslide susceptibility models relate to post-failure topographic conditions - and not to the required pre-failure situation. This study examines the assumption that models calibrated on the basis of post-failure topographies may not be appropriate to predict future landslide locations, because (i) post-failure and pre-failure topographic conditions may differ and (ii) areas where future landslides will occur do not yet exhibit such a distinct post-failure morphology. The study was conducted for an area located in the Walgau region (Vorarlberg, western Austria), where a detailed inventory of shallow landslides was available. The methodology comprised multiple systematic comparisons of models generated on the basis of post-failure conditions (i.e. the standard approach) with models based on an approximated pre-failure topography. Pre-failure topography was approximated by (i) erasing the area of mapped landslide polygons within a digital terrain model and (ii) filling these "empty" areas by interpolating elevation points located outside the mapped landslides. Landslide presence information was extracted from the respective landslide scarp locations while an equal number of randomly sampled points represented landslide absences. After an initial exploratory data analysis, mixed-effects logistic regression was applied to model landslide susceptibility on the basis of two predictor sets (post-failure versus pre-failure predictors). Furthermore, all analyses were conducted separately for five different modelling resolutions to examine whether the degree of generalization of the topographic parameters also influences how the respective models differ. Model evaluation was conducted by means of multiple procedures (i.e. odds ratios, k-fold cross validation, permutation-based variable importance, difference maps of predictions). The results revealed that models based on the highest resolutions (e.g. 1 m, 2.5 m) and post-failure topography performed best from a purely quantitative perspective.
A comparison of post-failure and pre-failure based models at identical modelling resolutions showed that validation results, modelled relationships, and prediction patterns tended to converge with decreasing raster resolution. Based on the results, we concluded that an approximation of pre-failure topography does not significantly contribute to improved landslide susceptibility models when (i) the underlying inventory consists of small landslide features and (ii) the models are based on coarse raster resolutions (e.g. 25 m). However, where modelling at high raster resolutions (e.g. 1 m, 2.5 m) is envisaged or the inventory mainly consists of larger events, a reconstruction of pre-failure conditions might be highly expedient, even though conventional validation results might indicate the opposite. Finally, we emphasize that topographic predictors that are highly useful for detecting past slope movements (e.g. roughness) are not necessarily valuable for predicting future slope instabilities.
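A heavily simplified sketch of the susceptibility-modelling step is given below, using a plain logistic regression with k-fold cross-validated AUROC in place of the mixed-effects model, and with synthetic terrain predictors; it illustrates the workflow only and is not the study's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
# Hypothetical terrain predictors sampled at landslide-presence and absence points.
slope = rng.uniform(5, 45, n)                     # slope angle, degrees
curvature = rng.normal(0, 1, n)                   # plan curvature, standardized
logit = 0.12 * (slope - 25) - 0.8 * curvature     # synthetic "true" relationship
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1 = landslide presence, 0 = absence

X = np.column_stack([slope, curvature])
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("k-fold AUROC:", auc.round(3), "mean =", auc.mean().round(3))
```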
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chao; Xu, Jun; Cao, Lei
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries
NASA Astrophysics Data System (ADS)
Zhang, Chao; Xu, Jun; Cao, Lei; Wu, Zenan; Santhanagopalan, Shriram
2017-07-01
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. The test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries
Zhang, Chao; Xu, Jun; Cao, Lei; ...
2017-05-05
The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.
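The combination of an elastic constitutive law with a scalar damage variable can be illustrated by a toy uniaxial model: linear elasticity up to a damage-initiation strain, followed by linear damage growth to complete failure. The modulus and strain limits below are invented for illustration and are not the calibrated electrode properties from the paper.

```python
import numpy as np

def damaged_stress(strain, E=500e6, eps_0=0.01, eps_f=0.03):
    """Uniaxial stress for a simple elastic-damage law: linear elasticity up to the
    damage-initiation strain eps_0, then a scalar damage variable grows linearly
    until complete failure at eps_f (all parameters are illustrative)."""
    strain = np.asarray(strain, float)
    d = np.clip((strain - eps_0) / (eps_f - eps_0), 0.0, 1.0)   # damage in [0, 1]
    return (1.0 - d) * E * strain

eps = np.linspace(0.0, 0.035, 8)
print(np.round(damaged_stress(eps) / 1e6, 2))   # stress in MPa, showing softening
```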
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and by numerical simulations (PFC(3D)). The model tests revealed that grain size distribution has a profound effect on the failure mode: failure of the medium-sand slope started with cracks at the crest and took the form of retrogressive toe sliding, while with an increasing fraction of fine particles in the soil the failure mode changed to fluidized flow. The discrete element method PFC(3D) avoids the continuum assumption of traditional mechanics and directly represents the granular character of the material. A numerical model using a coupled liquid-solid method was therefore developed to simulate the debris flow. Compared with the experimental results, the simulations indicated that the medium-sand slope failed by retrogressive toe sliding and the fine-sand slope by fluidized sliding. The simulation results are consistent with the model tests and theoretical analysis, confirming that grain size distribution causes the different failure behaviors of granular debris flows. This research should serve as a guide for exploring debris flow theory and improving debris flow prevention and mitigation.
A simplified fragility analysis of fan type cable stayed bridges
NASA Astrophysics Data System (ADS)
Khan, R. A.; Datta, T. K.; Ahmad, S.
2005-06-01
A simplified fragility analysis of fan type cable stayed bridges using the Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge support is considered to be a risk consistent response spectrum which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system. The analysis provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising due to the variation in ground motion, material property, modeling, method of analysis, ductility factor and damage concentration effect. Probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three span double plane symmetrical fan type cable stayed bridge of total span 689 m is used as an illustrative example. The fragility curves for the bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations provides a smaller probability of failure as compared to ground motion with very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility
Code of Federal Regulations, 2010 CFR
2010-10-01
.... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...
49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility
Code of Federal Regulations, 2011 CFR
2011-10-01
.... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...
49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility
Code of Federal Regulations, 2013 CFR
2013-10-01
.... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...
Promotion at Canadian Universities: The Intersection of Gender, Discipline, and Institution
ERIC Educational Resources Information Center
Ornstein, Michael; Stewart, Penni; Drakich, Janice
2007-01-01
Statistics Canada's annual census of full-time faculty at all Canadian universities, between 1984 to 1999, is used to measure the effect of gender, discipline, and institution on promotion from assistant to associate professor and from associate to full professor. Accelerated failure time models show that gender has some effect on rates of…
Uncertainty and Intelligence in Computational Stochastic Mechanics
NASA Technical Reports Server (NTRS)
Ayyub, Bilal M.
1996-01-01
Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should depend on the underlying level of damage and the uncertainties associated with its definition. A mathematical model for structural reliability assessment that includes both ambiguity and vagueness types of uncertainty was suggested to result in the likelihood of failure over a damage spectrum. The resulting structural reliability estimates properly represent the continuous transition from serviceability to strength limit states over the ultimate time exposure of the structure. In this section, a structural reliability assessment method based on a fuzzy definition of failure is suggested to meet these practical needs. A failure definition can be developed to indicate the relationship between failure level and structural response. In this fuzzy model, a subjective index is introduced to represent all levels of damage (or failure). This index can be interpreted as either a measure of failure level or a measure of a degree of belief in the occurrence of some performance condition (e.g., failure). The index allows expressing the transition state between complete survival and complete failure for some structural response based on subjective evaluation and judgment.
Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling
NASA Astrophysics Data System (ADS)
Bomarito, G. F.; Warner, D. H.
2015-01-01
Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, void and particle volume fraction, etc.). In this study, the stress state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D Finite Element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many loading orientations until failure. Material failure is assumed to occur in the weakest orientation, and so the orientation in which failure occurs first is considered as the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to over predict ductility. By excluding smaller nucleation events and load path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure in the model; emphasizing that its incorporation into micromechanical models is critical to developing quantitatively accurate physics-based ductile failure models.
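A small helper, written here as an illustration rather than code from the study, shows how the two quantities that define the imposed stress state in the abstract above are usually computed: the stress triaxiality (mean stress over von Mises stress) and the Lode parameter from the ordered principal stresses.

```python
import numpy as np

def stress_state(sigma):
    """Return stress triaxiality and Lode parameter for a symmetric 3x3 stress tensor.

    Triaxiality eta = sigma_m / sigma_vm and Lode parameter
    L = (2*s2 - s1 - s3) / (s1 - s3) with principal stresses s1 >= s2 >= s3.
    """
    sigma = np.asarray(sigma, dtype=float)
    s1, s2, s3 = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # ordered principal stresses
    sigma_m = (s1 + s2 + s3) / 3.0                           # mean (hydrostatic) stress
    sigma_vm = np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))
    return sigma_m / sigma_vm, (2.0 * s2 - s1 - s3) / (s1 - s3)

# Uniaxial tension should give triaxiality 1/3 and Lode parameter -1
print(stress_state(np.diag([100.0, 0.0, 0.0])))
```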
NASA Astrophysics Data System (ADS)
Darvish, Hoda; Nouri-Taleghani, Morteza; Shokrollahi, Amin; Tatar, Afshin
2015-11-01
With the growing demand for oil resources, increasing the rate of oil production becomes necessary. However, oil production declines with time as a result of pressure drop in the reservoir as well as the sealing of microscopic cracks and pores in the reservoir rock. Hydraulic fracturing is a common, high-performance method that is widely applied to oil and gas reservoirs. In this study, wells in the eastern, central, and western sections of a field are compared to identify the most suitable layer for hydraulic fracturing. First, elastic moduli were obtained under both dynamic and static conditions; then the uniaxial compressive strength (UCS), the types of shear and tensile failure, the most accurate failure model for the wells, the safe and stable mud window, the best zones and layers, and finally the reference pressures were determined as candidates for hydraulic fracturing. The shear failure types at the minimum and maximum ranges of the model and the tensile failure type were determined to be shear failure wide breakout (SWBO), shear narrow breakout (SNBO), and tensile vertical failure (TVER), respectively. The safe mud window (SMW) of the studied wells was almost the same in all three sectors of the field, and was determined to be 5200-8800 psi for the Ilam zone and 5800-10100 psi for the Sarvak zone. Initial fracture pressure ranges for the selected layers were determined to be 11,759-14,722, 11,910-14,164, and 11,848-14,953 psi for the eastern, central, and western wells, respectively. Thus, the western wells are the most suitable for hydraulic fracturing. Finally, it was concluded that the operation is more economical in the Sarvak zone and the western wells.
NASA Astrophysics Data System (ADS)
Liu, P. F.; Li, X. K.
2018-06-01
The purpose of this paper is to study micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model and interface failure is simulated using Xu and Needleman's cohesive model. Temperature dependent thermal expansion coefficients for epoxy matrix are used. FEA by developing numerical codes using ANSYS finite element software is divided into two steps: 1. Thermal residual stresses due to mismatch between fiber and matrix are calculated; 2. Longitudinal tensile load is further exerted on the RVE to perform progressive failure analysis of carbon fiber/epoxy composites. Numerical convergence is solved by introducing the viscous damping effect properly. The extended Mori-Tanaka method that considers interface debonding is used to get homogenized mechanical responses of composites. Three main results by FEA are obtained: 1. the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain is simulated. 2. the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted, compared with the results using the global and local load-sharing models based on the shear-lag theory. 3. the tensile strength of composite by FEA is compared with those by the shear-lag theory and experiments. Finally, the tensile stress-strain curve of composites by FEA is applied to the progressive failure analysis of composite pressure vessel.
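The abstract above couples a continuum damage mechanics model for the matrix with cohesive and Monte Carlo sub-models. The sketch below is only a generic one-dimensional isotropic damage update with an assumed linear-softening law and illustrative constants; it is not the authors' formulation, but it shows the basic (1 - d) stiffness-degradation idea that such models build on.

```python
import numpy as np

# Minimal 1D continuum damage sketch: stress = (1 - d) * E * strain, with a
# hypothetical linear-softening damage law driven by the largest strain seen so far.
E = 3.5e9        # matrix Young's modulus, Pa (illustrative value)
eps0 = 0.02      # damage initiation strain (assumed)
eps_f = 0.06     # strain at complete failure (assumed)

def damage(kappa):
    """Damage variable d in [0, 1] from the history variable kappa (max strain so far)."""
    if kappa <= eps0:
        return 0.0
    if kappa >= eps_f:
        return 1.0
    # linear softening between initiation and full failure
    return (eps_f / kappa) * (kappa - eps0) / (eps_f - eps0)

kappa = 0.0
for eps in np.linspace(0.0, 0.06, 13):      # monotonic tensile loading
    kappa = max(kappa, eps)                  # irreversible history variable
    d = damage(kappa)
    sigma = (1.0 - d) * E * eps
    print(f"strain={eps:.3f}  d={d:.2f}  stress={sigma/1e6:.1f} MPa")
```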
Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael
2017-07-18
With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.
Thermomechanical Controls on the Success and Failure of Continental Rift Systems
NASA Astrophysics Data System (ADS)
Brune, S.
2017-12-01
Studies of long-term continental rift evolution are often biased towards rifts that succeed in breaking the continent like the North Atlantic, South China Sea, or South Atlantic rifts. However, there are many prominent rift systems on Earth where activity stopped before the formation of a new ocean basin such as the North Sea, the West and Central African Rifts, or the West Antarctic Rift System. The factors controlling the success and failure of rifts can be divided into two groups: (1) Intrinsic processes - for instance frictional weakening, lithospheric thinning, shear heating or the strain-dependent growth of rift strength by replacing weak crust with strong mantle. (2) External processes - such as a change of plate divergence rate, the waning of a far-field driving force, or the arrival of a mantle plume. Here I use numerical and analytical modeling to investigate the role of these processes for the success and failure of rift systems. These models show that a change of plate divergence rate under constant force extension is controlled by the non-linearity of lithospheric materials. For successful rifts, a strong increase in divergence velocity can be expected to take place within a few million years, a prediction that agrees with independent plate tectonic reconstructions of major Mesozoic and Cenozoic ocean-forming rift systems. Another model prediction is that oblique rifting is mechanically favored over orthogonal rifting, which means that simultaneous deformation within neighboring rift systems of different obliquity and otherwise identical properties will lead to success and failure of the more and less oblique rift, respectively. This can be exemplified by the Cretaceous activity within the Equatorial Atlantic and the West African Rifts that led to the formation of a highly oblique oceanic spreading center and the failure of the West African Rift System. While in nature the circumstances of rift success or failure may be manifold, simplified numerical and analytical models allow the isolated analysis of various contributing factors and the definition of a characteristic time scale for each process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
Failure mode and effect analysis in blood transfusion: a proactive tool to reduce risks.
Lu, Yao; Teng, Fang; Zhou, Jie; Wen, Aiqing; Bi, Yutian
2013-12-01
The aim of blood transfusion risk management is to improve the quality of blood products and to assure patient safety. We utilize failure mode and effect analysis (FMEA), a tool employed for evaluating risks and identifying preventive measures to reduce the risks in blood transfusion. The failure modes and effects occurring throughout the whole process of blood transfusion were studied. Each failure mode was evaluated using three scores: severity of effect (S), likelihood of occurrence (O), and probability of detection (D). Risk priority numbers (RPNs) were calculated by multiplying the S, O, and D scores. The plan-do-check-act cycle was also used for continuous improvement. Analysis has showed that failure modes with the highest RPNs, and therefore the greatest risk, were insufficient preoperative assessment of the blood product requirement (RPN, 245), preparation time before infusion of more than 30 minutes (RPN, 240), blood transfusion reaction occurring during the transfusion process (RPN, 224), blood plasma abuse (RPN, 180), and insufficient and/or incorrect clinical information on request form (RPN, 126). After implementation of preventative measures and reassessment, a reduction in RPN was detected with each risk. The failure mode with the second highest RPN, namely, preparation time before infusion of more than 30 minutes, was shown in detail to prove the efficiency of this tool. FMEA evaluation model is a useful tool in proactively analyzing and reducing the risks associated with the blood transfusion procedure. © 2013 American Association of Blood Banks.
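A minimal sketch of the RPN calculation described above: each failure mode is scored for severity (S), occurrence (O) and detection (D), and RPN = S x O x D is used to rank the risks. The failure-mode names and scores below are illustrative placeholders, not the values reported in the study.

```python
# Illustrative FMEA scoring: severity (S), occurrence (O), detection (D) on 1-10 scales.
failure_modes = {
    "insufficient preoperative assessment": dict(S=7, O=7, D=5),
    "preparation time > 30 minutes":        dict(S=6, O=8, D=5),
    "reaction during transfusion":          dict(S=8, O=7, D=4),
}

def rpn(scores):
    """Risk priority number RPN = S * O * D."""
    return scores["S"] * scores["O"] * scores["D"]

# Rank failure modes by RPN, highest risk first
for mode, scores in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"{mode:40s} RPN = {rpn(scores)}")
```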
Structural Analysis for the American Airlines Flight 587 Accident Investigation: Global Analysis
NASA Technical Reports Server (NTRS)
Young, Richard D.; Lovejoy, Andrew E.; Hilburger, Mark W.; Moore, David F.
2005-01-01
NASA Langley Research Center (LaRC) supported the National Transportation Safety Board (NTSB) in the American Airlines Flight 587 accident investigation due to LaRC's expertise in high-fidelity structural analysis and testing of composite structures and materials. A Global Analysis Team from LaRC reviewed the manufacturer's design and certification procedures, developed finite element models and conducted structural analyses, and participated jointly with the NTSB and Airbus in subcomponent tests conducted at Airbus in Hamburg, Germany. The Global Analysis Team identified no significant or obvious deficiencies in the Airbus certification and design methods. Analysis results from the LaRC team indicated that the most-likely failure scenario was failure initiation at the right rear main attachment fitting (lug), followed by an unstable progression of failure of all fin-to-fuselage attachments and separation of the VTP from the aircraft. Additionally, analysis results indicated that failure initiates at the final observed maximum fin loading condition in the accident, when the VTP was subjected to loads that were at minimum 1.92 times the design limit load condition for certification. For certification, the VTP is only required to support loads of 1.5 times design limit load without catastrophic failure. The maximum loading during the accident was shown to significantly exceed the certification requirement. Thus, the structure appeared to perform in a manner consistent with its design and certification, and failure is attributed to VTP loads greater than expected.
Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.
Whitmore, G A; Schenkelberg, F
1997-01-01
Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
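A minimal simulation sketch in the spirit of the model described above: Wiener diffusion evaluated on a transformed time scale, with lifetime defined as the first crossing of a failure threshold. The power-law transform tau(t) = t**gamma and all parameter values are assumptions for illustration; the paper's Arrhenius stress acceleration is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, gamma = 0.8, 0.3, 0.6       # drift, diffusion, time-transform exponent (assumed)
threshold = 5.0                         # failure threshold on the degradation scale (assumed)
t = np.linspace(0.0, 200.0, 4001)
tau = t**gamma                          # transformed operational time

def simulate_path():
    """One degradation path X(t) = Wiener process observed on the tau(t) time scale."""
    dtau = np.diff(tau)
    increments = mu * dtau + sigma * np.sqrt(dtau) * rng.standard_normal(dtau.size)
    return np.concatenate(([0.0], np.cumsum(increments)))

lifetimes = []
for _ in range(2000):
    x = simulate_path()
    crossed = np.nonzero(x >= threshold)[0]
    lifetimes.append(t[crossed[0]] if crossed.size else np.inf)

lifetimes = np.array(lifetimes)
print("median simulated lifetime:", np.median(lifetimes[np.isfinite(lifetimes)]))
```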
Roychowdhury, D F; Hayden, A; Liepa, A M
2003-02-15
This retrospective analysis examined prognostic significance of health-related quality-of-life (HRQoL) parameters combined with baseline clinical factors on outcomes (overall survival, time to progressive disease, and time to treatment failure) in bladder cancer. Outcome and HRQoL (European Organization for Research and Treatment of Cancer Quality of Life Questionnaire C30) data were collected prospectively in a phase III study assessing gemcitabine and cisplatin versus methotrexate, vinblastine, doxorubicin, and cisplatin in locally advanced or metastatic bladder cancer. Prespecified baseline clinical factors (performance status, tumor-node-metastasis staging, visceral metastases [VM], alkaline phosphatase [AP] level, number of metastatic sites, prior radiotherapy, disease measurability, sex, time from diagnosis, and sites of disease) and selected HRQoL parameters (global QoL; all functional scales; symptoms: pain, fatigue, insomnia, dyspnea, anorexia) were evaluated using Cox's proportional hazards model. Factors with individual prognostic value (P <.05) on outcomes in univariate models were assessed for joint prognostic value in a multivariate model. A final model was developed using a backward selection strategy. Patients with baseline HRQoL were included (364 of 405, 90%). The final model predicted longer survival with low/normal AP levels, no VM, high physical functioning, low role functioning, and no anorexia. Positive prognostic factors for time to progressive disease were good performance status, low/normal AP levels, no VM, and minimal fatigue; for time to treatment failure, they were low/normal AP levels, minimal fatigue, and no anorexia. Global QoL was a significant predictor of outcome in univariate analyses but was not retained in the multivariate model. HRQoL parameters are independent prognostic factors for outcome in advanced bladder cancer; their prognostic importance needs further evaluation.
Norouzi, Jamshid; Yadollahpour, Ali; Mirbagheri, Seyed Ahmad; Mazdeh, Mitra Mahdavi; Hosseini, Seyed Ahmad
2016-01-01
Chronic kidney disease (CKD) is a covert disease. Accurate prediction of CKD progression over time is necessary for reducing its costs and mortality rates. The present study proposes an adaptive neurofuzzy inference system (ANFIS) for predicting the renal failure timeframe of CKD based on real clinical data. This study used 10-year clinical records of newly diagnosed CKD patients. The threshold value of 15 cc/kg/min/1.73 m(2) of glomerular filtration rate (GFR) was used as the marker of renal failure. A Takagi-Sugeno type ANFIS model was used to predict GFR values. Variables of age, sex, weight, underlying diseases, diastolic blood pressure, creatinine, calcium, phosphorus, uric acid, and GFR were initially selected for the prediction model. Weight, diastolic blood pressure, diabetes mellitus as underlying disease, and current GFR(t) showed significant correlation with GFRs and were selected as the inputs of the model. The comparisons of the predicted values with the real data showed that the ANFIS model could accurately estimate GFR variations in all sequential periods (Normalized Mean Absolute Error lower than 5%). Despite the high uncertainties of the human body and the dynamic nature of CKD progression, our model can accurately predict the GFR variations over long future periods.
NASA Astrophysics Data System (ADS)
Zuo, Ye; Sun, Guangjun; Li, Hongjing
2018-01-01
Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with an unseating failure control system under the action of near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The research results indicate that under near-fault ground motion, the seismic response of the curved bridge is strong. The unseating failure control system performs effectively in reducing the pounding force of the adjacent girders and the probability of deck unseating.
Orbiter post-tire failure and skid testing results
NASA Technical Reports Server (NTRS)
Daugherty, Robert H.; Stubbs, Sandy M.
1989-01-01
An investigation was conducted at the NASA Langley Research Center's Aircraft Landing Dynamics Facility (ALDF) to define the post-tire failure drag characteristics of the Space Shuttle Orbiter main tire and wheel assembly. Skid tests on various materials were also conducted to define their friction and wear rate characteristics under higher speed and bearing pressures than any previous tests. The skid tests were conducted to support a feasibility study of adding a skid to the orbiter strut between the main tires to protect an intact tire from failure due to overload should one of the tires fail. Roll-on-rim tests were conducted to define the ability of a standard and a modified orbiter main wheel to roll without a tire. Results of the investigation are combined into a generic model of strut drag versus time under failure conditions for inclusion into rollout simulators used to train the shuttle astronauts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Li, Yun; Levitt, Karl N.
Consensus is a fundamental approach to implementing fault-tolerant services through replication where there exists a tradeoff between the cost and the resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.
Nanowire growth process modeling and reliability models for nanodevices
NASA Astrophysics Data System (ADS)
Fathi Aghdam, Faranak
Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier in front of its rapid growth is our incapability of producing nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10 %), which makes fabrications of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays. This work is an early attempt that uses a physical-statistical modeling approach to studying selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor that is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier for reliable circuit design in nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high permittivity (k) dielectrics as an alternative to widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. 
A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
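As a simple illustration of the k-out-of-n framework mentioned above, the sketch below computes the reliability of a k-out-of-n:G system with independent, identically reliable components; the paper's treatment of successive soft breakdowns is more elaborate than this.

```python
from scipy.stats import binom

def k_out_of_n_reliability(k, n, p):
    """Reliability of a k-out-of-n:G system with i.i.d. component reliability p.

    The system survives if at least k of its n components survive:
    R = sum_{i=k}^{n} C(n, i) * p**i * (1-p)**(n-i) = P(Binomial(n, p) >= k).
    """
    return binom.sf(k - 1, n, p)

# e.g. a stack modeled as surviving until fewer than 3 of 5 conduction paths
# remain intact (purely illustrative numbers)
print(k_out_of_n_reliability(k=3, n=5, p=0.9))
```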
Using Seismic Signals to Forecast Volcanic Processes
NASA Astrophysics Data System (ADS)
Salvage, R.; Neuberg, J. W.
2012-04-01
Understanding the seismic signals generated during volcanic unrest allows scientists to more accurately predict and understand active volcanoes, since these signals are intrinsically linked to rock failure at depth (Voight, 1988). In particular, low frequency long period signals (LP events) have been related to the movement of fluid and the brittle failure of magma at depth due to high strain rates (Hammer and Neuberg, 2009). This fundamentally relates to surface processes. However, there is currently no physical quantitative model for determining the likelihood of an eruption following precursory seismic signals, or the timing or type of eruption that will ensue (Benson et al., 2010). Since the beginning of its current eruptive phase, accelerating LP swarms (< 10 events per hour) have been a common feature at Soufriere Hills volcano, Montserrat prior to surface expressions such as dome collapse or eruptions (Miller et al., 1998). The dynamical behaviour of such swarms can be related to accelerated magma ascent rates since the seismicity is thought to be a consequence of magma deformation as it rises to the surface. In particular, acceleration rates can be used together with the inverse material failure law, a linear relationship against time (Voight, 1988), to accurately predict volcanic eruption timings. Currently, this has only been investigated for retrospective events (Hammer and Neuberg, 2009). The identification of LP swarms on Montserrat and analysis of their dynamical characteristics allows a better understanding of the nature of the seismic signals themselves, as well as their relationship to surface processes such as magma extrusion rates. Acceleration and deceleration rates of seismic swarms provide insights into the plumbing system of the volcano at depth. The application of the material failure law to multiple LP swarms of data allows a critical evaluation of the accuracy of the method, which further refines current understanding of the relationship between seismic signals and volcanic eruptions. It is hoped that such analysis will assist the development of real-time forecasting models.
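A sketch of the inverse-rate forecasting step referred to above (the material failure law of Voight, 1988, in its common inverse-rate form): fit a straight line to 1/R(t) for an accelerating swarm and extrapolate to 1/R = 0 to estimate the failure time. The event rates below are synthetic and purely illustrative.

```python
import numpy as np

t = np.array([0., 2., 4., 6., 8., 10.])           # hours since swarm onset (synthetic)
rate = np.array([4., 5., 7., 10., 16., 35.])       # LP events per hour (synthetic)

inv_rate = 1.0 / rate
slope, intercept = np.polyfit(t, inv_rate, 1)      # linear fit of 1/R against t
t_failure = -intercept / slope                     # time at which 1/R reaches zero
print(f"forecast failure time: {t_failure:.1f} h after swarm onset")
```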
Zebrafish heart failure models: opportunities and challenges.
Shi, Xingjuan; Chen, Ru; Zhang, Yu; Yun, Junghwa; Brand-Arzamendi, Koroboshka; Liu, Xiangdong; Wen, Xiao-Yan
2018-05-03
Heart failure is a complex pathophysiological syndrome of pumping functional failure that results from injury, infection or toxin-induced damage on the myocardium, as well as genetic influence. Gene mutations associated with cardiomyopathies can lead to various pathologies of heart failure. In recent years, zebrafish, Danio rerio, has emerged as an excellent model to study human cardiovascular diseases such as congenital heart defects, cardiomyopathy, and preclinical development of drugs targeting these diseases. In this review, we will first summarize zebrafish genetic models of heart failure arose from cardiomyopathy, which is caused by mutations in sarcomere, calcium or mitochondrial-associated genes. Moreover, we outline zebrafish heart failure models triggered by chemical compounds. Elucidation of these models will improve the understanding of the mechanism of pathogenesis and provide potential targets for novel therapies.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also, simpler models which use shorter input sequences without sacrificing accuracy are of interest. In fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
NASA Astrophysics Data System (ADS)
Park, Young-Joon; Andleigh, Vaibhav K.; Thompson, Carl V.
1999-04-01
An electromigration model is developed to simulate the reliability of Al and Al-Cu interconnects. A polynomial expression for the free energy of solution by Murray [Int. Met. Rev. 30, 211 (1985)] was used to calculate the chemical potential for Al and Cu while the diffusivities were defined based on a Cu-trapping model by Rosenberg [J. Vac. Sci. Technol. 9, 263 (1972)]. The effects of Cu on stress evolution and lifetime were investigated in all-bamboo and near-bamboo stud-to-stud structures. In addition, the significance of the effect of mechanical stress on the diffusivity of both Al and Cu was determined in all-bamboo and near-bamboo lines. The void nucleation and growth process was simulated in 200 μm, stud-to-stud lines. Current density scaling behavior for void-nucleation-limited failure and void-growth-limited failure modes was simulated in long, stud-to-stud lines. Current density exponents of both n=2 for void nucleation and n=1 for void growth failure modes were found in both pure Al and Al-Cu lines. Limitations of the most widely used current density scaling law (Black's equation) in the analysis of the reliability of stud-to-stud lines are discussed. By modifying the input materials properties used in this model (when they are known), this model can be adapted to predict the reliability of other interconnect materials such as pure Cu and Cu alloys.
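For reference, a short sketch of the current density scaling law discussed above (Black's equation), MTTF = A * j**(-n) * exp(Ea / (kB * T)); the prefactor and activation energy used here are illustrative assumptions, while n = 2 and n = 1 correspond to the void-nucleation- and void-growth-limited modes reported in the abstract.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j, T, A=1.0, n=2, Ea=0.8):
    """Black's equation: MTTF = A * j**(-n) * exp(Ea / (kB * T)).

    A and Ea are illustrative placeholders; n = 2 corresponds to
    void-nucleation-limited and n = 1 to void-growth-limited failure.
    """
    return A * j**(-n) * np.exp(Ea / (K_B * T))

# Lifetime scaling when current density doubles, for the two failure modes
for n in (1, 2):
    ratio = black_mttf(2e10, 373.0, n=n) / black_mttf(1e10, 373.0, n=n)
    print(f"n = {n}: doubling j scales MTTF by {ratio:.2f}")
```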
Degradation modeling of mid-power white-light LEDs by using Wiener process.
Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q
2015-07-27
The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. As it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, although it is much more relevant in reliability analysis. Furthermore, the TM-21-11 only considers lumen maintenance for lifetime prediction. Color shift, as another important performance characteristic of LED devices, may also render significant degradation during service life, even though the lumen maintenance has not reached the critical threshold. In this study, a modified Wiener process has been employed for the modeling of the degradation of LED devices. By using this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can be easily accounted for. With a mild assumption, the parameter estimation accuracy has been improved by including more information into the likelihood function while neglecting the dependency between the random variables. As a consequence, the mean time to failure (MTTF) has been obtained and shows comparable result with IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution was presented corresponding to different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices could be modeled by simply considering their lumen maintenance and color shift as two independent variables.
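The abstract above uses a modified Wiener process; as a simpler reference point, the sketch below gives the first-passage (failure-time) distribution of a plain Wiener degradation process with drift, which is inverse Gaussian, together with the resulting MTTF. All parameter values are assumptions for illustration, not those of the paper.

```python
import numpy as np
from scipy.stats import norm

# Wiener degradation X(t) = nu*t + sigma*B(t) with failure threshold D:
# the first-passage time is inverse Gaussian with mean D/nu and shape D**2/sigma**2.
nu, sigma, D = 0.005, 0.02, 0.30       # drift, diffusion (per kh), decay threshold (assumed)

mu_T = D / nu                           # mean time to failure (MTTF)
lam = D**2 / sigma**2                   # inverse Gaussian shape parameter

def failure_cdf(t):
    """P(T <= t) for the inverse Gaussian first-passage time."""
    t = np.asarray(t, dtype=float)
    a = np.sqrt(lam / t)
    return norm.cdf(a * (t / mu_T - 1.0)) + np.exp(2.0 * lam / mu_T) * norm.cdf(-a * (t / mu_T + 1.0))

print(f"MTTF = {mu_T:.0f} kh, P(failure by 50 kh) = {failure_cdf(50.0):.3f}")
```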
Predictors of nursing home residents' time to hospitalization.
O'Malley, A James; Caudry, Daryl J; Grabowski, David C
2011-02-01
To model the predictors of the time to first acute hospitalization for nursing home residents, and accounting for previous hospitalizations, model the predictors of time between subsequent hospitalizations. Merged file from New York State for the period 1998-2004 consisting of nursing home information from the minimum dataset and hospitalization information from the Statewide Planning and Research Cooperative System. Accelerated failure time models were used to estimate the model parameters and predict survival times. The models were fit to observations from 50 percent of the nursing homes and validated on the remaining observations. Pressure ulcers and facility-level deficiencies were associated with a decreased time to first hospitalization, while the presence of advance directives and facility staffing was associated with an increased time. These predictors of the time to first hospitalization model had effects of similar magnitude in predicting the time between subsequent hospitalizations. This study provides novel evidence suggesting modifiable patient and nursing home characteristics are associated with the time to first hospitalization and time to subsequent hospitalizations for nursing home residents. © Health Research and Educational Trust.
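A sketch of fitting a Weibull accelerated failure time model by maximum likelihood with right censoring, the kind of model used above. The data, covariate names and parameter values are synthetic stand-ins, not the New York nursing home data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic data: log T = intercept + x*beta + sigma*W, W ~ Gumbel (minimum),
# which makes T Weibull-distributed; censoring is administrative.
n = 500
x = rng.standard_normal((n, 2))                        # e.g. pressure-ulcer and staffing covariates (hypothetical)
beta_true, sigma_true = np.array([0.5, -0.3]), 0.6
log_t = 4.0 + x @ beta_true + sigma_true * np.log(-np.log(rng.uniform(size=n)))
time_true = np.exp(log_t)
censor = rng.exponential(scale=150.0, size=n)
obs_time = np.minimum(time_true, censor)
event = (time_true <= censor).astype(float)            # 1 = hospitalization observed

X = np.column_stack([np.ones(n), x])                    # intercept + covariates

def neg_log_lik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    z = (np.log(obs_time) - X @ beta) / sigma
    log_f = z - np.exp(z) - log_sigma - np.log(obs_time)   # log density for observed events
    log_s = -np.exp(z)                                     # log survival for censored cases
    return -np.sum(event * log_f + (1.0 - event) * log_s)

res = minimize(neg_log_lik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print("estimated [intercept, b1, b2, log(sigma)]:", np.round(res.x, 3))
```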
Monitoring of waste disposal in deep geological formations
NASA Astrophysics Data System (ADS)
German, V.; Mansurov, V.
2003-04-01
In this paper, a kinetic approach is applied to describe the rock failure process and to support microseismic monitoring of waste disposal in deep geological formations. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is demonstrated. Requirements for the monitoring system, such as real-time data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented. The method, implemented in a software program for forecasting strong seismic events, is based on direct use of the fracture concentration criterion. It is applied to the database of microseismic events of the North Ural Bauxite Mine, and the results of this application, including its efficiency, stability, and the possibility of forecasting rockbursts, are discussed.
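One common form of the fracture concentration criterion mentioned above compares the mean spacing between microcracks with their mean length, K = n**(-1/3) / l_mean, with failure nucleation expected once K drops below a critical value of roughly 3. Both that functional form and the threshold, as well as the numbers below, are stated here as assumptions for illustration.

```python
import numpy as np

def concentration_parameter(n_cracks, volume, lengths):
    """Concentration parameter K = (mean crack spacing) / (mean crack length)."""
    density = n_cracks / volume                  # cracks per unit volume
    mean_spacing = density ** (-1.0 / 3.0)
    return mean_spacing / np.mean(lengths)

lengths = np.array([0.8, 1.1, 0.9, 1.3, 1.0])    # metres, illustrative event sizes
K = concentration_parameter(n_cracks=250, volume=1.0e4, lengths=lengths)
print(f"K = {K:.2f}", "-> approaching failure nucleation" if K < 3.0 else "-> stable")
```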
Semiparametric regression analysis of interval-censored competing risks data.
Mao, Lu; Lin, Dan-Yu; Zeng, Donglin
2017-09-01
Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.
Traffic protection in MPLS networks using an off-line flow optimization model
NASA Astrophysics Data System (ADS)
Krzesinski, Anthony E.; Muller, Karen E.
2002-07-01
MPLS-based recovery is intended to effect rapid and complete restoration of traffic affected by a fault in an MPLS network. Two MPLS-based recovery models have been proposed: IP re-routing which establishes recovery paths on demand, and protection switching which works with pre-established recovery paths. IP re-routing is robust and frugal since no resources are pre-committed but is inherently slower than protection switching which is intended to offer high reliability to premium services where fault recovery takes place at the 100 ms time scale. We present a model of protection switching in MPLS networks. A variant of the flow deviation method is used to find and capacitate a set of optimal label switched paths. The traffic is routed over a set of working LSPs. Global repair is implemented by reserving a set of pre-established recovery LSPs. An analytic model is used to evaluate the MPLS-based recovery mechanisms in response to bi-directional link failures. A simulation model is used to evaluate the MPLS recovery cycle in terms of the time needed to restore the traffic after a uni-directional link failure. The models are applied to evaluate the effectiveness of protection switching in networks consisting of between 20 and 100 nodes.
An interface finite element model can be used to predict healing outcome of bone fractures.
Alierta, J A; Pérez, M A; García-Aznar, J M
2014-01-01
After fractures, bone can experience different potential outcomes: successful bone consolidation, non-union and bone failure. Although there are many factors that influence fracture healing, experimental studies have shown that the interfragmentary movement (IFM) is one of the main regulators of the course of bone healing. In this sense, computational models may help to improve the development of mechanically based treatments for bone fracture healing. Hence, we propose a combined repair-failure mechanistic computational model to describe bone fracture healing. Despite being a simple model, it is able to correctly estimate the time-course evolution of the IFM compared to in vivo measurements under different mechanical conditions. Therefore, this mathematical approach is especially suitable for modeling the healing response of bone to fractures treated with different mechanical fixators, simulating realistic clinical conditions. This model will be a useful tool to identify factors and define targets for patient-specific therapeutic interventions. © 2013 Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.
2018-03-01
In this article, the issue of correcting the engineering servicing regularity (interval) on the basis of actual dependability data from cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting the engineering servicing regularity. The subject of the research is the mechanism by which the engineering servicing regularity influences the reliability measure. On the basis of an analysis of earlier studies, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on the engineering servicing regularity with various mathematical models is considered, and the exponential model is shown to be the most appropriate for that purpose. The results can be used as a stand-alone method of correcting the engineering servicing regularity for specific operating conditions, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of this research, a method of correcting the engineering servicing regularity of transport-technological machines during operation was developed. Using this method will allow the number of failures to be decreased.
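A minimal sketch of the exponential dependence identified above: fit lambda(tau) = a * exp(b * tau), where tau stands for the servicing regularity (interval) and lambda for the failure measure, by a log-linear least-squares fit. The data and the interpretation of the variables are illustrative assumptions, not results from the article.

```python
import numpy as np

tau = np.array([5., 10., 15., 20., 25.])                   # servicing interval, thousand km (synthetic)
failure_rate = np.array([0.08, 0.11, 0.17, 0.26, 0.38])    # failures per thousand km (synthetic)

# log(lambda) = log(a) + b*tau, so an ordinary linear fit recovers a and b
b, log_a = np.polyfit(tau, np.log(failure_rate), 1)
a = np.exp(log_a)
print(f"lambda(tau) ~= {a:.3f} * exp({b:.3f} * tau)")

# e.g. the increase in interval that would double the failure measure
print("doubling interval:", np.log(2.0) / b, "thousand km")
```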
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.