Sample records for failure rate model

  1. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and several artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron) are used to perform this study. For this purpose, the thesis is divided into five major parts. The first part applies the Weibull regression model to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model using the feed-forward back-propagation algorithm as a learning rule; MATLAB is used to build and simulate the network, with the independent variables as inputs and the general failure rate of the turbine and the failures requiring overhaul maintenance as outputs. In the third part, we predict the same quantities using a radial basis neural network model in the MATLAB toolbox. In the fourth part, we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rates predicted by the feed-forward back-propagation and radial basis neural network models are in closer agreement with the actual field data than those predicted by the Weibull model. Finally, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures requiring overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) model in the DTREG commercial software.
The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
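The Weibull piece of such an analysis can be sketched briefly. The hazard (instantaneous failure rate) and reliability functions below use hypothetical shape/scale parameters chosen for illustration, not the thesis's fitted values:

```python
import math

def weibull_hazard(t, beta, eta):
    """Weibull hazard (instantaneous failure rate): h(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_reliability(t, beta, eta):
    """Weibull survival function: R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical shape/scale values for illustration only:
beta, eta = 1.8, 5000.0   # beta > 1 -> wear-out: hazard rises with operating hours
hours = [1000, 2000, 4000]
rates = [weibull_hazard(t, beta, eta) for t in hours]
```

With beta > 1 the predicted failure rate increases with accumulated operating hours, which is the wear-out regime a turbine overhaul schedule targets.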

  2. Payload maintenance cost model for the space telescope

    NASA Technical Reports Server (NTRS)

    White, W. L.

    1980-01-01

    An optimum maintenance cost model for the space telescope over a fifteen-year mission cycle was developed. Various documents were reviewed, and subsequent updates of failure rates and configurations were made. The reliability of the space telescope at one year, two and one half years, and five years was determined using these failure rates and configurations. The failure rates and configurations were also used in a maintenance simulation computer model which simulates the failure patterns over the fifteen-year mission life of the space telescope. Cost algorithms associated with the maintenance options indicated by the failure patterns were developed and integrated into the model.

  3. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-based preventive maintenance. The usual Weibull distribution is, however, not capable of modeling the complete lifecycle of any system with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
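The limitation noted above is easy to illustrate: a single Weibull hazard is monotonic, but the sum of an early-failure term (shape < 1) and a wear-out term (shape > 1) produces a bathtub shape. A minimal sketch with illustrative parameters, not fitted to the excavator data:

```python
def weibull_hazard(t, beta, eta):
    # Weibull hazard: monotonic decreasing if beta < 1, increasing if beta > 1.
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t):
    # Early-failure term (beta = 0.5, decreasing) plus wear-out term
    # (beta = 3, increasing) gives a decreasing-then-increasing hazard.
    return weibull_hazard(t, 0.5, 200.0) + weibull_hazard(t, 3.0, 1000.0)

ts = [10, 100, 500, 900, 1500]   # operating hours, illustrative
hs = [bathtub_hazard(t) for t in ts]
```

The hazard falls through the infant-mortality region, bottoms out in the useful-life region, then rises again as wear-out dominates.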

  4. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure, we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
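The compartment model described above can be sketched as two coupled equations for hive bees H and foragers F, integrated here with a simple Euler scheme. The parameter values below are illustrative choices, not the paper's, picked only to exhibit the two regimes (a stable colony below the threshold forager death rate, collapse above it); recruitment is floored at zero as a simplification:

```python
def simulate(m, days=500, dt=0.05):
    """Total colony size after `days`, for forager death rate m (per day).
    H: hive bees, F: foragers.  Eclosion saturates with colony size N;
    recruitment of hive bees to foraging slows as the forager fraction rises."""
    L, w, alpha, sigma = 2000.0, 27000.0, 0.25, 0.75   # illustrative parameters
    H, F = 10000.0, 2500.0
    for _ in range(int(days / dt)):
        N = H + F
        if N <= 0.0:
            return 0.0
        eclosion = L * N / (w + N)                      # new hive bees per day
        recruit = H * max(alpha - sigma * F / N, 0.0)   # hive bees becoming foragers
        H += dt * (eclosion - recruit)
        F += dt * (recruit - m * F)
    return H + F

stable = simulate(m=0.20)   # death rate below the critical threshold
failing = simulate(m=0.60)  # sustained high death rate -> decline
```

With these parameters the low-death-rate colony settles at a positive equilibrium, while the high-death-rate colony has no positive equilibrium and declines toward zero.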

  5. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  6. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    DTIC Science & Technology

    2017-11-22

    High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University.

  7. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time, and a return directly to the background rate. A Coulomb model requires that a fault remain locked during loading, that failure occur instantaneously, and that Δt be independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b.
In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
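The Coulomb property that Δt is independent of t0 is simple to demonstrate: with constant background stressing, a static stress step advances the failure time by step/rate no matter when the step is applied. A minimal sketch, all quantities in arbitrary units:

```python
def coulomb_failure_time(tau0, rate, threshold, step=0.0, t_step=0.0):
    """Time at which stress tau0 + rate*t (plus `step` added at t_step)
    first reaches the failure threshold.  The fault is locked: stress only
    accumulates until instantaneous failure at the threshold."""
    t_unperturbed = (threshold - tau0) / rate
    if t_step >= t_unperturbed:          # step arrives after failure: no effect
        return t_unperturbed
    t_fail = (threshold - step - tau0) / rate
    return max(t_fail, t_step)           # failure cannot precede the step itself

base  = coulomb_failure_time(0.0, 1.0, 100.0)             # unperturbed
early = coulomb_failure_time(0.0, 1.0, 100.0, 5.0, 10.0)  # step applied early
late  = coulomb_failure_time(0.0, 1.0, 100.0, 5.0, 80.0)  # same step, applied late
advance_early = base - early
advance_late = base - late
```

Both perturbed cases are advanced by exactly step/rate = 5 time units, which is the t0-independence the abstract attributes to the Coulomb model.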

  8. Vocal fold tissue failure: preliminary data and constitutive modeling.

    PubMed

    Chan, Roger W; Siegmund, Thomas

    2004-08-01

    In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated nonlinear stress-strain and rate-dependent failure response with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.

  9. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  10. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  11. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  12. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  13. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  14. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  15. On a Stochastic Failure Model under Random Shocks

    NASA Astrophysics Data System (ADS)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
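For shocks arriving as a nonhomogeneous Poisson process with intensity lambda(s), each triggering failure only after an independent random delay D, the survival function takes the form P(T > t) = exp(-integral from 0 to t of lambda(s) * F_D(t - s) ds), where F_D is the delay CDF. A numerical sketch with an illustrative intensity and an exponential delay (these are not the paper's worked examples):

```python
import math

def survival(t, lam, delay_cdf, n=2000):
    """P(system survives past t): shocks arrive as a nonhomogeneous Poisson
    process with intensity lam(s); each shock triggers failure after an
    independent random delay with CDF delay_cdf.  Trapezoidal integration
    of the integral of lam(s) * delay_cdf(t - s) over [0, t]."""
    if t == 0:
        return 1.0
    h = t / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * lam(s) * delay_cdf(t - s)
    return math.exp(-h * total)

lam = lambda s: 0.5 + 0.1 * s                 # illustrative increasing shock intensity
delay_cdf = lambda x: 1.0 - math.exp(-x)      # exponential delay, mean 1

S = [survival(t, lam, delay_cdf) for t in (0.0, 1.0, 2.0, 4.0)]
```

Setting the delay CDF identically to 1 (zero delay) recovers the classical extreme shock case, where survival is just exp(-integral of lambda).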

  16. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  17. Failure rate analysis of Goddard Space Flight Center spacecraft performance during orbital life

    NASA Technical Reports Server (NTRS)

    Norris, H. P.; Timmins, A. R.

    1976-01-01

    Space life performance data on 57 Goddard Space Flight Center spacecraft are analyzed from the standpoint of determining an appropriate reliability model and the associated reliability parameters. Data from published NASA reports, which cover the space performance of GSFC spacecraft launched in the 1960-1970 decade, form the basis of the analyses. The results of the analyses show that the time distribution of 449 malfunctions, of which 248 were classified as failures (not necessarily catastrophic), follow a reliability growth pattern that can be described with either the Duane model or a Weibull distribution. The advantages of both mathematical models are used in order to: identify space failure rates, observe chronological trends, and compare failure rates with those experienced during the prelaunch environmental tests of the flight model spacecraft.

  18. Failure rates of mini-implants placed in the infrazygomatic region.

    PubMed

    Uribe, Flavio; Mehr, Rana; Mathur, Ajay; Janakiraman, Nandakumar; Allareddy, Veerasathpurush

    2015-01-01

    The purpose of this pilot study was to evaluate the failure rates of mini-implants placed in the infrazygomatic region and to evaluate factors that affect their stability. In a retrospective cohort study, 30 consecutive patients (55 mini-implants) who received infrazygomatic mini-implants at a university clinic were evaluated for failure rates. Patient, mini-implant, orthodontic, surgical, and mini-implant maintenance factors were evaluated by univariate logistic regression models for association with failure rates. A 21.8% failure rate of mini-implants placed in the infrazygomatic region was observed. None of the predictor variables were significantly associated with higher or lower odds of failed implants. Failure rates for infrazygomatic mini-implants were slightly higher than those reported in other maxilla-mandibular osseous locations. No predictor variables were found to be associated with the failure rates.

  19. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
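The k-out-of-n comparisons above can be reproduced in miniature: with identical machines, each up independently with probability p(t), cluster reliability is a binomial tail sum. The constant per-machine failure rate below is a hypothetical value for illustration, not the VaxCluster's measured rate:

```python
from math import comb, exp

def k_of_n_reliability(k, n, p):
    """Probability that at least k of n identical machines are up,
    each up independently with probability p (binomial tail sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Per-machine survival p(t) = exp(-lam*t), with a hypothetical constant
# failure rate lam.  A 7-out-of-7 cluster (all machines required) loses
# reliability far faster than a 3-out-of-7 configuration:
lam, t = 0.001, 500.0
p = exp(-lam * t)
r_all  = k_of_n_reliability(7, 7, p)   # = p**7: every machine must survive
r_3of7 = k_of_n_reliability(3, 7, p)   # any 3 survivors suffice
```

This mirrors the thesis's observation that relaxing the required quorum from 7-of-7 to 3-of-7 stretches the time to a given reward level by orders of magnitude.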

  20. Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankaskie, P. J.

    A fuel pellet-Zircaloy cladding (thermo-mechanical-chemical) interaction (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on (1) standard statistical methods applied to available PCI fuel failure data and (2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate-dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: (1) pre-transient fuel rod power, P_I; (2) transient increase in fuel rod power, ΔP; (3) fuel burnup, Bu; and (4) the constitutive material property of the Zircaloy cladding, SEAF.

  1. VLSI (Very Large Scale Integrated Circuits) Device Reliability Models.

    DTIC Science & Technology

    1984-12-01

    Excerpt of report tables: C1 and C2 circuit complexity failure rates for MOS SSI/MSI devices, in failures per 10^6 hours (Table 5.1.2.5-19), and for linear devices. Manufacturers surveyed include National Semiconductor, Nitron, Raytheon, Sprague, Synertek, Teledyne Crystalonics, TRW Semiconductor, and Zilog.

  2. Molecular Dynamics Modeling of PPTA Crystals in Aramid Fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercer, Brian Scott

    2016-05-19

    In this work, molecular dynamics modeling is used to study the mechanical properties of PPTA crystallites, which are the fundamental microstructural building blocks of polymer aramid fibers such as Kevlar. Particular focus is given to constant strain rate axial loading simulations of PPTA crystallites, which is motivated by the rate-dependent mechanical properties observed in some experiments with aramid fibers. In order to accommodate the covalent bond rupture that occurs in loading a crystallite to failure, the reactive bond order force field ReaxFF is employed to conduct the simulations. Two major topics are addressed. The first is the general behavior of PPTA crystallites under strain rate loading. Constant strain rate loading simulations of crystalline PPTA reveal that the crystal failure strain increases with increasing strain rate, while the modulus is not affected by the strain rate. Increasing temperature lowers both the modulus and the failure strain. The simulations also identify the C-N bond connecting the aromatic rings as the weakest primary bond along the backbone of the PPTA chain. The effect of chain-end defects on PPTA micromechanics is explored, and it is found that the presence of a chain-end defect transfers load to the adjacent chains in the hydrogen-bonded sheet in which the defect resides, but does not influence the behavior of any other chains in the crystal. Chain-end defects are found to lower the strength of the crystal when clustered together, inducing bond failure via stress concentrations arising from the load transfer to bonds in adjacent chains near the defect site. The second topic addressed is the nature of primary and secondary bond failure in crystalline PPTA. Failure of both types of bonds is found to be stochastic in nature and driven by thermal fluctuations of the bonds within the crystal.
A model is proposed which uses reliability theory to model bonds under constant strain rate loading as components with time-dependent failure rate functions. The model is shown to work well for predicting the onset of primary backbone bond failure, as well as the onset of secondary bond failure via chain slippage for the case of isolated non-interacting chain-end defects.

  3. Explosive Model Tarantula 4d/JWL++ Calibration of LX-17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Vitello, P A

    2008-09-30

    Tarantula is an explosive kinetic package intended to do detonation, shock initiation, failure, corner-turning with dead zones, gap tests and air gaps in reactive flow hydrocode models. The first, 2007-2008 version with monotonic Q is here run inside JWL++ with square zoning from 40 to 200 zones/cm on ambient LX-17. The model splits the rate behavior in every zone into sections set by the hydrocode pressure, P + Q. As the pressure rises, we pass through the no-reaction, initiation, ramp-up/failure and detonation sections sequentially. We find that the initiation and pure detonation rate constants are largely insensitive to zoning but that the ramp-up/failure rate constant is extremely sensitive. At no time does the model pass every test, but the pressure-based approach generally works. The best values for the ramp/failure region are listed here in Mb units.

  4. 40 CFR 51.351 - Enhanced I/M performance standard.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver... 2001 and newer vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year... which will be achieved by the I/M program design in the SIP to those of the model program described in...

  5. 40 CFR 51.351 - Enhanced I/M performance standard.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver... 2001 and newer vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year... which will be achieved by the I/M program design in the SIP to those of the model program described in...

  6. 40 CFR 51.351 - Enhanced I/M performance standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver... 2001 and newer vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year... which will be achieved by the I/M program design in the SIP to those of the model program described in...

  7. 40 CFR 51.351 - Enhanced I/M performance standard.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver... 2001 and newer vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year... which will be achieved by the I/M program design in the SIP to those of the model program described in...

  8. 40 CFR 51.351 - Enhanced I/M performance standard.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver... 2001 and newer vehicles. (10) Stringency. A 20% emission test failure rate among pre-1981 model year... which will be achieved by the I/M program design in the SIP to those of the model program described in...

  9. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

    This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
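For a lognormal failure rate distribution, the "mean and error factor" summary mentioned above has a standard form: the error factor is the ratio of the 95th percentile to the median, EF = exp(1.645*sigma). A method-of-moments sketch is shown below; the per-source estimates are hypothetical, and this is not necessarily the report's own aggregation scheme:

```python
import math, statistics

def lognormal_mean_and_ef(rate_estimates):
    """Summarize failure rate point estimates with a lognormal fit:
    returns (mean, error factor), where EF = 95th percentile / median
    = exp(1.645 * sigma) for a lognormal with log-std sigma."""
    logs = [math.log(r) for r in rate_estimates]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    mean = math.exp(mu + sigma**2 / 2)   # lognormal mean
    ef = math.exp(1.645 * sigma)
    return mean, ef

# Hypothetical per-source estimates for one component failure mode (per hour):
rates = [1e-6, 3e-6, 2e-6, 8e-6, 1.5e-6]
mean, ef = lognormal_mean_and_ef(rates)
```

The error factor compactly conveys the spread of the source estimates alongside the recommended mean.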

  10. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
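Reliability growth of the kind described above is commonly quantified with a power-law (Duane/Crow-AMSAA) process, in which a shape parameter beta < 1 indicates a decreasing failure intensity and beta near 1 a constant failure rate. A sketch of the standard maximum-likelihood estimate on synthetic data (illustrative only, not shuttle or ISS failure records):

```python
import math, random

def crow_amsaa_beta(failure_times, T):
    """MLE of the power-law (Crow-AMSAA) shape parameter from failure times
    observed on [0, T]: beta_hat = n / sum(ln(T / t_i)).
    beta < 1 -> reliability growth; beta ~ 1 -> constant failure rate."""
    n = len(failure_times)
    return n / sum(math.log(T / t) for t in failure_times)

# Synthetic demonstration: conditional on n failures in [0, T], power-law
# process failure times are distributed as T * U**(1/beta) for U ~ Uniform(0,1).
random.seed(1)
true_beta, T, n = 0.6, 1000.0, 4000
times = [T * random.random() ** (1 / true_beta) for _ in range(n)]
beta_hat = crow_amsaa_beta(times, T)
```

The recovered shape parameter is well below 1, matching the reliability-growth regime the generating process was given.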

  11. The BGR Contingency Model for Leading Change

    ERIC Educational Resources Information Center

    Brown, Derek R.; Gordon, Raymond; Rose, Dennis Michael

    2012-01-01

    The continuing failure rates of change initiatives, combined with an increasingly complex business environment, have created significant challenges for the practice of change management. High failure rates suggest that existing change models are not working, or are being incorrectly used. A different mindset to change is required. The BGR…

  12. A maximum entropy fracture model for low and high strain-rate fracture in Tin-Silver-Copper alloys

    NASA Astrophysics Data System (ADS)

    Chan, Dennis K.

    SnAgCu solder alloys exhibit significant rate-dependent constitutive behavior. Solder joints made of these alloys exhibit failure modes that are also rate-dependent. Solder joints are an integral part of microelectronic packages and are subjected to a wide variety of loading conditions which range from thermo-mechanical fatigue to impact loading. Consequently, there is a need for a non-empirical, rate-dependent failure theory that is able to accurately predict fracture in these solder joints. In the present thesis, various failure models are first reviewed; these models are typically empirical or are not valid for solder joints due to limiting assumptions such as elastic behavior. Here, the development and validation of a maximum entropy fracture model (MEFM) valid for low strain-rate fracture in SnAgCu solders is presented. To this end, work on characterizing SnAgCu solder behavior at low strain rates using a specially designed tester to estimate parameters for constitutive models is presented. Next, the maximum entropy fracture model is reviewed. This failure model uses a single damage accumulation parameter and relates the risk of fracture to accumulated inelastic dissipation. A methodology is presented to extract this model parameter through a custom-built microscale mechanical tester for Sn3.8Ag0.7Cu solder. This single parameter is used to numerically simulate fracture in two solder joints with entirely different geometries. The simulations are compared to experimentally observed fracture in these same packages. Following the simulations of fracture at low strain rate, the constitutive behavior of solder alloys across nine decades of strain rates through MTS compression tests and split-Hopkinson bar tests is presented. Preliminary work on using orthogonal machining as a novel technique of material characterization at high strain rates is also presented.
The resultant data from the MTS compression and split-Hopkinson bar tests are used to demonstrate the localization of stress to the interface of solder joints at high strain rates. The MEFM is further extended to predict failure in brittle materials. Such an extension allows for fracture prediction within intermetallic compounds (IMCs) in solder joints. It has been experimentally observed that the failure mode shifts from bulk solder to the IMC layer with increasing loading rates. The extension of the MEFM would allow for prediction of the fracture mode within the solder joint under different loading conditions. A fracture model capable of predicting failure modes at higher strain rates is necessary, as mobile electronics are becoming ubiquitous. Mobile devices are prone to being dropped, which can induce loading rates within solder joints that are much larger than those experienced under thermo-mechanical fatigue. A range of possible damage accumulation parameters for Cu6Sn5 is determined for the MEFM. A value within the aforementioned range is used to demonstrate the increasing likelihood of IMC fracture in solder joints with larger loading rates. The thesis is concluded with remarks about ongoing work that includes determining a more accurate damage accumulation parameter for the Cu6Sn5 IMC, and using machining as a technique for extracting failure parameters for the MEFM.
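
    The single-parameter idea in this abstract can be illustrated with a toy calculation. The exponential form and the values below are assumptions made for this sketch, not the thesis's calibrated model:

```python
import math

# Illustrative reading of the MEFM idea: risk of fracture grows with
# accumulated inelastic dissipation W through a single damage parameter k.
# The exponential form and both numbers are demo assumptions.
def fracture_risk(w_accumulated, k):
    return 1.0 - math.exp(-k * w_accumulated)

risk = fracture_risk(w_accumulated=2.0, k=0.5)  # dimensionless demo values
```

Higher loading rates that drive more dissipation into the IMC layer would, under this reading, raise the computed risk there.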

  13. First time description of early lead failure of the Linox Smart lead compared to other contemporary high-voltage leads.

    PubMed

    Weberndörfer, Vanessa; Nyffenegger, Tobias; Russi, Ian; Brinkert, Miriam; Berte, Benjamin; Toggweiler, Stefan; Kobza, Richard

    2018-05-01

    Early lead failure has recently been reported in ICD patients with Linox SD leads. We aimed to compare the long-term performance of the successor lead model, the Linox Smart SD, with other contemporary high-voltage leads. All patients receiving high-voltage leads at our center between November 2009 and May 2017 were retrospectively analyzed. Lead failure was defined as the occurrence of one or more of the following: non-physiological high-rate episodes, low- or high-voltage impedance anomalies, undersensing, or non-capture. In total, 220 patients were included (Linox Smart SD, n = 113; contemporary lead, n = 107). During a median follow-up of 3.8 years (IQR 1.6-5.9 years), a total of 16 lead failures occurred (14 in the Linox Smart SD group and 2 in the contemporary group), mostly due to non-physiological high-rate sensing or impedance abnormalities. Lead failure incidence rates per 100 person-years were 2.9 (95% CI 1.7-4.9) and 0.6 (95% CI 0.1-2.3) for Linox Smart SD and contemporary leads, respectively. Kaplan-Meier estimates of 5-year lead failure rates were 14.0% (95% CI 8.1-23.6%) and 1.3% (95% CI 0.2-8.9%), respectively (log-rank p = 0.028). Implantation of a Linox Smart SD lead increased the risk of lead failure with a hazard ratio (HR) of 4.53 (95% CI 1.03-19.95, p = 0.046) and 4.44 (95% CI 1.00-19.77, p = 0.05) in uni- and multivariable Cox models. The new Linox Smart SD lead model was associated with high failure rates and should be monitored closely to detect early signs of lead failure.

  14. Porting Initiation and Failure to Linked Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Souers, P C

    2007-07-18

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size, but does not capture other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At a high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate sticks, corner-turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration.

  15. Micromechanics of failure waves in glass. 2: Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinosa, H.D.; Xu, Y.; Brar, N.S.

    1997-08-01

    In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments, with a multiple-plane model, have been performed. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. Lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 × 10^6/s. The agreement is believed to be the result of the model capability in simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking. In the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction in rod radial expansion.

  16. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
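
    The detection scheme described above can be sketched generically with a Shewhart-style control chart; the baseline counts and the 3-sigma limits below are illustrative stand-ins for the study's fitted time-series models:

```python
import statistics

# Flag observed daily index values (e.g. total records created) that fall
# outside mean +/- 3 sigma of a baseline period. Numbers are illustrative.
def control_limits(baseline, n_sigma=3.0):
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mu - n_sigma * sd, mu + n_sigma * sd

def flag_anomalies(baseline, observed, n_sigma=3.0):
    lo, hi = control_limits(baseline, n_sigma)
    return [i for i, x in enumerate(observed) if x < lo or x > hi]

baseline = [1000, 1010, 990, 1005, 995, 1002, 998]
observed = [1001, 997, 640, 1003]  # day 2 simulates record-level data loss
alarms = flag_anomalies(baseline, observed)
```

In the study itself, per-index time-series models played the role of the baseline, but the alarm logic is of this flavor.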

  17. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
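
    The titular property, that a mixture of exponentials has a decreasing failure rate, can be shown numerically; the mixing weights and rates below are arbitrary demo values:

```python
import math

# Hazard of a two-component exponential mixture: h(t) = f(t) / S(t).
# It starts at the weighted mean of the rates and decays toward the
# smallest rate, i.e. it is a decreasing failure rate (DFR) model.
def mixture_hazard(t, weights, rates):
    f = sum(w * r * math.exp(-r * t) for w, r in zip(weights, rates))
    s = sum(w * math.exp(-r * t) for w, r in zip(weights, rates))
    return f / s

weights, rates = [0.5, 0.5], [1.0, 0.1]
h0 = mixture_hazard(0.0, weights, rates)   # weighted mean rate, 0.55
h10 = mixture_hazard(10.0, weights, rates) # approaching the smaller rate
```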

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    Previously the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined utilizing 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).

  19. Influence of enamel preservation on failure rates of porcelain laminate veneers.

    PubMed

    Gurel, Galip; Sesma, Newton; Calamita, Marcelo A; Coachman, Christian; Morimoto, Susana

    2013-01-01

    The purpose of this study was to evaluate the failure rates of porcelain laminate veneers (PLVs) and the influence of clinical parameters on these rates in a retrospective survey of up to 12 years. Five hundred eighty laminate veneers were bonded in 66 patients. The following parameters were analyzed: type of preparation (depth and margin), crown lengthening, presence of restoration, diastema, crowding, discoloration, abrasion, and attrition. Survival was analyzed using the Kaplan-Meier method. Cox regression modeling was used to determine which factors would predict PLV failure. Forty-two veneers (7.2%) failed in 23 patients, and an overall cumulative survival rate of 86% was observed. A statistically significant association was noted between failure and the limits of the prepared tooth surface (margin and depth). The most frequent failure type was fracture (n = 20). The results revealed no significant influence of apical crown lengthening, presence of restoration, diastema, discoloration, abrasion, or attrition on failure rates. Multivariable analysis (Cox regression model) also showed that PLVs bonded to dentin and teeth with preparation margins in dentin were approximately 10 times more likely to fail than PLVs bonded to enamel. Moreover, coronal crown lengthening increased the risk of PLV failure by 2.3 times. A survival rate of 99% was observed for veneers with preparations confined to enamel and 94% for veneers with enamel only at the margins. Laminate veneers have high survival rates when bonded to enamel and provide a safe and predictable treatment option that preserves tooth structure.
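
    The Kaplan-Meier product-limit estimator used in this and several other records above can be sketched generically; the follow-up data below are made up, not the veneer dataset:

```python
# Kaplan-Meier product-limit estimator: survival is the running product
# of (1 - d_t / n_t) over distinct event times, where d_t is the number
# of failures at time t and n_t the number still at risk.
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = failure, 0 = censored."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    curve, s = [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)       # failures at t
        n_t = sum(1 for tt, _ in pairs if tt == t)     # subjects leaving at t
        if d:
            s *= 1.0 - d / at_risk
        curve.append((t, s))
        at_risk -= n_t
        i += n_t
    return curve

# 5 demo subjects: failures at years 2 and 4; censoring at 3, 5, 5
curve = kaplan_meier([2, 3, 4, 5, 5], [1, 0, 1, 0, 0])
```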

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    SURFplus is a reactive burn model for high explosives aimed at modelling shock initiation and propagation of detonation waves. It utilizes the SURF model for the fast hot-spot reaction plus a slow reaction for the energy released by carbon clustering. A feature of the SURF model is that there is a partial decoupling between burn rate parameters and detonation wave properties. Previously, parameters for PBX 9502 that control shock initiation had been calibrated to Pop plot data (distance-of-run to detonation as a function of shock pressure initiating the detonation). Here burn rate parameters for the high-pressure regime are adjusted to fit the failure diameter and the limiting detonation speed just above the failure diameter. Simulated results are shown for an unconfined rate stick when the PBX 9502 diameter is slightly above and slightly below the failure diameter. Just above the failure diameter, in the rest frame of the detonation wave, the front is sonic at the PBX/air interface. As a consequence, the lead shock in the neighborhood of the interface is supported by the detonation pressure in the interior of the explosive rather than the reaction immediately behind the front. In the interior, the sonic point occurs near the end of the fast hot-spot reaction. Consequently, the slow carbon clustering reaction cannot affect the failure diameter. Below the failure diameter, the radial extent of the detonation front decreases starting from the PBX/air interface. That is, the failure starts at the PBX boundary and propagates inward to the axis of the rate stick.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
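
    For a single alpha-factor, Walley's imprecise Dirichlet model gives closed-form lower and upper posterior expectations; the counts and learning parameter `s` below are illustrative, not from the paper:

```python
# Imprecise Dirichlet model bounds for one category probability after
# observing n_k events of that category among n_total events, with
# learning parameter s: lower = n_k/(n+s), upper = (n_k+s)/(n+s).
def idm_bounds(n_k, n_total, s):
    lower = n_k / (n_total + s)
    upper = (n_k + s) / (n_total + s)
    return lower, upper

# e.g. 3 common-cause events of a given multiplicity out of 50, s = 2
lo, hi = idm_bounds(n_k=3, n_total=50, s=2.0)
```

Larger `s` widens the interval and slows learning, which is exactly the elicitation trade-off the abstract discusses.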

  2. Multiaxial Temperature- and Time-Dependent Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, David; McLennan, Michael; Anderson, Gregory; Macon, David; Batista-Rodriquez, Alicia

    2003-01-01

    A temperature- and time-dependent mathematical model predicts the conditions for failure of a material subjected to multiaxial stress. The model was initially applied to a filled epoxy below its glass-transition temperature, and is expected to be applicable to other materials, at least below their glass-transition temperatures. The model is justified simply by the fact that it closely approximates the experimentally observed failure behavior of this material: The multiaxiality of the model has been confirmed (see figure) and the model has been shown to be applicable at temperatures from -20 to 115 F (-29 to 46 C) and to predict tensile failures of constant-load and constant-load-rate specimens with failure times ranging from minutes to months..

  3. Earthquake triggering by transient and static deformations

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.

    1998-01-01

    Observational evidence for both static and transient near-field and far-field triggered seismicity is explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study a triggered earthquake is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and, in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the earthquake cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history.
Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress at the initiation of failure, whereas static loads that are applied sufficiently late raise it. Rate-and-state friction predictions differ markedly from those based on Coulomb failure stress changes (ΔCFS), in which Δt equals the amplitude of the static stress change divided by the background stressing rate. The ΔCFS model assumes a stress failure threshold, while the rate-and-state equations require a slip failure threshold. The complete rate-and-state equations predict larger Δt than the ΔCFS model does for static stress steps at small t0, and smaller Δt than the ΔCFS model for stress steps at large t0. The ΔCFS model predicts nonzero Δt only for transient loads that raise the stress to failure stress levels during the transient. In contrast, the rate-and-state model predicts nonzero Δt for smaller loads, and triggered failure may occur well after the transient is finished. We consider heuristically the effects of triggering on a population of faults, as these effects might be evident in seismicity data. Triggering is manifest as an initial increase in seismicity rate that may be followed by a quiescence or by a return to the background rate. Available seismicity data are insufficient to discriminate whether triggered earthquakes are "new" or clock advanced. However, if triggering indeed results from advancing the failure time of inevitable earthquakes, then our modeling suggests that a quiescence always follows transient triggering and that the duration of increased seismicity also cannot exceed the duration of a triggering transient load. Quiescence follows static triggering only if the population of available faults is finite.
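
    The Coulomb (ΔCFS) baseline against which the rate-and-state predictions are compared is a one-line calculation: clock advance equals the static stress step divided by the tectonic stressing rate, independent of onset time. The amplitudes below are illustrative:

```python
# Delta-CFS clock advance: a static stress step of amplitude d_tau
# advances failure time by d_tau / (background stressing rate),
# regardless of when in the earthquake cycle it is applied.
def coulomb_clock_advance(d_tau_mpa, stressing_rate_mpa_per_yr):
    return d_tau_mpa / stressing_rate_mpa_per_yr

# 0.1 MPa step on a fault loaded at 0.01 MPa/yr -> 10 yr clock advance
dt = coulomb_clock_advance(0.1, 0.01)
```

The abstract's point is that the rate-and-state Δt deviates from this constant value, above it for early steps and below it for late ones.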

  4. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193

  5. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
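
    The reward-function idea can be sketched for a long-run reward rate: in a semi-Markov model, each state's reward is weighted by its embedded-chain probability times its mean holding time. States and numbers below are invented for illustration, not the measured data:

```python
# Long-run reward rate of a semi-Markov model: weight each state's
# reward by (embedded-chain probability) x (mean holding time), then
# normalize. Rewards encode service capacity in each state.
def long_run_reward(embedded_probs, mean_holding, rewards):
    weights = [p * h for p, h in zip(embedded_probs, mean_holding)]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, rewards)) / total

# normal, degraded, and failed states (illustrative values)
rate = long_run_reward([0.90, 0.08, 0.02], [100.0, 20.0, 5.0], [1.0, 0.5, 0.0])
```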

  6. Modeling the biomechanical and injury response of human liver parenchyma under tensile loading.

    PubMed

    Untaroiu, Costin D; Lu, Yuan-Chiao; Siripurapu, Sundeep K; Kemper, Andrew R

    2015-01-01

    The rapid advancement in computational power has made human finite element (FE) models one of the most efficient tools for assessing the risk of abdominal injuries in a crash event. In this study, specimen-specific FE models were employed to quantify material and failure properties of human liver parenchyma using a FE optimization approach. Uniaxial tensile tests were performed on 34 parenchyma coupon specimens prepared from two fresh human livers. Each specimen was tested to failure at one of four loading rates (0.01 s(-1), 0.1 s(-1), 1 s(-1), and 10 s(-1)) to investigate the effects of rate dependency on the biomechanical and failure response of liver parenchyma. Each test was simulated by prescribing the end displacements of specimen-specific FE models based on the corresponding test data. The parameters of a first-order Ogden material model were identified for each specimen by a FE optimization approach while simulating the pre-tear loading region. The mean material model parameters were then determined for each loading rate from the characteristic averages of the stress-strain curves, and a stochastic optimization approach was utilized to determine the standard deviations of the material model parameters. A hyperelastic material model using a tabulated formulation for rate effects showed good predictions in terms of tensile material properties of human liver parenchyma. Furthermore, the tissue tearing was numerically simulated using a cohesive zone modeling (CZM) approach. A layer of cohesive elements was added at the failure location, and the CZM parameters were identified by fitting the post-tear force-time history recorded in each test. The results show that the proposed approach is able to capture both the biomechanical and failure response, and accurately model the overall force-deflection response of liver parenchyma over a large range of tensile loading rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
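
    For reference, the uniaxial Cauchy stress of the incompressible first-order Ogden model named above has a simple closed form; the mu and alpha values below are demo numbers, not the identified liver-parenchyma parameters:

```python
# First-order incompressible Ogden model under uniaxial stretch:
# lambda_1 = L, lambda_2 = lambda_3 = L**-0.5, giving Cauchy stress
# sigma = mu * (L**alpha - L**(-alpha/2)). Parameter values are demos.
def ogden_uniaxial_cauchy(stretch, mu, alpha):
    return mu * (stretch ** alpha - stretch ** (-alpha / 2.0))

s = ogden_uniaxial_cauchy(1.2, mu=5.0e3, alpha=9.0)  # Pa, at 20% stretch
```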

  7. Comparative study of the failure rates among 3 implantable defibrillator leads.

    PubMed

    van Malderen, Sophie C H; Szili-Torok, Tamas; Yap, Sing C; Hoeks, Sanne E; Zijlstra, Felix; Theuns, Dominic A M J

    2016-12-01

    After the introduction of the Biotronik Linox S/SD high-voltage lead, several cases of early failure have been observed. The purpose of this article was to assess the performance of the Linox S/SD lead in comparison to 2 other contemporary leads. We used the prospective Erasmus MC ICD registry to identify all implanted Linox S/SD (n = 408), Durata (St. Jude Medical, model 7122) (n = 340), and Endotak Reliance (Boston Scientific, models 0155, 0138, and 0158) (n = 343) leads. Lead failure was defined by low- or high-voltage impedance, failure to capture, sense or defibrillate, or the presence of nonphysiological signals not due to external interference. During a median follow-up of 5.1 years, 24 Linox (5.9%), 5 Endotak (1.5%), and 5 Durata (1.5%) leads failed. At 5-year follow-up, the cumulative failure rate of Linox leads (6.4%) was higher than that of Endotak (0.4%; P < .0001) and Durata (2.0%; P = .003) leads. The incidence rate was higher in Linox leads (1.3 per 100 patient-years) than in Endotak and Durata leads (0.2 and 0.3 per 100 patient-years, respectively; P < .001). A log-log analysis of the cumulative hazard for Linox leads functioning at 3-year follow-up revealed a stable failure rate of 3% per year. The majority of failures consisted of noise (62.5%) and abnormal impedance (33.3%). This study demonstrates a higher failure rate of Linox S/SD high-voltage leads compared to contemporary leads. Although the mechanism of lead failure is unclear, the majority presents with abnormal electrical parameters. Comprehensive monitoring of Linox S/SD high-voltage leads includes remote monitoring to facilitate early detection of lead failure. Copyright © 2016. Published by Elsevier Inc.
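
    The headline incidence rates quoted above follow from the standard person-time calculation; the denominator below is an assumed figure for illustration, since the abstract reports only the rate:

```python
# Incidence rate per 100 patient-years: 100 * events / person-years.
def incidence_per_100py(events, person_years):
    return 100.0 * events / person_years

# e.g. 24 Linox failures over an assumed ~1850 patient-years of follow-up
rate = incidence_per_100py(24, 1850.0)  # ~1.3, matching the reported figure
```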

  8. A Queueing Approach to Optimal Resource Replication in Wireless Sensor Networks

    DTIC Science & Technology

    2009-04-29

    ...replication strategies in wireless sensor networks. The model can be used to minimize either the total transmission rate of the network (an energy-centric approach) or to ensure the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly

  9. Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons

    NASA Technical Reports Server (NTRS)

    Arunkumar, Satyanarayana; Przekop, Adam

    2010-01-01

    Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.

  10. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
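
    The Markov evaluation approach can be illustrated on a model shrunk from 27 states to 3 (both redundant units up, one up, system failed), integrating the Kolmogorov forward equations; the failure rate is an arbitrary demo value:

```python
# Toy 3-state Markov reliability model for a duplex system with per-unit
# failure rate lam (no repair): Euler integration of the state equations.
# Reliability = P(system not in the failed state).
def markov_reliability(lam, t_end, dt=1e-3):
    p = [1.0, 0.0, 0.0]              # start with both units healthy
    for _ in range(round(t_end / dt)):
        d0 = -2.0 * lam * p[0]
        d1 = 2.0 * lam * p[0] - lam * p[1]
        d2 = lam * p[1]
        p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
    return 1.0 - p[2]

r = markov_reliability(lam=0.01, t_end=100.0)  # near 2e^-1 - e^-2 ~ 0.60
```

A full RSDIMU model adds states for detection, isolation, false alarms, and damage effects, but is solved the same way.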

  11. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual damage parameters, namely the spring reaction forces, relative displacements, and energy release rates, are calculated along the delamination front.

  12. Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)

    2002-01-01

    To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant-load-rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.

  13. Predictors of incident heart failure in patients after an acute coronary syndrome: The LIPID heart failure risk-prediction model.

    PubMed

    Driscoll, Andrea; Barnes, Elizabeth H; Blankenberg, Stefan; Colquhoun, David M; Hunt, David; Nestel, Paul J; Stewart, Ralph A; West, Malcolm J; White, Harvey D; Simes, John; Tonkin, Andrew

    2017-12-01

    Coronary heart disease is a major cause of heart failure. Availability of risk-prediction models that include both clinical parameters and biomarkers is limited. We aimed to develop such a model for prediction of incident heart failure. A multivariable risk-factor model was developed for prediction of first occurrence of heart failure death or hospitalization. A simplified risk score was derived that enabled subjects to be grouped into categories of 5-year risk varying from <5% to >20%. Among 7101 patients from the LIPID study (84% male), with median age 61 years (interquartile range 55-67 years), 558 (8%) died or were hospitalized because of heart failure. Older age, history of claudication or diabetes mellitus, body mass index >30 kg/m², LDL-cholesterol >2.5 mmol/L, heart rate >70 beats/min, white blood cell count, and the nature of the qualifying acute coronary syndrome (myocardial infarction or unstable angina) were associated with an increase in heart failure events. Coronary revascularization was associated with a lower event rate. Incident heart failure increased with higher concentrations of B-type natriuretic peptide >50 ng/L, cystatin C >0.93 nmol/L, D-dimer >273 nmol/L, high-sensitivity C-reactive protein >4.8 nmol/L, and sensitive troponin I >0.018 μg/L. Addition of biomarkers to the clinical risk model improved the model's C statistic from 0.73 to 0.77. The net reclassification improvement incorporating biomarkers into the clinical model using categories of 5-year risk was 23%. Adding a multibiomarker panel to conventional parameters markedly improved discrimination and risk classification for future heart failure events. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
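
    A toy illustration of how a categorical risk score can be assembled from the thresholds listed above. The point weights and the helper name heart_failure_points are invented for illustration and are not the published LIPID coefficients.

```python
# Hypothetical additive risk score in the spirit of the LIPID model:
# each threshold crossing from the abstract adds points. All weights
# here are invented placeholders, NOT the published model.
def heart_failure_points(age, bmi, heart_rate, bnp, revascularized):
    pts = 0
    pts += 2 if age >= 70 else (1 if age >= 60 else 0)  # older age
    pts += 1 if bmi > 30 else 0         # body mass index > 30 kg/m^2
    pts += 1 if heart_rate > 70 else 0  # heart rate > 70 beats/min
    pts += 2 if bnp > 50 else 0         # B-type natriuretic peptide > 50 ng/L
    pts -= 1 if revascularized else 0   # revascularization is protective
    return pts
```

    In a fitted model, each point band would map to one of the 5-year risk categories (<5% up to >20%), with weights estimated by multivariable regression rather than assigned by hand.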

  14. Derivation of Failure Rates and Probability of Failures for the International Space Station Probabilistic Risk Assessment Study

    NASA Technical Reports Server (NTRS)

    Vitali, Roberto; Lutomski, Michael G.

    2004-01-01

    The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions but, more importantly, to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including the actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, increasingly accurate failure rate probability distributions for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, and power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
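
    The Bayesian updating described above can be sketched with a conjugate Gamma-Poisson model, a standard choice for failure rates. This is an illustration of the general approach, not the ISS PRA's actual priors or data.

```python
# Conjugate Bayesian update of a failure rate: Gamma(a, b) prior
# (roughly "a failures observed in b unit-hours", e.g. from industry data),
# Poisson likelihood for k on-orbit failures over t operating hours.
def update_failure_rate(a, b, k, t):
    a_post, b_post = a + k, b + t
    mean_rate = a_post / b_post   # posterior mean failures per hour
    return a_post, b_post, mean_rate
```

    As on-orbit hours t and observed failures k accumulate, the posterior is increasingly dominated by flight data rather than the industry-based prior, which is exactly the behavior the abstract describes.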

  15. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to the failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ≈ 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
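
    The BPT hazard quoted above can be computed directly, since the BPT distribution is the inverse Gaussian with mean μ and shape μ/α². A minimal standard-library sketch (parameter defaults chosen to match the abstract's generic α = 0.5):

```python
import math

def bpt_hazard(t, mu=1.0, alpha=0.5):
    """Hazard rate h(t) = f(t) / S(t) of the Brownian passage-time
    (inverse Gaussian) distribution with mean mu and aperiodicity alpha."""
    lam = mu / alpha ** 2           # inverse-Gaussian shape parameter
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # N(0,1) CDF
    pdf = math.sqrt(lam / (2.0 * math.pi * t ** 3)) * \
          math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))
    cdf = phi(math.sqrt(lam / t) * (t / mu - 1.0)) + \
          math.exp(2.0 * lam / mu) * phi(-math.sqrt(lam / t) * (t / mu + 1.0))
    return pdf / (1.0 - cdf)
```

    With μ = 1 and α = 0.5 the hazard is negligible at early times, exceeds the mean rate 1/μ beyond roughly μ/2, and levels off near the asymptote 1/(2μα²) = 2/μ, matching the behavior stated above.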

  16. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  17. Porting Initiation and Failure into Linked CHEETAH

    NASA Astrophysics Data System (ADS)

    Souers, Clark; Vitello, Peter

    2007-06-01

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size but did not capture other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At a high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate-sticks, corner-turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
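
    The qualitative shape of such a pressure-dependent rate law (zero below threshold, slow turn-on for initiation, a steep jump over a narrow band, constant at detonation pressures) can be sketched as a piecewise function. All constants and the function name below are invented for illustration; they are not the calibrated Tarantula parameters.

```python
# Schematic pressure-threshold reaction rate (invented constants):
# zero below p_on, slow linear turn-on (initiation), a steep rise over
# the narrow band [p_jump, p_det], and a constant detonation rate above.
def reaction_rate(p, p_on=2.0, p_jump=10.0, p_det=12.0, slow=0.05, fast=5.0):
    if p < p_on:                  # below threshold: no reaction (failure)
        return 0.0
    if p < p_jump:                # slow turn-on: initiation regime
        return slow * (p - p_on)
    if p < p_det:                 # steep jump over a small pressure range
        frac = (p - p_jump) / (p_det - p_jump)
        return slow * (p_jump - p_on) + frac * fast
    return slow * (p_jump - p_on) + fast   # constant rate in detonation
```

    A slowly failing cylinder corresponds to the pressure sliding down through the steep branch, which collapses the rate and drives the quick failure described above.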

  18. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have recast these in a framework based on a simple, generalized rate-change formulation and applied it to the two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, in which the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of a probability distribution (probability density function, or PDF) that describes the population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.

  19. Do heart and respiratory rate variability improve prediction of extubation outcomes in critically ill patients?

    PubMed Central

    2014-01-01

    Introduction Prolonged ventilation and failed extubation are associated with increased harm and cost. The added value of heart and respiratory rate variability (HRV and RRV) during spontaneous breathing trials (SBTs) to predict extubation failure remains unknown. Methods We enrolled 721 patients in a multicenter (12 sites), prospective, observational study, evaluating clinical estimates of risk of extubation failure, physiologic measures recorded during SBTs, HRV and RRV recorded before and during the last SBT prior to extubation, and extubation outcomes. We excluded 287 patients because of protocol or technical violations, or poor data quality. Measures of variability (97 HRV, 82 RRV) were calculated from electrocardiogram and capnography waveforms, followed by automated cleaning and variability analysis using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software. Repeated randomized subsampling with training, validation, and testing was used to derive and compare predictive models. Results Of 434 patients with high-quality data, 51 (12%) failed extubation. Two HRV and eight RRV measures showed statistically significant association with extubation failure (P <0.0041, 5% false discovery rate). An ensemble average of five univariate logistic regression models using RRV during SBT, yielding a probability of extubation failure (called the WAVE score), demonstrated optimal predictive capacity. With repeated random subsampling and testing, the model showed a mean receiver operating characteristic area under the curve (ROC AUC) of 0.69, higher than heart rate (0.51), the rapid shallow breathing index (RSBI; 0.61) and respiratory rate (0.63). 
After deriving a WAVE model based on all data, training-set performance demonstrated that the model increased its predictive power when applied to patients conventionally considered high risk: a WAVE score >0.5 in patients with RSBI >105 and perceived high risk of failure yielded a fold increase in risk of extubation failure of 3.0 (95% confidence interval (CI) 1.2 to 5.2) and 3.5 (95% CI 1.9 to 5.4), respectively. Conclusions Altered HRV and RRV (during the SBT prior to extubation) are significantly associated with extubation failure. A predictive model using RRV during the last SBT provided optimal accuracy of prediction in all patients, with improved accuracy when combined with clinical impression or RSBI. This model requires a validation cohort to evaluate accuracy and generalizability. Trial registration ClinicalTrials.gov NCT01237886. Registered 13 October 2010. PMID:24713049
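
    The headline numbers above are ROC AUCs; for reference, the AUC of any score can be computed directly as a Mann-Whitney probability. The scores below are invented for illustration.

```python
# ROC AUC as the probability that a randomly chosen positive case
# (failed extubation) scores higher than a randomly chosen negative
# case, counting ties as one half (Mann-Whitney U / (n_pos * n_neg)).
def roc_auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 (as for heart rate above) is chance-level discrimination; an AUC of 0.69 for the WAVE score means a failed-extubation patient outranks a successfully extubated one 69% of the time.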

  20. Modeling Population-Level Consequences of Polychlorinated Biphenyl Exposure in East Greenland Polar Bears.

    PubMed

    Pavlova, Viola; Grimm, Volker; Dietz, Rune; Sonne, Christian; Vorkamp, Katrin; Rigét, Frank F; Letcher, Robert J; Gustavson, Kim; Desforges, Jean-Pierre; Nabe-Nielsen, Jacob

    2016-01-01

    Polychlorinated biphenyls (PCBs) can cause endocrine disruption, cancer, immunosuppression, or reproductive failure in animals. We used an individual-based model to explore whether and how PCB-associated reproductive failure could affect the dynamics of a hypothetical polar bear (Ursus maritimus) population exposed to PCBs to the same degree as the East Greenland subpopulation. Dose-response data from experimental studies on a surrogate species, the mink (Mustela vison), were used in the absence of similar data for polar bears. Two alternative types of reproductive failure in relation to maternal sum-PCB concentrations were considered: increased abortion rate and increased cub mortality. We found that the quantitative impact of PCB-induced reproductive failure on population growth rate depended largely on the actual type of reproductive failure involved. Critical potencies of the dose-response relationship for decreasing the population growth rate were established for both modeled types of reproductive failure. Comparing the model predictions of the age-dependent trend of sum-PCB concentrations in females with actual field measurements from East Greenland indicated that it was unlikely that PCB exposure caused a high incidence of abortions in the subpopulation. However, on the basis of this analysis, it could not be excluded that PCB exposure contributes to higher cub mortality. Our results highlight the necessity for further research on the possible influence of PCBs on polar bear reproduction regarding their physiological pathway. This includes determining the exact cause of reproductive failure, i.e., in utero exposure versus lactational exposure of offspring; the timing of offspring death; and establishing the most relevant reference metrics for the dose-response relationship.

  1. Modelling the failure behaviour of wind turbines

    NASA Astrophysics Data System (ADS)

    Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.

    2016-09-01

    Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software, as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, the failure models of existing offshore O&M tools are first presented to show the state of the art, and the strengths and weaknesses of the respective models are briefly discussed. Then a conceptual framework for modelling different failure mechanisms of wind turbines is presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as references. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.
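
    The reference comparison above (a component-specific wear model against a constant failure rate) can be sketched with the two standard hazard forms. Parameter values are illustrative only, not fleet-calibrated.

```python
# Constant failure rate (exponential model): the baseline reference.
# Units: failures per unit time, independent of component age t.
def constant_hazard(t, lam=0.01):
    return lam

# Weibull hazard with shape > 1: the failure rate grows with component
# age t, a simple stand-in for a wear-driven failure mechanism.
def weibull_hazard(t, scale=100.0, shape=2.5):
    return (shape / scale) * (t / scale) ** (shape - 1)
```

    Plotting the two against age shows why a constant-rate assumption understates late-life failures of wear-prone components such as rotor blades.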

  2. Tuning critical failure with viscoelasticity: How aftershocks inhibit criticality in an analytical mean field model of fracture.

    NASA Astrophysics Data System (ADS)

    Baro Urbea, J.; Davidsen, J.

    2017-12-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible of deformation. Avalanche models involving critical failure have determined universality classes in different systems: from slip events in crystalline and amorphous materials to the jamming of granular media or the fracture of brittle materials. However, not all empirical failure processes exhibit the trademarks of critical failure. As an example, the statistical properties of ultrasonic acoustic events recorded during the failure of porous brittle materials are stationary, except for variations in the activity rate that can be interpreted in terms of aftershock and foreshock activity (J. Baró et al., PRL 2013).The rheological properties of materials introduce dissipation, usually reproduced in atomistic models as a hardening of the coarse-grained elements of the system. If the hardening is associated to a relaxation process the same mechanism is able to generate temporal correlations. We report the analytic solution of a mean field fracture model exemplifying how criticality and temporal correlations are tuned by transient hardening. We provide a physical meaning to the conceptual model by deriving the constitutive equation from the explicit representation of the transient hardening in terms of a generalized viscoelasticity model. The rate of 'aftershocks' is controlled by the temporal evolution of the viscoelastic creep. At the quasistatic limit, the moment release is invariant to rheology. Therefore, the lack of criticality is explained by the increase of the activity rate close to failure, i.e. 'foreshocks'. Finally, the avalanche propagation can be reinterpreted as a pure mathematical problem in terms of a stochastic counting process. 
The statistical properties depend only on the distance to a critical point, which is universal for any parametrization of the transient hardening and a whole category of fracture models.

  3. A heart failure initiative to reduce the length of stay and readmission rates.

    PubMed

    White, Sabrina Marie; Hill, Alethea

    2014-01-01

    The purpose of this pilot was to improve multidisciplinary coordination of care and patient education and to foster self-management behaviors. The primary and secondary outcomes sought from this pilot were decreases in the 30-day readmission rate and in heart failure length of stay. The primary practice site was an inpatient medical-surgical nursing unit. The length of stay decreased from 6.05% to 4.42% for heart failure diagnostic-related group 291 as a result of utilizing the model. The length of stay decreased from 3.9% to 3.09%, which was also less than the national rate of 3.8036% for diagnostic-related group 292. In addition, the readmission rate decreased from 23.1% prior to January 2013 to 12.9%. Implementation of standards of care coordination can decrease length of stay and readmission rate and improve self-management. Implementation of evidence-based heart failure guidelines, improved interdisciplinary coordination of care, patient education, self-management skills, and transitional care at the time of discharge improved overall heart failure outcome measures. Utilizing the longitudinal model of care to transition patients to home aided in evaluating social support, resource allocation and utilization, access to care postdischarge, and interdisciplinary coordination of care. The collaboration between disciplines improved continuity of care, patient compliance with their discharge regimen, and adequate discharge follow-up.

  4. Stochastic Availability of a Repairable System with an Age - and Maintenance - Dependent Failure Rate,

    DTIC Science & Technology

    1982-06-01

    Polytechnic Institute, June 1982. Stochastic Availability of a Repairable System with an Age- and Maintenance-Dependent Failure Rate, by Jack-Kang Chan. Report No. Poly EE/CS 82-004. Partial contents: 1.1 Concepts of System Availability; 1.2 Maintenance and Failure Rate; 1.3 Summary; Chapter 2, System Model; 2.1 A Repairable System with Maintenance ...
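
    For context on the quantity studied in reports like this one: for the simplest case of exponential failure and repair, the steady-state availability reduces to the ratio of mean time to failure over total cycle time. A minimal sketch:

```python
# Steady-state (limiting) availability of a repairable system:
# A = MTTF / (MTTF + MTTR), the long-run fraction of time the unit is up.
def steady_state_availability(mttf, mttr):
    return mttf / (mttf + mttr)
```

    An age- and maintenance-dependent failure rate, as in the report above, makes availability time-dependent, but the same up-time-over-total-time interpretation applies.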

  5. Reliability Prediction Models for Discrete Semiconductor Devices

    DTIC Science & Technology

    1988-07-01

    Factors found to influence failure rate were device construction, semiconductor material, junction temperature, electrical stress, and circuit application.

  6. Scaling of coupled dilatancy-diffusion processes in space and time

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Meredith, P. G.; Brantut, N.; Heap, M.

    2012-04-01

    Coupled dilatancy-diffusion processes resulting from microscopically brittle damage due to precursory cracking have been observed in the laboratory and suggested as a mechanism for earthquake precursors. One reason precursors have proven elusive may be the scaling in space: recent geodetic and seismic data place strong limits on the spatial extent of the nucleation zone for recent earthquakes. Another may be the scaling in time: recent laboratory results on axi-symmetric samples show both a systematic decrease in circumferential extensional strain at failure and a delayed and sharper acceleration of acoustic emission event rate as strain rate is decreased. Here we examine the scaling of such processes in time from laboratory to field conditions using brittle creep (constant stress loading) tests to failure, in an attempt to bridge part of the strain rate gap to natural conditions, and discuss the implications for forecasting the failure time. Dilatancy rate is strongly correlated with strain rate, and decreases to zero in the steady-rate creep phase at strain rates around 10⁻⁹ s⁻¹ for a basalt from Mount Etna. The data are well described by a creep model based on the linear superposition of transient (decelerating) and accelerating micro-crack growth due to stress corrosion. The model produces good fits to the failure time in retrospect using the accelerating acoustic emission event rate, but in prospective tests on synthetic data with the same properties we find failure-time forecasting is subject to systematic epistemic and aleatory uncertainties that degrade predictability. The next stage is to use the technology developed to attempt failure forecasting in real time, using live streamed data and a public web-based portal to quantify the prospective forecast quality under such controlled laboratory conditions.
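
    Retrospective failure-time fits of the kind mentioned above are commonly done with the inverse-rate form of the failure forecast method: for an accelerating rate r(t) = k/(t_f - t), the inverse rate 1/r is linear in time, and its extrapolated zero crossing estimates the failure time t_f. A sketch on synthetic data (all values invented):

```python
# Failure forecast method, inverse-rate form: fit a straight line to
# 1/rate versus time by least squares and extrapolate to 1/rate = 0.
def forecast_failure_time(times, rates):
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(inv) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, inv)) \
            / sum((t - tbar) ** 2 for t in times)
    intercept = ybar - slope * tbar
    return -intercept / slope   # time where the fitted 1/rate reaches zero

times = [0.0, 2.0, 4.0, 6.0, 8.0]
rates = [5.0 / (10.0 - t) for t in times]   # synthetic data, true t_f = 10
```

    On noisy real data the estimate scatters, which is the epistemic and aleatory degradation of predictability noted above.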

  7. Mechanical characterization and modeling of the deformation and failure of the highly crosslinked RTM6 epoxy resin

    NASA Astrophysics Data System (ADS)

    Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.

    2017-08-01

    The nonlinear deformation and fracture of RTM6 epoxy resin is characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain-rate and temperature. The pressure-dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum principal stress driven brittle failure. The constitutive model and a stress-triaxiality dependent effective plastic strain based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite material. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.
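
    A stress-triaxiality dependent effective-plastic-strain failure criterion of the kind named above is often written in a Johnson-Cook-like exponential form. The constants below are invented placeholders, not the identified RTM6 parameters.

```python
import math

# Illustrative failure criterion: the effective plastic strain at failure
# decays with stress triaxiality eta (all constants are placeholders).
def failure_strain(eta, d1=0.05, d2=0.60, d3=1.5):
    return d1 + d2 * math.exp(-d3 * eta)
```

    Failure is declared once the accumulated equivalent plastic strain reaches failure_strain(eta), so high-triaxiality states such as notched tension fail at lower strains than shear-dominated states.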

  8. Micromolecular modeling

    NASA Technical Reports Server (NTRS)

    Guillet, J. E.

    1984-01-01

    A reaction-kinetics-based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30-year period were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure. It is indicated that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. It is shown that the effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers. It is found that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.

  9. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the predicted mean failure time for a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The base random-time model is the exponential distribution, which has a constant hazard function. We discuss an example of survival-model estimation for a composite hazard function using the exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the mean failure time for the lamp type. Grouping the data into several intervals and taking the mean failure value in each interval, the mean failure time of the model is calculated over each interval; the p-value obtained from the test is 0.3296.
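
    The exponential base model referred to above has a one-line maximum-likelihood fit: the constant hazard is the reciprocal of the sample mean, and the predicted mean failure time is the sample mean itself. The lifetimes below are invented.

```python
# MLE for the exponential base model: constant hazard = 1 / mean lifetime.
def fit_exponential(lifetimes):
    mean_time = sum(lifetimes) / len(lifetimes)
    return 1.0 / mean_time, mean_time  # (hazard, predicted mean failure time)

lifetimes = [900.0, 1100.0, 1000.0, 1250.0, 750.0]  # hypothetical lamp hours
```

    A composite hazard model builds on this constant baseline, which is the role the exponential distribution plays in the estimation above.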

  10. The impact of vaccine failure rate on epidemic dynamics in responsive networks.

    PubMed

    Liang, Yu-Hao; Juang, Jonq

    2015-04-01

    An SIS model based on the microscopic Markov-chain approximation is considered in this paper. It is assumed that individual vaccination behavior depends on contact awareness and on local and global information about an epidemic. To better simulate the real situation, the vaccine failure rate is also taken into consideration. Our main conclusions are as follows. First, we show that if the vaccine failure rate α is zero, then the epidemic eventually dies out regardless of what the network structure is or how large the effective spreading rate and the immunization response rates of an epidemic are. Second, we show that for any positive α, there exists a positive epidemic threshold depending on an adjusted network structure, which is determined only by the structure of the original network, the positive vaccine failure rate, and the immunization response rate for contact awareness. Moreover, the epidemic threshold increases with the strength of the immunization response rate for contact awareness. Finally, if the vaccine failure rate and the immunization response rate for contact awareness are positive, then there exists a critical vaccine failure rate αc > 0 such that the disease-free equilibrium (DFE) is stable (resp., unstable) if α < αc (resp., α > αc). Numerical simulations illustrating the effectiveness of our theoretical results are also provided.
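
    The threshold behavior described above can be illustrated with a drastically simplified, well-mixed discrete-time SIS map (not the paper's microscopic Markov-chain network model, which tracks per-node probabilities and vaccination):

```python
# Toy well-mixed discrete-time SIS iteration: rho is the infected fraction,
# beta the effective spreading rate, delta the recovery rate. Below the
# threshold (beta < delta) infection dies out; above it, rho approaches
# the endemic level 1 - delta/beta.
def iterate_sis(beta, delta, rho0=0.1, steps=2000):
    rho = rho0
    for _ in range(steps):
        rho = rho + beta * rho * (1.0 - rho) - delta * rho
    return rho
```

    This caricature omits the network structure, awareness, and vaccine-failure dynamics that produce the adjusted epidemic threshold in the paper, but it shows the die-out-versus-endemic dichotomy that the threshold governs.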

  11. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
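
    As a reference point for the nonparametric estimation discussed above, the unconstrained Nelson-Aalen cumulative hazard estimator takes a few lines; the paper's order-restricted MLE additionally constrains the hazard estimate to be increasing (IFR), which this sketch does not do.

```python
# Nelson-Aalen estimate of the cumulative hazard H(t) from complete
# (uncensored) failure times, one failure per distinct time: at the i-th
# ordered failure, H jumps by 1 / (number of units still at risk).
def nelson_aalen(failure_times):
    times = sorted(failure_times)
    n = len(times)
    H, steps = 0.0, []
    for i, t in enumerate(times):
        H += 1.0 / (n - i)
        steps.append((t, H))
    return steps
```

    Imposing the IFR constraint on top of an estimator like this (e.g. by isotonic regression on the hazard increments) is the kind of order restriction the abstract describes.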

  12. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the structure effect on tunnel stability in western mining of China, three typical kinds of numerical model were built, based on a strain-softening constitutive model and a linear elastic-perfectly plastic model for the soft rock and the interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relations and failure characteristics of the three models differ from one another. The combination model without an interface or with a strong interface presented continuous failure, while the weak interface exhibited a 'cut-off' effect. Thus, conceptual models of a bi-material model and a bi-body model were established. Then numerical experiments of tri-axial compression were carried out for the two models. The relationships between stress evolution, failure zone and deformation rate fluctuations, as well as the displacement of the interface, were analyzed in detail. Results show that two breakaway points of the deformation rate mark the start and the penetration of the main rupture, respectively; they are distinguishable due to the large fluctuation. The bi-material model shows generally continuous failure, while the bi-body model shows a 'V'-type shear zone in the weak body and failure in the strong body near the interface due to the interface effect. With increasing confining pressure, the 'cut-off' effect of the weak interface becomes less obvious. These conclusions lay the theoretical foundation for further development of a constitutive model for the soft rock-coal combination body.

  13. Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.

    PubMed

    Fu, H; Darroch, J E; Haas, T; Ranjit, N

    1999-01-01

    Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. 
Levels of contraceptive failure vary widely by method, as well as by personal and background characteristics. Income's strong influence on contraceptive failure suggests that access barriers and the general disadvantage associated with poverty seriously impede effective contraceptive practice in the United States.
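
The reported 12-month failure rates can be related to a constant monthly hazard. A minimal sketch of the discrete-time hazard arithmetic behind such life-table estimates (the 8% pill figure is taken from the abstract; the constant-hazard simplification is ours, not the study's hazards model):

```python
def monthly_hazard(annual_failure):
    """Constant monthly hazard consistent with a given 12-month failure probability."""
    return 1.0 - (1.0 - annual_failure) ** (1.0 / 12.0)

def cumulative_failure(h, months):
    """Probability of method failure within `months` under constant monthly hazard h."""
    return 1.0 - (1.0 - h) ** months

pill_12mo = 0.08                    # corrected 12-month pill failure rate from the study
h = monthly_hazard(pill_12mo)
print(f"monthly hazard: {h:.4f}")                           # ≈ 0.0069
print(f"6-month failure: {cumulative_failure(h, 6):.3f}")   # ≈ 0.041
```

The real estimates are not constant-hazard: the study reports separate 6-month and 12-month rates precisely because failure risk changes as inconsistent users discontinue early.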

  14. Assessment of compressive failure process of cortical bone materials using damage-based model.

    PubMed

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

The main factors in cortical bone failure are aging or osteoporosis, accidents involving high-energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion remains one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur, in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model was validated comparatively, using the predicted and measured structural response as load versus compressive displacement through simulation and experiment. FE results indicated that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, which coincided with the degradation of structural compressive stiffness followed by a large amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, was examined for different rates. Results show that choosing a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.
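
The scalar-damage idea underlying such models can be sketched in one dimension: linear elasticity degraded by a damage variable d, so that stress first rises, then softens as damage accumulates. The linear post-yield damage ramp and all constants below are illustrative, not the paper's calibrated law:

```python
def damaged_stress(strain, E=18.0e3, yield_strain=0.006, damage_rate=120.0):
    """Uniaxial stress (MPa) with stiffness degraded by scalar damage d in [0, 1)."""
    d = 0.0
    if strain > yield_strain:
        d = min(0.95, damage_rate * (strain - yield_strain))  # illustrative linear ramp
    return (1.0 - d) * E * strain                             # degraded elastic response

strains = [i * 0.001 for i in range(1, 16)]
curve = [damaged_stress(e) for e in strains]
peak = max(curve)   # softening sets in once damage outpaces elastic hardening
```

The degradation of structural compressive stiffness noted in the FE results corresponds to the post-peak branch of this curve.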

  15. Risk factors for early failure after peripheral endovascular intervention: application of a reliability engineering approach.

    PubMed

    Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B

    2013-01-01

We apply a novel analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow for improved prediction of outcome after peripheral endovascular intervention, specifically with regard to identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2; P < 0.001), respectively.
The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots, with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R² = 0.98084), whereas interventions for critical limb ischemia demonstrated a decreasing failure rate, suggesting the predominance of early failures (β = 0.7395, θ = 6.8, mean time to failure = 8.2 months, index of fit = 0.99391, R² = 0.98786). By 3.1 months, 10% of interventions had failed. This point (90% reliability) was identified as the basic rating life. Using multivariate analysis of failure data, independent predictors of early failure (before 3.1 months) included tissue loss, long lesion length, chronic total occlusions, heart failure, and end-stage renal disease. Application of an RE framework to the assessment of clinical outcomes after peripheral interventions is feasible, and potentially more informative than traditional techniques. Conceptualization of interventions as "products" permits application of product life-cycle models that allow for an empiric definition of "early failure," which may facilitate comparative effectiveness analysis and enable the development of individualized surveillance programs after endovascular interventions. Copyright © 2013 Annals of Vascular Surgery Inc. Published by Elsevier Inc. All rights reserved.
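
The reported Weibull parameters can be checked against the quoted means: for a Weibull distribution, MTTF = θ·Γ(1 + 1/β), and the B10 ("basic rating life") point is θ·(−ln 0.9)^(1/β). A quick sketch (note that the pooled 3.1-month B10 in the abstract mixes both cohorts, so it need not match either single-cohort value):

```python
import math

def weibull_mttf(beta, theta):
    """Mean time to failure of a Weibull(beta, theta) distribution."""
    return theta * math.gamma(1.0 + 1.0 / beta)

def weibull_b10(beta, theta):
    """Time by which 10% of units have failed (90% reliability)."""
    return theta * (-math.log(0.9)) ** (1.0 / beta)

print(weibull_mttf(1.22, 13.46))   # ≈ 12.6 months, matching the claudication cohort
print(weibull_mttf(0.7395, 6.8))   # ≈ 8.2 months, matching the CLI cohort
print(weibull_b10(1.22, 13.46))    # ≈ 2.1 months for the claudication curve alone
```

β < 1 for the critical limb ischemia cohort is what encodes "decreasing failure rate": early failures dominate, the reliability-engineering signature of infant mortality.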

  16. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is another specialty of the code.
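
A minimal sketch of the plain (non-variance-reduced) Monte Carlo idea for a system with Weibull component lifetimes — here a hypothetical 2-out-of-3 system, not MC-HARP's actual models:

```python
import math
import random

def sample_weibull(beta, theta, rng):
    """Inverse-CDF sample of a Weibull(beta, theta) lifetime."""
    return theta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)

def reliability_2oo3(beta, theta, mission, n=100_000, seed=0):
    """Estimate P(at least 2 of 3 identical components survive the mission time)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        survivors = sum(sample_weibull(beta, theta, rng) > mission for _ in range(3))
        if survivors >= 2:
            ok += 1
    return ok / n

est = reliability_2oo3(beta=1.5, theta=1000.0, mission=500.0)
R = math.exp(-(500.0 / 1000.0) ** 1.5)     # component reliability at t = 500
exact = R * R * (3.0 - 2.0 * R)            # closed-form 2-out-of-3 reliability
```

For the highly reliable systems HARP targets, failures are so rare that naive sampling like this wastes nearly all trials on no-failure histories — which is exactly why MC-HARP's variance reduction techniques matter.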

  17. Failure to Rescue Rates After Coronary Artery Bypass Grafting: An Analysis From The Society of Thoracic Surgeons Adult Cardiac Surgery Database.

    PubMed

    Edwards, Fred H; Ferraris, Victor A; Kurlansky, Paul A; Lobdell, Kevin W; He, Xia; O'Brien, Sean M; Furnary, Anthony P; Rankin, J Scott; Vassileva, Christina M; Fazzalari, Frank L; Magee, Mitchell J; Badhwar, Vinay; Xian, Ying; Jacobs, Jeffrey P; Wyler von Ballmoos, Moritz C; Shahian, David M

    2016-08-01

    Failure to rescue (FTR) is increasingly recognized as an important quality indicator in surgery. The Society of Thoracic Surgeons National Database was used to develop FTR metrics and a predictive FTR model for coronary artery bypass grafting (CABG). The study included 604,154 patients undergoing isolated CABG at 1,105 centers from January 2010 to January 2014. FTR was defined as death after four complications: stroke, renal failure, reoperation, and prolonged ventilation. FTR was determined for each complication and a composite of the four complications. A statistical model to predict FTR was developed. FTR rates were 22.3% for renal failure, 16.4% for stroke, 12.4% for reoperation, 12.1% for prolonged ventilation, and 10.5% for the composite. Mortality increased with multiple complications and with specific combinations of complications. The multivariate risk model for prediction of FTR demonstrated a C index of 0.792 and was well calibrated, with a 1.0% average difference between observed/expected (O/E) FTR rates. With centers grouped into mortality terciles, complication rates increased modestly (11.4% to 15.7%), but FTR rates more than doubled (6.8% to 13.9%) from the lowest to highest terciles. Centers in the lowest complication rate tercile had an FTR O/E of 1.14, whereas centers in the highest complication rate tercile had an FTR O/E of 0.91. CABG mortality rates vary directly with FTR, but complication rates have little relation to death. FTR rates derived from The Society of Thoracic Surgeons data can serve as national benchmarks. Predicted FTR rates may facilitate patient counseling, and FTR O/E ratios have promise as valuable quality metrics. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
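
The FTR and O/E arithmetic is simple enough to sketch. The counts below are illustrative only, chosen to reproduce the composite 10.5% FTR and an O/E of 1.14 from the abstract; they are not Society of Thoracic Surgeons data:

```python
def failure_to_rescue(deaths, patients_with_complication):
    """FTR = deaths among patients who had at least one tracked complication."""
    return deaths / patients_with_complication

def o_over_e(observed_ftr, expected_ftr):
    """Risk-adjusted quality metric: observed vs. model-expected FTR."""
    return observed_ftr / expected_ftr

ftr = failure_to_rescue(deaths=105, patients_with_complication=1000)  # 0.105
oe = o_over_e(ftr, expected_ftr=0.092)                                # ≈ 1.14
```

The key finding survives this simplification: a center can have a low complication rate yet a high O/E, meaning it rescues poorly when complications do occur.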

  18. The dynamic failure behavior of tungsten heavy alloys subjected to transverse loads

    NASA Astrophysics Data System (ADS)

    Tarcza, Kenneth Robert

    Tungsten heavy alloys (WHA), a category of particulate composites used in defense applications as kinetic energy penetrators, have been studied for many years. Even so, their dynamic failure behavior is not fully understood and cannot be predicted by numerical models presently in use. In this experimental investigation, a comprehensive understanding of the high-rate transverse-loading fracture behavior of WHA has been developed. Dynamic fracture events spanning a range of strain rates and loading conditions were created via mechanical testing and used to determine the influence of surface condition and microstructure on damage initiation, accumulation, and sample failure under different loading conditions. Using standard scanning electron microscopy metallographic and fractographic techniques, sample surface condition is shown to be extremely influential to the manner in which WHA fails, causing a fundamental change from externally to internally nucleated failures as surface condition is improved. Surface condition is characterized using electron microscopy and surface profilometry. Fracture surface analysis is conducted using electron microscopy, and linear elastic fracture mechanics is used to understand the influence of surface condition, specifically initial flaw size, on sample failure behavior. Loading conditions leading to failure are deduced from numerical modeling and experimental observation. The results highlight parameters and considerations critical to the understanding of dynamic WHA fracture and the development of dynamic WHA failure models.

  19. Tensile Strength of Carbon Nanotubes Under Realistic Temperature and Strain Rate

    NASA Technical Reports Server (NTRS)

    Wei, Chen-Yu; Cho, Kyeong-Jae; Srivastava, Deepak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Strain rate and temperature dependence of the tensile strength of single-wall carbon nanotubes has been investigated with molecular dynamics simulations. The tensile failure or yield strain is found to be strongly dependent on the temperature and strain rate. A transition state theory based predictive model is developed for the tensile failure of nanotubes. Based on the parameters fitted from high-strain rate and temperature dependent molecular dynamics simulations, the model predicts that a defect free micrometer long single-wall nanotube at 300 K, stretched with a strain rate of 1%/hour, fails at about 9 plus or minus 1% tensile strain. This is in good agreement with recent experimental findings.

  20. High-Tensile Strength Tape Versus High-Tensile Strength Suture: A Biomechanical Study.

    PubMed

    Gnandt, Ryan J; Smith, Jennifer L; Nguyen-Ta, Kim; McDonald, Lucas; LeClere, Lance E

    2016-02-01

To determine which suture design, high-tensile strength tape or high-tensile strength suture, performed better at securing human tissue across 4 selected suture techniques commonly used in tendinous repair, by comparing the total load at failure measured during a fixed-rate longitudinal single load to failure using a biomechanical testing machine. Matched sets of tendon specimens with bony attachments were dissected from 15 human cadaveric lower extremities in a manner allowing for direct comparison testing. With the use of selected techniques (simple Mason-Allen in the patellar tendon specimens, whip stitch in the quadriceps tendon specimens, and Krackow stitch in the Achilles tendon specimens), 1 sample of each set was sutured with a 2-mm braided, nonabsorbable, high-tensile strength tape and the other with a No. 2 braided, nonabsorbable, high-tensile strength suture. A total of 120 specimens were tested. Each model was loaded to failure at a fixed longitudinal traction rate of 100 mm/min. The maximum load and failure method were recorded. In the whip-stitch and the Krackow-stitch models, the high-tensile strength tape had a significantly greater mean load at failure, with differences of 181 N (P = .001) and 94 N (P = .015), respectively. No significant difference was found in the Mason-Allen and simple stitch models. Pull-through remained the most common method of failure at an overall rate of 56.7% (suture = 55%; tape = 58.3%). In biomechanical testing during a single load to failure, high-tensile strength tape performs more favorably than high-tensile strength suture, with a greater mean load to failure, in both the whip- and Krackow-stitch models. Although suture pull-through remains the most common method of failure, high-tensile strength tape requires a significantly greater load to pull through in the whip-stitch and Krackow-stitch models.
The biomechanical data obtained in the current study indicates that high-tensile strength tape may provide better repair strength compared with high-tensile strength suture at time-zero simulated testing. Published by Elsevier Inc.

  1. The iothalamate clearance in cats with experimentally induced renal failure.

    PubMed

    Ohashi, F; Kuroda, K; Shimada, T; Shimada, Y; Ota, M

    1996-08-01

Plasma iothalamate (IOT) disappearance rates were measured after a single injection of IOT (113.8 mg/kg, IV) in cats with experimentally induced renal failure. The disappearance rates fitted well into the one-compartment model. The mean plasma disappearance rate of IOT in the cats with induced renal failure (2.16 ± 0.240 × 10⁻³ micrograms/ml/min) was markedly lower than that of clinically healthy cats (4.10 ± 1.00 × 10⁻³ micrograms/ml/min). These results demonstrate that IOT clearance is useful for evaluating renal function in cats.
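
A minimal sketch of the one-compartment arithmetic assumed here: plasma concentration decays as C(t) = C0·e^(−kt), so the rate constant k falls out of a log-linear fit and clearance follows as Dose/AUC with AUC = C0/k. The sample times and concentrations below are invented for illustration, not the cats' data:

```python
import math

def elimination_rate(t1, c1, t2, c2):
    """First-order rate constant k from two samples on C(t) = C0 * exp(-k t)."""
    return (math.log(c1) - math.log(c2)) / (t2 - t1)

def plasma_clearance(dose, c0, k):
    """CL = Dose / AUC; for a one-compartment IV bolus, AUC = C0 / k."""
    return dose * k / c0

# illustrative samples generated from C(t) = 50 * exp(-0.02 * t)
c30 = 50.0 * math.exp(-0.02 * 30.0)
c90 = 50.0 * math.exp(-0.02 * 90.0)
k = elimination_rate(30.0, c30, 90.0, c90)       # recovers 0.02 per minute
cl = plasma_clearance(dose=113.8, c0=50.0, k=k)  # dose per kg, as in the study
```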

  2. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
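
A minimal sketch of the approach for a single two-state subsystem: with exponential failure rate λ and repair rate μ, the Chapman-Kolmogorov equation for the probability of the working state is dP/dt = −λP + μ(1 − P), and fourth-order Runge-Kutta integration should converge to the long-run availability μ/(λ + μ). The rates below are illustrative, not the plant's:

```python
def rk4_availability(lam, mu, t_end, dt=0.01):
    """Integrate dP/dt = -lam*P + mu*(1-P) from P(0) = 1 with classical RK4."""
    f = lambda p: -lam * p + mu * (1.0 - p)
    p, t = 1.0, 0.0
    while t < t_end:
        k1 = f(p)
        k2 = f(p + 0.5 * dt * k1)
        k3 = f(p + 0.5 * dt * k2)
        k4 = f(p + dt * k3)
        p += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return p

lam, mu = 0.01, 0.1               # illustrative failure/repair rates (per hour)
a = rk4_availability(lam, mu, t_end=500.0)
steady = mu / (lam + mu)          # analytic long-run availability ≈ 0.909
```

The full model couples one such equation per system state, but the structure — linear ODEs driven by the transition rates — is the same.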

  3. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

This work develops a cost estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated: the first with one mammography machine, two technicians and one doctor, and the second (based on an actual operating clinic) with two machines, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the identification of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
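
The unit-cost logic can be sketched deterministically — a drastic simplification of the paper's discrete-event simulation; the cost figures and the repeat-once failure handling below are illustrative assumptions:

```python
def unit_cost(fixed_cost, cost_per_exam, n_exams, failure_prob):
    """Unit cost = total cost / completed exams, with failed acquisitions redone.

    Each failed image acquisition is assumed to be repeated once and to succeed
    on retry, so expected attempts per completed exam are 1 + failure_prob.
    """
    attempts = n_exams * (1.0 + failure_prob)
    total = fixed_cost + cost_per_exam * attempts
    return total / n_exams

base = unit_cost(fixed_cost=30000.0, cost_per_exam=12.0, n_exams=1000, failure_prob=0.0)
with_failures = unit_cost(30000.0, 12.0, 1000, failure_prob=0.10)  # 10% exam failures
```

What this closed form cannot capture — and the paper's DES can — is queueing: when a machine is down or an exam is repeated, waiting and idle-resource time shift the fixed-cost share per exam, which is why the two clinic configurations respond differently to the same failure rate.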

  4. Rate-weakening friction characterizes both slow sliding and catastrophic failure of landslides.

    PubMed

    Handwerger, Alexander L; Rempel, Alan W; Skarbek, Rob M; Roering, Joshua J; Hilley, George E

    2016-09-13

Catastrophic landslides cause billions of dollars in damages and claim thousands of lives annually, whereas slow-moving landslides with negligible inertia dominate sediment transport on many weathered hillslopes. Surprisingly, both failure modes are displayed by nearby landslides (and individual landslides in different years) subjected to almost identical environmental conditions. Such observations have motivated the search for mechanisms that can cause slow-moving landslides to transition via runaway acceleration to catastrophic failure. A similarly diverse range of sliding behavior, including earthquakes and slow-slip events, occurs along tectonic faults. Our understanding of these phenomena has benefitted from mechanical treatments that rely upon key ingredients that are notably absent from previous landslide descriptions. Here, we describe landslide motion using a rate- and state-dependent frictional model that incorporates a nonlocal stress balance to account for the elastic response to gradients in slip. Our idealized, one-dimensional model reproduces both the displacement patterns observed in slow-moving landslides and the acceleration toward failure exhibited by catastrophic events. Catastrophic failure occurs only when the slip surface is characterized by rate-weakening friction and its lateral dimensions exceed a critical nucleation length h* that is shorter for higher effective stresses. However, landslides that are extensive enough to fall within this regime can nevertheless slide slowly for months or years before catastrophic failure. Our results suggest that the diversity of slip behavior observed during landslides can be described with a single model adapted from standard fault mechanics treatments.
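
Two standard ingredients of such models can be sketched directly: steady-state rate-and-state friction, μ_ss = μ0 + (a − b)·ln(v/v0), which is rate-weakening when b > a, and a common estimate of the critical nucleation length, h* ≈ G·Dc / ((b − a)·σ_eff), which shrinks as effective normal stress grows. The parameter values below are illustrative, not the paper's calibration:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.005, b=0.010, v0=1e-6):
    """Steady-state rate-and-state friction; b > a gives rate weakening."""
    return mu0 + (a - b) * math.log(v / v0)

def nucleation_length(shear_modulus, d_c, a, b, sigma_eff):
    """Common h* estimate: slip patches longer than this can accelerate unstably."""
    return shear_modulus * d_c / ((b - a) * sigma_eff)

# rate weakening: friction drops as sliding velocity rises
mu_slow = steady_state_friction(1e-6)
mu_fast = steady_state_friction(1e-4)

# h* is shorter at higher effective stress (e.g., lower pore pressure)
h_low = nucleation_length(1e9, 1e-3, 0.005, 0.010, sigma_eff=50e3)
h_high = nucleation_length(1e9, 1e-3, 0.005, 0.010, sigma_eff=200e3)
```

This reproduces the abstract's qualitative claim: only slip surfaces larger than h* can run away, and raising effective stress lowers that threshold.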

  5. Investigation of precipitate refinement in Mg alloys by an analytical composite failure model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabei, Ali; Li, Dongsheng; Lavender, Curt A.

    2015-10-01

An analytical model is developed to simulate precipitate refinement in second-phase-strengthened magnesium alloys. The model is based on determination of the stress fields inside elliptical precipitates embedded in a rate-dependent inelastic matrix. The stress fields are used to determine the failure mode that governs the refinement behavior. Using an AZ31 Mg alloy as an example, the effects of the applied load and of the aspect ratio and orientation of the particle on the macroscopic failure of a single α-Mg17Al12 precipitate are studied. Additionally, a temperature-dependent version of the corresponding constitutive law is used to incorporate the effects of temperature. In plane strain compression, an extensional failure mode always fragments the precipitates. The critical strain rate at which the precipitates start to fail depends strongly on the orientation of the precipitate with respect to the loading direction. The results show that the higher the aspect ratio, the more easily the precipitate fractures. Precipitate shape is another factor influencing the failure response: in contrast to elliptical precipitates with high aspect ratio, spherical precipitates are strongly resistant to sectioning. In pure shear loading, in addition to the extensional mode of precipitate failure, a shearing mode may be activated depending on the orientation and aspect ratio of the precipitate. The effect of temperature in relation to strain rate was also verified for the plane strain compression and pure shear loading cases.

  6. Rate-weakening friction characterizes both slow sliding and catastrophic failure of landslides

    PubMed Central

    Handwerger, Alexander L.; Rempel, Alan W.; Skarbek, Rob M.; Roering, Joshua J.; Hilley, George E.

    2016-01-01

    Catastrophic landslides cause billions of dollars in damages and claim thousands of lives annually, whereas slow-moving landslides with negligible inertia dominate sediment transport on many weathered hillslopes. Surprisingly, both failure modes are displayed by nearby landslides (and individual landslides in different years) subjected to almost identical environmental conditions. Such observations have motivated the search for mechanisms that can cause slow-moving landslides to transition via runaway acceleration to catastrophic failure. A similarly diverse range of sliding behavior, including earthquakes and slow-slip events, occurs along tectonic faults. Our understanding of these phenomena has benefitted from mechanical treatments that rely upon key ingredients that are notably absent from previous landslide descriptions. Here, we describe landslide motion using a rate- and state-dependent frictional model that incorporates a nonlocal stress balance to account for the elastic response to gradients in slip. Our idealized, one-dimensional model reproduces both the displacement patterns observed in slow-moving landslides and the acceleration toward failure exhibited by catastrophic events. Catastrophic failure occurs only when the slip surface is characterized by rate-weakening friction and its lateral dimensions exceed a critical nucleation length h* that is shorter for higher effective stresses. However, landslides that are extensive enough to fall within this regime can nevertheless slide slowly for months or years before catastrophic failure. Our results suggest that the diversity of slip behavior observed during landslides can be described with a single model adapted from standard fault mechanics treatments. PMID:27573836

  7. Forming limit curves of DP600 determined in high-speed Nakajima tests and predicted by two different strain-rate-sensitive models

    NASA Astrophysics Data System (ADS)

    Weiß-Borkowski, Nathalie; Lian, Junhe; Camberg, Alan; Tröster, Thomas; Münstermann, Sebastian; Bleck, Wolfgang; Gese, Helmut; Richter, Helmut

    2018-05-01

Determination of forming limit curves (FLC) to describe the multi-axial forming behaviour is possible via either experimental measurements or theoretical calculations. In the case of theoretical determination, different models are available and some of them consider the influence of strain rate in the quasi-static and dynamic strain rate regimes. Consideration of the strain rate effect is necessary as many material characteristics such as yield strength and failure strain are affected by loading speed. In addition, the start of instability and necking depends not only on the strain hardening coefficient but also on the strain rate sensitivity parameter. Therefore, the strain rate dependency of materials for both plasticity and the failure behaviour is taken into account in crash simulations for strain rates up to 1000 s⁻¹, and FLC can be used for the description of the material's instability behaviour at multi-axial loading. In this context, due to the strain rate dependency of the material behaviour, an extrapolation of the quasi-static FLC to dynamic loading conditions is not reliable. Therefore, experimental high-speed Nakajima tests or theoretical models shall be used to determine the FLC at high strain rates. In this study, two theoretical models for determination of FLC at high strain rates and results of experimental high-speed Nakajima tests for a DP600 are presented. One of the theoretical models is the numerical algorithm CRACH as part of the modular material and failure model MF GenYld+CrachFEM 4.2, which is based on an initial imperfection. Furthermore, the extended modified maximum force criterion considering the strain rate effect is also used to predict the FLC. These two models are calibrated by the quasi-static and dynamic uniaxial tensile tests and bulge tests. The predictions for the quasi-static and dynamic FLC by both models are presented and compared with the experimental results.
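
One building block of such instability criteria can be sketched directly: for power-law hardening σ = K·εⁿ, the classical maximum-force (Considère) condition dσ/dε = σ places the onset of diffuse necking at ε = n, which is why the strain hardening coefficient enters the FLC. The strain-rate-sensitive extensions used in the paper modify this condition; this sketch does not attempt them, and the constants are merely DP600-like illustrations:

```python
def considere_strain(K, n, lo=1e-6, hi=2.0, tol=1e-12):
    """Bisection root of dσ/dε - σ = 0 for σ = K * ε**n (analytic root: ε = n)."""
    g = lambda e: n * K * e ** (n - 1.0) - K * e ** n   # hardening rate minus stress
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

eps_neck = considere_strain(K=900.0, n=0.18)   # ≈ 0.18, i.e. ε = n
```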

  8. Impact and Penetration of Thin Aluminum 2024 Flat Panels at Oblique Angles of Incidence

    NASA Technical Reports Server (NTRS)

    Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Emmerling, William; Queitzsch, Gilbert K., Jr.

    2015-01-01

    The U.S. Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) are actively involved in improving the predictive capabilities of transient finite element computational methods for application to safety issues involving unintended impacts on aircraft and aircraft engine structures. One aspect of this work involves the development of an improved deformation and failure model for metallic materials, known as the Tabulated Johnson-Cook model, or MAT224, which has been implemented in the LS-DYNA commercial transient finite element analysis code (LSTC Corp., Livermore, CA) (Ref. 1). In this model the yield stress is a function of strain, strain rate and temperature and the plastic failure strain is a function of the state of stress, temperature and strain rate. The failure criterion is based on the accumulation of plastic strain in an element. The model also incorporates a regularization scheme to account for the dependency of plastic failure strain on mesh size. For a given material the model requires a significant amount of testing to determine the yield stress and failure strain as a function of the three-dimensional state of stress, strain rate and temperature. In addition, experiments are required to validate the model. Currently the model has been developed for Aluminum 2024 and validated against a series of ballistic impact tests on flat plates of various thicknesses (Refs. 1 to 3). Full development of the model for Titanium 6Al-4V is being completed, and mechanical testing for Inconel 718 has begun. The validation testing for the models involves ballistic impact tests using cylindrical projectiles impacting flat plates at a normal incidence (Ref. 2). By varying the thickness of the plates, different stress states and resulting failure modes are induced, providing a range of conditions over which the model can be validated. 
The objective of the study reported here was to provide experimental data to evaluate the model under more extreme conditions, using a projectile with a more complex shape and sharp contacts, impacting flat panels at oblique angles of incidence.
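
The classical Johnson-Cook flow stress that MAT224 tabulates and generalizes has the closed form σ = (A + B·εᵖⁿ)(1 + C·ln ε̇*)(1 − T*ᵐ): a hardening term, a rate term, and a thermal-softening term. A sketch with illustrative constants (not NASA's calibrated Aluminum 2024 tables):

```python
import math

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=265.0, B=426.0, n=0.34, C=0.015, m=1.0,
                        eps_rate_ref=1.0, T_room=293.0, T_melt=775.0):
    """Johnson-Cook flow stress (MPa): hardening x rate hardening x thermal softening."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_rate / eps_rate_ref))
            * (1.0 - T_star ** m))

s_slow = johnson_cook_stress(0.05, 1.0, 293.0)
s_fast = johnson_cook_stress(0.05, 1000.0, 293.0)   # rate hardening raises flow stress
s_hot = johnson_cook_stress(0.05, 1.0, 500.0)       # thermal softening lowers it
```

MAT224 replaces these fixed functional forms with tabulated curves and adds a stress-state-, rate-, and temperature-dependent failure strain plus mesh regularization, which is what the ballistic tests described above are used to calibrate and validate.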

  9. Prediction of line failure fault based on weighted fuzzy dynamic clustering and improved relational analysis

    NASA Astrophysics Data System (ADS)

    Meng, Xiaocheng; Che, Renfei; Gao, Shi; He, Juntao

    2018-04-01

With the advent of the big data age, power system research has entered a new stage. At present, the main application of big data in power systems is early-warning analysis of power equipment: by collecting relevant historical fault data, system security is improved through prediction of the warning level and failure rate of different kinds of equipment under given relational factors. In this paper, a method for line failure rate warning is proposed. First, fuzzy dynamic clustering is carried out on the collected historical information; to account for the imbalance between attributes, each attribute is weighted by its coefficient of variation, so that the weighted fuzzy clustering handles the data more effectively. Then, after analyzing the basic idea and properties of relational analysis model theory, the gray relational model is improved by combining the slope-based model with the Deng model, and the incremental components of the two sequences are also incorporated into the gray relational model to obtain the gray relational degree between the samples. The failure rate is then predicted according to a weighting principle. Finally, the concrete procedure is illustrated with an example, and the validity and superiority of the proposed method are verified.
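
The Deng gray relational degree that the improved model builds on can be sketched on its own: with Δ the pointwise distances to a reference sequence, each coefficient is (Δmin + ρΔmax)/(Δ + ρΔmax) with resolution coefficient ρ = 0.5, averaged over the sequence. The slope and increment extensions described in the paper are not reproduced here, and the sequences are invented:

```python
def gray_relational_degrees(reference, comparisons, rho=0.5):
    """Deng's gray relational degree of each comparison sequence to the reference.

    Delta_min / Delta_max are taken globally over all sequences, as in the
    standard formulation; rho is the resolution coefficient.
    """
    deltas = [[abs(r - c) for r, c in zip(reference, comp)] for comp in comparisons]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    degrees = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        degrees.append(sum(coeffs) / len(coeffs))
    return degrees

ref = [0.9, 0.8, 0.95, 0.7]
deg = gray_relational_degrees(ref, [[0.9, 0.8, 0.95, 0.7], [0.5, 0.4, 0.6, 0.3]])
# the identical sequence attains the maximum degree of 1.0
```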

  10. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data which means that the true state of a specific plant is not reflected in a realistic manner on aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: The development of a methodology for the incorporation of aging modeling of passive SSC into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. 
In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
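
The Latin hypercube sampling used in the two-loop uncertainty analysis can be sketched in a few lines: each input dimension is split into n equal-probability strata, one point is drawn per stratum, and strata are randomly paired across dimensions:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Unit-hypercube LHS: each dimension gets exactly one point per 1/n stratum."""
    rng = rng or random.Random(0)
    sample = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                                  # random pairing across dims
        for i, s in enumerate(strata):
            sample[i][d] = (s + rng.random()) / n_samples    # jitter within the stratum
    return sample

pts = latin_hypercube(10, 2)
```

Compared with plain Monte Carlo, this stratification covers each marginal distribution evenly, which is why LHS needs far fewer physics-model evaluations to stabilize the propagated failure-probability distribution.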

  11. What is the Best Way to Reduce Unintended Pregnancies? A Micro Simulation of Contraceptive Switching, Discontinuation and Failure Patterns in France

    PubMed Central

    Diamond-Smith, Nadia; Moreau, Caroline; Bishai, David

    2015-01-01

    Despite high rates of contraceptive use in France, over a third of pregnancies are unintended. We built a dynamic micro simulation model which applies data from the French COCON study on method switching, discontinuation, and failure rates to a hypothetical population of 20,000 women, followed for 5 years. We use the model to estimate the adjustment factor needed to make the survey data fit the demographic profile of France, by adjusting for underreporting of contraceptive non-use and abortions. We then test three behavior change scenarios which would aim to reduce unintended pregnancies: decreasing method failure, increasing time spent on effective methods, and increasing switching from less to more effective methods. Our model suggests that decreasing method failure is the most effective strategy for reducing unintended pregnancies, but all scenarios reduced unintended pregnancies by at least 25%. Dynamic micro simulations such as this may be useful for policy makers. PMID:25469928

  12. Reducing unintended pregnancies: a microsimulation of contraceptive switching, discontinuation, and failure patterns in france.

    PubMed

    Diamond-Smith, Nadia G; Moreau, Caroline; Bishai, David M

    2014-12-01

    Although the rate of contraceptive use in France is high, more than one-third of pregnancies are unintended. We built a dynamic microsimulation model that applies data from the French COCON study on method switching, discontinuation, and failure rates to a hypothetical population of 20,000 women, followed for five years. We use the model to estimate the adjustment factor needed to make the survey data fit the demographic profile of France by adjusting for underreporting of contraceptive nonuse and abortion. We then test three behavior-change scenarios that could reduce unintended pregnancies: decreasing method failure, increasing time using effective methods, and increasing switching from less effective to more effective methods. Our model suggests that decreasing method failure is the most effective means of reducing unintended pregnancies, but we found that all of the scenarios reduced unintended pregnancies by at least 25 percent. Dynamic microsimulations may have great potential in reproductive health research and prove useful for policymakers. © 2014 The Population Council, Inc.
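    The switching/discontinuation/failure microsimulation can be illustrated with a toy discrete-time model. The method list and monthly probabilities below are invented for illustration and are not the COCON estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative monthly probabilities per method (NOT the COCON survey values)
p_fail = {"pill": 0.007, "condom": 0.015, "none": 0.07}   # pregnancy per month of use
p_switch = {  # per-month switching probabilities (remainder = stay on method)
    "pill":   {"condom": 0.01, "none": 0.005},
    "condom": {"pill": 0.02, "none": 0.01},
    "none":   {"pill": 0.03, "condom": 0.02},
}

def simulate(n_women=2_000, months=60):
    """Count unintended pregnancies in a cohort followed month by month.
    Simplification: after a pregnancy the woman stays on the same method."""
    state = ["pill"] * n_women
    pregnancies = 0
    for _ in range(months):
        for i in range(n_women):
            m = state[i]
            if rng.random() < p_fail[m]:
                pregnancies += 1
                continue
            r, acc = rng.random(), 0.0
            for dest, p in p_switch[m].items():
                acc += p
                if r < acc:
                    state[i] = dest
                    break
    return pregnancies

print("unintended pregnancies over 5 years:", simulate())
```

    Scenario testing as in the paper amounts to rerunning `simulate` with lowered failure rates or boosted switching toward more effective methods and comparing the pregnancy counts.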

  13. Material failure modelling in metals at high strain rates

    NASA Astrophysics Data System (ADS)

    Panov, Vili

    2005-07-01

    Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Stress-time histories were recorded using stress gauges supported with PMMA blocks on the back of the target plates. After testing, microstructural observations of the softly recovered spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept that thermal activation governs material separation during fracture was adopted as the basic mechanism for the development of this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model, and in this development it was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations was performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high strain rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, is able to reproduce the ``pull-back'' stresses of the free surface caused by creation of internal spalling, and enables one to analyse numerically the spalling over a wide range of impact velocities.

  14. Mechanistic analysis of time-dependent failure of oxynitride glass-joined silicon nitride below 1000 degree C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M.H.; Coon, D.M.

    Time-dependent failure at elevated temperatures currently governs the service life of oxynitride glass-joined silicon nitride. Creep, devitrification, stress-aided oxidation-controlled slow crack growth, and viscous cavitation-controlled failure are examined as possible controlling mechanisms. Creep deformation failure is observed above 1000°C. Fractographic evidence indicates cavity formation and growth below 1000°C. Auger electron spectroscopy verified that the oxidation rate of the joining glass is governed by the oxygen supply rate. Time-to-failure data are compared with those predicted using the Tsai and Raj and the Raj and Dang viscous cavitation models. It is concluded that viscous relaxation and isolated cavity growth control the rate of failure in oxynitride glass-filled silicon nitride joints below 1000°C. Several possible methods are also proposed for increasing the service lives of these joints.

  15. Modeling the effect of laser heating on the strength and failure of 7075-T6 aluminum

    DOE PAGES

    Florando, J. N.; Margraf, J. D.; Reus, J. F.; ...

    2015-06-06

    The effect of rapid laser heating on the response of 7075-T6 aluminum has been characterized using 3-D digital image correlation and a series of thermocouples. The experimental results indicate that as the samples are held under a constant load, the heating from the laser profile causes non-uniform temperature and strain fields, and the strain rate increases dramatically as the sample nears failure. Simulations have been conducted using the LLNL multi-physics code ALE3D and compared to the experiments. The strength and failure of the material were modeled using the Johnson-Cook strength and damage models. Here, in order to capture the response, a dual-condition criterion was utilized in which one set of parameters was calibrated to low-temperature, quasi-static strain-rate data, while the other set was calibrated to high-temperature, high-strain-rate data. The thermal effects were captured using temperature-dependent thermal constants and invoking thermal transport with conduction, convection, and thermal radiation.

  16. Simple Predictive Model of Early Failure among Patients Undergoing First-Time Arteriovenous Fistula Creation.

    PubMed

    Eslami, Mohammad H; Zhu, Clara K; Rybin, Denis; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik

    2016-08-01

    Native arteriovenous fistulas (AVFs) have a high 1 year failure rate leading to a need for secondary procedures. We set out to create a predictive model of early failure in patients undergoing first-time AVF creation, to identify failure-associated factors and stratify initial failure risk. The Vascular Study Group of New England (VSGNE) (2010-2014) was queried to identify patients undergoing first-time AVF creation. Patients with early (within 3 months postoperation) AVF failure (EF) or no failure (NF) were compared, failure being defined as any AVF that could not be used for dialysis. A multivariate logistic regression predictive model of EF based on perioperative clinical variables was created. Backward elimination with alpha level of 0.2 was used to create a parsimonious model. We identified 376 first-time AVF patients with follow-up data available in VSGNE. EF rate was 17.5%. Patients in the EF group had lower rates of hypertension (80.3% vs. 93.2%, P = 0.003) and diabetes (47.0% vs. 61.3%, P = 0.039). EF patients were also more likely to have radial artery inflow (57.6% vs. 38.4%, P = 0.011) and have forearm cephalic vein outflow (57.6% vs. 36.5%, P = 0.008). Additionally, the EF group was noted to have significantly smaller mean diameters of target artery (3.1 ± 0.9 vs. 3.6 ± 1.1, P = 0.002) and vein (3.1 ± 0.7 vs. 3.6 ± 0.9, P < 0.001). Multivariate analyses revealed that hypertension, diabetes, and vein larger than 3 mm were protective of EF (P < 0.05). The discriminating ability of this model was good (C-statistic = 0.731) and the model fits the data well (Hosmer-Lemeshow P = 0.149). β-estimates of significant factors were used to create a point system and assign probabilities of EF. We developed a simple model that robustly predicts first-time AVF EF and suggests that anatomical and clinical factors directly affect early AVF outcomes. 
The risk score has the potential to be used in clinical settings to stratify risk and make informed follow-up plans for AVF patients. Copyright © 2016 Elsevier Inc. All rights reserved.
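    Turning logistic-regression β-estimates into a point score, as the authors describe, works roughly as follows. The coefficients here are hypothetical, chosen only to mirror the reported protective directions (hypertension, diabetes, vein > 3 mm), not the published VSGNE estimates:

```python
import math

# Hypothetical coefficients for illustration only; negative betas = protective.
intercept = -0.4
betas = {"hypertension": -1.1, "diabetes": -0.6, "vein_gt_3mm": -0.9}

def points(patient, scale=10):
    """Integer risk points: each active beta rounded to a common scale."""
    return sum(round(scale * b) for k, b in betas.items() if patient.get(k))

def prob_early_failure(patient):
    """Predicted probability of early failure from the logistic model."""
    z = intercept + sum(b for k, b in betas.items() if patient.get(k))
    return 1.0 / (1.0 + math.exp(-z))

pt = {"hypertension": True, "diabetes": False, "vein_gt_3mm": True}
print(points(pt), f"{prob_early_failure(pt):.2f}")
```

    In practice the point totals are binned and each bin is assigned the failure probability observed in the derivation cohort.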

  17. Failure Rates and Patterns of Recurrence in Patients With Resected N1 Non-Small-Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varlotto, John M., E-mail: jvarlotto@hmc.psu.edu; Medford-Davis, Laura Nyshel; Recht, Abram

    2011-10-01

    Purpose: To examine the local and distant recurrence rates and patterns of failure in patients undergoing potentially curative resection of N1 non-small-cell lung cancer. Methods and Materials: The study included 60 consecutive unirradiated patients treated from 2000 to 2006. Median follow-up was 30 months. Failure rates were calculated by the Kaplan-Meier method. A univariate Cox proportional hazards model was used to assess factors associated with recurrence. Results: Local and distant failure rates (as the first site of failure) at 2, 3, and 5 years were 33%, 33%, and 46%; and 26%, 26%, and 32%, respectively. The most common site of local failure was the mediastinum; 12 of 18 local recurrences would have been included within proposed postoperative radiotherapy fields. Patients who received chemotherapy were found to be at increased risk of local failure, whereas those who underwent pneumonectomy or who had more positive nodes had significantly increased risks of distant failure. Conclusions: Patients with resected non-small-cell lung cancer who have N1 disease are at substantial risk of local recurrence as the first site of relapse, which is greater than the risk of distant failure. The role of postoperative radiotherapy in such patients should be revisited in the era of adjuvant chemotherapy.
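    A minimal Kaplan-Meier product-limit estimator, of the kind used to compute the failure rates above, can be written directly. The follow-up times (months) and event flags below are made up for illustration, not the study data:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit survival estimate for distinct event times;
    failure probability at t is 1 - S(t). Censored cases (observed=0)
    leave S unchanged but shrink the risk set."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(observed)[order]
    at_risk = len(t)
    surv, s = [], 1.0
    for ti, ei in zip(t, e):
        if ei:
            s *= (at_risk - 1) / at_risk
        surv.append((ti, s))
        at_risk -= 1
    return surv

times = [6, 12, 14, 20, 24, 30, 36, 41, 60]   # illustrative follow-up
events = [1, 0, 1, 1, 0, 1, 0, 0, 1]          # 1 = failure, 0 = censored
for t, s in kaplan_meier(times, events):
    print(f"t={t:>2} mo  S={s:.2f}  failure={1 - s:.2f}")
```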

  18. A Novel Solution-Technique Applied to a Novel WAAS Architecture

    NASA Technical Reports Server (NTRS)

    Bavuso, J.

    1998-01-01

    The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold-sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting-satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time-to-failure distribution. Moreover, the cold-spare satellite build time is Weibull-distributed, and upon launch the spare is considered a good-as-new system, likewise with an increasing failure rate and a Weibull time-to-failure distribution. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure of the launched spare satellite. A powerful dynamic fault tree modeling notation and a Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10-year mission.
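    A stripped-down Monte Carlo version of this non-Markovian model can be sketched with plain sampling (omitting the paper's importance sampling and dynamic fault tree machinery); the constellation size, coverage requirement, and all Weibull parameters are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
MISSION = 10.0                 # mission length [yr]
SAT = (1.8, 12.0)              # satellite life Weibull (shape > 1: increasing failure rate)
BUILD = (2.0, 1.5)             # spare fabrication time Weibull (shape, scale)

def wb(shape_scale):
    shape, scale = shape_scale
    return scale * rng.weibull(shape)

def one_trial():
    """True if fewer than 2 of 3 satellite slots are ever up during the mission.
    Spare fabrication starts at the first on-orbit failure; the launched spare
    is good-as-new with its own Weibull life (three distinct clocks)."""
    lives = sorted(wb(SAT) for _ in range(3))
    ready = lives[0] + wb(BUILD)
    events = [(t, -1) for t in lives]
    events.append((ready, +1))
    events.append((ready + wb(SAT), -1))
    count = 3
    for t, d in sorted(events):
        if t >= MISSION:
            break
        count += d
        if count < 2:
            return True
    return False

n = 20_000
p = sum(one_trial() for _ in range(n)) / n
print(f"10-year mission unreliability ≈ {p:.3f}")
```

    Importance sampling would bias the Weibull draws toward early failures and reweight, sharpening the estimate of this small-probability tail at far fewer trials.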

  19. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  20. Clopidogrel (Plavix) reduces the rate of thrombosis in the rat tuck model for microvenous anastomosis.

    PubMed

    Moore, Michael G; Deschler, Daniel G

    2007-04-01

    To evaluate the effect of clopidogrel on the rate of thrombosis in a rat model for venous microvascular failure. Forty rats were treated with clopidogrel or saline control via gastric gavage in a randomized, blinded fashion. After allowing for absorption and activation, each femoral vein was isolated and a venous "tuck" procedure was performed. The bleeding time and vessel patency were subsequently evaluated. The rate of vessel thrombosis was decreased in the clopidogrel-treated group compared to controls (7.9% vs 31.4%, P < 0.025). The bleeding time was longer in the clopidogrel-treated group compared to controls (250 +/- 100 seconds vs 173 +/- 59 seconds, P < 0.015). Clopidogrel decreased the rate of thrombosis in the rat model for venous microvascular failure. The use of clopidogrel may reduce the rate of venous thrombosis after free tissue transfer and may be indicated in select patients.

  1. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  2. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  3. Rate-weakening friction characterizes both slow sliding and catastrophic failure of landslides

    NASA Astrophysics Data System (ADS)

    Handwerger, Alexander L.; Rempel, Alan W.; Skarbek, Rob M.; Roering, Joshua J.; Hilley, George E.

    2016-09-01

    Catastrophic landslides cause billions of dollars in damages and claim thousands of lives annually, whereas slow-moving landslides with negligible inertia dominate sediment transport on many weathered hillslopes. Surprisingly, both failure modes are displayed by nearby landslides (and individual landslides in different years) subjected to almost identical environmental conditions. Such observations have motivated the search for mechanisms that can cause slow-moving landslides to transition via runaway acceleration to catastrophic failure. A similarly diverse range of sliding behavior, including earthquakes and slow-slip events, occurs along tectonic faults. Our understanding of these phenomena has benefitted from mechanical treatments that rely upon key ingredients that are notably absent from previous landslide descriptions. Here, we describe landslide motion using a rate- and state-dependent frictional model that incorporates a nonlocal stress balance to account for the elastic response to gradients in slip. Our idealized, one-dimensional model reproduces both the displacement patterns observed in slow-moving landslides and the acceleration toward failure exhibited by catastrophic events. Catastrophic failure occurs only when the slip surface is characterized by rate-weakening friction and its lateral dimensions exceed a critical nucleation length h* that is shorter for higher effective stresses. However, landslides that are extensive enough to fall within this regime can nevertheless slide slowly for months or years before catastrophic failure. Our results suggest that the diversity of slip behavior observed during landslides can be described with a single model adapted from standard fault mechanics treatments.
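    Two ingredients of the rate-and-state description above can be computed directly: the steady-state friction law, which weakens with velocity when a < b, and the critical nucleation length h*, which shrinks as effective stress rises. The parameter values below are illustrative only, not the paper's calibrated values:

```python
import math

# Steady-state friction: mu_ss(v) = mu0 + (a - b) * ln(v / v0); rate-weakening if a < b.
# Critical nucleation length: h* ~ G * Dc / ((b - a) * sigma_eff).
mu0, a, b, v0 = 0.6, 0.008, 0.012, 1e-6   # friction params [-], reference velocity [m/s]
G, Dc = 1e8, 1e-3                         # shear modulus [Pa], state-evolution distance [m]

def mu_ss(v):
    return mu0 + (a - b) * math.log(v / v0)

def h_star(sigma_eff):
    return G * Dc / ((b - a) * sigma_eff)

print(f"mu_ss at 1 mm/s: {mu_ss(1e-3):.3f}")   # below mu0: rate-weakening
for s in (1e4, 1e5, 1e6):                      # effective normal stress [Pa]
    print(f"sigma_eff={s:.0e} Pa -> h* = {h_star(s):.1f} m")
```

    A slip patch larger than h* at its effective stress can nucleate runaway acceleration; smaller patches creep stably, consistent with the slow/catastrophic dichotomy described above.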

  4. Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Warner, D. H.

    2015-01-01

    Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, and void and particle volume fraction). In this study, the stress state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D Finite Element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many loading orientations until failure. Material failure is assumed to occur in the weakest orientation, and so the orientation in which failure occurs first is considered as the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to overpredict ductility. By excluding smaller nucleation events and load path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure in the model; emphasizing that its incorporation into micromechanical models is critical to developing quantitatively accurate physics-based ductile failure models.

  5. Cost-effectiveness analysis of fixation options for intertrochanteric hip fractures.

    PubMed

    Swart, Eric; Makhni, Eric C; Macaulay, William; Rosenwasser, Melvin P; Bozic, Kevin J

    2014-10-01

    Intertrochanteric hip fractures are a major source of morbidity and financial burden, accounting for 7% of osteoporotic fractures and costing nearly $6 billion annually in the United States. Traditionally, "stable" fracture patterns have been treated with an extramedullary sliding hip screw whereas "unstable" patterns have been treated with the more expensive intramedullary nail. The purpose of this study was to identify parameters to guide cost-effective implant choices with use of decision-analysis techniques to model these common clinical scenarios. An expected-value decision-analysis model was constructed to estimate the total costs and health utility based on the choice of a sliding hip screw or an intramedullary nail for fixation of an intertrochanteric hip fracture. Values for critical parameters, such as fixation failure rate, were derived from the literature. Three scenarios were evaluated: (1) a clearly stable fracture (AO type 31-A1), (2) a clearly unstable fracture (A3), or (3) a fracture with questionable stability (A2). Sensitivity analysis was performed to test the validity of the model. The fixation failure rate and implant cost were the most important factors in determining implant choice. When the incremental cost for the intramedullary nail was set at the median value ($1200), intramedullary nailing had an incremental cost-effectiveness ratio of $50,000/quality-adjusted life year when the incremental failure rate of sliding hip screws was 1.9%. When the incremental failure rate of sliding hip screws was >5.0%, intramedullary nails dominated with lower cost and better health outcomes. The sliding hip screw was always more cost-effective for A1 fractures, and the intramedullary nail always dominated for A3 fractures. As for A2 fractures, the sliding hip screw was cost-effective in 70% of the cases, although this was highly sensitive to the failure rate. 
Sliding hip screw fixation is likely more cost-effective for stable intertrochanteric fractures (A1) or those with questionable stability (A2), whereas intramedullary nail fixation is more cost-effective for reverse obliquity fractures (A3). These conclusions are highly sensitive to the fixation failure rate, which was the major influence on the model results. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
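    The core incremental cost-effectiveness comparison in such a decision model reduces to a few arithmetic steps. The revision cost and utility loss below are illustrative stand-ins, so the numbers will not reproduce the published thresholds exactly:

```python
# Minimal ICER sketch with illustrative inputs (not the published model values).
nail_extra_cost = 1_200        # incremental implant cost of the nail [$]
revision_cost = 20_000         # assumed cost of treating a fixation failure [$]
qaly_loss_per_failure = 0.2    # assumed utility loss per failure [QALY]

def icer(shs_fail, nail_fail):
    """Incremental cost-effectiveness ratio of nail vs. sliding hip screw,
    given each implant's fixation failure rate."""
    averted = shs_fail - nail_fail
    d_cost = nail_extra_cost - averted * revision_cost
    d_qaly = averted * qaly_loss_per_failure
    if d_cost <= 0:
        return "nail dominates"        # cheaper and more effective
    return d_cost / d_qaly             # $/QALY

print(icer(0.019, 0.0))   # modest incremental SHS failure rate: positive ICER
print(icer(0.070, 0.0))   # large incremental failure rate: nail dominates
```

    With these placeholder inputs the nail dominates once the incremental SHS failure rate exceeds nail_extra_cost / revision_cost (here 6%), illustrating why the published conclusions are so sensitive to the failure rate.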

  6. Infraclavicular versus axillary nerve catheters: A retrospective comparison of early catheter failure rate.

    PubMed

    Quast, Michaela B; Sviggum, Hans P; Hanson, Andrew C; Stoike, David E; Martin, David P; Niesen, Adam D

    2018-05-01

    Continuous brachial plexus catheters are often used to decrease pain following elbow surgery. This investigation aimed to assess the rate of early failure of infraclavicular (IC) and axillary (AX) nerve catheters following elbow surgery. Retrospective study. Postoperative recovery unit and inpatient hospital floor. 328 patients who received IC or AX nerve catheters and underwent elbow surgery were identified by retrospective query of our institution's database. Data collected included unplanned catheter dislodgement, catheter replacement rate, postoperative pain scores, and opioid administration on postoperative day 1. Catheter failure was defined as unplanned dislodging within 24 h of placement or requirement for catheter replacement and was evaluated using a covariate-adjusted model. 119 IC catheters and 209 AX catheters were evaluated. There were 8 (6.7%) failed IC catheters versus 13 (6.2%) failed AX catheters. After adjusting for age, BMI, and gender there was no difference in catheter failure rate between IC and AX nerve catheters (p = 0.449). These results suggest that IC and AX nerve catheters do not differ in the rate of early catheter failure, despite differences in anatomic location and catheter placement techniques. Both techniques provided effective postoperative analgesia with median pain scores < 3/10 for patients following elbow surgery. Reasons other than rate of early catheter failure should dictate which approach is performed. Copyright © 2018. Published by Elsevier Inc.

  7. Statistics of acoustic emissions and stress drops during granular shearing using a stick-slip fiber bundle model

    NASA Astrophysics Data System (ADS)

    Cohen, D.; Michlmayr, G.; Or, D.

    2012-04-01

    Shearing of dense granular materials appears in many engineering and Earth sciences applications. Under a constant strain rate, the shearing stress at steady state oscillates with slow rises followed by rapid drops that are linked to the build up and failure of force chains. Experiments indicate that these drops display exponential statistics. Measurements of acoustic emissions during shearing indicate that the energy liberated by failure of these force chains has power-law statistics. Representing force chains as fibers, we use a stick-slip fiber bundle model to obtain analytical solutions for the statistical distributions of stress drops and failure energy. In the model, fibers stretch, fail, and regain strength during deformation. Fibers have Weibull-distributed threshold strengths with either quenched or annealed disorder. The shapes of the drop and energy distributions obtained from the model are similar to those measured during shearing experiments. This simple model may be useful to identify failure events linked to force chain failures. Future generalizations of the model that include different types of fiber failure may also allow identification of different types of granular failures that have distinct statistical acoustic emission signatures.
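    A toy equal-load-sharing, stick-slip fiber bundle with Weibull thresholds and annealed disorder (a simplified cousin of the model above, with assumed parameters) can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def shear_cycle(n_fibers=2_000, steps=5_000, dgamma=1e-3, shape=2.0):
    """Bundle under constant strain rate: a fiber slips when its stretch exceeds
    its Weibull threshold, then re-sticks at the current strain with a freshly
    drawn threshold (annealed disorder). Returns the stress released per event
    (unit fiber stiffness assumed)."""
    thresh = rng.weibull(shape, n_fibers)   # slip thresholds
    anchor = np.zeros(n_fibers)             # strain at which each fiber last stuck
    gamma, drops = 0.0, []
    for _ in range(steps):
        gamma += dgamma
        stretch = gamma - anchor
        slipped = stretch > thresh
        if slipped.any():
            drops.append(stretch[slipped].sum())        # stress drop this step
            anchor[slipped] = gamma                     # re-stick: regain strength
            thresh[slipped] = rng.weibull(shape, slipped.sum())
    return np.array(drops)

drops = shear_cycle()
print(f"{len(drops)} slip events; mean drop {drops.mean():.3f}")
```

    Histogramming `drops` (and the associated released energies) is the numerical analogue of the drop and energy statistics the authors derive analytically.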

  8. A Cohesive Zone Approach for Fatigue-Driven Delamination Analysis in Composite Materials

    NASA Astrophysics Data System (ADS)

    Amiri-Rad, Ahmad; Mashayekhi, Mohammad

    2017-08-01

    A new model for prediction of fatigue-driven delamination in laminated composites is proposed using cohesive interface elements. The presented model provides a link between the damage evolution rate of the cohesive elements and the crack growth rate of the Paris law. This is beneficial since no additional material parameters are required and the well-known Paris law constants are used. The link between the cohesive zone method and fracture mechanics is achieved without use of an effective length, which has led to more accurate results. The problem of the unknown failure path in calculation of the energy release rate is solved by imposing a condition on the damage model that leads to a completely vertical failure path. A global measure of energy release rate is used for the whole cohesive zone, which is computationally more efficient compared to previous similar models. The performance of the proposed model is investigated by simulation of well-known delamination tests and comparison against experimental data from the literature.
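    The Paris-law side of the link can be illustrated by numerically integrating da/dN to get cycles to failure; the constants below are generic textbook-style placeholders, not values from the paper:

```python
import math

# Paris law: da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
# Illustrative constants in units consistent with MPa and m.
C, m = 1e-12, 3.0        # Paris constants
Y, dS = 1.0, 100.0       # geometry factor [-], stress range [MPa]
a, a_crit = 1e-3, 1e-2   # initial and critical crack lengths [m]

N, da = 0.0, 1e-5
while a < a_crit:
    dK = Y * dS * math.sqrt(math.pi * a)   # stress intensity range at current length
    N += da / (C * dK**m)                  # cycles consumed growing by da
    a += da
print(f"cycles to grow 1 mm -> 10 mm: {N:.3g}")
```

    In the proposed cohesive model this crack growth rate is what the interface-element damage evolution rate is calibrated to reproduce.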

  9. High Speed Dynamics in Brittle Materials

    NASA Astrophysics Data System (ADS)

    Hiermaier, Stefan

    2015-06-01

    Brittle Materials under High Speed and Shock loading provide a continuous challenge in experimental physics, analysis and numerical modelling, and consequently for engineering design. The dependence of damage and fracture processes on material-inherent length and time scales, the influence of defects, rate-dependent material properties and inertia effects on different scales make their understanding a true multi-scale problem. In addition, it is not uncommon that materials show a transition from ductile to brittle behavior when the loading rate is increased. A particular case is spallation, a brittle tensile failure induced by the interaction of stress waves leading to a sudden change from compressive to tensile loading states that can be invoked in various materials. This contribution highlights typical phenomena occurring when brittle materials are exposed to high loading rates in applications such as blast and impact on protective structures, or meteorite impact on geological materials. A short review on experimental methods that are used for dynamic characterization of brittle materials will be given. A close interaction of experimental analysis and numerical simulation has turned out to be very helpful in analyzing experimental results. For this purpose, adequate numerical methods are required. Cohesive zone models are one possible method for the analysis of brittle failure as long as some degree of tension is present. Their recent successful application for meso-mechanical simulations of concrete in Hopkinson-type spallation tests provides new insight into the dynamic failure process. Failure under compressive loading is a particular challenge for numerical simulations as it involves crushing of material which in turn influences stress states in other parts of a structure. On a continuum scale, it can be modeled using more or less complex plasticity models combined with failure surfaces, as will be demonstrated for ceramics. 
Models which take microstructural cracking directly into account may provide a more physics-based approach for compressive failure in the future.

  10. Heterogeneity: The key to forecasting material failure?

    NASA Astrophysics Data System (ADS)

    Vasseur, J.; Wadsworth, F. B.; Lavallée, Y.; Dingwell, D. B.

    2014-12-01

    Empirical mechanistic models have been applied to the description of the stress and strain rate upon failure for heterogeneous materials. The behaviour of porous rocks and their analogous two-phase viscoelastic suspensions are particularly well-described by such models. Nevertheless, failure cannot yet be predicted, forcing a reliance on other empirical prediction tools such as the Failure Forecast Method (FFM). Measurable, accelerating rates of physical signals (e.g., seismicity and deformation) preceding failure are often used as proxies for damage accumulation in the FFM. Previous studies have already statistically assessed the applicability and performance of the FFM, but none (to the best of our knowledge) has done so in terms of intrinsic material properties. Here we use a rheological standard glass, which has been powdered and then sintered for different times (up to 32 hours) at high temperature (675°C) in order to achieve a sample suite with porosities in the range of 0.10-0.45 gas volume fraction. This sample suite was then subjected to mechanical tests in a uniaxial press at a constant strain rate of 10^-3 s^-1 and a temperature in the region of the glass transition. A dual acoustic emission (AE) rig has been employed to test the success of the FFM in these materials of systematically varying porosity. The pore-emanating crack model describes well the peak stress at failure in the elastic regime for these materials. We show that the FFM predicts failure within 0-15% error at porosities >0.2. However, when porosities are <0.2, the forecast error associated with predicting the failure time increases to >100%. We interpret these results as a function of the low efficiency with which strain energy can be released in the scenario where there are few or no heterogeneities from which cracks can propagate. These observations shed light on questions surrounding the variable efficacy of the FFM applied to active volcanoes. 
In particular, they provide a systematic demonstration of the fact that a good understanding of the material properties is required. Thus, we wish to emphasize the need for a better coupling of empirical failure forecasting models with mechanical parameters, such as failure criteria for heterogeneous materials, and point to the implications of this for a broad range of material-based disciplines.
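For readers unfamiliar with the FFM, its classic inverse-rate form can be sketched as follows. This is a minimal illustration on noise-free synthetic data, assuming the commonly adopted rate exponent of 2, for which the inverse signal rate decays linearly to zero at the failure time; the function and variable names are ours, not the article's:

```python
import numpy as np

def ffm_failure_time(t, rate):
    """Forecast failure time via the inverse-rate FFM (exponent = 2):
    1/rate decreases linearly in time and reaches zero at t_f."""
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(t, inv, 1)  # inv ~ slope*t + intercept
    return -intercept / slope                 # t where the inverse rate hits zero

# Synthetic AE rate accelerating toward failure at t_f = 100 s:
t_f = 100.0
t = np.linspace(0.0, 90.0, 50)
rate = 1.0 / (0.05 * (t_f - t))  # inverse rate is exactly linear in t
print(round(ffm_failure_time(t, rate), 3))  # forecasts t_f = 100
```

On real AE or seismicity data the inverse-rate plot is noisy, which is one reason the forecast error varies so strongly with material properties.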

  11. Mesoscale simulation of concrete spall failure

    NASA Astrophysics Data System (ADS)

    Knell, S.; Sauer, M.; Millon, O.; Riedel, W.

    2012-05-01

Although intensively studied, it is still being debated which physical mechanisms are responsible for the increase of dynamic strength and fracture energy of concrete observed at high loading rates, and to what extent structural inertia forces on different scales contribute to the observation. We present a new approach for the three-dimensional mesoscale modelling of dynamic damage and cracking in concrete. Concrete is approximated as a composite of spherical elastic aggregates of mm to cm size embedded in an elastic cement stone matrix. Cracking within the matrix and at aggregate interfaces in the μm range is modelled with adaptively inserted (initially rigid) cohesive interface elements. The model is applied to analyse the dynamic tensile failure observed in Hopkinson-Bar spallation experiments with strain rates up to 100/s. The influence of the key mesoscale failure parameters of strength, fracture energy and relative weakening of the ITZ on macromechanic strength, momentum and energy conservation is numerically investigated.

  12. A Computational Comparison of High Strain Rate Strength and Failure Models for Glass

    DTIC Science & Technology

    2012-11-05

many researchers; however, accuracy across a broad range of impact conditions is still not always achievable. Glasses, including soda-lime-silica ...plug/cone failure appearance when testing soda-lime-silica glass (see Fig. 5 from Ref. [7]). He notes that at 60 µs, the plug begins to break up and ...material model. Although the JH-2 model has been adapted to provide reasonably accurate predictions for soda-lime glass, the Holmquist-Johnson model

  13. Semiparametric modeling and estimation of the terminal behavior of recurrent marker processes before failure events.

    PubMed

    Chan, Kwun Chuen Gary; Wang, Mei-Cheng

    2017-01-01

Recurrent event processes with marker measurements are mostly studied with forward-time models starting from an initial event. Interestingly, such processes can exhibit important terminal behavior during the period before the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior with a backward-time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to the failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large-sample properties of the proposed estimators are established. The proposed models and methods are illustrated by a community-based AIDS clinical trial examining the terminal behavior of frequencies and severities of opportunistic infections among HIV-infected individuals in the last six months of life.
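In illustrative notation (the symbols below are ours and may differ from the paper's exact specification), the three levels can be written with backward time s counted from the failure event T and covariates Z:

```latex
% Level 1: proportional hazards for the failure time T
\lambda(t \mid Z) = \lambda_0(t)\, e^{\gamma^\top Z}

% Level 2: proportional rate for recurrent events at backward time s = T - t
\rho(s \mid Z) = \rho_0(s)\, e^{\beta^\top Z}

% Level 3: proportional mean for the marker Y observed at a backward event
E\bigl[\, Y(s) \mid dN(s) = 1,\, Z \,\bigr] = \mu_0(s)\, e^{\alpha^\top Z}
```

The baseline functions λ₀, ρ₀, μ₀ are left unspecified, which is what makes the model semiparametric.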

  14. Friction of hard surfaces and its application in earthquakes and rock slope stability

    NASA Astrophysics Data System (ADS)

    Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.

    2018-05-01

In this article, we discuss friction models for hard surfaces and their applications in the earth sciences. The rate and state friction (RSF) model, a modified form of the classical Amontons-Coulomb friction laws, is widely used to explain crustal earthquakes and rock slope failures. The RSF model has been further modified by considering the role of temperature at the sliding interface, giving the rate, state and temperature friction (RSTF) model; if pore pressure is also taken into account, the model is termed the rate, state, temperature and pore pressure friction (RSTPF) model. All the RSF models predict a critical stiffness as well as a critical velocity at which the sliding behavior becomes stable/unstable. The friction models are also used to predict the time of failure of a rock mass on an inclined plane. Finally, the limitations and possibilities of these friction models are highlighted.
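A minimal sketch of the steady-state Dieterich-Ruina form of the RSF law and its critical stiffness follows; all parameter values are illustrative textbook magnitudes, not taken from the article:

```python
import math

def mu_steady(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction coefficient at slip rate v (m/s)."""
    return mu0 + (a - b) * math.log(v / v0)

def critical_stiffness(sigma_n, a=0.010, b=0.015, d_c=1e-5):
    """Spring stiffness (Pa/m) below which steady sliding is unstable
    for velocity-weakening friction (b > a); d_c is the critical slip distance."""
    return (b - a) * sigma_n / d_c

# Velocity-weakening (a < b): friction falls as the slip rate rises.
print(mu_steady(1e-6))           # at the reference velocity: mu0 = 0.6
print(mu_steady(1e-5) < 0.6)     # True
print(critical_stiffness(1e6))   # for sigma_n = 1 MPa
```

Sliding on a fault or slope loaded through a stiffness below this critical value is the RSF analogue of the unstable (stick-slip) regime mentioned in the abstract.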

  15. Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.

    2018-02-01

Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in the future.
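The inverse power law rate model described above, rate(t) = k*(t_f - t)**(-p), can be checked on synthetic data with a simple log-log regression. This sketch assumes the failure time t_f is known, unlike the Bayesian treatment in the study, and uses noise-free data:

```python
import numpy as np

def fit_power_law_exponent(t, rate, t_f):
    """Estimate p in rate(t) = k * (t_f - t)**(-p) by regressing
    log(rate) on log(t_f - t), assuming t_f is known."""
    x = np.log(t_f - np.asarray(t, dtype=float))
    y = np.log(np.asarray(rate, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return -slope

t_f = 10.0
t = np.linspace(0.0, 9.0, 40)
rate = 2.0 * (t_f - t) ** -0.71   # exponent as in the posterior mean above
print(round(fit_power_law_exponent(t, rate, t_f), 2))  # recovers 0.71
```

In practice t_f is the quantity one wants to forecast, so t_f and p must be estimated jointly, which is where the Bayesian machinery earns its keep.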

  16. Voltage stress effects on microcircuit accelerated life test failure rates

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

The applicability of the Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200°C and 250°C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main-distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
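The Arrhenius temperature acceleration factor, and one common generalized-Eyring voltage extension, can be sketched as follows. The exponential-in-voltage term and the constant b are illustrative assumptions, not the paper's fitted model:

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_k, t_stress_k, ea_ev):
    """Arrhenius acceleration factor between use and stress temperatures (K),
    for an activation energy ea_ev in eV."""
    return math.exp((ea_ev / K_BOLTZ_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

def eyring_af(t_use_k, t_stress_k, ea_ev, v_use, v_stress, b=0.1):
    """Generalized-Eyring factor: thermal term times a voltage-stress term.
    The exp(b * dV) voltage form and b = 0.1/V are hypothetical choices."""
    return arrhenius_af(t_use_k, t_stress_k, ea_ev) * math.exp(b * (v_stress - v_use))

# 25 C use vs 225 C stress, Ea = 1.0 eV: a multi-million-fold acceleration.
print(arrhenius_af(298.0, 498.0, 1.0))
print(eyring_af(298.0, 498.0, 1.0, v_use=5.0, v_stress=15.0))
```

The large factor is exactly why elevated-temperature testing lets a 30-year field life be compressed into weeks of test time.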

  17. Quality of care and investment in property, plant, and equipment in hospitals.

    PubMed

    Levitt, S W

    1994-02-01

    This study explores the relationship between quality of care and investment in property, plant, and equipment (PPE) in hospitals. Hospitals' investment in PPE was derived from audited financial statements for the fiscal years 1984-1989. Peer Review Organization (PRO) Generic Quality Screen (GQS) reviews and confirmed failures between April 1989 and September 1990 were obtained from the Massachusetts PRO. Weighted least squares regression models used PRO GQS confirmed failure rates as the dependent variable, and investment in PPE as the key explanatory variable. Investment in PPE was standardized, summed by the hospital over the six years, and divided by the hospital's average number of beds in that period. The number of PRO reviewed cases with one or more GQS confirmed failures was divided by the total number of cases reviewed to create confirmed failure rates. Investment in PPE in Massachusetts hospitals is correlated with GQS confirmed failure rates. A financial variable, investment in PPE, predicts certain dimensions of quality of care in hospitals.
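A weighted least squares fit of the kind described, weighting each hospital's confirmed failure rate by its number of reviewed cases, might look like the sketch below; the data are hypothetical, not from the study:

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares for y = b0 + b1*x with weights w
    (e.g., number of PRO-reviewed cases per hospital)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # [intercept, slope]

# Hypothetical hospitals: standardized PPE investment per bed vs failure rate.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.30, 0.25, 0.20, 0.15])
w = np.array([100.0, 200.0, 150.0, 120.0])  # cases reviewed
print(wls_fit(x, y, w))  # intercept and slope of the weighted fit
```

Weighting by cases reviewed stops small-volume hospitals, whose observed rates are noisy, from dominating the fit.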

  18. New findings confirm the viscoelastic behaviour of the inter-lamellar matrix of the disc annulus fibrosus in radial and circumferential directions of loading.

    PubMed

    Tavakoli, J; Costi, J J

    2018-04-15

While a few studies have improved our understanding of the composition and organization of elastic fibres in the inter-lamellar matrix (ILM), its clinical relevance is not fully understood. Moreover, no studies have measured the direct tensile and shear failure and viscoelastic properties of the ILM. Therefore, the aim of this study was, for the first time, to measure the viscoelastic and failure properties of the ILM in both the tension and shear directions of loading. Using an ovine model, isolated ILM samples were stretched to 40% of their initial length at three strain rates of 0.1% s⁻¹ (slow), 1% s⁻¹ (medium) and 10% s⁻¹ (fast), and a ramp test to failure was performed at a strain rate of 10% s⁻¹. The findings from this study identified that the stiffness of the ILM was significantly larger, and energy absorption significantly smaller, at faster strain rates compared to slower strain rates, while the viscoelastic and failure properties were not significantly different under tension and shear loading. We found a strain-rate-dependent response of the ILM during dynamic loading, particularly at the fastest rate. The ILM demonstrated a significantly higher capability for energy absorption at slow strain rates compared to medium and fast strain rates. A significant increase in modulus was found in both loading directions and at all strain rates, with a trend of larger modulus in tension and at faster strain rates. The finding of no significant difference in failure properties between the loading directions was consistent with our previous ultra-structural studies, which revealed a well-organized (±45°) elastic fibre orientation in the ILM. The results from this study can be used to develop and validate finite element models of the AF at the tissue scale, as well as providing new strategies for fabricating tissue-engineered scaffolds. 

  19. Pilots Rate Augmented Generalized Predictive Control for Reconfiguration

    NASA Technical Reports Server (NTRS)

    Soloway, Don; Haley, Pam

    2004-01-01

The objective of this paper is to report results from research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce steady-state error to zero made the neural network predictor model redundant for the task. Instead of a neural network predictor model, a nominal single-point linear model was used, augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, the former with lower rate requirements from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.

  20. Failure modes and conditions of a cohesive, spherical body due to YORP spin-up

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi

    2015-12-01

This paper presents the transition of the failure mode of a cohesive, spherical body due to Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) spin-up. On the assumption that the distribution of materials in the body is homogeneous, the failed regions first appearing in the body at different spin rates are predicted by applying a yield condition to the elastic stress in the body. It is found that as the spin rate increases, the locations of the failed regions move from the equatorial surface to the central region. To avoid such failure modes, the body should have higher cohesive strength. The results from this model are consistent with those from a plastic finite element model. This model and a two-layered cohesive model first proposed by Hirabayashi et al. are then used to classify possible evolution and disruption of a spherical body. There are three possible pathways to disruption. First, because of a strong structure, failure of the central region is dominant and eventually leads to a breakup into multiple components. Second, a weak surface and a weak interior make the body oblate. Third, a strong internal core prevents the body from failing and only allows surface shedding. This implies that observed failure modes may depend highly on the internal structure of an asteroid, which could provide crucial information for constraining its physical properties.

  1. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...

    2017-04-24

Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures compared to the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed locally, further scalability is enabled, not only due to the intrinsically lower cost of recovering locally, but also due to derived effects for some application types. In this paper we model one such effect, multiple failure masking, which manifests when Stencil parallel computations run in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple locally recovered failures is modeled to enable several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. These results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  3. Aftershock triggering by complete Coulomb stress changes

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.; Bodin, P.

    2002-01-01

We examine the correlation between seismicity rate change following the 1992, M7.3, Landers, California, earthquake and characteristics of the complete Coulomb failure stress (CFS) changes (ΔCFS(t)) that this earthquake generated. At close distances the time-varying "dynamic" portion of the stress change depends on how the rupture develops temporally and spatially and arises from radiated seismic waves and from permanent coseismic fault displacement. The permanent "static" portion (ΔCFS) depends only on the final coseismic displacement. ΔCFS diminishes much more rapidly with distance than the transient, dynamic stress changes. A common interpretation of the strong correlation between ΔCFS and aftershocks is that load changes can advance or delay failure. Stress changes may also promote failure by physically altering properties of the fault or its environs. Because it is transient, ΔCFS(t) can alter the failure rate only by the latter means. We calculate both ΔCFS and the maximum positive value of ΔCFS(t) (peak ΔCFS(t)) using a reflectivity program. Input parameters are constrained by modeling Landers displacement seismograms. We quantify the correlation between maps of seismicity rate changes and maps of modeled ΔCFS and peak ΔCFS(t) and find agreement for both models. However, rupture directivity, which does not affect ΔCFS, creates larger peak ΔCFS(t) values northwest of the main shock. This asymmetry is also observed in seismicity rate changes but not in ΔCFS. This result implies that dynamic stress changes are as effective as static stress changes in triggering aftershocks and may trigger earthquakes long after the waves have passed.

  4. Long-term cost-effectiveness of disease management in systolic heart failure.

    PubMed

    Miller, George; Randolph, Stephen; Forkner, Emma; Smith, Brad; Galbreath, Autumn Dawn

    2009-01-01

    Although congestive heart failure (CHF) is a primary target for disease management programs, previous studies have generated mixed results regarding the effectiveness and cost savings of disease management when applied to CHF. We estimated the long-term impact of systolic heart failure disease management from the results of an 18-month clinical trial. We used data generated from the trial (starting population distributions, resource utilization, mortality rates, and transition probabilities) in a Markov model to project results of continuing the disease management program for the patients' lifetimes. Outputs included distribution of illness severity, mortality, resource consumption, and the cost of resources consumed. Both cost and effectiveness were discounted at a rate of 3% per year. Cost-effectiveness was computed as cost per quality-adjusted life year (QALY) gained. Model results were validated against trial data and indicated that, over their lifetimes, patients experienced a lifespan extension of 51 days. Combined discounted lifetime program and medical costs were $4850 higher in the disease management group than the control group, but the program had a favorable long-term discounted cost-effectiveness of $43,650/QALY. These results are robust to assumptions regarding mortality rates, the impact of aging on the cost of care, the discount rate, utility values, and the targeted population. Estimation of the clinical benefits and financial burden of disease management can be enhanced by model-based analyses to project costs and effectiveness. Our results suggest that disease management of heart failure patients can be cost-effective over the long term.
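The cost-effectiveness arithmetic (discounting both streams at 3% per year, then dividing incremental cost by incremental QALYs) can be sketched as follows; the yearly streams below are hypothetical illustrations, not the trial's data:

```python
import numpy as np

def discounted_total(values, rate=0.03):
    """Sum of yearly values discounted at `rate` per year (year 0 undiscounted)."""
    t = np.arange(len(values))
    return float(np.sum(np.asarray(values, dtype=float) / (1.0 + rate) ** t))

def icer(extra_costs, extra_qalys, rate=0.03):
    """Incremental cost-effectiveness ratio, $ per QALY gained,
    with both streams discounted at the same annual rate."""
    return discounted_total(extra_costs, rate) / discounted_total(extra_qalys, rate)

# Hypothetical 5-year incremental streams for a disease-management program:
extra_costs = [1200.0, 1000.0, 950.0, 900.0, 800.0]  # $ per patient per year
extra_qalys = [0.02, 0.025, 0.025, 0.02, 0.02]       # QALYs gained per year
print(round(icer(extra_costs, extra_qalys)))         # $ per QALY
```

Markov cohort models such as the one described feed exactly these two discounted streams into the ICER; sensitivity analysis then varies the discount rate, utilities, and transition probabilities.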

  5. Predictions of High Strain Rate Failure Modes in Layered Aluminum Composites

    NASA Astrophysics Data System (ADS)

    Khanikar, Prasenjit; Zikry, M. A.

    2014-01-01

A dislocation density-based crystalline plasticity formulation, specialized finite-element techniques, and rational crystallographic orientation relations were used to predict and characterize the failure modes associated with the high strain rate behavior of aluminum layered composites. Two alloy layers, a high-strength alloy, aluminum 2195, and a high-toughness alloy, aluminum 2139, were modeled with representative microstructures that included precipitates, dispersed particles, and different grain boundary distributions. Different layer arrangements were investigated for high strain rate applications; the optimal arrangement placed the high-toughness 2139 layer on the bottom, which provided extensive shear strain localization, and the high-strength 2195 layer on the top for strength. The thickness of the bottom high-toughness layer also affected the bending behavior of the roll-bonded interface and the potential delamination of the layers. Shear strain localization, dynamic cracking, and delamination are mutually competing failure mechanisms for the layered metallic composite, and control of these failure modes can be used to optimize behavior for high strain rate applications.

  6. Experimental Evidence of Accelerated Seismic Release without Critical Failure in Acoustic Emissions of Compressed Nanoporous Materials

    NASA Astrophysics Data System (ADS)

    Baró, Jordi; Dahmen, Karin A.; Davidsen, Jörn; Planes, Antoni; Castillo, Pedro O.; Nataf, Guillaume F.; Salje, Ekhard K. H.; Vives, Eduard

    2018-06-01

    The total energy of acoustic emission (AE) events in externally stressed materials diverges when approaching macroscopic failure. Numerical and conceptual models explain this accelerated seismic release (ASR) as the approach to a critical point that coincides with ultimate failure. Here, we report ASR during soft uniaxial compression of three silica-based (SiO2 ) nanoporous materials. Instead of a singular critical point, the distribution of AE energies is stationary, and variations in the activity rate are sufficient to explain the presence of multiple periods of ASR leading to distinct brittle failure events. We propose that critical failure is suppressed in the AE statistics by mechanisms of transient hardening. Some of the critical exponents estimated from the experiments are compatible with mean field models, while others are still open to interpretation in terms of the solution of frictional and fracture avalanche models.

  7. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time-to-failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support a Weibull failure distribution with decreasing failure rate and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long-duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
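The contrast between a decreasing Weibull failure rate and the constant-rate model is easy to make concrete: shape β < 1 yields a decreasing hazard, and β = 1 recovers the exponential (constant-rate) special case. A minimal sketch with illustrative parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability R(t) = exp(-(t/eta)**beta); eta is the scale (life)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Weibull hazard (instantaneous failure rate). beta < 1: decreasing
    failure rate; beta = 1: constant rate (exponential model)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Decreasing failure rate (beta = 0.8) vs the constant-rate special case:
print(weibull_hazard(10.0, 0.8, 100.0) > weibull_hazard(100.0, 0.8, 100.0))   # True
print(weibull_hazard(10.0, 1.0, 100.0) == weibull_hazard(100.0, 1.0, 100.0))  # True
```

Under a decreasing failure rate, surviving components become more reliable with time, which is why the Weibull model predicts higher long-mission reliability than the constant-rate model.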

  8. Modeling Strain Rate Effect of Heterogeneous Materials Using SPH Method

    NASA Astrophysics Data System (ADS)

    Ma, G. W.; Wang, X. J.; Li, Q. M.

    2010-11-01

    The strain rate effect on the dynamic compressive failure of heterogeneous material based on the smoothed particle hydrodynamics (SPH) method is studied. The SPH method employs a rate-insensitive elasto-plastic damage model incorporated with a Weibull distribution law to reflect the mechanical behavior of heterogeneous rock-like materials. A series of simulations are performed for heterogeneous specimens by applying axial velocity conditions, which induce different strain-rate loadings to the specimen. A detailed failure process of the specimens in terms of microscopic crack-activities and the macro-mechanical response are discussed. Failure mechanisms between the low and high strain rate cases are compared. The result shows that the strain-rate effects on the rock strength are mainly caused by the changing internal pressure due to the inertial effects as well as the material heterogeneity. It also demonstrates that the inertial effect becomes significant only when the induced strain rate exceeds a threshold, below which, the dynamic strength enhancement can be explained due to the heterogeneities in the material. It also shows that the dynamic strength is affected more significantly for a relatively more heterogeneous specimen, which coincides with the experimental results showing that the poor quality specimen had a relatively larger increase in the dynamic strength.
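Assigning element strengths from a Weibull distribution, as in the damage model described, can be sketched as follows; the shape parameter m acts as a homogeneity index, and the scale and values here are illustrative, not calibrated to rock:

```python
import random

def weibull_strengths(n, scale=100.0, m=5.0, seed=42):
    """Draw n element strengths from a Weibull distribution with shape m
    (homogeneity index): larger m -> narrower scatter -> more homogeneous."""
    rng = random.Random(seed)
    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape.
    return [rng.weibullvariate(scale, m) for _ in range(n)]

s_homog = weibull_strengths(10000, m=20.0)   # nearly uniform material
s_heter = weibull_strengths(10000, m=2.0)    # strongly heterogeneous material
spread = lambda s: max(s) - min(s)
# A more heterogeneous specimen (small m) shows far wider strength scatter:
print(spread(s_heter) > spread(s_homog))     # True
```

Elements drawn from the low-m population fail over a wide load range, which is the mechanism by which heterogeneity alone can produce apparent dynamic strengthening below the inertial threshold.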

  9. Donor age and early graft failure after lung transplantation: a cohort study.

    PubMed

    Baldwin, M R; Peterson, E R; Easthausen, I; Quintanilla, I; Colago, E; Sonett, J R; D'Ovidio, F; Costa, J; Diamond, J M; Christie, J D; Arcasoy, S M; Lederer, D J

    2013-10-01

    Lungs from older adult organ donors are often unused because of concerns for increased mortality. We examined associations between donor age and transplant outcomes among 8860 adult lung transplant recipients using Organ Procurement and Transplantation Network and Lung Transplant Outcomes Group data. We used stratified Cox proportional hazard models and generalized linear mixed models to examine associations between donor age and both 1-year graft failure and primary graft dysfunction (PGD). The rate of 1-year graft failure was similar among recipients of lungs from donors age 18-64 years, but severely ill recipients (Lung Allocation Score [LAS] >47.7 or use of mechanical ventilation) of lungs from donors age 56-64 years had increased rates of 1-year graft failure (p-values for interaction = 0.04 and 0.02, respectively). Recipients of lungs from donors <18 and ≥65 years had increased rates of 1-year graft failure (adjusted hazard ratio [HR] 1.23, 95% CI 1.01-1.50 and adjusted HR 2.15, 95% CI 1.47-3.15, respectively). Donor age was not associated with the risk of PGD. In summary, the use of lungs from donors age 56 to 64 years may be safe for adult candidates without a high LAS and the use of lungs from pediatric donors is associated with a small increase in early graft failure. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.

  10. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. 
Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
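The Brownian passage-time distribution referred to above is the inverse Gaussian. A minimal sketch of its density, with an illustrative mean recurrence of 100 time units and aperiodicity (coefficient of variation) 0.5:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean mu
    and aperiodicity (coefficient of variation) alpha."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

# The probability of immediate re-rupture is vanishingly small, and for
# moderate alpha the density is concentrated near the mean recurrence time:
print(bpt_pdf(1e-9, 100.0, 0.5))                              # ~0
print(bpt_pdf(100.0, 100.0, 0.5) > bpt_pdf(10.0, 100.0, 0.5))  # True
```

Property (1) in the abstract, zero probability of immediate rerupture, corresponds to the density vanishing as t → 0.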

  11. Loading rate effect on mechanical properties of cervical spine ligaments.

    PubMed

    Trajkovski, Ana; Omerovic, Senad; Krasna, Simon; Prebil, Ivan

    2014-01-01

Mechanical properties of cervical spine ligaments are of great importance for an accurate finite element model when analyzing injury mechanisms. However, there is still little experimental data in the literature regarding fresh human cervical spine ligaments under physiological conditions. The focus of the present study is placed on three cervical spine ligaments that stabilize the spine and protect the spinal cord: the anterior longitudinal ligament, the posterior longitudinal ligament and the ligamentum flavum. The ligaments were tested within 24-48 hours after death, under two different loading rates. An increasing trend in failure load, failure stress, stiffness and modulus was observed, but it was not significant for all ligament types. The loading rate had the highest impact on failure forces for all three ligaments (a 39.1% average increase was found). The observed trend, compared with the increase trends reported in the literature, indicates the importance of carefully applying existing experimental data, especially when creating scaling factors. A better understanding of the loading rate effect on ligament properties would enable better case-specific human modelling.

  12. Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)

    2001-01-01

When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.

  13. Is rhythm-control superior to rate-control in patients with atrial fibrillation and diastolic heart failure?

    PubMed

    Kong, Melissa H; Shaw, Linda K; O'Connor, Christopher; Califf, Robert M; Blazing, Michael A; Al-Khatib, Sana M

    2010-07-01

Although no clinical trial data exist on the optimal management of atrial fibrillation (AF) in patients with diastolic heart failure, it has been hypothesized that rhythm-control is more advantageous than rate-control due to the dependence of these patients' left ventricular filling on atrial contraction. We aimed to determine whether patients with AF and heart failure with preserved ejection fraction (EF) survive longer with a rhythm- versus rate-control strategy. The Duke Cardiovascular Disease Database was queried to identify patients with EF > 50%, heart failure symptoms and AF between January 1, 1995 and June 30, 2005. We compared baseline characteristics and survival of patients managed with rate- versus rhythm-control strategies. Using a 60-day landmark view, Kaplan-Meier curves were generated and results were adjusted for baseline differences using Cox proportional hazards modeling. Three hundred eighty-two patients met the inclusion criteria (285 treated with rate-control and 97 treated with rhythm-control). The 1-, 3-, and 5-year survival rates were 93.2%, 69.3%, and 56.8%, respectively, in rate-controlled patients and 94.8%, 78.0%, and 59.9%, respectively, in rhythm-controlled patients (P > 0.10). After adjustments for baseline differences, no significant difference in mortality was detected (hazard ratio for rhythm-control vs rate-control = 0.696, 95% CI 0.453-1.07, P = 0.098). Based on our observational data, rhythm-control seems to offer no survival advantage over rate-control in patients with heart failure and preserved EF. Randomized clinical trials are needed to verify these findings and examine the effect of each strategy on stroke risk, heart failure decompensation, and quality of life.
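The survival comparison in this record rests on Kaplan-Meier estimation. A minimal pure-Python sketch on a toy cohort; the follow-up times below are hypothetical and are not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times:  follow-up time of each patient
    events: 1 if the event (death) was observed, 0 if censored
    Returns a list of (t, S(t)) at each observed event time."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        if deaths:
            surv *= 1 - deaths / at_risk          # product-limit step
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # drop events + censored
    return curve

# hypothetical follow-up years; 0 in events marks a censored patient
curve = kaplan_meier([1, 2, 2, 3, 5, 5], [1, 1, 0, 1, 0, 1])
```

Censored patients leave the risk set without stepping the curve down, which is why the 60-day landmark analysis above needs the estimator rather than a raw survival fraction.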

  14. Surrogate oracles, generalized dependency and simpler models

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1990-01-01

Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Simpler models that use shorter input sequences without sacrificing accuracy are also of interest. In fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.

  15. Heart-rate variability depression in porcine peritonitis-induced sepsis without organ failure.

    PubMed

    Jarkovska, Dagmar; Valesova, Lenka; Chvojka, Jiri; Benes, Jan; Danihel, Vojtech; Sviglerova, Jitka; Nalos, Lukas; Matejovic, Martin; Stengl, Milan

    2017-05-01

    Depression of heart-rate variability (HRV) in conditions of systemic inflammation has been shown in both patients and experimental animal models and HRV has been suggested as an early indicator of sepsis. The sensitivity of HRV-derived parameters to the severity of sepsis, however, remains unclear. In this study we modified the clinically relevant porcine model of peritonitis-induced sepsis in order to avoid the development of organ failure and to test the sensitivity of HRV to such non-severe conditions. In 11 anesthetized, mechanically ventilated and instrumented domestic pigs of both sexes, sepsis was induced by fecal peritonitis. The dose of feces was adjusted and antibiotic therapy was administered to avoid multiorgan failure. Experimental subjects were screened for 40 h from the induction of sepsis. In all septic animals, sepsis with hyperdynamic circulation and increased plasma levels of inflammatory mediators developed within 12 h from the induction of peritonitis. The sepsis did not progress to multiorgan failure and there was no spontaneous death during the experiment despite a modest requirement for vasopressor therapy in most animals (9/11). A pronounced reduction of HRV and elevation of heart rate developed quickly (within 5 h, time constant of 1.97 ± 0.80 h for HRV parameter TINN) upon the induction of sepsis and were maintained throughout the experiment. The frequency domain analysis revealed a decrease in the high-frequency component. The reduction of HRV parameters and elevation of heart rate preceded sepsis-associated hemodynamic changes by several hours (time constant of 11.28 ± 2.07 h for systemic vascular resistance decline). A pronounced and fast reduction of HRV occurred in the setting of a moderate experimental porcine sepsis without organ failure. Inhibition of parasympathetic cardiac signaling probably represents the main mechanism of HRV reduction in sepsis. 
The sensitivity of HRV to systemic inflammation may allow early detection of a moderate sepsis without organ failure. Impact statement A pronounced and fast reduction of heart-rate variability occurred in the setting of a moderate experimental porcine sepsis without organ failure. Dominant reduction of heart-rate variability was found in the high-frequency band indicating inhibition of parasympathetic cardiac signaling as the main mechanism of heart-rate variability reduction. The sensitivity of heart-rate variability to systemic inflammation may contribute to an early detection of moderate sepsis without organ failure.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Isaac M.

To facilitate and accelerate the process of introducing, evaluating, and adopting new material systems, it is important to develop and establish comprehensive and effective procedures of characterization, modeling and failure prediction of structural laminates based on the properties of the constituent materials, e.g., fibers, matrix, and the single ply or lamina. A new failure theory, the Northwestern (NU-Daniel) theory, has been proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is primarily applicable to matrix-dominated interfiber/interlaminar failures. It is based on micromechanical failure mechanisms but is expressed in terms of easily measured macroscopic lamina stiffness and strength properties. It is presented in the form of a master failure envelope incorporating strain rate effects. The theory was further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive failure of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without very extensive testing and offers easily implemented design tools.

  17. A novel heart rate control model provides insights linking LF-HRV behavior to the open-loop gain.

    PubMed

    Dvir, Hila; Bobrovsky, Ben Zion; Gabbay, Uri

    2013-09-20

Low-frequency heart rate variability (LF-HRV) at rest has already been successfully modeled as self-sustained oscillations in a nonlinear control loop, but these models fail to simulate LF-HRV decreases either during aerobic exercise or in heart failure patients. Following control engineering practices, we assume the existence of a biological excitation (dither) within the heart rate control loop that softens the nonlinearity, and we studied LF-HRV behavior in a dither-embedded model. We adopted the Ottesen model with some revisions and introduced a dither of high-frequency stochastic perturbations. We simulated scenarios of a healthy subject at rest and during aerobic exercise (by decreasing peripheral vascular resistance) and a heart failure patient (by decreasing stroke volume). The simulations resembled physiological LF-HRV behavior, i.e., LF-HRV decreased during aerobic exercise and in the heart failure patient. The simulations exhibited LF-HRV dependency on the open-loop gain, which is related to the product of the feedback gain and the feed-forward gain. We are the first to demonstrate that LF-HRV may be dependent on the open-loop gain. Accordingly, reduced open-loop gain results in decreased LF-HRV, and vice versa. Our findings explain a well-known but unexplained observed phenomenon of reduced LF-HRV both in heart failure patients and in healthy subjects performing aerobic exercise. These findings have implications for how changes in LF-HRV can be interpreted physiologically, a necessary step towards the clinical utilization of LF-HRV.

  18. Failure and Degradation Modes of PV modules in a Hot Dry Climate: Results after 4 and 12 years of field exposure

    NASA Astrophysics Data System (ADS)

    Mallineni, Jaya krishna

This study evaluates two photovoltaic (PV) power plants based on electrical performance measurements, diode checks, visual inspections and infrared scanning. The purpose of this study is to measure degradation rates of performance parameters (Pmax, Isc, Voc, Vmax, Imax and FF) and to identify the failure modes in a "hot-dry desert" climatic condition, along with quantitative determination of safety failure rates and reliability failure rates. The data obtained from this study can be used by module manufacturers in determining the warranty limits of their modules and also by banks, investors, project developers and users in determining appropriate financing or decommissioning models. In addition, the data obtained in this study will be helpful in selecting appropriate accelerated stress tests which would replicate the field failures for new modules and would predict the lifetime of new PV modules. The study was conducted at two single-axis tracking monocrystalline silicon (c-Si) power plants, Site 3 and Site 4c of Salt River Project (SRP). The Site 3 power plant is located in Glendale, Arizona and the Site 4c power plant is located in Mesa, Arizona, both considered "hot-dry" field conditions. The Site 3 power plant has 2,352 modules (designated Model-G) and was rated at 250 kW DC output. The mean and median degradation of these 12-year-old modules are 0.95%/year and 0.96%/year, respectively. The major cause of degradation found at Site 3 is high series resistance (potentially due to solder-bond thermo-mechanical fatigue) and the failure mode is ribbon-ribbon solder bond failure/breakage. The Site 4c power plant has 1,280 modules (designated Model-H) which provide 243 kW DC output. The mean and median degradation of these 4-year-old modules are 0.96%/year and 1%/year, respectively. At Site 4c, practically no module failures were observed. The average soiling loss is 6.9% at Site 3 and 5.5% at Site 4c.
The difference in soiling level is attributed to the rural and urban surroundings of these two power plants.
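Degradation rates like the 0.95-1%/year figures above are typically extracted as a linear trend in measured power. A sketch under the assumption of a simple least-squares fit; the wattage series below is hypothetical, not the SRP field data:

```python
def degradation_rate(years, pmax_watts):
    """Least-squares linear trend of Pmax, expressed as percent of the
    fitted year-0 power lost per year (positive = degrading)."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(pmax_watts) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, pmax_watts)) \
        / sum((x - xbar) ** 2 for x in years)
    intercept = ybar - slope * xbar          # fitted power at year 0
    return -100.0 * slope / intercept

# hypothetical module measurements: a 250 W module losing about 1%/year
rate = degradation_rate([0, 4, 8, 12], [250.0, 240.0, 230.0, 220.0])
```

Normalizing by the fitted intercept rather than the nameplate rating avoids folding initial light-induced losses into the annual rate.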

  19. Acceleration to failure in geophysical signals prior to laboratory rock failure and volcanic eruptions (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Greenhough, J.; Heap, M. J.; Meredith, P. G.

    2010-12-01

    The nucleation processes that ultimately lead to earthquakes, volcanic eruptions, rock bursts in mines, and landslides from cliff slopes are likely to be controlled at some scale by brittle failure of the Earth’s crust. In laboratory brittle deformation experiments geophysical signals commonly exhibit an accelerating trend prior to dynamic failure. Similar signals have been observed prior to volcanic eruptions, including volcano-tectonic earthquake event and moment release rates. Despite a large amount of effort in the search, no such statistically robust systematic trend is found prior to natural earthquakes. Here we describe the results of a suite of laboratory tests on Mount Etna Basalt and other rocks to examine the nature of the non-linear scaling from laboratory to field conditions, notably using laboratory ‘creep’ tests to reduce the boundary strain rate to conditions more similar to those in the field. Seismic event rate, seismic moment release rate and rate of porosity change show a classic ‘bathtub’ graph that can be derived from a simple damage model based on separate transient and accelerating sub-critical crack growth mechanisms, resulting from separate processes of negative and positive feedback in the population dynamics. The signals exhibit clear precursors based on formal statistical model tests using maximum likelihood techniques with Poisson errors. After correcting for the finite loading time of the signal, the results show a transient creep rate that decays as a classic Omori law for earthquake aftershocks, and remarkably with an exponent near unity, as commonly observed for natural earthquake sequences. The accelerating trend follows an inverse power law when fitted in retrospect, i.e. with prior knowledge of the failure time. In contrast the strain measured on the sample boundary shows a less obvious but still accelerating signal that is often absent altogether in natural strain data prior to volcanic eruptions. 
To test the forecasting power of such constitutive rules in prospective mode, we examine the forecast quality of several synthetic trials, by adding representative statistical fluctuations, due to finite real-time sampling effects, to an underlying accelerating trend. Metrics of forecast quality change systematically and dramatically with time. In particular the model accuracy increases, and the forecast bias decreases, as the failure time approaches.
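Retrospective fitting of an inverse power-law acceleration, with the failure time known in advance, reduces to linear regression in log-log coordinates. A sketch on noise-free synthetic data; the values t_f = 10 and p = 1 are illustrative choices, not results from the experiments:

```python
import math

def fit_inverse_power_law(times, rates, t_f):
    """Retrospective fit of rate(t) = k * (t_f - t)**(-p), with the failure
    time t_f known in advance: log(rate) is linear in log(t_f - t) with
    slope -p, so p comes from ordinary least squares in log-log space."""
    xs = [math.log(t_f - t) for t in times]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    return -slope

# synthetic accelerating event rate with t_f = 10 and exponent p = 1
ts = [1, 3, 5, 7, 9]
rs = [1.0 / (10 - t) for t in ts]
p = fit_inverse_power_law(ts, rs, 10.0)
```

In prospective mode t_f is unknown and must be searched over, which is exactly where the statistical fluctuations discussed above degrade forecast accuracy.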

  20. Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Revilock, Duane M.; Lerch, Bradley A.; Ruggeri, Charles R.

    2013-01-01

    One of the difficulties with developing and verifying accurate impact models is that parameters such as high strain rate material properties, failure modes, static properties, and impact test measurements are often obtained from a variety of different sources using different materials, with little control over consistency among the different sources. In addition there is often a lack of quantitative measurements in impact tests to which the models can be compared. To alleviate some of these problems, a project is underway to develop a consistent set of material property, impact test data and failure analysis for a variety of aircraft materials that can be used to develop improved impact failure and deformation models. This project is jointly funded by the NASA Glenn Research Center and the FAA William J. Hughes Technical Center. Unique features of this set of data are that all material property data and impact test data are obtained using identical material, the test methods and procedures are extensively documented and all of the raw data is available. Four parallel efforts are currently underway: Measurement of material deformation and failure response over a wide range of strain rates and temperatures and failure analysis of material property specimens and impact test articles conducted by The Ohio State University; development of improved numerical modeling techniques for deformation and failure conducted by The George Washington University; impact testing of flat panels and substructures conducted by NASA Glenn Research Center. This report describes impact testing which has been done on aluminum (Al) 2024 and titanium (Ti) 6Al-4vanadium (V) sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade. Data from this testing will be used in validating material models developed under this program. 
The material tests and the material models developed in this program will be published in separate reports.

  1. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability physics evaluations when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability physics techniques, to model the operational reliability of electronic and photonic products.
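The deduction step can be sketched directly: subtract an assumed statistical rate from the bathtub ordinates to expose the aging part. The Rayleigh-law shape and the bathtub curve below are purely illustrative stand-ins, not the paper's functions or numbers:

```python
import math

def statistical_rate(t, sigma=1.0):
    """Illustrative statistics-related failure rate with a Rayleigh-law
    shape that decays after its peak; sigma is a hypothetical scale."""
    return (t / sigma ** 2) * math.exp(-t ** 2 / (2 * sigma ** 2))

def physics_rate(t, bathtub_rate):
    """Reliability-physics (aging) part of the bathtub ordinate, obtained
    by deduction under the assumed independence of the two processes."""
    return bathtub_rate(t) - statistical_rate(t)

# hypothetical post-burn-in bathtub curve: flat floor plus wear-out growth
bathtub = lambda t: 0.6 + 0.02 * t ** 2
aging = [physics_rate(t, bathtub) for t in (1.0, 2.0, 3.0)]
```

As the statistical component decays, the deducted residue grows, which is the aging signature the methodology is after.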

  2. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ≈1 year.

  3. Modeling the ductile fracture and the plastic anisotropy of DC01 steel at room temperature and low strain rates

    NASA Astrophysics Data System (ADS)

    Tuninetti, V.; Yuan, S.; Gilles, G.; Guzmán, C. F.; Habraken, A. M.; Duchêne, L.

    2016-08-01

This paper presents different extensions of the classical GTN damage model implemented in a finite element code. The goal of this study is to assess these extensions for the numerical prediction of failure of a DC01 steel sheet during a single point incremental forming process, after a proper identification of the material parameters. It is shown that the predicted failure appears too early compared to experimental results. However, the use of the Thomason criterion made it possible to delay the onset of coalescence and, consequently, the final failure.

  4. Yield and failure criteria for composite materials under static and dynamic loading

    DOE PAGES

    Daniel, Isaac M.

    2015-12-23

To facilitate and accelerate the process of introducing, evaluating, and adopting new material systems, it is important to develop and establish comprehensive and effective procedures of characterization, modeling and failure prediction of structural laminates based on the properties of the constituent materials, e.g., fibers, matrix, and the single ply or lamina. A new failure theory, the Northwestern (NU-Daniel) theory, has been proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is primarily applicable to matrix-dominated interfiber/interlaminar failures. It is based on micromechanical failure mechanisms but is expressed in terms of easily measured macroscopic lamina stiffness and strength properties. It is presented in the form of a master failure envelope incorporating strain rate effects. The theory was further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive failure of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without very extensive testing and offers easily implemented design tools.

  5. An Approach for Reducing the Error Rate in Automated Lung Segmentation

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2016-01-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
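The accuracy levels and failure rates in this record are defined through the Dice coefficient. A minimal sketch with toy masks and scores; none of the values reproduce the study's data:

```python
def dice(a, b):
    """Dice coefficient between two binary masks (flat sequences of 0/1):
    twice the overlap divided by the total foreground in both masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def failure_rate(dice_scores, threshold=0.97):
    """Fraction of cases whose segmentation falls below the required
    accuracy level, as in the per-threshold failure rates above."""
    return sum(d < threshold for d in dice_scores) / len(dice_scores)

# toy flattened masks: automated result vs. reference
mask_auto = [1, 1, 1, 0, 0, 1]
mask_ref  = [1, 1, 0, 0, 0, 1]
d = dice(mask_auto, mask_ref)   # 2*3 / (4 + 3)
```

Reporting the failure rate at a fixed Dice threshold, rather than only the mean Dice, is what separates the fusion approach (6.13%) from the input methods (18.14% and 15.69%) in the abstract.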

  6. Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?

    NASA Astrophysics Data System (ADS)

    Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.

    2017-10-01

    Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.

  7. Residual shear strength variability as a primary control on movement of landslides reactivated by earthquake-induced ground motion: Implications for coastal Oregon, U.S.

    USGS Publications Warehouse

    Schulz, William H.; Wang, Gonghui

    2014-01-01

    Most large seismogenic landslides are reactivations of preexisting landslides with basal shear zones in the residual strength condition. Residual shear strength often varies during rapid displacement, but the response of residual shear zones to seismic loading is largely unknown. We used a ring shear apparatus to perform simulated seismic loading tests, constant displacement rate tests, and tests during which shear stress was gradually varied on specimens from two landslides to improve understanding of coseismic landslide reactivation and to identify shear strength models valid for slow gravitational failure through rapid coseismic failure. The landslides we studied represent many along the Oregon, U.S., coast. Seismic loading tests resulted in (1) catastrophic failure involving unbounded displacement when stresses represented those for the existing landslides and (2) limited to unbounded displacement when stresses represented those for hypothetical dormant landslides, suggesting that coseismic landslide reactivation may be significant during future great earthquakes occurring near the Oregon Coast. Constant displacement rate tests indicated that shear strength decreased exponentially during the first few decimeters of displacement but increased logarithmically with increasing displacement rate when sheared at 0.001 cm s−1 or greater. Dynamic shear resistance estimated from shear strength models correlated well with stresses observed during seismic loading tests, indicating that displacement rate and amount primarily controlled failure characteristics. We developed a stress-based approach to estimate coseismic landslide displacement that utilizes the variable shear strength model. The approach produced results that compared favorably to observations made during seismic loading tests, indicating its utility for application to landslides.
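The variable shear strength model is described only qualitatively above. The sketch below combines the two reported effects, exponential weakening over the first decimeters of displacement and logarithmic strengthening with displacement rate above 0.001 cm/s, with an entirely hypothetical functional form and parameter values:

```python
import math

def residual_strength(disp_cm, rate_cm_s, tau_res=1.0, delta_tau=0.3,
                      d_char=20.0, a=0.05, v_ref=0.001):
    """Illustrative residual shear strength (normalized units):
    tau_res   : fully weakened residual strength
    delta_tau : extra strength lost exponentially with displacement
    d_char    : characteristic decay displacement (cm)
    a, v_ref  : rate-strengthening coefficient and reference rate (cm/s)
    All parameter values are hypothetical, not fitted to the ring shear data."""
    decay = delta_tau * math.exp(-disp_cm / d_char)
    rate_term = a * math.log(rate_cm_s / v_ref) if rate_cm_s > v_ref else 0.0
    return tau_res + decay + rate_term
```

A form like this lets a stress-based displacement estimate trade off coseismic weakening (displacement term) against dynamic strengthening (rate term), the balance the abstract identifies as controlling failure characteristics.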

  8. Quality of care and investment in property, plant, and equipment in hospitals.

    PubMed Central

    Levitt, S W

    1994-01-01

    OBJECTIVE. This study explores the relationship between quality of care and investment in property, plant, and equipment (PPE) in hospitals. DATA SOURCES. Hospitals' investment in PPE was derived from audited financial statements for the fiscal years 1984-1989. Peer Review Organization (PRO) Generic Quality Screen (GQS) reviews and confirmed failures between April 1989 and September 1990 were obtained from the Massachusetts PRO. STUDY DESIGN. Weighted least squares regression models used PRO GQS confirmed failure rates as the dependent variable, and investment in PPE as the key explanatory variable. DATA EXTRACTION. Investment in PPE was standardized, summed by the hospital over the six years, and divided by the hospital's average number of beds in that period. The number of PRO reviewed cases with one or more GQS confirmed failures was divided by the total number of cases reviewed to create confirmed failure rates. PRINCIPAL FINDINGS. Investment in PPE in Massachusetts hospitals is correlated with GQS confirmed failure rates. CONCLUSIONS. A financial variable, investment in PPE, predicts certain dimensions of quality of care in hospitals. PMID:8113054

  9. [Hazard function and life table: an introduction to the failure time analysis].

    PubMed

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
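The interchangeability of the force of mortality and the hazard rate can be checked numerically: recovering the discrete hazard from a sampled survival curve yields a constant for exponential lifetimes, the baseline case noted above. A small sketch; the rate lam = 0.2 is an arbitrary illustrative value:

```python
import math

def hazard_from_survival(surv, dt):
    """Discrete hazard h(t) ~ -(dS/dt)/S from a sampled survival curve,
    i.e. the conditional failure rate at each step (the life-table
    force of mortality)."""
    return [(surv[i] - surv[i + 1]) / (dt * surv[i])
            for i in range(len(surv) - 1)]

# exponential lifetimes: S(t) = exp(-lam*t) gives a constant hazard ~ lam
lam, dt = 0.2, 0.01
surv = [math.exp(-lam * i * dt) for i in range(101)]
h = hazard_from_survival(surv, dt)
```

For non-exponential distributions the same recipe produces a time-varying hazard, which is what proportional hazard models then scale by covariates.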

  10. Field Programmable Gate Array Failure Rate Estimation Guidelines for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Today's launch vehicles' complex electronic and avionics systems make heavy use of Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper identifies reliability concerns and high-level guidelines for estimating FPGA total failure rates in a launch vehicle application. The paper discusses hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion discusses the high-level FPGA programming languages and software/code reliability growth. The radiation portion discusses FPGA susceptibility to space-environment radiation.
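The three contributions identified above (hardware, hardware description language, and radiation) suggest a total failure rate built as a series-reliability sum of independent mechanisms. The sketch below shows that general structure only, not the paper's actual guideline; the function names and FIT values are hypothetical:

```python
def total_failure_rate_fit(hardware_fit, hdl_fit, radiation_fit):
    """Sum three independent failure-rate contributions (series reliability
    model), all expressed in FIT (failures per 1e9 device-hours)."""
    return hardware_fit + hdl_fit + radiation_fit

def fit_to_failures_per_hour(fit):
    """Convert a FIT value to failures per device-hour."""
    return fit / 1e9

# Hypothetical contributions, for illustration only
total = total_failure_rate_fit(hardware_fit=20.0, hdl_fit=5.0, radiation_fit=15.0)
```

Summing rates is valid only when the mechanisms are independent and each is well approximated by a constant rate, which is the usual assumption behind FIT bookkeeping in fault tree models.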

  11. Identifying black swans in NextGen: predicting human performance in off-nominal conditions.

    PubMed

    Wickens, Christopher D; Hooey, Becky L; Gore, Brian F; Sebok, Angelia; Koenicke, Corey S

    2009-10-01

    The objective is to validate a computational model of visual attention against empirical data, derived from a meta-analysis, of pilots' failure to notice safety-critical unexpected events. Many aircraft accidents have resulted, in part, from failure to notice nonsalient unexpected events outside of foveal vision, illustrating the phenomenon of change blindness. A model of visual noticing, N-SEEV (noticing-salience, expectancy, effort, and value), was developed to predict these failures. First, 25 studies that reported objective data on miss rate for unexpected events in high-fidelity cockpit simulations were identified, and their miss rate data pooled across five variables (phase of flight, event expectancy, event location, presence of a head-up display, and presence of a highway-in-the-sky display). Second, the parameters of the N-SEEV model were tailored to mimic these dichotomies. The N-SEEV model output predicted variance in the obtained miss rate (r = .73). The individual miss rates of all six dichotomous conditions were predicted within 14%, and four of these were predicted within 7%. The N-SEEV model, developed on the basis of an independent data set, successfully predicted variance in this safety-critical measure of pilot response to abnormal circumstances, as collected from the literature. As new technology and procedures are envisioned for the future airspace, it is important to predict whether they may compromise safety in terms of pilots' failing to notice unexpected events. Computational models such as N-SEEV support cost-effective means of making such predictions.

  12. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    NASA Astrophysics Data System (ADS)

    Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.

    2011-04-01

    One of the most popular methods of protecting high-voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high-voltage transmission lines can prevent or at least reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is applied to evaluate the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by applying it to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs, and better continuity of service.

  13. The Influence of Temperature on Time-Dependent Deformation and Failure in Granite: A Mesoscale Modeling Approach

    NASA Astrophysics Data System (ADS)

    Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick

    2017-09-01

    An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. 
We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.

  14. High rate of virological failure and low rate of switching to second-line treatment among adolescents and adults living with HIV on first-line ART in Myanmar, 2005-2015

    PubMed Central

    Harries, Anthony D.; Kumar, Ajay M. V.; Oo, Myo Minn; Kyaw, Khine Wut Yee; Win, Than; Aung, Thet Ko; Min, Aung Chan; Oo, Htun Nyunt

    2017-01-01

    Background The number of people living with HIV on antiretroviral treatment (ART) in Myanmar has been increasing rapidly in recent years. This study aimed to estimate rates of virological failure on first-line ART and of switching to second-line ART due to treatment failure at the Integrated HIV Care program (IHC). Methods Routinely collected data of all adolescent and adult patients living with HIV who were initiated on first-line ART at IHC between 2005 and 2015 were retrospectively analyzed. The cumulative hazards of virological failure on first-line ART and of switching to second-line ART were estimated. Crude and adjusted hazard ratios were calculated using the Cox regression model to identify risk factors associated with the two outcomes. Results Of 23,248 adults and adolescents, 7,888 (34%) were tested for HIV viral load. The incidence rate of virological failure among those tested was 3.2 per 100 person-years of follow-up, and the rate of switching to second-line ART among all patients was 1.4 per 100 person-years of follow-up. Factors associated with virological failure included: being adolescent; being lost to follow-up at least once; having WHO stage 3 and 4 at ART initiation; and having taken first-line ART elsewhere before coming to IHC. Of the 1,032 patients who met virological failure criteria, 762 (74%) switched to second-line ART. Conclusions We found high rates of virological failure among the one third of patients in the cohort who were tested for viral load. Of those failing virologically on first-line ART, about one quarter were not switched to second-line ART. Routine viral load monitoring, especially for those identified as having a higher risk of treatment failure, should be considered in this setting to detect all patients failing on first-line ART. Strategies also need to be put in place to prevent treatment failure and to treat more of those patients who are actually failing. PMID:28182786
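The person-time incidence rates quoted in this abstract follow from a simple ratio of events to follow-up time. A minimal sketch with hypothetical counts (not the study's actual data):

```python
def incidence_per_100_py(events, person_years):
    """Incidence rate expressed per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Hypothetical cohort: 320 virological failures over 10,000 person-years
rate = incidence_per_100_py(events=320, person_years=10_000)
# rate is 3.2, the same order as the failure rate reported above
```

Person-years rather than raw patient counts are used as the denominator because patients enter the cohort at different times and contribute unequal follow-up.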

  15. Finite Element Modeling of the Behavior of Armor Materials Under High Strain Rates and Large Strains

    NASA Astrophysics Data System (ADS)

    Polyzois, Ioannis

    For years, high-strength steels and alloys have been widely used by the military for making armor plates. Advances in technology have led to the development of materials with improved resistance to penetration and deformation. Until recently, understanding of the behavior of these materials under high strain rates and large strains has been based primarily on laboratory testing using the Split Hopkinson Pressure Bar apparatus. With the advent of sophisticated computer programs, computer modeling and finite element simulations are being developed to predict the deformation behavior of these metals for a variety of conditions similar to those experienced during combat. In the present investigation, a modified direct-impact Split Hopkinson Pressure Bar apparatus was modeled using the finite element software ABAQUS 6.8 for the purpose of simulating high strain rate compression of specimens of three armor materials: maraging steel 300, high hardness armor (HHA), and aluminum alloy 5083. These armor materials, provided by the Canadian Department of National Defence, were tested at the University of Manitoba by others. In this study, the empirical Johnson-Cook visco-plastic and damage models were used to simulate the deformation behavior obtained experimentally. A series of stress-time plots at various projectile impact momenta were produced and verified by comparison with experimental data. The impact momentum parameter was chosen rather than projectile velocity to normalize the initial conditions for each simulation. Phenomena such as the formation of adiabatic shear bands caused by deformation at high strains and strain rates were investigated through simulations. It was found that the Johnson-Cook model can accurately simulate the behavior of body-centered cubic (BCC) metals such as steels. The maximum shear stress was calculated for each simulation at various impact momenta.
    The finite element model showed that shear failure first occurred in the center of the cylindrical specimen and propagated outwards diagonally towards the front and back edges, forming an hourglass pattern. This pattern matched the failure behavior of specimens tested experimentally, which also exhibited failure through the formation of adiabatic shear bands. Adiabatic shear bands are known to lead to complete shear failure. Both mechanical and thermal mechanisms contribute to the formation of shear bands. However, the finite element simulations did not show the effects of temperature rise within the material, a phenomenon which is known to contribute to thermal instabilities, whereby strain hardening effects are outweighed by thermal softening effects and adiabatic shear bands begin to form. In the simulations, the purely mechanical maximum shear stress failure, nucleating from the center of the specimens, was used as an indicator of the time at which these shear bands begin to form. The time and compressive stress at the moment of thermal instability in experimental results which have been shown to form adiabatic shear bands matched closely those at which shear failure was first observed in the simulations. Although versatile in modeling BCC behavior, the Johnson-Cook model did not show the correct stress response in face-centered cubic (FCC) metals, such as aluminum 5083, where the effects of strain rate and temperature depend on strain. Similar observations have been reported in the literature. In the Johnson-Cook model, the temperature, strain rate, and strain terms are independent of each other. To this end, a more physically based model based on dislocation mechanics, namely the Feng and Bassim constitutive model, would be more appropriate.
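The independence of the temperature, strain-rate, and strain terms noted above is visible in the standard Johnson-Cook flow-stress form, which multiplies the three effects. A minimal sketch in which every parameter value is hypothetical (not the constants fitted for maraging 300, HHA, or aluminum 5083):

```python
import math

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate_ref=1.0, T_ref=293.0, T_melt=1800.0):
    """Johnson-Cook flow stress:
        sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T*^m)
    with homologous temperature T* = (T - T_ref) / (T_melt - T_ref).
    The three bracketed factors are mutually independent, which is the
    limitation for FCC metals discussed in the abstract."""
    t_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * eps**n) * (1.0 + C * math.log(eps_rate / eps_rate_ref)) \
        * (1.0 - t_star**m)

# Hypothetical steel-like parameters (stress in MPa)
sigma = johnson_cook_stress(eps=0.1, eps_rate=1000.0, T=293.0,
                            A=800.0, B=500.0, n=0.3, C=0.014, m=1.0)
```

At the reference strain rate and reference temperature the last two factors reduce to 1, leaving pure strain hardening, which is a quick sanity check on any implementation.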

  16. Health information systems: failure, success and improvisation.

    PubMed

    Heeks, Richard

    2006-02-01

    The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field. They point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). Literature and case analysis plus pilot testing of developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences which exist in different HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.

  17. Dynamic Stability of the Rate, State, Temperature, and Pore Pressure Friction Model at a Rock Interface

    NASA Astrophysics Data System (ADS)

    Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.

    2018-05-01

    In this article, we study numerically the dynamic stability of the rate, state, temperature, and pore pressure friction (RSTPF) model at a rock interface using a standard spring-mass sliding system. This friction model is basically a modified form of the previously studied rate, state, and temperature friction (RSTF) model. The RSTPF takes into account the role of thermal pressurization, including dilatancy and permeability of the pore fluid, due to shear heating at the slip interface. The linear stability analysis shows that the critical stiffness, at which sliding transitions from stable to unstable or vice versa, increases with the coefficient of thermal pressurization. Critical stiffness, on the other hand, remains constant for small values of either the dilatancy factor or the hydraulic diffusivity, but decreases as their values are increased beyond a dilatancy factor of ~10^-4 and a hydraulic diffusivity of ~10^-9 m^2 s^-1. Moreover, steady-state friction is independent of the coefficient of thermal pressurization, hydraulic diffusivity, and dilatancy factor. The proposed model is also used for predicting the time of failure of a creeping interface of a rock slope under constant gravitational force. It is observed that the time of failure decreases with increasing coefficient of thermal pressurization and hydraulic diffusivity, but the dilatancy factor delays the failure of the rock fault under the condition of heat accumulation at the creeping interface. Moreover, the stiffness of the rock-mass also stabilizes the failure process of the interface, as the strain energy due to the gravitational force accumulates in the rock-mass before it transfers to the sliding interface. Practical implications of the present study are also discussed.
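The critical-stiffness concept above has a well-known baseline in classical rate-and-state friction: for a spring-slider, k_c = sigma_eff * (b - a) / D_c. The sketch below shows only that baseline (none of the RSTPF thermal-pressurization corrections), and all numerical values are hypothetical laboratory-scale choices:

```python
def critical_stiffness(sigma_eff, a, b, d_c):
    """Classical rate-and-state spring-slider critical stiffness
        k_c = sigma_eff * (b - a) / d_c.
    For velocity-weakening friction (b > a), sliding is unstable
    (stick-slip) when the spring stiffness k < k_c; the RSTPF model of
    the abstract modifies this baseline via thermal pressurization."""
    return sigma_eff * (b - a) / d_c

# Hypothetical values: 10 MPa effective normal stress, a = 0.010,
# b = 0.015, critical slip distance 10 micrometres
k_c = critical_stiffness(sigma_eff=10e6, a=0.010, b=0.015, d_c=1e-5)  # Pa/m
```

A negative k_c (a > b, velocity strengthening) means no finite stiffness produces instability, consistent with the stabilizing role of rock-mass stiffness described above.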

  18. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta) = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation.
A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
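The two populations described above are distinguished by the Weibull shape parameter: beta = 1 gives a constant hazard (the randomly distributed weak bits), while beta > 1 gives the increasing failure rate of the main population. A minimal sketch; the scale parameter and evaluation times are hypothetical:

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1),
    with shape beta and scale eta (same time units as t)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# beta = 1: constant hazard, the random weak-bit (early breakdown) population
h_early = [weibull_hazard(t, beta=1.0, eta=1e4) for t in (1.0, 100.0, 1e4)]

# beta > 1: increasing hazard, the main (wear-out) population
h_main = [weibull_hazard(t, beta=2.0, eta=1e4) for t in (1.0, 100.0, 1e4)]
```

Plotting cumulative failures on Weibull probability paper makes the two populations appear as segments of different slope, which is how such mixed distributions are typically separated.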

  19. Reliability and performance experience with flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    Statistical models developed to define the most likely sources of photovoltaic (PV) array failures and the optimum method of allowing for the defects in order to achieve a 20 yr lifetime with acceptable performance degradation are summarized. Significant parameters were the cost of energy, annual power output, initial cost, replacement cost, rate of module replacement, the discount rate, and the plant lifetime. Acceptable degradation allocations were calculated to be 0.0001 cell failures/yr, 0.005 module failures/yr, 0.05 power loss/yr, a 0.01 rate of power loss/yr, and a 25 yr module wear-out length. Circuit redundancy techniques were determined to offset cell failures using fault tolerant designs such as series/parallel and bypass diode arrangements. Screening processes have been devised to eliminate cells that will crack in operation, and multiple electrical contacts at each cell compensate for the cells which escape the screening test and then crack when installed. The 20 yr array lifetime is expected to be achieved in the near-term.

  20. The Influence of Various Factors on High School Football Helmet Face Mask Removal: A Retrospective, Cross-Sectional Analysis

    PubMed Central

    Swartz, Erik E; Decoster, Laura C; Norkus, Susan A; Cappaert, Thomas A

    2007-01-01

    Context: Most research on face mask removal has been performed on unused equipment. Objective: To identify and compare factors that influence the condition of helmet components and their relationship to face mask removal. Design: A cross-sectional, retrospective study. Setting: Five athletic equipment reconditioning/recertification facilities. Participants: 2584 helmets from 46 high school football teams representing 5 geographic regions. Intervention(s): Helmet characteristics (brand, model, hardware components) were recorded. Helmets were mounted and face mask removal was attempted using a cordless screwdriver. The 2004 season profiles and weather histories were obtained for each high school. Main Outcome Measure(s): Success and failure (including reason) for removal of 4 screws from the face mask were noted. Failure rates among regions, teams, reconditioning year, and screw color (type) were compared. Weather histories were compared. We conducted a discriminant analysis to determine if weather variables, region, helmet brand and model, reconditioning year, and screw color could predict successful face mask removal. Metallurgic analysis of screw samples was performed. Results: All screws were successfully removed from 2165 (84%) helmets. At least 1 screw could not be removed from 419 (16%) helmets. Significant differences were found for mean screw failure per helmet among the 5 regions, with the Midwest having the lowest failure rate (0.08 ± 0.38) and the Southern the highest (0.33 ± 0.72). Differences were found in screw failure rates among the 46 teams (F(1,45) = 9.4, P < .01). Helmets with the longest interval since last reconditioning (3 years) had the highest failure rate, 0.47 ± 0.93. Differences in success rates were found among the 4 screw types (χ2(4) = 647, P < .01), with silver screws having the lowest percentage of failures (3.4%).
    A discriminant analysis (Λ = .932, χ2(14, n = 2584) = 175.34, P < .001) revealed screw type to be the strongest predictor of successful removal. Conclusions: Helmets with stainless steel or nickel-plated carbon steel screws reconditioned in the previous year had the most favorable combination of factors for successful screw removal. T-nut spinning at the side screw locations was the most common reason and location for failure. PMID:17597938

  1. A more realistic disc herniation model incorporating compression, flexion and facet-constrained shear: a mechanical and microstructural analysis. Part II: high rate or 'surprise' loading.

    PubMed

    Shan, Zhi; Wade, Kelly R; Schollum, Meredith L; Robertson, Peter A; Thambyah, Ashvin; Broom, Neil D

    2017-10-01

    Part I of this study explored mechanisms of disc failure in a complex posture incorporating physiological amounts of flexion and shear at a loading rate considerably lower than likely to occur in a typical in vivo manual handling situation. Given the strain-rate-dependent mechanical properties of the heavily hydrated disc, loading rate will likely influence the mechanisms of disc failure. Part II investigates the mechanisms of failure in healthy discs subjected to surprise-rate compression while held in the same complex posture. 37 motion segments from 13 healthy mature ovine lumbar spines were compressed in a complex posture intended to simulate the situation arising when bending and twisting while lifting a heavy object at a displacement rate of 400 mm/min. Seven of the 37 samples reached the predetermined displacement prior to a reduction in load and were classified as early stage failures, providing insight to initial areas of disc disruption. Both groups of damaged discs were then analysed microstructurally using light microscopy. The average failure load under high rate complex loading was 6.96 kN (STD 1.48 kN), significantly lower statistically than for low rate complex loading [8.42 kN (STD 1.22 kN)]. Also, unlike simple flexion or low rate complex loading, direct radial ruptures and non-continuous mid-wall tearing in the posterior and posterolateral regions were commonly accompanied by disruption extending to the lateral and anterior disc. This study has again shown that multiple modes of damage are common when compressing a segment in a complex posture, and the load bearing ability, already less than in a neutral or flexed posture, is further compromised with high rate complex loading.

  2. Changes in electrical and thermal parameters of led packages under different current and heating stresses

    NASA Astrophysics Data System (ADS)

    Jayawardena, Adikaramge Asiri

    The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and thermal resistance were identified as good indicators of contact failure of LED packages. This study investigated the long-term changes in series electrical resistance and thermal resistance of LED packages at three different current and junction temperature stress conditions. Experimental results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance is dependent on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open-circuit failure of the contact metal layer. Short-circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increases in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimation of LED package failure time due to metal diffusion. The model is based on the experimental results and theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship between the thermal resistance change rate and the case temperature and temperature gradient within the LED package was developed.
    The results showed that dislocation creep is responsible for creep-induced plastic deformation in the die-attach solder. The temperatures inside the LED package reached the melting point of the die-attach solder due to delamination just before catastrophic open-circuit failure. A combined model that can estimate the life of LED packages based on catastrophic failure of thermal and electrical contacts is presented for the first time. This model can be used to make a priori or real-time estimation of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings from this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.

  3. Kinetic Behaviour of Failure Waves in a Filled Glass

    NASA Astrophysics Data System (ADS)

    Resnyansky, A. D.; Bourne, N. K.

    2007-12-01

    Experimental stress and velocity profiles in a lead filled glass demonstrate a pronounced kinetic behaviour for failure waves in the material during shock loading. The present work summarises the experimental proofs of the kinetic behaviour obtained with stress and velocity gauges. The work describes a model for this behaviour employing a kinetic description used earlier for fracture waves in Pyrex glass. This model is part of a family of two-phase, strain-rate sensitive models describing the behaviour of damaged brittle materials. The modelling results describe well both the stress decay of the failure wave precursor in the stress profiles and main pulse attenuation in the velocity profiles. The influences of the kinetic mechanisms and wave interactions within the test assembly on the reduction of this behaviour are discussed.

  4. A frictional population model of seismicity rate change

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.

    2005-01-01

    We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults, and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediately increased seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that the seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes. Copyright 2005 by the American Geophysical Union.
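The Omori-law decay invoked above for the post-step seismicity rate can be sketched directly. The modified Omori form and the parameter values below are illustrative only, not the paper's fitted values:

```python
def omori_rate(t, K, c, p=1.0):
    """Modified Omori law: aftershock rate R(t) = K / (c + t)**p following
    a stress step at t = 0. K sets productivity, c regularizes t = 0,
    and p (often near 1) controls the decay."""
    return K / (c + t) ** p

# Hypothetical parameters: K = 100 events/day, c = 0.1 day, p = 1
rates = [omori_rate(t, K=100.0, c=0.1) for t in (0.0, 1.0, 10.0)]
```

The monotonic decay of `rates` is the immediate post-step elevation relaxing back toward the background rate; in the Dieterich framework K, c, and the decay duration are tied to the stress step, the stressing rate, and the frictional parameters.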

  5. Activation of PPAR-α in the early stage of heart failure maintained myocardial function and energetics in pressure-overload heart failure.

    PubMed

    Kaimoto, Satoshi; Hoshino, Atsushi; Ariyoshi, Makoto; Okawa, Yoshifumi; Tateishi, Shuhei; Ono, Kazunori; Uchihashi, Motoki; Fukai, Kuniyoshi; Iwai-Kanai, Eri; Matoba, Satoaki

    2017-02-01

    The failing heart loses its metabolic flexibility, relying increasingly on glucose as its preferential substrate and decreasing fatty acid oxidation (FAO). Peroxisome proliferator-activated receptor α (PPAR-α) is a key regulator of this substrate shift. However, its role during heart failure is complex and remains unclear. Recent studies reported that heart failure develops in the hearts of myosin heavy chain-PPAR-α transgenic mice in a manner similar to that of diabetic cardiomyopathy, whereas cardiac dysfunction is enhanced in PPAR-α knockout mice in response to chronic pressure overload. We created a pressure-overload heart failure model in mice through transverse aortic constriction (TAC) and activated PPAR-α during heart failure using an inducible transgenic model. After 8 wk of TAC, left ventricular (LV) function had decreased with the reduction of PPAR-α expression in wild-type mice. We examined the effect of PPAR-α induction during heart failure using the Tet-Off system. Eight weeks after the TAC operation, LV contraction was significantly preserved by PPAR-α induction, with an increase in PPAR-α-targeted genes related to fatty acid metabolism. The increase in expression of fibrosis-related genes was significantly attenuated by PPAR-α induction. Metabolic rates measured by isolated heart perfusions showed a reduction in FAO and glucose oxidation in TAC hearts, but the rate of FAO was significantly preserved owing to the induction of PPAR-α. Myocardial high-energy phosphates were significantly preserved by PPAR-α induction. These results suggest that PPAR-α activation during pressure-overloaded heart failure improved myocardial function and energetics. Thus activating PPAR-α and modulating FAO could be a promising therapeutic strategy for heart failure. NEW & NOTEWORTHY The present study demonstrates the role of PPAR-α activation in the early stage of heart failure using an inducible transgenic mouse model.
Induction of PPAR-α preserved heart function, and myocardial energetics. Activating PPAR-α and modulation of fatty acid oxidation could be a promising therapeutic strategy for heart failure. Copyright © 2017 the American Physiological Society.

  6. An evaluation of pulse oximeters in dogs, cats and horses.

    PubMed

    Matthews, Nora S; Hartke, Sherrie; Allen, John C

    2003-01-01

    Evaluation of five pulse oximeters in dogs, cats and horses with sensors placed at five sites and hemoglobin saturation at three plateaus. Prospective randomized multispecies experimental trial. Five healthy dogs, cats and horses. Animals were anesthetized and instrumented with ECG leads and arterial catheters. Five pulse oximeters (Nellcor Puritan Bennett-395, NPB-190, NPB-290, NPB-40 and Surgi-Vet V3304) with sensors at five sites were studied in a 5 x 5 Latin square design. Ten readings (SpO2) were taken at each of three hemoglobin saturation plateaus (98, 85 and 72%) in each animal. Arterial samples were drawn concurrently and hemoglobin saturation was measured with a co-oximeter. Accuracy of saturation measurements was calculated as the root mean squared difference (RMSD), a composite of bias and precision, for each model tested in each species. Accuracy varied widely. In dogs, the RMSD for the NPB-395, NPB-190, NPB-290, NPB-40 and V3304 were 2.7, 2.2, 2.4, 1.7 and 2.7% respectively. Rates of failure to produce readings for the NPB-395, NPB-190, NPB-290, NPB-40 and V3304 were 0, 0, 0.7, 0, and 20%, respectively. The Pearson correlation coefficients for the tongue, toe, ear, lip and prepuce or vulva were 0.95, 0.97, 0.69, 0.87 and 0.95, respectively. In horses, the RMSD for the NPB-395, NPB-190, NPB-290, NPB-40 and V3304 were 3.1, 3.0, 4.7, 3.3 and 2.1%, respectively, while rates of failure to produce readings were 10, 21, 0, 17 and 60%, respectively. The Pearson correlation coefficients for the tongue, nostril, ear, lip and prepuce or vulva were 0.98, 0.94, 0.88, 0.93 and 0.94, respectively. In cats, the RMSD for all data for the NPB-395, NPB-190, NPB-290, NPB-40 and V3304 were 5.9, 5.6, 7.9, 7.9 and 10.7%, respectively, while failure rates were 0, 0.7, 0, 20 and 32%, respectively. The correlation coefficients for the tongue, rear paw, ear, lip and front paw were 0.54, 0.79, 0.64, 0.49 and 0.57, respectively. 
For saturations above 90% in cats, the RMSD for the NPB-395, NPB-190, NPB-290, NPB-40 and V3304 were 2.6, 4.4, 4.0, 3.5 and 4.8%, respectively, while failure rates were 0, 1.7, 0, 25 and 43%, respectively. Accuracy and failure rates (failure to produce a reading) varied widely from model to model and from species to species. Generally, among the models tested in the clinically relevant range (90-100%), RMSD ranged from 2% to 5%, while failure rates were highest in the V3304.

  7. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability analysis requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system-specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions, mainly from reliability predictions.
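The spread adjustment described above is commonly implemented by placing a lognormal prior around the generic point estimate, with an error factor (EF = 95th percentile / median, a standard PRA convention) widened to reflect poor applicability. A minimal sketch; the rate and EF values are invented for illustration:

```python
import math

def lognormal_prior(median_rate, error_factor):
    """Return (mu, sigma) of a lognormal prior for a failure rate.

    The generic database supplies only a median point estimate; the
    error factor encodes how applicable that generic source is judged
    to be (1.645 is the z-score of the 95th percentile).
    """
    mu = math.log(median_rate)
    sigma = math.log(error_factor) / 1.645
    return mu, sigma

# Generic source: median 1e-5 failures/hr; poor applicability -> EF = 10
mu, sigma = lognormal_prior(1e-5, 10.0)
mean_rate = math.exp(mu + sigma**2 / 2)  # lognormal mean exceeds the median
```

A well-applicable source might instead get EF = 3, giving a much tighter prior around the same median.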

  8. Analysing recurrent hospitalizations in heart failure: a review of statistical methodology, with application to CHARM-Preserved.

    PubMed

    Rogers, Jennifer K; Pocock, Stuart J; McMurray, John J V; Granger, Christopher B; Michelson, Eric L; Östergren, Jan; Pfeffer, Marc A; Solomon, Scott D; Swedberg, Karl; Yusuf, Salim

    2014-01-01

    Heart failure is characterized by recurrent hospitalizations, but often only the first event is considered in clinical trial reports. In chronic diseases, such as heart failure, analysing all events gives a more complete picture of treatment benefit. We describe methods of analysing repeat hospitalizations, and illustrate their value in one major trial. The Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity (CHARM)-Preserved study compared candesartan with placebo in 3023 patients with heart failure and preserved systolic function. The heart failure hospitalization rates were 12.5 and 8.9 per 100 patient-years in the placebo and candesartan groups, respectively. The repeat hospitalizations were analysed using the Andersen-Gill, Poisson, and negative binomial methods. Death was incorporated into analyses by treating it as an additional event. The win ratio method and a method that jointly models hospitalizations and mortality were also considered. Using repeat events gave larger treatment benefits than time to first event analysis. The negative binomial method for the composite of recurrent heart failure hospitalizations and cardiovascular death gave a rate ratio of 0.75 [95% confidence interval (CI) 0.62-0.91, P = 0.003], whereas the hazard ratio for time to first heart failure hospitalization or cardiovascular death was 0.86 (95% CI 0.74-1.00, P = 0.050). In patients with preserved EF, candesartan reduces the rate of admissions for worsening heart failure, to a greater extent than apparent from analysing only first hospitalizations. Recurrent events should be routinely incorporated into the analysis of future clinical trials in heart failure. © 2013 The Authors. European Journal of Heart Failure © 2013 European Society of Cardiology.
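The attenuation this abstract describes, where time-to-first-event analyses understate the benefit visible in all-events analyses, can be illustrated with a small simulation. The event rates loosely echo the 12.5 vs. 8.9 per 100 patient-years figures above; the sample size, follow-up, and Poisson event model are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient hospitalization counts over 3 years of follow-up
n, years = 1500, 3.0
placebo = rng.poisson(0.125 * years, n)  # event counts per patient
treated = rng.poisson(0.089 * years, n)

# All-events analysis: ratio of total event rates (Poisson-style)
rr_all = treated.sum() / placebo.sum()

# Time-to-first-event-style analysis: only whether any event occurred
rr_first = (treated > 0).mean() / (placebo > 0).mean()
```

Because repeat hospitalizations are discarded, the first-event comparison typically sits closer to 1 than the all-events rate ratio, mirroring the 0.86 vs. 0.75 contrast reported in the abstract.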

  9. Field Programmable Gate Array Failure Rate Estimation Guidelines for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven D.; Hatfield, Glen S.; Britton, Paul

    2017-01-01

    Today's launch vehicles' complex electronic and avionics systems heavily utilize the Field Programmable Gate Array (FPGA) integrated circuit (IC). FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B, and in control signal commands such as solenoid/servo valve actuations. This paper will demonstrate guidelines to estimate FPGA failure rates for a launch vehicle; the guidelines account for hardware, firmware, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC, FPGA memory and clock. The firmware portion will provide guidelines on the high-level FPGA programming language and ways to account for software/code reliability growth. The radiation portion will provide guidelines on environment susceptibility as well as guidelines on tailoring other launch vehicle programs' historical data to a specific launch vehicle.

  10. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time and finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
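The 1/lambda arithmetic in this abstract can be written out directly. A small sketch; the 0.01 failures-per-hour starting rate is illustrative, not from the paper:

```python
def expected_discovery_time(rate):
    """Expected operating time before a failure with the given rate
    (failures per unit time) occurs and can be corrected: the MTBF, 1/rate."""
    return 1.0 / rate

lam = 0.01                                  # illustrative: failures per hour
t_full = expected_discovery_time(lam)       # 100 hours
t_half = expected_discovery_time(lam / 2)   # a cause half as frequent: 200 hours
assert t_half == 2 * t_full                 # halving the rate doubles test time

# Removing successively rarer causes (lam, lam/2, lam/4, lam/8) needs
# geometrically growing test time -- the "heroic" part of the effort.
total_time = sum(expected_discovery_time(lam / 2**k) for k in range(4))
```

Here total_time is 100 + 200 + 400 + 800 = 1500 hours: each halving of the residual failure rate costs as much test time as all previous halvings combined.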

  11. Moxonidine-induced central sympathoinhibition improves prognosis in rats with hypertensive heart failure.

    PubMed

    Honda, Nobuhiro; Hirooka, Yoshitaka; Ito, Koji; Matsukawa, Ryuichi; Shinohara, Keisuke; Kishi, Takuya; Yasukawa, Keiji; Utsumi, Hideo; Sunagawa, Kenji

    2013-11-01

    Enhanced central sympathetic outflow is an indicator of the prognosis of heart failure. Although the central sympatholytic drug moxonidine is an established therapeutic strategy for hypertension, its benefits for hypertensive heart failure are poorly understood. In the present study, we investigated the effects of central sympathoinhibition by intracerebral infusion of moxonidine on survival in a rat model of hypertensive heart failure and the possible mechanisms involved. As a model of hypertensive heart failure, we fed Dahl salt-sensitive rats an 8% NaCl diet from 7 weeks of age. Intracerebroventricular (ICV) infusion of moxonidine (moxonidine-ICV-treated group [Mox-ICV]) or vehicle (vehicle-ICV-treated group [Veh-ICV]) was performed at 14-20 weeks of age, during the increased heart failure phase. Survival rates were examined, and sympathetic activity, left ventricular function and remodelling, and brain oxidative stress were measured. Hypertension and left ventricular hypertrophy were established by 13 weeks of age. At around 20 weeks of age, Veh-ICV rats exhibited overt heart failure concomitant with increased urinary norepinephrine (uNE) excretion as an index of sympathetic activity, dilated left ventricle, decreased percentage fractional shortening, and myocardial fibrosis. Survival rates at 21 weeks of age (n = 28) were only 23% in Veh-ICV rats, and 76% (n = 17) in Mox-ICV rats with concomitant decreases in uNE, myocardial fibrosis, collagen type I/III ratio, brain oxidative stress, and suppressed left ventricular dysfunction. Moxonidine-induced central sympathoinhibition attenuated brain oxidative stress, prevented cardiac dysfunction and remodelling, and improved the prognosis in rats with hypertensive heart failure. Central sympathoinhibition can be effective for the treatment of hypertensive heart failure.

  12. Failure-to-rescue after injury is associated with preventability: The results of mortality panel review of failure-to-rescue cases in trauma.

    PubMed

    Kuo, Lindsay E; Kaufman, Elinore; Hoffman, Rebecca L; Pascual, Jose L; Martin, Niels D; Kelz, Rachel R; Holena, Daniel N

    2017-03-01

    Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center's ability to successfully "rescue" patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. All adjudications from a mortality review panel at an academic level I trauma center from 2005-2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47-3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30-66.71) judgment. Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Failure-to-rescue after injury is associated with preventability: The results of mortality panel review of failure-to-rescue cases in trauma

    PubMed Central

    Kuo, Lindsay E.; Kaufman, Elinore; Hoffman, Rebecca L.; Pascual, Jose L.; Martin, Niels D.; Kelz, Rachel R.; Holena, Daniel N.

    2018-01-01

    Background Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center’s ability to successfully “rescue” patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. Methods All adjudications from a mortality review panel at an academic level I trauma center from 2005–2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Results Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47–3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30–66.71) judgment. Conclusion Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. PMID:27788924

  14. Forecasting volcanic eruptions and other material failure phenomena: An evaluation of the failure forecast method

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.

    2011-08-01

    Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
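The FFM linearization the authors evaluate works on the inverse rate: for the commonly assumed power-law exponent of 2, 1/rate falls linearly to zero at the failure time, so the x-intercept of a straight-line fit is the forecast. A noise-free sketch of that mechanics (the failure time, rate constant, and sampling are invented; the paper's point is that with realistic error structure this least-squares step becomes biased):

```python
import numpy as np

# Synthetic precursor rate following the FFM power law with exponent 2:
# rate(t) = k / (tf - t), so 1/rate falls linearly to zero at tf.
tf, k = 100.0, 50.0
t = np.linspace(0.0, 90.0, 10)
inv_rate = (tf - t) / k

# Classic FFM: fit a line to 1/rate vs t; the x-intercept is the
# forecast failure time.
slope, intercept = np.polyfit(t, inv_rate, 1)
t_forecast = -intercept / slope
```

With exact data the intercept recovers tf; adding Poisson counting noise to the rate, as real precursor catalogs have, is what drives the bias the Generalized Linear Model approach corrects.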

  15. Fibre Break Failure Processes in Unidirectional Composites. Part 2: Failure and Critical Damage State Induced by Sustained Tensile Loading

    NASA Astrophysics Data System (ADS)

    Thionnet, A.; Chou, H. Y.; Bunsell, A.

    2015-04-01

    The purpose of these three papers is not just to revisit the modelling of unidirectional composites. It is to provide a robust framework based on physical processes that can be used to optimise the design and long term reliability of internally pressurised filament wound structures. The model presented in Part 1 for the case of monotonically loaded unidirectional composites is further developed to consider the effects of the viscoelastic nature of the matrix in determining the kinetics of fibre breaks under slow or sustained loading. It is shown that the relaxation of the matrix around fibre breaks leads to locally increasing loads on neighbouring fibres and in some cases their delayed failure. Although ultimate failure is similar to the elastic case in that clusters of fibre breaks ultimately control composite failure, the kinetics of their development vary significantly from the elastic case. Failure loads have been shown to reduce when loading rates are lowered.

  16. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
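One way to combine per-mode time-to-failure distributions with mitigation factors into a loss-of-mission estimate, as described above, is simple Monte Carlo. A hypothetical sketch: the Weibull parameters, mitigation probabilities, and 10-hour mission are all invented, standing in for the PSA and reliability-analysis outputs the paper feeds into its PRA tools:

```python
import random

random.seed(1)

def mission_fails(duration=10.0):
    """One Monte Carlo trial: the mission is lost if any failure mode
    occurs within the mission duration AND propagates past its
    mitigation (modelled as a probability of reaching loss of mission)."""
    modes = [  # (Weibull scale, Weibull shape, P(propagates to LOM))
        (200.0, 2.0, 0.3),   # e.g. a structural fatigue mode
        (500.0, 1.5, 0.1),   # e.g. an electromechanical LRU
        (1000.0, 1.0, 0.8),  # e.g. an avionics unit
    ]
    for scale, shape, p_lom in modes:
        t_fail = random.weibullvariate(scale, shape)
        if t_fail < duration and random.random() < p_lom:
            return True
    return False

trials = 20000
p_loss = sum(mission_fails() for _ in range(trials)) / trials
```

Tallying which mode caused each loss would give the same rank-and-contribution output the abstract attributes to tools like QRAS.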

  17. Does high-flow nasal cannula oxygen improve outcome in acute hypoxemic respiratory failure? A systematic review and meta-analysis.

    PubMed

    Lin, Si-Ming; Liu, Kai-Xiong; Lin, Zhi-Hong; Lin, Pei-Hong

    2017-10-01

    To evaluate the efficacy of high-flow nasal cannula (HFNC) in the rate of intubation and mortality for patients with acute hypoxemic respiratory failure. We searched Pubmed, EMBASE, and the Cochrane Library for relevant studies. Two reviewers extracted data and reviewed the quality of the studies independently. The primary outcome was the rate of intubation; the secondary outcome was mortality in the hospital. Study-level data were pooled using a random-effects model when I2 was >50% or a fixed-effects model when I2 was <50%. Eight randomized controlled studies with a total of 1,818 patients were considered. Pooled analysis showed that no statistically significant difference was found between groups regarding the rate of intubation (odds ratio [OR] = 0.79; 95% confidence interval [CI]: 0.60-1.04; P = 0.09; I2 = 36%) and no statistically significant difference was found between groups regarding hospital mortality (OR = 0.89; 95% CI: 0.62-1.27; P = 0.51; I2 = 47%). The use of HFNC showed a trend toward reduction in the intubation rate, which did not meet statistical significance, in patients with acute respiratory failure compared with conventional oxygen therapy (COT) and noninvasive ventilation (NIV), and there was no difference in mortality. Large, well-designed, randomized, multi-center trials are needed to confirm the effects of HFNC in acute hypoxemic respiratory failure patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
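The pooling step described above (fixed-effects below the I2 threshold, random-effects above it) rests on inverse-variance weights and Cochran's Q. A minimal fixed-effect sketch with invented study results, not the trials in this meta-analysis:

```python
import math

# Hypothetical per-study odds ratios with 95% CIs: (OR, lower, upper)
studies = [(0.70, 0.45, 1.10), (0.95, 0.65, 1.40), (0.80, 0.55, 1.15)]

log_or, weights = [], []
for or_, lo, hi in studies:
    y = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width -> SE on log scale
    log_or.append(y)
    weights.append(1.0 / se**2)                      # inverse-variance weight

# Fixed-effect pooled estimate on the log scale
pooled = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)
pooled_or = math.exp(pooled)

# Cochran's Q and the I^2 statistic used to choose the model
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_or))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
```

When i2 exceeded the 50% threshold, a random-effects model (e.g. DerSimonian-Laird, which inflates the per-study variances by an estimated between-study component) would replace the fixed-effect weights.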

  18. Physician-directed heart failure transitional care program: a retrospective case review.

    PubMed

    Ota, Ken S; Beutler, David S; Gerkin, Richard D; Weiss, Jessica L; Loli, Akil I

    2013-10-01

    Despite a variety of national efforts to improve transitions of care for patients at risk for rehospitalization, 30-day rehospitalization rates for patients with heart failure have remained largely unchanged. This is a retrospective review of 73 patients enrolled in our hospital-based, physician-directed Heart Failure Transitional Care Program (HFTCP). This study evaluated the 30- and 90-day readmission rates before and after enrollment in the program. The Transitionalist's services focused on bedside consultation prior to hospital discharge, follow-up home visits within 72 hours of discharge, frequent follow-up phone calls, disease-specific education, outpatient intravenous diuretic therapy, and around-the-clock telephone access to the Transitionalist. The pre-enrollment 30-day readmission rates for acute decompensated heart failure (ADHF) and all-cause readmission were 26.0% and 28.8%, respectively, while the post-enrollment rates for ADHF and all-cause readmission were 4.1% (P < 0.001) and 8.2% (P = 0.002), respectively. The pre-enrollment 90-day all-cause and ADHF readmission rates were 69.8% and 58.9%, respectively, while the post-enrollment rates for all-cause and ADHF were 27.3% (P < 0.001) and 16.4% (P < 0.001), respectively. Our physician-implemented HFTCP reduced rehospitalization risk for patients enrolled in the program. This program may serve as a model to assist other hospital systems to reduce readmission rates of patients with HF.

  19. A new yield and failure theory for composite materials under static and dynamic loading

    DOE PAGES

    Daniel, Isaac M.; Daniel, Sam M.; Fenner, Joel S.

    2017-09-12

    In order to facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop/establish comprehensive and effective procedures of characterization, modeling and failure prediction of composite structures based on the properties of the constituent materials, e. g., fibers, matrix, and the single ply or lamina. A new yield/failure theory is proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is based on the equivalent stress concept derived from energy principles and is expressed in terms of a single criterion. It is presented in the form of master yield and failure envelopes incorporating strain rate effects. The theory can be further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive damage of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without extensive testing and offers easily implemented design tools.

  20. A new yield and failure theory for composite materials under static and dynamic loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Isaac M.; Daniel, Sam M.; Fenner, Joel S.

    In order to facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop/establish comprehensive and effective procedures of characterization, modeling and failure prediction of composite structures based on the properties of the constituent materials, e. g., fibers, matrix, and the single ply or lamina. A new yield/failure theory is proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is based on the equivalent stress concept derived from energy principles and is expressed in terms of a single criterion. It is presented in the form of master yield and failure envelopes incorporating strain rate effects. The theory can be further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive damage of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without extensive testing and offers easily implemented design tools.

  1. Simulations of the modified gap experiment

    NASA Astrophysics Data System (ADS)

    Sutherland, Gerrit T.; Benjamin, Richard; Kooker, Douglas

    2017-01-01

    Modified gap experiment (test) hydrocode simulations predict the trends seen in experimental excess free surface velocity versus input pressure curves for explosives with both large and modest failure diameters. Simulations were conducted for explosive "A", an explosive with a large failure diameter, and for cast TNT, which has a modest failure diameter. Using the best available reactive rate models, the simulations predicted sustained ignition thresholds similar to experiment. This is a threshold where detonation is likely given a long enough run distance. For input pressures greater than the sustained ignition threshold pressure, the simulations predicted too little velocity for explosive "A" and too much velocity for TNT. It was found that a better comparison of experiment and simulation requires additional experimental data for both explosives. It was observed that the choice of reactive rate model for cast TNT can lead to large differences in the predicted modified gap experiment result. The cause of the difference is that the same data was not used to parameterize both models; one set of data was more shock reactive than the other.

  2. High Risk of Graft Failure in Emerging Adult Heart Transplant Recipients.

    PubMed

    Foster, B J; Dahhou, M; Zhang, X; Dharnidharka, V; Ng, V; Conway, J

    2015-12-01

    Emerging adulthood (17-24 years) is a period of high risk for graft failure in kidney transplant. Whether a similar association exists in heart transplant recipients is unknown. We sought to estimate the relative hazards of graft failure at different current ages, compared with patients between 20 and 24 years old. We evaluated 11 473 patients recorded in the Scientific Registry of Transplant Recipients who received a first transplant at <40 years old (1988-2013) and had at least 6 months of graft function. Time-dependent Cox models were used to estimate the association between current age (time-dependent) and failure risk, adjusted for time since transplant and other potential confounders. Failure was defined as death following graft failure or retransplant; observation was censored at death with graft function. There were 2567 failures. Crude age-specific graft failure rates were highest in 21-24 year olds (4.2 per 100 person-years). Compared to individuals with the same time since transplant, 21-24 year olds had significantly higher failure rates than all other age periods except 17-20 years (HR 0.92 [95%CI 0.77, 1.09]) and 25-29 years (0.86 [0.73, 1.03]). Among young first heart transplant recipients, graft failure risks are highest in the period from 17 to 29 years of age. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
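The crude age-specific rates quoted above are expressed as events per 100 person-years. A one-line sketch, with hypothetical counts chosen only to reproduce the 4.2 figure (the registry's actual event and follow-up totals are not given in the abstract):

```python
def rate_per_100py(events, person_years):
    """Crude failure rate expressed per 100 person-years of observation."""
    return 100.0 * events / person_years

# Hypothetical: 84 graft failures over 2,000 person-years accrued while
# recipients were in the 21-24-year current-age band.
r = rate_per_100py(events=84, person_years=2000.0)  # 4.2 per 100 person-years
```

The time-dependent Cox model in the abstract goes further by letting each patient contribute person-time to successive age bands while adjusting for time since transplant.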

  3. Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Schaefer, Joseph Daniel

    Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the fastest-growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been fraught with inadequate knowledge of the material failure behavior due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed creation and development. As noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state-of-the-art qualification programs endure a 20-year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s⁻¹ strain rates. 
It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate dependence was applicable for a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loading cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.

  4. Modelling the charring behaviour of structural lumber

    Treesearch

    Peter W.C. Lau; Robert White; Ineke Van Zealand

    1999-01-01

    Charring rates for large-section timber have been generally established from experimental data. These established rates may not be appropriate for predicting the failure times of lumber members, which are small by comparison. It is questionable whether a constant rate can be safely assumed for lumber members, since the rate is likely to increase once the...

  5. [Diuretics in acute kidney failure: useful or harmful?].

    PubMed

    Tataw, J; Saudan, P

    2011-03-02

    Loop diuretics are commonly prescribed in various clinical settings to prevent or treat acute renal failure. In most cases they facilitate fluid management by increasing urine output. Experimental models in animals have revealed protective effects of loop diuretics in acute renal failure. However, several clinical trials have failed to demonstrate better outcomes associated with the use of diuretics in acute renal failure: there was neither recovery of renal function nor a reduction in the number of dialysis sessions required. Glomerular filtration rate did not improve with the administration of loop diuretics after continuous renal replacement therapy. The administration of loop diuretics in the management of acute renal failure should be restricted mainly to patients with hypervolemia.

  6. The failure of brittle materials under overall compression: Effects of loading rate and defect distribution

    NASA Astrophysics Data System (ADS)

    Paliwal, Bhasker

    The constitutive behaviors and failure processes of brittle materials under far-field compressive loading are studied in this work. Several approaches are used: experiments to study the compressive failure behavior of ceramics, design of experimental techniques by means of finite element simulations, and the development of micro-mechanical damage models to analyze and predict the mechanical response of brittle materials under far-field compression. Experiments have been conducted on various ceramics (primarily on a transparent polycrystalline ceramic, aluminum oxynitride or AlON) under loading rates ranging from quasi-static (˜ 5×10⁻⁶ MPa/μs) to dynamic (˜ 200 MPa/μs), using a servo-controlled hydraulic test machine and a modified compression Kolsky bar (MKB) technique, respectively. High-speed photography has also been used with exposure times as low as 20 ns to observe the dynamic activation, growth and coalescence of cracks and resulting damage zones in the specimen. The photographs were correlated in time with measurements of the stresses in the specimen. Further, by means of 3D finite element simulations, an experimental technique has been developed to impose a controlled, homogeneous, planar confinement in the specimen. The technique can be used in conjunction with a high-speed camera to study the in situ dynamic failure behavior of materials under confinement. AlON specimens are used for the study. The statically pre-compressed specimen is subjected to axial dynamic compressive loading using the MKB. Results suggest that confinement not only increases the load carrying capacity, it also results in a non-linear stress evolution in the material. High-speed photographs also suggest an inelastic deformation mechanism in AlON under confinement which evolves more slowly than the typical brittle-cracking type of damage in the unconfined case. 
Next, an interacting micro-crack damage model is developed that explicitly accounts for the interaction among the micro-cracks in brittle materials. The model incorporates pre-existing defect distributions and a crack growth law. The damage is defined as a scalar parameter which is a function of the micro-crack density, the evolution of which is a function of the existing defect distribution and the crack growth dynamics. A specific case of a uniaxial compressive loading under constant strain-rate has been studied to predict the effects of the strain-rate, defect distribution and the crack growth dynamics on the constitutive response and failure behavior of brittle materials. Finally, the effects of crack growth dynamics on the strain-rate sensitivity of brittle materials are studied with the help of the micro-mechanical damage model. The results are compared with the experimentally observed damage evolution and the rate-sensitive behavior of the compressive strength of several engineering ceramics. The dynamic failure of armor-grade hot-pressed boron carbide (B4C) under loading rates of ˜ 5×10⁻⁶ to 200 MPa/μs is also discussed.
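
    A scalar damage law of the kind described above can be illustrated with a minimal sketch. Everything numeric below is an invented stand-in, not the calibrated model: the Weibull-type defect-activation exponent, threshold strain, crack growth speed, and defect density are all illustrative assumptions.

```python
def time_to_failure(strain_rate, dt=1e-6, n0=1e9, eps_c=1e-3, m=6.0, v=1.0):
    """Toy scalar damage evolution under a constant strain rate.

    Damage D is (active crack density) * (crack length)**3, capped at 1.
    The active fraction of defects follows a Weibull-type distribution in
    strain, and cracks grow at speed v once the threshold strain eps_c is
    exceeded.  Returns the time at which D reaches 1 (failure).
    """
    t, a = 0.0, 1e-6                      # time [s], crack length [m]
    for _ in range(10_000_000):           # hard cap guarantees termination
        t += dt
        eps = strain_rate * t             # strain under constant rate
        if eps > eps_c:
            a += v * dt                   # crack growth after activation
        frac = min(1.0, eps / eps_c)      # fraction of defects activated
        D = min(1.0, n0 * frac ** m * a ** 3)
        if D >= 1.0:
            return t
    return t
```

    Under this toy law, faster loading reaches the activation threshold sooner, so the predicted time to failure decreases with loading rate, qualitatively echoing the rate sensitivity discussed above.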

  7. Wind Turbine Failures - Tackling current Problems in Failure Data Analysis

    NASA Astrophysics Data System (ADS)

    Reder, M. D.; Gonzalez, E.; Melero, J. J.

    2016-09-01

    The wind industry has grown significantly over the past decades, resulting in a remarkable increase in installed wind power capacity. Turbine technologies are rapidly evolving in terms of complexity and size, and there is an urgent need for cost-effective operation and maintenance (O&M) strategies. Unplanned downtime in particular represents one of the main cost drivers of a modern wind farm. Here, reliability and failure prediction models can enable operators to apply preventive O&M strategies rather than corrective actions. In order to develop these models, the failure rates and downtimes of wind turbine (WT) components have to be understood profoundly. This paper tackles three of the main issues related to WT failure analyses: the non-uniform treatment of data, the scarcity of available failure analyses, and the lack of investigation of alternative data sources. For this, a modernised form of an existing WT taxonomy is introduced. Additionally, an extensive analysis of historical failure and downtime data of more than 4300 turbines is presented. Finally, the possibility of countering the lack of available failure data by complementing historical databases with Supervisory Control and Data Acquisition (SCADA) alarms is evaluated.
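
    As a small illustration of the normalisation such failure analyses rest on, the sketch below computes per-component failure rates in failures per turbine-year. The event records, component names, and exposure are hypothetical, not drawn from the 4300-turbine database analysed in the paper.

```python
from collections import Counter

def failure_rates(events, turbine_years):
    """Component failure rates in failures per turbine-year.

    events: iterable of (turbine_id, component) failure records.
    turbine_years: total observed fleet exposure in turbine-years.
    """
    if turbine_years <= 0:
        raise ValueError("exposure must be positive")
    counts = Counter(component for _, component in events)
    return {component: n / turbine_years for component, n in counts.items()}

# Hypothetical failure log for a fleet observed over 4300 turbine-years:
events = [(1, "gearbox"), (2, "pitch system"), (3, "gearbox"), (4, "generator")]
rates = failure_rates(events, turbine_years=4300.0)
```

    Dividing event counts by fleet exposure rather than by turbine count is what makes rates comparable across databases with different observation periods — one of the data-treatment issues the paper raises.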

  8. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    Oil and gas equipment, and electric actuators in particular, frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To capture this ambiguity, reliability model parameters can be defined as fuzzy numbers. We suggest a technique for constructing fundamental fuzzy-valued reliability measures based on an analysis of electric actuator failure data, using the amount of work completed before failure rather than the failure time. The paper also provides a computational example of fuzzy-valued reliability and hazard rate functions, assuming the Kumaraswamy complementary Weibull geometric distribution as the lifetime (reliability) model for electric actuators.
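
    To make fuzzy-valued reliability concrete, the sketch below propagates an interval-valued (single alpha-cut) scale parameter through a plain two-parameter Weibull model. The Weibull stand-in, the parameter values, and the interval bounds are assumptions for illustration only — not the Kumaraswamy complementary Weibull geometric model of the paper — and the argument x can be read as cumulative work completed, following the abstract's framing.

```python
import math

def weibull_reliability(x, shape, scale):
    """R(x) = exp(-(x / scale) ** shape) for a two-parameter Weibull."""
    return math.exp(-((x / scale) ** shape))

def weibull_hazard(x, shape, scale):
    """h(x) = (shape / scale) * (x / scale) ** (shape - 1)."""
    return (shape / scale) * (x / scale) ** (shape - 1)

def fuzzy_reliability(x, shape, scale_interval):
    """Bounds of R(x) for an interval-valued (one alpha-cut) scale.

    R(x) is increasing in the scale parameter, so the interval endpoints
    map directly onto the reliability bounds.
    """
    lo, hi = scale_interval
    return weibull_reliability(x, shape, lo), weibull_reliability(x, shape, hi)

# Reliability bounds after 500 units of completed work, with the scale
# parameter known only as the interval [800, 1200]:
bounds = fuzzy_reliability(500.0, 1.5, (800.0, 1200.0))
```

    A full fuzzy number would repeat this interval propagation over a family of alpha-cuts; monotonicity of R in the parameter is what lets the endpoints be mapped directly.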

  9. A multicenter randomized controlled evaluation of automated home monitoring and telephonic disease management in patients recently hospitalized for congestive heart failure: the SPAN-CHF II trial.

    PubMed

    Weintraub, Andrew; Gregory, Douglas; Patel, Ayan R; Levine, Daniel; Venesy, David; Perry, Kathleen; Delano, Christine; Konstam, Marvin A

    2010-04-01

    We performed a prospective, randomized investigation assessing the incremental effect of automated health monitoring (AHM) technology over and above that of a previously described nurse-directed heart failure (HF) disease management program. The AHM system measured and transmitted body weight, blood pressure, and heart rate data, as well as subjective patient self-assessments, via a standard telephone line to a central server. A total of 188 consented and eligible patients were randomized between intervention and control groups in a 1:1 ratio. Subjects randomized to the control arm received the Specialized Primary and Networked Care in Heart Failure (SPAN-CHF) heart failure disease management program. Subjects randomized to the intervention arm received the SPAN-CHF disease management program in conjunction with the AHM system. The primary end point was prespecified as the relative event rate of HF hospitalization between intervention and control groups at 90 days. The relative event rate of HF hospitalization for the intervention group compared with controls was 0.50 (95% CI [0.25-0.99], P = .05). Short-term reductions in the heart failure hospitalization rate were associated with the use of automated home monitoring equipment. Long-term benefits in this model remain to be studied. (c) 2010 Elsevier Inc. All rights reserved.
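
    The primary end point above is a relative event rate with a 95% confidence interval; the sketch below shows how such a ratio and its standard large-sample CI are computed. The event counts and exposures are hypothetical (chosen only so the point estimate is 0.50 in form); the trial's raw counts are not given in the abstract.

```python
import math

def rate_ratio_ci(events_a, exposure_a, events_b, exposure_b, z=1.96):
    """Rate ratio (group A vs group B) with a normal-approximation CI.

    Uses the standard large-sample standard error of log(RR):
    sqrt(1/events_a + 1/events_b).
    """
    rr = (events_a / exposure_a) / (events_b / exposure_b)
    se_log_rr = math.sqrt(1.0 / events_a + 1.0 / events_b)
    return rr, (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr))

# Hypothetical: 10 HF hospitalizations in the intervention arm vs 20 in
# the control arm, with equal 90-day exposure in both arms:
rr, (ci_lo, ci_hi) = rate_ratio_ci(10, 94.0, 20, 94.0)
```

    Note how an upper CI bound near 1 (as in the trial's [0.25-0.99]) corresponds to a result that is only marginally significant, matching the reported P = .05.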

  10. Shear Behaviour and Acoustic Emission Characteristics of Bolted Rock Joints with Different Roughnesses

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Zhang, Yongzheng; Jiang, Yujing; Liu, Peixun; Guo, Yanshuang; Liu, Jiankang; Ma, Ming; Wang, Ke; Wang, Shugang

    2018-06-01

    To study shear failure, the acoustic emission counts and shear characteristics of bolted, jointed rock-like specimens are evaluated under compressive shear loading. Model joint surfaces with different roughnesses are made of a rock-like material (i.e. cement). The jointed rock masses are anchored with bolts with different elongation rates. The shear mechanical properties, failure mechanism, and acoustic emission parameters of the anchored joints are studied under different surface roughnesses and anchorage conditions. The shear strength and residual strength increase with the roughness of the anchored joint surface. With an increase in bolt elongation, the shear strength of the anchored joint surface gradually decreases. When the anchored structural plane is sheared, the ideal cumulative impact curve can be divided into four stages: initial emission, critical instability, cumulative energy, and failure. With an increase in the roughness of the anchored joint surface, the peak energy rate and the cumulative number of events also increase during macro-scale shear failure. With an increase in the bolt elongation, the energy rate and the event number increase during the shearing process. Furthermore, the peak energy rate, peak number of events and cumulative energy all increase with the bolt elongation. The results of this study can provide guidance for the use of the acoustic emission technique in monitoring and predicting the static shear failure of anchored rock masses.

  11. Predicting Renal Failure Progression in Chronic Kidney Disease Using Integrated Intelligent Fuzzy Expert System.

    PubMed

    Norouzi, Jamshid; Yadollahpour, Ali; Mirbagheri, Seyed Ahmad; Mazdeh, Mitra Mahdavi; Hosseini, Seyed Ahmad

    2016-01-01

    Chronic kidney disease (CKD) is a covert disease. Accurate prediction of CKD progression over time is necessary for reducing its costs and mortality rates. The present study proposes an adaptive neuro-fuzzy inference system (ANFIS) for predicting the renal failure timeframe of CKD based on real clinical data. This study used 10-year clinical records of newly diagnosed CKD patients. A glomerular filtration rate (GFR) threshold of 15 cc/kg/min/1.73 m² was used as the marker of renal failure. A Takagi-Sugeno type ANFIS model was used to predict GFR values. The variables age, sex, weight, underlying diseases, diastolic blood pressure, creatinine, calcium, phosphorus, uric acid, and GFR were initially selected for the prediction model. Weight, diastolic blood pressure, diabetes mellitus as an underlying disease, and current GFR(t) showed significant correlation with future GFR values and were selected as the inputs of the model. Comparison of the predicted values with the real data showed that the ANFIS model could accurately estimate GFR variations in all sequential periods (Normalized Mean Absolute Error lower than 5%). Despite the high uncertainty of the human body and the dynamic nature of CKD progression, our model can accurately predict GFR variations over long future periods.
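
    The accuracy criterion quoted above, a Normalized Mean Absolute Error below 5%, is simple to compute. The sketch assumes normalization by the range of the observed values (the abstract does not state which normalization was used) and the GFR series are invented for illustration.

```python
def nmae(actual, predicted):
    """Mean absolute error normalized by the range of the actual values,
    returned as a fraction (multiply by 100 for percent)."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("need two equal-length, non-empty sequences")
    span = max(actual) - min(actual)
    if span == 0:
        raise ValueError("actual values have zero range")
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / span

# Hypothetical sequential GFR observations and model predictions:
gfr_true = [42.0, 38.5, 33.0, 27.0, 21.5, 15.0]
gfr_pred = [41.0, 39.0, 32.0, 27.5, 22.0, 15.5]
```

    For these invented series the NMAE evaluates to roughly 2.5%, i.e. within the sub-5% band the study reports.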

  12. Clinical retrospective study of self-reported penicillin allergy on dental implant failures and infections.

    PubMed

    French, David; Noroozi, Mehdi; Shariati, Batoul; Larjava, Hannu

    2016-01-01

    The aim of this retrospective study was to investigate whether self-reported allergy to penicillin may contribute to a higher rate of postsurgical infection and implant failure. This retrospective, non-interventional, open cohort study reports on the implant survival and infection complications of 5,576 implants placed in private practice by one periodontist, and includes 4,132 implants that were followed for at least 1 year. Logistic regression was applied to examine the relationship between self-reported allergy to penicillin and implant survival, while controlling for potential confounders such as smoking, implant site, bone augmentation, loading protocol, immediate implantation, and bone level at baseline. The cumulative survival rate (CSR) was calculated according to the life table method, and the Cox proportional hazard model was fitted to the data. Of the 5,106 implants placed in patients taking penicillin, 0.8% failed, while 2.1% of the 470 implants placed in patients with self-reported allergy to penicillin failed (P = .002). The odds of failure for implants placed in penicillin-allergic patients were 3.1 times higher than in non-allergic patients. For immediate implant placement, penicillin-allergic patients had a failure rate 10 times higher than the non-allergic cohort. The proportion of implant failures occurring within 6 months of implantation was 80% in the penicillin-allergic group versus 54% in the non-allergic group. Forty-eight implant sites showed postoperative infection: the infection rate was 3.4% in penicillin-allergic patients (n = 16/470) versus 0.6% in the non-allergic group (n = 32/5,106) (P < .05). Self-reported penicillin allergy was associated with a higher rate of infection and primarily affected early implant failure.
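
    The odds ratio of 3.1 quoted above comes from a logistic regression controlling for confounders. For comparison, a crude (unadjusted) odds ratio can be read straight off a 2×2 table; the counts below are reconstructed approximately from the reported percentages (2.1% of 470 ≈ 10 failures; 0.8% of 5,106 ≈ 41), so the crude value is expected to differ from the adjusted estimate.

```python
def odds_ratio(fail_exposed, ok_exposed, fail_unexposed, ok_unexposed):
    """Crude odds ratio of failure for the exposed vs unexposed group."""
    return (fail_exposed / ok_exposed) / (fail_unexposed / ok_unexposed)

# Approximate 2x2 table: penicillin-allergic (exposed) vs non-allergic:
crude_or = odds_ratio(fail_exposed=10, ok_exposed=460,
                      fail_unexposed=41, ok_unexposed=5065)
```

    The gap between such a crude value and the adjusted 3.1 is exactly why the study fitted a logistic model with smoking, implant site, and the other listed confounders.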

  13. Damage Propagation Modeling for Aircraft Engine Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil

    2008-01-01

    This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermodynamic simulation model of the engine as a function of variations in the flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with an increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant, and the failure criterion is reached when the health index reaches zero. The output of the model was the time series (in cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
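
    The degradation scheme described (exponential flow/efficiency loss from a random initial set point, a health index equal to the minimum of the operational margins, failure when it reaches zero) can be mimicked in a few lines. The rate constants, wear range, and two-margin health index below are illustrative stand-ins, not the simulator's actual parameters.

```python
import math
import random

def cycles_to_failure(seed=0, k_flow=0.002, k_eff=0.003):
    """Cycles until the health index of a toy engine module reaches zero.

    Flow and efficiency margins decay exponentially from a randomly
    chosen initial deterioration set point; the health index is the
    minimum of the remaining margins.
    """
    rng = random.Random(seed)
    wear = rng.uniform(0.01, 0.1)                  # initial deterioration
    cycle, health = 0, 1.0
    while health > 0.0:
        cycle += 1
        flow_margin = 1.0 - wear * math.exp(k_flow * cycle)
        eff_margin = 1.0 - wear * math.exp(k_eff * cycle)
        health = min(flow_margin, eff_margin)      # health index
    return cycle
```

    Sweeping the seed produces a population of run-to-failure trajectories with varying lifetimes, which is the spirit in which the PHM 08 challenge data were generated.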

  14. The second Sandia Fracture Challenge. Predictions of ductile failure under quasi-static and moderate-rate dynamic loading

    DOE PAGES

    Boyce, B. L.; Kramer, S. L. B.; Bosiljevac, T. R.; ...

    2016-03-14

    Ductile failure of structural metals is relevant to a wide range of engineering scenarios. Computational methods are employed to anticipate the critical conditions of failure, yet they sometimes provide inaccurate and misleading predictions. Challenge scenarios, such as the one presented in the current work, provide an opportunity to assess the blind, quantitative predictive ability of simulation methods against a previously unseen failure problem. Instead of evaluating the predictions of a single simulation approach, the Sandia Fracture Challenge relied on numerous volunteer teams with expertise in computational mechanics to apply a broad range of computational methods, numerical algorithms, and constitutive models to the challenge. This exercise is intended to evaluate the state of health of technologies available for failure prediction. In the first Sandia Fracture Challenge, a wide range of issues were raised in ductile failure modeling, including a lack of consistency in failure models, the importance of shear calibration data, and difficulties in quantifying the uncertainty of prediction [see Boyce et al. (Int J Fract 186:5–68, 2014) for details of these observations]. This second Sandia Fracture Challenge investigated the ductile rupture of a Ti–6Al–4V sheet under both quasi-static and modest-rate dynamic loading (failure in ~ 0.1 s). Like the previous challenge, the sheet had an unusual arrangement of notches and holes that added geometric complexity and fostered a competition between tensile- and shear-dominated failure modes. The teams were asked to predict the fracture path and quantitative far-field failure metrics such as the peak force and displacement to cause crack initiation. Fourteen teams contributed blind predictions, and the experimental outcomes were quantified in three independent test labs. 
In addition, shortcomings were revealed in this second challenge, such as inconsistency in the application of appropriate boundary conditions, the need for a thermomechanical treatment of the heat generation in the dynamic loading condition, and further difficulties in model calibration based on limited real-world engineering data. As with the prior challenge, this work not only documents the ‘state-of-the-art’ in computational failure prediction of ductile tearing scenarios, but also provides a detailed dataset for non-blind assessment of alternative methods.

  16. A Micromechanics-Based Elastoplastic Damage Model for Rocks with a Brittle-Ductile Transition in Mechanical Response

    NASA Astrophysics Data System (ADS)

    Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian

    2018-06-01

    As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, it is found that there is a quite satisfactory linearity between the axial inelastic strain at failure and the confining pressure prescribed. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.

  17. PCI fuel failure analysis: a report on a cooperative program undertaken by Pacific Northwest Laboratory and Chalk River Nuclear Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.

    Reactor fuel failure data sets in the form of initial power (Pi), final power (Pf), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM, and a nonlinear-regression-based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress-dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain-rate-dependent stress-strain properties of the Zircaloy cladding.

  18. Thermomechanical Controls on the Success and Failure of Continental Rift Systems

    NASA Astrophysics Data System (ADS)

    Brune, S.

    2017-12-01

    Studies of long-term continental rift evolution are often biased towards rifts that succeed in breaking the continent, like the North Atlantic, South China Sea, or South Atlantic rifts. However, there are many prominent rift systems on Earth where activity stopped before the formation of a new ocean basin, such as the North Sea, the West and Central African Rifts, or the West Antarctic Rift System. The factors controlling the success and failure of rifts can be divided into two groups: (1) intrinsic processes, for instance frictional weakening, lithospheric thinning, shear heating, or the strain-dependent growth of rift strength by replacing weak crust with strong mantle; (2) external processes, such as a change of plate divergence rate, the waning of a far-field driving force, or the arrival of a mantle plume. Here I use numerical and analytical modeling to investigate the role of these processes in the success and failure of rift systems. These models show that the response to a change of plate divergence rate under constant-force extension is controlled by the non-linearity of lithospheric materials. For successful rifts, a strong increase in divergence velocity can be expected to take place within a few million years, a prediction that agrees with independent plate tectonic reconstructions of major Mesozoic and Cenozoic ocean-forming rift systems. Another model prediction is that oblique rifting is mechanically favored over orthogonal rifting, which means that simultaneous deformation within neighboring rift systems of different obliquity and otherwise identical properties will lead to success of the more oblique rift and failure of the less oblique one. This is exemplified by the Cretaceous activity within the Equatorial Atlantic and the West African Rifts, which led to the formation of a highly oblique oceanic spreading center and the failure of the West African Rift System. 
While in nature the circumstances of rift success or failure may be manifold, simplified numerical and analytical models allow the various contributing factors to be analysed in isolation and a characteristic time scale to be defined for each process.

  19. Physician-Directed Heart Failure Transitional Care Program: A Retrospective Case Review

    PubMed Central

    Ota, Ken S.; Beutler, David S.; Gerkin, Richard D.; Weiss, Jessica L.; Loli, Akil I.

    2013-01-01

    Background Despite a variety of national efforts to improve transitions of care for patients at risk of rehospitalization, 30-day rehospitalization rates for patients with heart failure have remained largely unchanged. Methods This is a retrospective review of 73 patients enrolled in our hospital-based, physician-directed Heart Failure Transitional Care Program (HFTCP). This study evaluated the 30- and 90-day readmission rates before and after enrollment in the program. The Transitionalist’s services focused on bedside consultation prior to hospital discharge, follow-up home visits within 72 hours of discharge, frequent follow-up phone calls, disease-specific education, outpatient intravenous diuretic therapy, and around-the-clock telephone access to the Transitionalist. Results The pre-enrollment 30-day readmission rates for acute decompensated heart failure (ADHF) and all-cause readmission were 26.0% and 28.8%, respectively, while the post-enrollment rates for ADHF and all-cause readmission were 4.1% (P < 0.001) and 8.2% (P = 0.002), respectively. The pre-enrollment 90-day all-cause and ADHF readmission rates were 69.8% and 58.9%, respectively, while the post-enrollment rates for all-cause and ADHF readmission were 27.3% (P < 0.001) and 16.4% (P < 0.001), respectively. Conclusions Our physician-implemented HFTCP reduced rehospitalization risk for patients enrolled in the program. This program may serve as a model to assist other hospital systems in reducing readmission rates of patients with HF. PMID:23976905

  20. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

    During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two-year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed in a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and the risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. 
The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the occurrence of a failure mode to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).

  1. Instantaneous and controllable integer ambiguity resolution: review and an alternative approach

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-11-01

    In high-precision applications of Global Navigation Satellite Systems (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Owing to the correlation between ambiguities, the failure rate cannot be controlled according to an analytical formula. Hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, because of the characteristics of Monte Carlo sampling and the look-up table, a large amount of computation time is required if sufficient GNSS scenarios are included in the creation of the look-up table. This restricts the fixed failure rate approach to post-processing if a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may be valid only for a specific scenario or application. Besides this, the method of creating a look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach for the first time. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas (the stronger the GNSS model, the lower the time consumption); (c) it provides a new viewpoint from which to improve research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments are implemented. 
These results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with the ratio test IA based on a look-up table, the difference test IA and IA least squares based on the iCON approach achieve, in most cases, higher success rates and better control of the failure rate.
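
The fixed-failure-rate idea above can be sketched numerically. This is an illustrative toy only: it reduces the correlated multivariate GNSS case to a single decorrelated float ambiguity with a Gaussian error, uses a simple ratio-type acceptance test, and all function names are assumptions rather than the authors' implementation.

```python
import random

def simulate_ratio_test(sigma, mu, n_samples=100_000, seed=1):
    """Monte Carlo sketch of IA ambiguity resolution with a ratio-type
    acceptance test on one decorrelated float ambiguity. The true integer
    is 0 and float solutions are N(0, sigma); the fix is accepted when
    |a - nearest| / |a - second nearest| <= mu.
    Returns (failure, success, rejection) rates."""
    rng = random.Random(seed)
    fail = success = rejected = 0
    for _ in range(n_samples):
        a = rng.gauss(0.0, sigma)
        best = round(a)                              # nearest integer candidate
        second = best + 1 if a > best else best - 1  # runner-up candidate
        ratio = abs(a - best) / abs(a - second)
        if ratio <= mu:              # acceptance test passes: fix the integer
            if best == 0:
                success += 1
            else:
                fail += 1            # accepted a wrong integer
        else:
            rejected += 1
    return fail / n_samples, success / n_samples, rejected / n_samples

def critical_value_for_failure_rate(sigma, target_pf, n_samples=20_000):
    """Fixed-failure-rate tuning: scan candidate critical values and keep
    the largest one whose simulated failure rate stays within target_pf.
    This per-scenario scan is exactly the cost that a precomputed look-up
    table (or the iCON analytical formulas) is meant to avoid."""
    best_mu = None
    for i in range(1, 21):
        mu = i / 20
        pf, _, _ = simulate_ratio_test(sigma, mu, n_samples)
        if pf <= target_pf:
            best_mu = mu
    return best_mu
```

A weaker GNSS model (larger sigma) forces a smaller critical value for the same required failure rate, at the cost of more rejections.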

  2. A Sensitivity Analysis of Triggers and Mechanisms of Mass Movements in Fjords

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Lintern, G.; Hill, P.

    2016-12-01

    Fjords are characterized by rapid sedimentation as they typically drain glaciated river catchments with high seasonal discharges and large sediment evacuation rates. For this reason, fjords commonly experience submarine mass movements; failures of the steep delta front that trigger tsunamis, and turbidity currents or debris flows. Repeat high-resolution bathymetric surveys, and in-situ process measurements collected in fjords in British Columbia, Canada, indicate that mass movements occur many times per year in some fjords and are rarer and of larger magnitude in others. We ask whether these differences can be attributed to river discharge characteristics or to grainsize characteristics of the delivered sediment. To test our ideas, we couple a climate-driven river sediment transport model, HydroTrend, and a marine sedimentation model, Sedflux2D, to explore the triggers of submarine failures and the mechanisms of subsequent turbidity and debris flows. HydroTrend calculates water and suspended sediment transport on a daily basis from catchment characteristics, glaciated area, lakes, and the temperature and precipitation regime. Sedflux uses the generated river time-series to simulate delta plumes, failures and mass movements with separate process models. Model uncertainty and parameter sensitivity are assessed using Dakota Tools, which allows for a systematic exploration of the effects of river basin characteristics and climate scenarios on the occurrence of hyperpycnal events, delta front sedimentation rate, submarine pore pressure, failure frequency and size, and run-out distances. Preliminary simulation results point to the importance of proglacial lakes and lake abundance in the river basin, which have profound implications for event-based sediment delivery to the delta apex. Discharge-sediment rating curves can be highly variable based on these parameters. 
The distinction between turbidity currents and debris flows was found to be most sensitive to both earthquake frequency and delta front grainsize. As a first step we compare these model experiments against field data from the Squamish River and Delta in Howe Sound, BC.

  3. Modelling river bank retreat by combining fluvial erosion, seepage and mass failure

    NASA Astrophysics Data System (ADS)

    Dapporto, S.; Rinaldi, M.

    2003-04-01

    Streambank erosion processes contribute significantly to the sediment yield of a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulations of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully and is used to: a) test the potential, and discuss the advantages and limitations, of this type of methodology for modelling bank retreat; b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. Modelling of the riverbank retreat includes for each step the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by the limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing the lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. The lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. 
The safety factor for mass failures is then computed using the pore water pressure distribution obtained from the seepage analysis, and the geometry of the upper bank is modified in the case of failure.
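
The excess-shear-stress relationship used for the fluvial erosion component can be sketched in a few lines. This is a minimal illustration of the standard form (erosion rate proportional to shear stress in excess of a critical value, summed over hydrograph time steps); the function names and the erodibility coefficient are assumptions, not values from the paper.

```python
def lateral_erosion(tau, tau_c, k_d):
    """Fluvial lateral erosion rate (m/s) from excess shear stress:
    eps = k_d * (tau - tau_c) for tau > tau_c, else 0.
    tau and tau_c are in Pa; k_d is an erodibility coefficient in
    m^3/(N*s), calibrated in the study against photogrammetric
    measurements of total bank retreat."""
    return k_d * (tau - tau_c) if tau > tau_c else 0.0

def bank_retreat(hydrograph, tau_c, k_d, dt):
    """Total retreat (m) over a flow event: the hydrograph is divided
    into time steps of length dt (s), each carrying one boundary shear
    stress (Pa), and the per-step erosion is accumulated."""
    return sum(lateral_erosion(tau, tau_c, k_d) * dt for tau in hydrograph)
```

Steps where the shear stress stays below the critical entrainment value contribute no retreat, which is why the material-by-material critical values along the bank profile matter.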

  4. Development of the Nontechnical Skills for Officers of the Deck (NTSOD) Rating Form

    DTIC Science & Technology

    2010-12-01

    organizational model of human error commonly described as the 'Swiss Cheese' model. This model allows for the identification of active failures and latent...complete list). The authors did identify organizational and management issues as underlying causes of mishaps, similar to Reason's Swiss Cheese model.

  5. Declining Risk of Sudden Death in Heart Failure.

    PubMed

    Shen, Li; Jhund, Pardeep S; Petrie, Mark C; Claggett, Brian L; Barlera, Simona; Cleland, John G F; Dargie, Henry J; Granger, Christopher B; Kjekshus, John; Køber, Lars; Latini, Roberto; Maggioni, Aldo P; Packer, Milton; Pitt, Bertram; Solomon, Scott D; Swedberg, Karl; Tavazzi, Luigi; Wikstrand, John; Zannad, Faiez; Zile, Michael R; McMurray, John J V

    2017-07-06

    The risk of sudden death has changed over time among patients with symptomatic heart failure and reduced ejection fraction with the sequential introduction of medications including angiotensin-converting-enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, and mineralocorticoid-receptor antagonists. We sought to examine this trend in detail. We analyzed data from 40,195 patients who had heart failure with reduced ejection fraction and were enrolled in any of 12 clinical trials spanning the period from 1995 through 2014. Patients who had an implantable cardioverter-defibrillator at the time of trial enrollment were excluded. Weighted multivariable regression was used to examine trends in rates of sudden death over time. Adjusted hazard ratios for sudden death in each trial group were calculated with the use of Cox regression models. The cumulative incidence rates of sudden death were assessed at different time points after randomization and according to the length of time between the diagnosis of heart failure and randomization. Sudden death was reported in 3583 patients. Such patients were older and were more often male, with an ischemic cause of heart failure and worse cardiac function, than those in whom sudden death did not occur. There was a 44% decline in the rate of sudden death across the trials (P=0.03). The cumulative incidence of sudden death at 90 days after randomization was 2.4% in the earliest trial and 1.0% in the most recent trial. The rate of sudden death was not higher among patients with a recent diagnosis of heart failure than among those with a longer-standing diagnosis. Rates of sudden death declined substantially over time among ambulatory patients with heart failure with reduced ejection fraction who were enrolled in clinical trials, a finding that is consistent with a cumulative benefit of evidence-based medications on this cause of death. (Funded by the China Scholarship Council and the University of Glasgow.).

  6. The Remote Detection of Incipient Catastrophic Failure in Large Landslides

    NASA Astrophysics Data System (ADS)

    Petley, D.; Bulmer, M. H.; Murphy, W.; Mantovani, F.

    2001-12-01

    Landslide movement is commonly associated with brittle failure and ductile deformation. Kilburn and Petley (2001) proposed that cracking in landslides occurs due to downslope stress acting on the deforming horizon. If it is assumed that each crack event breaks a fixed length of unbroken rock or soil, the rate of cracking becomes equivalent to the number of crack events per unit time. Where crack growth (not nucleation) is occurring, the inverse rate of displacement changes linearly with time. Failure can be assumed to occur at the time at which displacement rates become infinitely large. Thus, for a slope heading towards catastrophic failure due to the development of a failure plane, this relationship would be linear, with failure occurring at the time when the line intercepts the x-axis. Increasing rates of deformation associated with ductile processes of crack nucleation would instead yield a curve with a negative gradient asymptotic to the x-axis. This hypothesis is being examined. In the 1960 movement of the Vaiont slide, Italy, although the rate of movement was accelerating, the plot of 1/deformation against time shows that it was tending towards a steady-state deformation. This movement has been associated with a low-accumulated-strain ductile phase of movement. In the 1963 movement event, the trend is linear; this was associated with a brittle phase of movement. A plot of 1/deformation against time for movement of the debris-flow portion of the Tessina landslide (1998) shows a curve with a negative gradient asymptotic to the x-axis, indicating that the debris flow moved as a result of ductile deformation processes. Plots of movement data for the Black Ven landslide over 1999 and 2001 also show curves that correlate with known deformation and catastrophic phases. The model results suggest there is a definable deformation pattern that is diagnostic of landslides approaching catastrophic failure. 
This pattern can be differentiated from landslides that are undergoing ductile deformation and those that are suffering crack nucleation.
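
The linear 1/rate trend described above turns directly into a failure-time forecast: fit a line to inverse displacement rate versus time and extrapolate to its x-intercept, where the rate becomes unbounded. A minimal sketch (function name and synthetic data are assumptions, not the authors' code):

```python
def predict_failure_time(times, rates):
    """Inverse-velocity forecast of catastrophic failure: fit a line to
    1/rate vs. time by ordinary least squares; the x-intercept estimates
    the failure time. A non-negative slope means 1/rate is not falling
    towards zero, i.e. no brittle, crack-growth-dominated trend."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(inv) / n
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(times, inv))
    sxx = sum((t - tbar) ** 2 for t in times)
    slope = sxy / sxx
    intercept = ybar - slope * tbar
    if slope >= 0:
        raise ValueError("1/rate not decreasing: no linear brittle-failure trend")
    return -intercept / slope  # time at which 1/rate reaches zero
```

For a ductile, crack-nucleation-style record the 1/rate curve flattens asymptotically instead, and the linear fit (and hence the forecast) is not meaningful, which is the diagnostic distinction the abstract draws.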

  7. Contraceptive Failure in the United States: Estimates from the 2006-2010 National Survey of Family Growth.

    PubMed

    Sundaram, Aparna; Vaughan, Barbara; Kost, Kathryn; Bankole, Akinrinola; Finer, Lawrence; Singh, Susheela; Trussell, James

    2017-03-01

    Contraceptive failure rates measure a woman's probability of becoming pregnant while using a contraceptive. Information about these rates enables couples to make informed contraceptive choices. Failure rates were last estimated for 2002, and social and economic changes that have occurred since then necessitate a reestimation. To estimate failure rates for the most commonly used reversible methods in the United States, data from the 2006-2010 National Survey of Family Growth were used; some 15,728 contraceptive use intervals, contributed by 6,683 women, were analyzed. Data from the Guttmacher Institute's 2008 Abortion Patient Survey were used to adjust for abortion underreporting. Kaplan-Meier methods were used to estimate the associated single-decrement probability of failure by duration of use. Failure rates were compared with those from 1995 and 2002. Long-acting reversible contraceptives (the IUD and the implant) had the lowest failure rates of all methods (1%), while condoms and withdrawal carried the highest probabilities of failure (13% and 20%, respectively). However, the failure rate for the condom had declined significantly since 1995 (from 18%), as had the failure rate for all hormonal methods combined (from 8% to 6%). The failure rate for all reversible methods combined declined from 12% in 2002 to 10% in 2006-2010. These broad-based declines in failure rates reverse a long-term pattern of minimal change. Future research should explore what lies behind these trends, as well as possibilities for further improvements. © 2017 The Authors. Perspectives on Sexual and Reproductive Health published by Wiley Periodicals, Inc., on behalf of the Guttmacher Institute.
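
The Kaplan-Meier single-decrement probability of failure by a given duration of use can be sketched as follows. This is a bare illustration of the estimator on toy data; it omits the survey weighting and the abortion-underreporting adjustment described in the abstract, and the function name is an assumption.

```python
def km_failure_probability(intervals, horizon):
    """Kaplan-Meier single-decrement probability of contraceptive failure
    within `horizon` months of use. `intervals` is a list of
    (duration_months, failed) pairs: failed=True marks a contraceptive
    failure; False marks an interval censored for any other reason
    (method switch, discontinuation, end of observation)."""
    event_times = sorted({d for d, f in intervals if f and d <= horizon})
    surv = 1.0
    for t in event_times:
        at_risk = sum(1 for d, _ in intervals if d >= t)       # still using at t
        fails = sum(1 for d, f in intervals if f and d == t)   # failures at t
        surv *= 1.0 - fails / at_risk
    return 1.0 - surv
```

Because censored intervals leave the risk set without counting as failures, this probability differs from the naive fraction of intervals ending in failure.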

  8. Failure Time Distributions: Estimates and Asymptotic Results.

    DTIC Science & Technology

    1980-01-01

    of the models. A parametric family of distributions is proposed for approximating life distributions whose hazard rate is bath-tub shaped, this...of the limiting distributions of the models...always justified. But, because of this generality, the possible limit laws for the maximum form a very large family. The

  9. Efficient Simulation and Abuse Modeling of Mechanical-Electrochemical-Thermal Phenomena in Lithium-Ion Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; Smith, Kandler A; Graf, Peter A

    NREL's Energy Storage team is exploring the effect of mechanical crush of lithium-ion cells on their thermal and electrical safety. PHEV cells, fresh as well as cells aged over 8 months under different temperatures, voltage windows, and charging rates, were subjected to destructive physical analysis. Constitutive relationships and failure criteria were developed for the electrodes, the separator, and the packaging material. The mechanical models capture well the various modes of failure across different cell components. Cell-level validation is being conducted by Sandia National Laboratories.

  10. Lessons from (triggered) tremor

    USGS Publications Warehouse

    Gomberg, Joan

    2010-01-01

    I test a “clock-advance” model that implies triggered tremor is ambient tremor that occurs at a sped-up rate as a result of loading from passing seismic waves. This proposed model predicts that triggering probability is proportional to the product of the ambient tremor rate and a function describing the efficacy of the triggering wave to initiate a tremor event. Using data mostly from Cascadia, I have compared qualitatively a suite of teleseismic waves that did and did not trigger tremor with ambient tremor rates. Many of the observations are consistent with the model if the efficacy of the triggering wave depends on wave amplitude. One triggered tremor observation clearly violates the clock-advance model. The model prediction that larger triggering waves result in larger triggered tremor signals also appears inconsistent with the measurements. I conclude that the tremor source process is a more complex system than that described by the clock-advance model predictions tested. Results of this and previous studies also demonstrate that (1) conditions suitable for tremor generation exist in many tectonic environments, but, within each, only occur at particular spots whose locations change with time; (2) any fluid flow must be restricted to less than a meter; (3) the degree to which delayed failure and secondary triggering occurs is likely insignificant; and (4) both shear and dilatational deformations may trigger tremor. Triggered and ambient tremor rates correlate more strongly with stress than stressing rate, suggesting tremor sources result from time-dependent weakening processes rather than simple Coulomb failure.

  11. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. 
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and for reliability allocation that maximizes the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
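
The reduction of a series branch with mutually exclusive failure modes to one equivalent component can be illustrated directly. A sketch under stated assumptions: constant hazard rates and a fixed expected cost per failure mode; the function names are hypothetical, not the paper's.

```python
def collapse_series_branch(components):
    """Reduce a series branch of components with mutually exclusive
    failure modes to a single equivalent component.
    Each component is (hazard_rate, expected_cost_given_failure).
    The equivalent hazard rate is the sum of the mode rates; the
    equivalent cost is the mode costs weighted by the conditional
    probabilities lambda_i / sum(lambda) that each mode initiates
    the failure."""
    lam_eq = sum(lam for lam, _ in components)
    cost_eq = sum((lam / lam_eq) * c for lam, c in components)
    return lam_eq, cost_eq

def expected_losses(lam_eq, cost_eq, horizon):
    """Expected losses over `horizon` for the (repairable) equivalent
    component: expected number of failures (lam * t, constant hazard
    rate) times the expected losses given failure."""
    return lam_eq * horizon * cost_eq
```

This is the linear-combination property stated in the abstract: the equivalent cost is exactly the conditional-probability-weighted mix of the mode costs.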

  12. Factors affecting failure to quit smoking after exposure to pictorial cigarette pack warnings among employees in Thailand.

    PubMed

    Sujirarat, Dusit; Silpasuwan, Pimpan; Viwatwongkasem, Chukiat; Sirichothiratana, Nithat

    2011-07-01

    This study was carried out to determine whether health warning pictures (HWP) affect smoking cessation, using a structural equation model for intending-to-quit smokers in workplaces. Data from a 1-year longitudinal follow-up of employees attempting to quit were obtained to determine whether pack warnings affect tobacco cessation rates. Stratified simple random sampling and structural equation modeling (SEM) were employed. Approximately 20% of intending-to-quit smokers were successful. The integrated model, combining internal and interpersonal factors with health warning pictures as an external factor, fit the failure-to-quit pattern. Having a smoking father was the most significant proximate indicator linked with failure to quit. Although HWP were an external factor in the decision to stop smoking, the direct and indirect causes of failure to quit smoking were the influence of family members. Fathers contributed to the success or failure of smoking cessation in their children through their influence on the decision-making process. Future HWP should include information about factors that stimulate smokers to quit successfully. The father's role model with respect to quitting is also important.

  13. Analysis of Weibull Grading Test for Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate calculation method. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
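
The log-linear relationship for the characteristic life can be sketched as follows. This is an illustrative reduction to a single transformed stress (the abstract's general log-linear model covers voltage and temperature jointly); the function names and the example numbers are assumptions.

```python
import math

def fit_acceleration_model(x1, eta1, x2, eta2):
    """Fit ln(eta) = a0 + a1 * x from Weibull characteristic lives
    measured at two stress levels, where x is a transformed stress:
    x = 1/T (T in kelvin) gives an Arrhenius temperature model,
    x = ln(V) gives an inverse-power voltage model."""
    a1 = (math.log(eta1) - math.log(eta2)) / (x1 - x2)
    a0 = math.log(eta1) - a1 * x1
    return a0, a1

def characteristic_life(a0, a1, x):
    """Predicted characteristic life at transformed stress x."""
    return math.exp(a0 + a1 * x)

def acceleration_factor(a1, x_test, x_use):
    """Acceleration factor eta_use / eta_test between two stress levels,
    i.e. how much longer the characteristic life is at use conditions."""
    return math.exp(a1 * (x_use - x_test))
```

With x = ln(V), the fitted a1 is minus the inverse-power-law exponent, so HALT results at two voltages determine the voltage acceleration factor used to de-rate the test failure rate to use conditions.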

  14. Reliability of High-Voltage Tantalum Capacitors, Parts 3 and 4

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate calculation method. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.

  15. Heart failure remote monitoring: evidence from the retrospective evaluation of a real-world remote monitoring program.

    PubMed

    Agboola, Stephen; Jethwani, Kamal; Khateeb, Kholoud; Moore, Stephanie; Kvedar, Joseph

    2015-04-22

    Given the magnitude of increasing heart failure mortality, multidisciplinary approaches, in the form of disease management programs and other integrative models of care, are recommended to optimize treatment outcomes. Remote monitoring, either as structured telephone support or telemonitoring or a combination of both, is fast becoming an integral part of many disease management programs. However, studies reporting on the evaluation of real-world heart failure remote monitoring programs are scarce. This study aims to evaluate the effect of a heart failure telemonitoring program, the Connected Cardiac Care Program (CCCP), on hospitalization and mortality in a retrospective database review of medical records of patients with heart failure receiving care at the Massachusetts General Hospital. Patients enrolled in the CCCP heart failure monitoring program at the Massachusetts General Hospital were matched 1:1 with usual care patients. Control patients received care from similar clinical settings as CCCP patients and were identified from a large clinical data registry. The primary endpoint was all-cause mortality and hospitalizations assessed during the 4-month program duration. Secondary outcomes included hospitalization and mortality rates (obtained by following up on patients over an additional 8 months after program completion for a total duration of 1 year), risk for multiple hospitalizations and length of stay. The Cox proportional hazard model, stratified on the matched pairs, was used to assess primary outcomes. A total of 348 patients were included in the time-to-event analyses. The baseline rates of hospitalizations prior to program enrollment did not differ significantly by group. Compared with controls, hospitalization rates decreased within the first 30 days of program enrollment (hazard ratio [HR] 0.52, 95% CI 0.31-0.86, P=.01). 
The differential effect on hospitalization rates remained consistent until the end of the 4-month program (HR 0.74, 95% CI 0.54-1.02, P=.06). The program was also associated with lower mortality rates at the end of the 4-month program (relative risk [RR] 0.33, 95% CI 0.11-0.97, P=.04). An additional 8 months of follow-up after program completion did not show residual beneficial effects of the CCCP program on mortality (HR 0.64, 95% CI 0.34-1.21, P=.17) or hospitalizations (HR 1.12, 95% CI 0.90-1.41, P=.31). CCCP was associated with significantly lower hospitalization rates up to 90 days and significantly lower mortality rates over the 120 days of the program. However, these effects did not persist beyond the 120-day program duration.

  16. Modeling the Influence of Electrostatic Discharge in Materials on Failures of Onboard Electronic Equipment under Microgravity

    NASA Astrophysics Data System (ADS)

    Grichshenko, Valentina; Zhantayev, Zhumabek; Mukushev, Acemhan

    2016-07-01

    It is known that during spacecraft (SV) operation, failures of automated systems occur as a result of the complex influence of the space environment, leading to a shorter service life and sometimes to loss of the spacecraft. All spacecraft functioning in near-Earth space (NES) are subjected to the influence of various space factors. The causes and character of onboard equipment failures differ. Many researchers attribute failures of onboard electronics to changes in the solar activity level. However, numerous onboard experiments have established that failures of onboard electronics are registered even in the absence of solar bursts, on magnetically quiet days. This paper discusses the results of modeling the impact of electrostatic discharge (ESD), occurring in materials, on failures of onboard electronic equipment in microgravity. The conditions for the formation of electrostatic discharge in microgravity, and its influence on elements of onboard electronics in space, are considered. A technique using circuit simulation in the ISIS Proteus environment is discussed, and recommendations are developed for the noise immunity of onboard equipment against ESD in space. The results are used to predict the failure rate of onboard electronics over long-term space missions. Key words: microgravity, materials, failures, onboard electronics, space

  17. Observation of the initiation and progression of damage in compressively loaded composite plates containing a cutout

    NASA Technical Reports Server (NTRS)

    Waas, A.; Babcock, C., Jr.

    1986-01-01

    A series of experiments was carried out to determine the mechanism of failure in compressively loaded laminated plates with a circular cutout. Real-time holographic interferometry and photomicrography are used to observe the progression of failure. These observations, together with post-experiment plate sectioning and deplying for interior damage observation, provide useful information for modelling the failure process. The failure is found to initiate as a localised instability in the 0° layers at the hole surface. With increasing load, extensive delamination cracking is observed. The progression of failure is by growth of these delaminations, induced by delamination buckling. Upon reaching a critical state, catastrophic failure of the plate is observed. The levels of applied load and the rate at which these events occur depend on the plate stacking sequence.

  18. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost-competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method has been developed for assessing reliability and critical-failure predictability requirements for fuel cell stacks in a system consisting of several stacks. The method is based on a qualitative model of the stack configuration in which each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical-failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (a series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
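
The example 5 × 5 configuration can be sketched with textbook series/parallel reliability algebra. This is a simplification of the paper's model (it assumes independent stacks, Weibull stack lifetimes, and that one surviving stack per parallel set suffices, ignoring the partially-failed state and predictability effects); the function names are assumptions.

```python
import math

def stack_reliability(t, eta, beta):
    """Weibull stack reliability at time t: exp(-(t/eta)^beta), with
    characteristic life eta and shape factor beta."""
    return math.exp(-((t / eta) ** beta))

def system_reliability(t, eta, beta, n_series=5, n_parallel=5):
    """Reliability of a series of `n_series` sets, each of `n_parallel`
    parallel stacks: a set works if at least one of its stacks works,
    and the system works only if every set works."""
    r = stack_reliability(t, eta, beta)
    set_rel = 1.0 - (1.0 - r) ** n_parallel    # at least one stack per set
    return set_rel ** n_series                  # all sets must survive
```

Even this crude sketch shows why the series dimension dominates the stack reliability requirement: the parallel sets buy redundancy, but the system reliability is the set reliability raised to the fifth power.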

  19. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lifetimes arising in many cases and is available in a simple statistical form; its defining characteristic is a constant hazard rate. The exponential distribution is a special case of the Weibull family (shape parameter equal to one). In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach, and to present its analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The likelihood function is described, followed by the posterior function and the estimations of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
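
For independent exponential causes of failure, the net and crude probabilities named above have closed forms, sketched below (the Bayesian estimation of the rates is omitted; the function names are assumptions).

```python
import math

def net_probability(lam_j, t):
    """Net probability of failure by time t if only risk j were present:
    1 - exp(-lam_j * t)."""
    return 1.0 - math.exp(-lam_j * t)

def crude_probability(lam_j, lam_total, t):
    """Crude probability of failure from risk j by time t in the presence
    of all independent exponential risks (total rate lam_total):
    (lam_j / lam_total) * (1 - exp(-lam_total * t)), i.e. the chance the
    system fails by t multiplied by the chance risk j strikes first."""
    return (lam_j / lam_total) * (1.0 - math.exp(-lam_total * t))
```

The crude probabilities over all risks sum to the overall failure probability, and each crude probability is smaller than the corresponding net probability because the other risks can strike first.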

  20. Atrophy and growth failure of rat hindlimb muscles in tail-cast suspension

    NASA Technical Reports Server (NTRS)

    Jaspers, S. R.; Tischler, M. E.

    1984-01-01

    The primary objective of the present study is to evaluate a modified tail-cast suspension model as a means of identifying metabolic factors which control, or are associated with, muscle atrophy and growth failure. Two different control conditions (normal and tail-casted weight-bearing) were studied to determine the appropriate control for tail-cast suspension. A description is presented of a model which is most useful for studying atrophy of hindlimb muscles under certain conditions. Female Sprague-Dawley rats were employed in the experiments. Attention is given to growth rate and the urinary excretion of urea and ammonia in different types of rats, the relationship between body weight and skeletal muscle weight, and the relationship between animal body weight and the rates of protein synthesis and protein degradation.

  1. Microembossing of ultrafine grained Al: microstructural analysis and finite element modelling

    NASA Astrophysics Data System (ADS)

    Qiao, Xiao Guang; Bah, Mamadou T.; Zhang, Jiuwen; Gao, Nong; Moktadir, Zakaria; Kraft, Michael; Starink, Marco J.

    2010-10-01

    Ultra-fine-grained (UFG) Al-1050 processed by equal channel angular pressing and UFG Al-Mg-Cu-Mn processed by high-pressure torsion (HPT) were embossed at both room temperature and 300 °C, with the aim of producing micro-channels. The behaviour of the Al alloys during the embossing process was analysed using finite element modelling. Cold embossing of both Al alloys is characterized by partial pattern transfer, a large embossing force, channels with oblique sidewalls and a high failure rate of the mould. Hot embossing is characterized by straight channel sidewalls, fully transferred patterns and reduced loads, which decrease the failure rate of the mould. Hot embossing of UFG Al-Mg-Cu-Mn produced by HPT shows potential for fabricating microelectromechanical-system components with micro-channels.

  2. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

    In this paper a FORTRAN program is presented for multivariate survival or life-table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
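    For the simplest case, an exponential failure rate that is log-linear in a single binary covariate, the Newton-Raphson fit described above can be sketched in a few lines. This is a hypothetical Python stand-in for the approach, not the FORTRAN program itself, and the data are invented:

```python
import math

def fit_loglinear_hazard(times, events, x, iters=25):
    """Newton-Raphson MLE for lambda_i = exp(b0 + b1 * x_i),
    with exposure times and event indicators (1 = failure, 0 = censored)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # score vector
        h00 = h01 = h11 = 0.0    # negative Hessian
        for t, d, xi in zip(times, events, x):
            lam = math.exp(b0 + b1 * xi)
            r = d - t * lam      # observed minus expected events
            g0 += r
            g1 += xi * r
            w = t * lam
            h00 += w
            h01 += xi * w
            h11 += xi * xi * w
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det   # solve H * step = g (2x2)
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

print(fit_loglinear_hazard([2.0, 3.0, 4.0, 5.0], [1, 1, 1, 0], [0, 0, 1, 1]))
```

    For this saturated two-group model the MLE has a closed form, events divided by exposure time within each covariate group, which the iteration recovers.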

  3. Percutaneous Coronary Revascularization for Chronic Total Occlusions: A Novel Predictive Score of Technical Failure Using Advanced Technologies.

    PubMed

    Galassi, Alfredo R; Boukhris, Marouane; Azzarelli, Salvatore; Castaing, Marine; Marzà, Francesco; Tomasello, Salvatore D

    2016-05-09

    The aims of this study were to describe the 10-year experience of a single operator dedicated to chronic total occlusion (CTO) and to establish a model for predicting technical failure. During the last decade, interest in percutaneous coronary interventions (PCIs) of chronic total occlusions (CTOs) has increased, leading to improved success rates. One thousand nineteen patients with CTO underwent 1,073 CTO procedures performed by a single CTO-dedicated operator. The study population was subdivided into 2 groups by time period: period 1 (January 2005 to December 2009, n = 378) and period 2 (January 2010 to December 2014, n = 641). Observations were randomly assigned to a derivation set and a validation set (in a 2:1 ratio). A prediction score was established by assigning points for each independent predictor of technical failure in the derivation set according to the beta coefficient and summing all points accrued. Lesions attempted in period 2 were more complex than those in period 1. Compared with period 1, both technical and clinical success rates significantly improved (from 87.8% to 94.4% [p = 0.001] and from 77.6% to 89.9% [p < 0.001], respectively). A prediction score for technical failure including age ≥75 years (1 point), ostial location (1 point), and collateral filling Rentrop grade <2 (2 points) was established, stratifying procedures into 4 difficulty groups: easy (0), intermediate (1), difficult (2), and very difficult (3 or 4), with decreasing technical success rates. In the derivation and validation sets, areas under the curve were comparable (0.728 and 0.772, respectively). With growing expertise, the success rate has increased despite the increasing complexity of attempted lesions. The established model predicted the probability of technical failure and thus might be applied to grading the difficulty of CTO procedures. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
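    The scoring rule reported in the abstract is simple enough to restate directly. The cut-offs and point values come from the abstract; the function itself is only an illustration:

```python
def cto_failure_score(age, ostial, rentrop_grade):
    """Technical-failure score from the abstract: age >= 75 years (1 pt),
    ostial CTO location (1 pt), collateral filling Rentrop grade < 2 (2 pts)."""
    score = 0
    if age >= 75:
        score += 1
    if ostial:
        score += 1
    if rentrop_grade < 2:
        score += 2
    difficulty = {0: "easy", 1: "intermediate", 2: "difficult",
                  3: "very difficult", 4: "very difficult"}[score]
    return score, difficulty

print(cto_failure_score(80, True, 1))   # -> (4, 'very difficult')
print(cto_failure_score(60, False, 3))  # -> (0, 'easy')
```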

  4. Metformin improves cardiac function in mice with heart failure after myocardial infarction by regulating mitochondrial energy metabolism.

    PubMed

    Sun, Dan; Yang, Fei

    2017-04-29

    To investigate whether metformin can improve cardiac function by improving mitochondrial function in a model of heart failure after myocardial infarction. Male C57/BL6 mice aged about 8 weeks were selected, and the anterior descending branch was ligated to establish the heart failure model after myocardial infarction. Cardiac function was evaluated via ultrasound after 3 days to confirm that modeling was successful, and the mice were randomly divided into two groups: a saline group (Saline) that received intragastric normal saline for 4 weeks, and a metformin group (Met) that received intragastric metformin for 4 weeks. A sham-operated group (Sham) was also set up. Changes in cardiac function were detected at 4 weeks after operation. Hearts were taken from the mice after 4 weeks, and cell apoptosis in myocardial tissue was detected using the TUNEL method; fresh mitochondria were isolated, and changes in the oxygen consumption rate (OCR) and respiratory control rate (RCR) of mitochondria in each group were detected using a bio-energy metabolism tester; change in the mitochondrial membrane potential (MMP) of myocardial tissue was detected via JC-1 staining; and the expressions of Bcl-2, Bax, Sirt3, PGC-1α and acetylated PGC-1α in myocardial tissue were detected by Western blot. RT-PCR was used to detect Sirt3 mRNA levels in myocardial tissue. Metformin improved the systolic function of the heart failure model mice after myocardial infarction and reduced the apoptosis of myocardial cells. Myocardial mitochondrial respiratory function and membrane potential were decreased after myocardial infarction, and metformin treatment significantly improved both. Metformin up-regulated the expression of Sirt3 and the activity of PGC-1α in myocardial tissue in heart failure after myocardial infarction. Metformin decreases the acetylation level of PGC-1α through up-regulating Sirt3, mitigates the damage to the mitochondrial membrane potential in this model of heart failure after myocardial infarction and improves the respiratory function of mitochondria, thus improving the cardiac function of mice. Copyright © 2017. Published by Elsevier Inc.

  5. Analysis of risk factors for central venous port failure in cancer patients

    PubMed Central

    Hsieh, Ching-Chuan; Weng, Hsu-Huei; Huang, Wen-Shih; Wang, Wen-Ke; Kao, Chiung-Lun; Lu, Ming-Shian; Wang, Chia-Siu

    2009-01-01

    AIM: To analyze the risk factors for central port failure in cancer patients administered chemotherapy, using univariate and multivariate analyses. METHODS: A total of 1348 totally implantable venous access devices (TIVADs) were implanted into 1280 cancer patients in this cohort study. A Cox proportional hazard model was applied to analyze risk factors for failure of TIVADs. Log-rank test was used to compare actuarial survival rates. Infection, thrombosis, and surgical complication rates (χ2 test or Fisher’s exact test) were compared in relation to the risk factors. RESULTS: Increasing age, male gender and open-ended catheter use were significant risk factors reducing survival of TIVADs as determined by univariate and multivariate analyses. Hematogenous malignancy decreased the survival time of TIVADs; this reduction was not statistically significant by univariate analysis [hazard ratio (HR) = 1.336, 95% CI: 0.966-1.849, P = 0.080)]. However, it became a significant risk factor by multivariate analysis (HR = 1.499, 95% CI: 1.079-2.083, P = 0.016) when correlated with variables of age, sex and catheter type. Close-ended (Groshong) catheters had a lower thrombosis rate than open-ended catheters (2.5% vs 5%, P = 0.015). Hematogenous malignancy had higher infection rates than solid malignancy (10.5% vs 2.5%, P < 0.001). CONCLUSION: Increasing age, male gender, open-ended catheters and hematogenous malignancy were risk factors for TIVAD failure. Close-ended catheters had lower thrombosis rates and hematogenous malignancy had higher infection rates. PMID:19787834
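    The actuarial survival comparison behind the log-rank test rests on product-limit (Kaplan-Meier) estimates. A minimal stdlib sketch; the device follow-up data below are invented, not from the study:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    events: 1 = device failure, 0 = censored (still working at last follow-up)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # failures at time t
        removed = sum(1 for tt, _ in data if tt == t)  # failures + censored at t
        if d > 0:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= removed
        i += removed
    return curve

# Four devices: failures at months 1, 2, 3; one censored at month 2.
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```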

  6. Relationship between sponsorship and failure rate of dental implants: a systematic approach.

    PubMed

    Popelut, Antoine; Valet, Fabien; Fromentin, Olivier; Thomas, Aurélie; Bouchard, Philippe

    2010-04-21

    The number of dental implant treatments increases annually. Dental implants are manufactured by competing companies. Systematic reviews and meta-analyses have shown a clear association between pharmaceutical industry funding of clinical trials and pro-industry results. So far, the impact of industry sponsorship on the outcomes and conclusions of dental implant clinical trials has never been explored. The aim of the present study was to examine financial sponsorship of dental implant trials, and to evaluate whether research funding sources may affect the annual failure rate. A systematic approach was used to identify systematic reviews published between January 1993 and December 2008 that specifically deal with the length of survival of dental implants. Primary articles were extracted from these reviews. The failure rate of the dental implants included in the trials was calculated. Data on publication year, Impact Factor, prosthetic design, periodontal status reporting, number of dental implants included in the trials, methodological quality of the studies, presence of a statistical advisor, and financial sponsorship were extracted by two independent reviewers (kappa = 0.90; CI(95%) [0.77-1.00]). Univariate quasi-Poisson regression models and multivariate analysis were used to identify variables that were significantly associated with failure rates. Five systematic reviews were identified, from which 41 analyzable trials were extracted. The mean annual failure rate estimate was 1.09% (CI(95%) [0.84-1.42]). The funding source was not reported in 63% of the trials (26/41). Sixty-six percent of the trials were considered as having a risk of bias (27/41). Given study age, both industry-associated trials (OR = 0.21; CI(95%) [0.12-0.38]) and trials with an unknown funding source (OR = 0.33; CI(95%) [0.21-0.51]) had lower annual failure rates compared with non-industry-associated trials. A conflict of interest statement was disclosed in 2 trials. When controlling for other factors, the probability of annual failure for industry-associated trials is significantly lower compared with non-industry-associated trials. This bias may have significant implications for tooth extraction decision making, research on tooth preservation, and governmental health care policies.

  7. The effects of heart rate control in chronic heart failure with reduced ejection fraction.

    PubMed

    Grande, Dario; Iacoviello, Massimo; Aspromonte, Nadia

    2018-07-01

    Elevated heart rate has been associated with worse prognosis both in the general population and in patients with heart failure. Heart rate is finely modulated by neurohormonal signals and it reflects the balance between the sympathetic and the parasympathetic limbs of the autonomic nervous system. For this reason, elevated heart rate in heart failure has been considered an epiphenomenon of the sympathetic hyperactivation during heart failure. However, experimental and clinical evidence suggests that high heart rate could have a direct pathogenetic role. Consequently, heart rate might act as a pathophysiological mediator of heart failure as well as a marker of adverse outcome. This hypothesis has been supported by the observation that the positive effect of beta-blockade could be linked to the degree of heart rate reduction. In addition, the selective heart rate control with ivabradine has recently been demonstrated to be beneficial in patients with heart failure and left ventricular systolic dysfunction. The objective of this review is to examine the pathophysiological implications of elevated heart rate in chronic heart failure and explore the mechanisms underlying the effects of pharmacological heart rate control.

  8. Conduit Stability and Collapse in Explosive Volcanic Eruptions: Coupling Conduit Flow and Failure Models

    NASA Astrophysics Data System (ADS)

    Mullet, B.; Segall, P.

    2017-12-01

    Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux. Abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two-phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First-order models of volatile exsolution (i.e. phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth-dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining the conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively, leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized and possible implications for conduit flow are discussed, including cessation of eruption.
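    A minimal version of the wellbore-style stability check: the Kirsch solution for an isotropic far-field stress gives the stresses at the wall of a cylindrical hole, and Mohr-Coulomb theory gives the failure condition. All numbers below are illustrative assumptions (stresses in MPa), not values from the study:

```python
import math

def conduit_wall_fails(sigma_far, p_conduit, cohesion, friction_deg):
    """Mohr-Coulomb failure check at the wall of a cylindrical conduit
    (Kirsch solution, isotropic far-field stress, total stresses)."""
    s_hoop = 2.0 * sigma_far - p_conduit  # circumferential stress at the wall
    s_rad = p_conduit                     # radial stress at the wall
    s1, s3 = max(s_hoop, s_rad), min(s_hoop, s_rad)
    q = math.tan(math.radians(45.0 + friction_deg / 2.0)) ** 2
    ucs = 2.0 * cohesion * math.sqrt(q)   # unconfined compressive strength
    return s1 >= ucs + q * s3

# Co-eruptive pressure loss: same far-field stress, falling conduit pressure.
print(conduit_wall_fails(30.0, 20.0, 10.0, 30.0))  # stable
print(conduit_wall_fails(30.0, 5.0, 10.0, 30.0))   # wall failure
```

    With these numbers, dropping the conduit pressure from 20 MPa to 5 MPa raises the hoop stress and lowers the confining radial stress enough to violate the criterion, mirroring the co-eruptive pressure-loss mechanism described above.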

  9. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported when such models are applied prospectively. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
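    As a toy illustration of the graphical idea (the structure and all conditional probabilities below are invented, not the paper's learned network), an unmeasured biological node can be marginalized out of a small network by enumeration:

```python
def p_failure_given_dose(dose_high, p_marker_high=0.4):
    """P(local failure | dose) in a toy network dose -> failure <- biomarker,
    with the unobserved biomarker marginalized out:
    P(F | D) = sum_m P(F | D, m) * P(m)."""
    cpt = {  # P(failure = 1 | dose, marker) -- invented values
        (True, True): 0.45, (True, False): 0.25,
        (False, True): 0.30, (False, False): 0.10,
    }
    return (cpt[(dose_high, True)] * p_marker_high
            + cpt[(dose_high, False)] * (1.0 - p_marker_high))

print(p_failure_given_dose(True), p_failure_given_dose(False))
```

    Real Bayesian network packages perform this same enumeration (or faster message passing) over many nodes; the toy version only shows how physical and biological variables combine into one posterior.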

  10. A new method to estimate location and slip of simulated rock failure events

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Galvan, Boris; Miller, Stephen Andrew

    2015-05-01

    At the laboratory scale, identifying and locating acoustic emissions (AEs) is a common method for short-term prediction of failure in geomaterials. Above-average AE activity typically precedes the failure process and is easily measured. At larger scales, an increase in micro-seismic activity sometimes precedes large earthquakes (e.g. Tohoku, L'Aquila, oceanic transforms) and can be used to assess seismic risk. The goal of this work is to develop a methodology and numerical algorithms for extracting a measurable quantity analogous to AE arising from the solution of equations governing rock deformation. Since there is no physical property quantifying AE that is derivable from the governing equations, an appropriate rock-mechanical analog needs to be found. In this work, we identify a general behavior of the AE generation process preceding rock failure. This behavior includes arbitrary localization of low-magnitude events during the pre-failure stage, followed by an increase in number and amplitude, and finally localization around the incipient failure plane during macroscopic failure. We propose the deviatoric strain rate as the numerical analog that mimics this behavior, and develop two different algorithms designed to detect rapid increases in deviatoric strain using moving averages. The numerical model solves a fully poro-elasto-plastic continuum model and is coupled to a two-phase flow model. We test our model by comparing simulation results with experimental data from drained compression and fluid injection experiments. We find for both cases that the occurrence and amplitude of our AE analog mimic the observed general behavior of the AE generation process. Our technique can be extended to modeling at the field scale, possibly providing a mechanistic basis for seismic hazard assessment from seismicity that occasionally precedes large earthquakes.
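    The detection step reduces to flagging samples where the deviatoric strain rate jumps well above its trailing moving average. A minimal sketch; the window length and threshold factor are arbitrary choices, not the paper's tuned values:

```python
def detect_ae_events(strain_rate, window=5, k=3.0):
    """Flag indices where the deviatoric strain rate exceeds k times its
    trailing moving average -- a stand-in for an acoustic-emission trigger."""
    events = []
    for i in range(window, len(strain_rate)):
        avg = sum(strain_rate[i - window:i]) / window
        if avg > 0 and strain_rate[i] > k * avg:
            events.append(i)
    return events

signal = [1.0] * 10 + [10.0] + [1.0] * 5  # one synthetic burst at index 10
print(detect_ae_events(signal))            # -> [10]
```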

  11. A Percolation Model for Fracking

    NASA Astrophysics Data System (ADS)

    Norris, J. Q.; Turcotte, D. L.; Rundle, J. B.

    2014-12-01

    Developments in fracking technology have enabled the recovery of vast reserves of oil and gas; yet there is very little publicly available scientific research on fracking. Traditional reservoir-simulator models for fracking are computationally expensive and require many hours on a supercomputer to simulate a single fracking treatment. We have developed a computationally inexpensive percolation model for fracking that can be used to understand the processes and risks associated with fracking. In our model, a fluid is injected from a single site and a network of fractures grows from that site. The fracture network grows in bursts: the failure of a relatively strong bond followed by the failure of a series of relatively weak bonds. These bursts display similarities to microseismic events observed during a fracking treatment. The bursts follow a power-law (Gutenberg-Richter) frequency-size distribution and have growth rates similar to observed earthquake moment rates. These are quantifiable features that can be compared to observed microseismicity to help understand the relationship between observed microseismicity and the underlying fracture network.
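    A stripped-down analogue of such a model (our sketch, not the authors' code) grows an invasion-percolation cluster from one injection site, always failing the weakest perimeter bond, and records burst sizes: a burst continues while bonds weaker than the one that opened it keep failing:

```python
import heapq
import random

def fracking_bursts(n=40, steps=600, seed=1):
    """Invasion-percolation sketch on an n x n grid: from a central injection
    site, repeatedly break the weakest available bond; a burst is a run of
    bonds weaker than the bond that opened it."""
    rng = random.Random(seed)
    start = (n // 2, n // 2)
    invaded = {start}
    frontier = []  # heap of (bond strength, target cell)

    def push(cell):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dx, cell[1] + dy)
            if 0 <= nb[0] < n and 0 <= nb[1] < n and nb not in invaded:
                heapq.heappush(frontier, (rng.random(), nb))

    push(start)
    bursts, current, threshold = [], 0, None
    for _ in range(steps):
        strength, cell = heapq.heappop(frontier)
        if cell in invaded:      # stale bond into an already-invaded cell
            continue
        invaded.add(cell)
        push(cell)
        if threshold is None or strength > threshold:
            if current:          # a stronger bond closes the running burst
                bursts.append(current)
            threshold, current = strength, 1
        else:
            current += 1
    if current:
        bursts.append(current)
    return bursts

print(len(fracking_bursts()))  # number of bursts in one seeded run
```

    With many steps, the burst-size distribution from this kind of model is expected to develop the heavy tail that invites comparison with Gutenberg-Richter statistics.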

  12. Semiparametric regression analysis of failure time data with dependent interval censoring.

    PubMed

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment tend to visit physicians more often after treatment than those with better health or immune systems. In this situation, the visiting rate is positively correlated with the risk of failure due to health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated into the Cox model and the proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating the model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a bladder cancer data set is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  13. A Model for Integrating Program Development and Evaluation.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    1998-01-01

    A communication model consisting of input from target audience, program delivery, and outcomes (receivers' perception of message) was applied to an osteoporosis-prevention program for working mothers ages 21 to 45. Due to poor completion rate on evaluation instruments and failure of participants to learn key concepts, the model was used to improve…

  14. Baseline Hemodynamics and Response to Contrast Media During Diagnostic Cardiac Catheterization Predict Adverse Events in Heart Failure Patients.

    PubMed

    Denardo, Scott J; Vock, David M; Schmalfuss, Carsten M; Young, Gregory D; Tcheng, James E; O'Connor, Christopher M

    2016-07-01

    Contrast media administered during cardiac catheterization can affect hemodynamic variables. However, little is documented about the effects of contrast on hemodynamics in heart failure patients or the prognostic value of baseline and changes in hemodynamics for predicting subsequent adverse events. In this prospective study of 150 heart failure patients, we measured hemodynamics at baseline and after administration of iodixanol or iopamidol contrast. One-year Kaplan-Meier estimates of adverse event-free survival (death, heart failure hospitalization, and rehospitalization) were generated, grouping patients by baseline measures of pulmonary capillary wedge pressure (PCWP) and cardiac index (CI), and by changes in those measures after contrast administration. We used Cox proportional hazards modeling to assess sequentially adding baseline PCWP and change in CI to 5 validated risk models (Seattle Heart Failure Score, ESCAPE [Evaluation Study of Congestive Heart Failure and Pulmonary Artery Catheterization Effectiveness], CHARM [Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity], CORONA [Controlled Rosuvastatin Multinational Trial in Heart Failure], and MAGGIC [Meta-Analysis Global Group in Chronic Heart Failure]). Median contrast volume was 109 mL. Both contrast media caused similarly small but statistically significant changes in most hemodynamic variables. There were 39 adverse events (26.0%). Adverse event rates increased using the composite metric of baseline PCWP and change in CI (P<0.01); elevated baseline PCWP and decreased CI after contrast correlated with the poorest prognosis. Adding both baseline PCWP and change in CI to the 5 risk models universally improved their predictive value (P≤0.02). In heart failure patients, the administration of contrast causes small but significant changes in hemodynamics. 
Combining baseline PCWP with the change in CI after contrast predicts adverse events and increases the predictive value of existing models. Patients with elevated baseline PCWP and decreased CI after contrast merit the greatest concern. © 2016 American Heart Association, Inc.

  15. They Say We Suck: The Failure of IPEDS Graduation Rates to Fully Measure Student Success

    ERIC Educational Resources Information Center

    Weber, Jennifer Kathryn

    2017-01-01

    IPEDS graduation rates have become de facto means for higher education accountability in the United States, used by the federal government, state and local agencies, non-profits and media to compare and rank institutions. IPEDS uses a limited subset of students, as well as an institutional perspective to measure graduation rate. Under this model,…

  16. Strain Rate Dependent Material Model for Orthotropic Metals

    NASA Astrophysics Data System (ADS)

    Vignjevic, Rade

    2016-08-01

    In manufacturing processes, anisotropic metals are often exposed to loading at high strain rates, in the range from 10² s⁻¹ to 10⁶ s⁻¹ (e.g. stamping, cold spraying and explosive forming). These types of loading often involve the generation and propagation of shock waves within the material. The material behaviour under such complex loading needs to be accurately modelled in order to optimise the manufacturing process and achieve appropriate properties of the manufactured component. The presented research is related to the development and validation of a thermodynamically consistent, physically based constitutive model for metals under high-rate loading. The model is capable of modelling damage, failure, and the formation and propagation of shock waves in anisotropic metals. The model has two main parts: a strength part, which defines the material response to shear deformation, and an equation of state (EOS), which defines the material response to isotropic volumetric deformation [1]. The constitutive model was implemented into the transient nonlinear finite element code DYNA3D [2] and our in-house SPH code. Limited model validation was performed by simulating a number of high-velocity material characterisation and validation impact tests. The new damage model was developed in the framework of configurational continuum mechanics and irreversible thermodynamics with internal state variables. The use of the multiplicative decomposition of the deformation gradient makes the model applicable to arbitrary plastic and damage deformations. To account for the physical mechanisms of failure, the concept of thermally activated damage initially proposed by Tuler and Butcher [3] and Klepaczko [4] was adopted as the basis for the new damage evolution model. This makes the proposed damage/failure model compatible with the Mechanical Threshold Strength (MTS) model (Follansbee and Kocks [5]; Chen and Gray [6]), which was used to control the evolution of flow stress during plastic deformation.
In addition, the constitutive model is coupled with a vector shock equation of state which allows for modelling of shock wave propagation in the orthotropic material. Parameters for the new constitutive model are typically derived from tensile tests (performed over a range of temperatures and strain rates), plate impact tests and Taylor anvil tests. The model was applied to simulate explosively driven fragmentation, blast loading and cold-spraying impacts.

  17. Modeling the Rate-Dependent Durability of Reduced-Ag SAC Interconnects for Area Array Packages Under Torsion Loads

    NASA Astrophysics Data System (ADS)

    Srinivas, Vikram; Menon, Sandeep; Osterman, Michael; Pecht, Michael G.

    2013-08-01

    Solder durability models frequently focus on the applied strain range; however, the rate of applied loading, or strain rate, is also important. In this study, an approach to incorporate strain rate dependency into durability estimation for solder interconnects is examined. Failure data were collected for SAC105 solder ball grid arrays assembled with SAC305 solder that were subjected to displacement-controlled torsion loads. Strain-rate-dependent (Johnson-Cook model) and strain-rate-independent elastic-plastic properties were used to model the solders in finite-element simulation. Test data were then used to extract damage model constants for the reduced-Ag SAC solder. A generalized Coffin-Manson damage model was used to estimate the durability. The mechanical fatigue durability curve for reduced-silver SAC solder was generated and compared with durability curves for SAC305 and Sn-Pb from the literature.
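    The two model ingredients named above can be sketched as plain formulas; the constants below are placeholders for illustration, not the constants extracted from the test data:

```python
import math

def johnson_cook_stress(eps_plastic, strain_rate, A=30e6, B=70e6, n=0.3,
                        C=0.02, ref_rate=1.0):
    """Johnson-Cook flow stress (isothermal form): a strain-hardening term
    multiplied by a logarithmic strain-rate term."""
    return (A + B * eps_plastic ** n) * (1.0 + C * math.log(strain_rate / ref_rate))

def cycles_to_failure(strain_range, c0=0.01, m=2.0):
    """Generalized Coffin-Manson durability: N_f = c0 * (delta_eps) ** (-m)."""
    return c0 * strain_range ** (-m)

print(johnson_cook_stress(0.1, 10.0) > johnson_cook_stress(0.1, 1.0))  # rate hardening
print(cycles_to_failure(0.01))  # cycles for a 1% applied strain range
```

    In the study's workflow, the finite-element simulation supplies the strain range seen by the critical solder joint (with the Johnson-Cook model capturing the rate dependence), and the Coffin-Manson fit then converts that strain range into a durability estimate.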

  18. Revisiting the stability of mini-implants used for orthodontic anchorage.

    PubMed

    Yao, Chung-Chen Jane; Chang, Hao-Hueng; Chang, Jenny Zwei-Chieng; Lai, Hsiang-Hua; Lu, Shao-Chun; Chen, Yi-Jane

    2015-11-01

    The aim of this study is to comprehensively analyze the potential factors affecting the failure rates of three types of mini-implants used for orthodontic anchorage. Data were collected on 727 mini-implants (miniplates, predrilled titanium miniscrews, and self-drilling stainless steel miniscrews) in 220 patients. The factors related to mini-implant failure were investigated using a Chi-square test for univariate analysis and a generalized estimating equation model for multivariate analysis. The failure rate for miniplates was significantly lower than for miniscrews. All types of mini-implants, especially the self-drilling stainless steel miniscrews, showed decreased stability if the previous implantation had failed. The stability of predrilled titanium miniscrews and self-drilling stainless steel miniscrews were comparable at the first implantation. However, the failure rate of stainless steel miniscrews increased at the second implantation. The univariate analysis showed that the following variables had a significant influence on the failure rates of mini-implants: age of patient, type of mini-implant, site of implantation, and characteristics of the soft tissue around the mini-implants. The generalized estimating equation analysis revealed that mini-implants with miniscrews used in patients younger than 35 years, subjected to orthodontic loading after 30 days and implanted on the alveolar bone ridge, have a significantly higher risk of failure. This study revealed that once the dental surgeon becomes familiar with the procedure, the stability of orthodontic mini-implants depends on the type of mini-implant, age of the patient, implantation site, and the healing time of the mini-implant. Miniplates are a more feasible anchorage system when miniscrews fail repeatedly. Copyright © 2014. Published by Elsevier B.V.

  19. Clinical effectiveness of hymenoptera venom immunotherapy: a prospective observational multicenter study of the European academy of allergology and clinical immunology interest group on insect venom hypersensitivity.

    PubMed

    Ruëff, Franziska; Przybilla, Bernhard; Biló, Maria Beatrice; Müller, Ulrich; Scheipl, Fabian; Seitz, Michael J; Aberer, Werner; Bodzenta-Lukaszyk, Anna; Bonifazi, Floriano; Campi, Paolo; Darsow, Ulf; Haeberli, Gabrielle; Hawranek, Thomas; Küchenhoff, Helmut; Lang, Roland; Quercia, Oliviero; Reider, Norbert; Schmid-Grendelmeier, Peter; Severino, Maurizio; Sturm, Gunter Johannes; Treudler, Regina; Wüthrich, Brunello

    2013-01-01

    Treatment failure during venom immunotherapy (VIT) may be associated with a variety of risk factors. Our aim was to evaluate the association of baseline serum tryptase concentration (BTC) and of other parameters with the frequency of VIT failure during the maintenance phase. In this observational prospective multicenter study, we followed 357 patients with established honey bee or vespid venom allergy after the maintenance dose of VIT had been reached. In all patients, VIT effectiveness was either verified by sting challenge (n = 154) or patient self-reporting of the outcome of a field sting (n = 203). Data were collected on BTC, age, gender, preventive use of anti-allergic drugs (oral antihistamines and/or corticosteroids) right after a field sting, venom dose, antihypertensive medication, type of venom, side effects during VIT, severity of index sting reaction preceding VIT, and duration of VIT. Relative rates were calculated with generalized additive models. 22 patients (6.2%) developed generalized symptoms during sting challenge or after a field sting. A strong association between the frequency of VIT failure and BTC could be excluded. Due to wide confidence bands, however, weaker effects (odds ratios <3) of BTC were still possible, and were also suggested by a selective analysis of patients who had a sting challenge. The most important factor associated with VIT failure was a honey bee venom allergy. Preventive use of anti-allergic drugs may be associated with a higher protection rate. It is unlikely that an elevated BTC has a strong negative effect on the rate of treatment failures. The magnitude of the latter, however, may depend on the method of effectiveness assessment. Failure rate is higher in patients suffering from bee venom allergy.

  20. Pharmacokinetic-Pharmacodynamic Modeling of Unboosted Atazanavir in a Cohort of Stable HIV-Infected Patients

    PubMed Central

    Baudry, Thomas; Gagnieu, Marie-Claude; Boibieux, André; Livrozet, Jean-Michel; Peyramond, Dominique; Tod, Michel; Ferry, Tristan

    2013-01-01

    Limited data on the pharmacokinetics and pharmacodynamics (PK/PD) of unboosted atazanavir (uATV) in treatment-experienced patients are available. The aim of this work was to study the PK/PD of unboosted atazanavir in a cohort of HIV-infected patients. Data were available for 58 HIV-infected patients (69 uATV-based regimens). Atazanavir concentrations were analyzed by using a population approach, and the relationship between atazanavir PK and clinical outcome was examined using logistic regression. The final PK model was a linear one-compartment model with a mixture absorption model to account for two subgroups of absorbers. The mean (interindividual variability) values of the population PK parameters were as follows: clearance, 13.4 liters/h (40.7%); volume of distribution, 71.1 liters (29.7%); and fraction of regular absorbers, 0.49. Seven subjects experienced virological failure after the switch to uATV. All of them were identified as low absorbers in the PK modeling. The absorption rate constant (0.38 ± 0.20 versus 0.75 ± 0.28 h−1; P = 0.002) and ATV exposure (area under the concentration-time curve from 0 to 24 h [AUC0–24], 10.3 ± 2.1 versus 22.4 ± 11.2 mg · h · liter−1; P = 0.001) were significantly lower in patients with virological failure than in patients without failure. In the logistic regression analysis, both the absorption rate constant and the ATV trough concentration significantly influenced the probability of virological failure. A significant relationship between ATV pharmacokinetics and virological response was observed in a cohort of HIV patients who were administered unboosted atazanavir. This study also suggests that twice-daily administration of uATV may optimize drug therapy. PMID:23147727
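    The one-compartment, first-order-absorption disposition described above can be sketched in a few lines of Python using the reported population means (CL = 13.4 liters/h, V = 71.1 liters) and the two reported absorption rate constants (0.38 vs. 0.75 h^-1). The 400 mg dose and complete bioavailability (F = 1) are illustrative assumptions, not study values.

```python
import math

def concentration(t, dose, ka, cl, v, f=1.0):
    """One-compartment model with first-order absorption and elimination:
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V."""
    ke = cl / v
    return f * dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def t_max(ka, cl, v):
    """Time of peak concentration: Tmax = ln(ka/ke) / (ka - ke)."""
    ke = cl / v
    return math.log(ka / ke) / (ka - ke)

# Population means from the abstract; dose and F are assumed for illustration.
CL, V, DOSE = 13.4, 71.1, 400.0
tmax_slow = t_max(ka=0.38, cl=CL, v=V)   # "low absorber" subgroup
tmax_fast = t_max(ka=0.75, cl=CL, v=V)   # "regular absorber" subgroup
```

    In this simple model a smaller ka delays the peak and lowers Cmax while leaving the single-dose AUC(0-inf) = F*D/CL unchanged, so under these assumptions the markedly lower AUC0-24 reported for the failing patients cannot reflect absorption speed alone.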

  1. Risk stratification in middle-aged patients with congestive heart failure: prospective comparison of the Heart Failure Survival Score (HFSS) and a simplified two-variable model.

    PubMed

    Zugck, C; Krüger, C; Kell, R; Körber, S; Schellberg, D; Kübler, W; Haass, M

    2001-10-01

    The performance of a US-American scoring system (Heart Failure Survival Score, HFSS) was prospectively evaluated in a sample of ambulatory patients with congestive heart failure (CHF). Additionally, it was investigated whether the HFSS might be simplified by assessment of the distance ambulated during a 6-min walk test (6'WT) instead of determination of peak oxygen uptake (peak VO(2)). In 208 middle-aged CHF patients (age 54+/-10 years, 82% male, NYHA class 2.3+/-0.7; follow-up 28+/-14 months), the seven variables of the HFSS (CHF aetiology, heart rate, mean arterial pressure, serum sodium concentration, intraventricular conduction time, left ventricular ejection fraction (LVEF), and peak VO(2)) were determined. Additionally, a 6'WT was performed. The HFSS allowed discrimination between patients at low, medium and high risk, with mortality rates of 16, 39 and 50%, respectively. However, the prognostic power of the HFSS was not superior to a two-variable model consisting only of LVEF and peak VO(2). The areas under the receiver operating characteristic curves (AUC) for prediction of 1-year survival were even higher for the two-variable model (0.84 vs. 0.74, P<0.05). Replacing peak VO(2) with 6'WT resulted in a similar AUC (0.83). The HFSS continued to predict survival when applied to this patient sample. However, the HFSS was inferior to a two-variable model containing only LVEF and either peak VO(2) or 6'WT. As the 6'WT requires no sophisticated equipment, a simplified two-variable model containing only LVEF and 6'WT may be more widely applicable, and is therefore recommended.
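    The AUC figures above can be reproduced from raw scores via the Mann-Whitney interpretation of the area under the ROC curve: the probability that a randomly chosen patient who died within a year received a higher risk score than a randomly chosen survivor. A minimal sketch, with invented scores rather than the study's data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs the score orders correctly, ties counted
    as half.  labels: 1 = event (death within 1 year), 0 = survivor."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores (e.g. from a two-variable LVEF + 6'WT model)
# and 1-year outcomes; purely illustrative, not study data.
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   0,   1,   1,   0,   0]
auc = roc_auc(scores, labels)
```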

  2. Modelling of photodegradation in solar cell modules of substrate and superstrate design made with ethylene-vinyl acetate as pottant material

    NASA Technical Reports Server (NTRS)

    Somersall, A. C.; Guillet, J. E.

    1982-01-01

    A computer model was developed which simulates, in principle, the chemical changes in the photooxidation of hydrocarbons, using as input data a set of elementary reactions, the corresponding kinetic rate data, and appropriate initial conditions. The model was refined and exploited to examine more closely the photooxidation and photostabilization of a hydrocarbon polymer. The results lead to the following observations. (1) The time to failure, tau(f) (chosen as the level of 5% C-H bond oxidation, which is within the range anticipated for marked change in mechanical properties), varies as the inverse square root of the light intensity. However, tau(f) is almost unaffected by both the photoinitiator type and concentration. (2) The time to failure decreases with the rate of abstraction of C-H by peroxy radicals but increases with the rate of bimolecular radical termination controlled by diffusion. (3) Of the various stabilization mechanisms considered, the trapping of peroxy radicals is distinctly the most effective, although the concomitant decomposition of hydroperoxide is also desirable.

  3. Modeling fault diagnosis as the activation and use of a frame system. [for pilot problem-solving rating

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Giffin, Walter C.; Rockwell, Thomas H.; Thomas, Mark

    1986-01-01

    Twenty pilots with instrument flight ratings were asked to perform a fault-diagnosis task for which they had relevant domain knowledge. The pilots were asked to think out loud as they requested and interpreted information. Performances were then modeled as the activation and use of a frame system. Cognitive biases, memory distortions and losses, and failures to correctly diagnose the problem were studied in the context of this frame system model.

  4. Numerical Modelling of Glass Fibre Reinforced Laminates Subjected to a Low Velocity Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J. Y.; Guana, Z. W.; Cantwell, W. J.

    2010-05-21

    This paper presents a series of numerical predictions of the perforation behaviour of glass fibre laminates subjected to quasi-static and low-velocity impact loading. Both shear and tensile failure criteria were used in the finite element models to simulate the post-failure processes via an automatic element removal procedure. The appropriate material properties, obtained through a series of uniaxial tension and bending tests on the composites, were used in the numerical models. Four, eight and sixteen ply glass fibre laminate panels were perforated at quasi-static rates and under low-velocity impact loading. Reasonably good correlation was obtained between the numerical simulations and the experimental results, both in terms of the failure modes and the load-deflection relationships before and during the penetration phase. The predicted impact energies of the GFRP panels were compared with the experimental data and reasonable agreement was observed.

  5. Numerical Simulation and Experimental Verification of Hollow and Foam-Filled Flax-Fabric-Reinforced Epoxy Tubular Energy Absorbers Subjected to Crashing

    NASA Astrophysics Data System (ADS)

    Sliseris, J.; Yan, L.; Kasal, B.

    2017-09-01

    Numerical methods for simulating hollow and foam-filled flax-fabric-reinforced epoxy tubular energy absorbers subjected to lateral crashing are presented. The crashing characteristics, such as the progressive failure, load-displacement response, absorbed energy, peak load, and failure modes, of the tubes were simulated and calculated numerically. A 3D nonlinear finite-element model that allows for the plasticity of materials using an isotropic hardening model with strain rate dependence and failure is proposed. An explicit finite-element solver is used to address the lateral crashing of the tubes considering large displacements and strains, plasticity, and damage. The experimental nonlinear crashing load vs. displacement data are successfully described by using the finite-element model proposed. The simulated peak loads and absorbed energy of the tubes are also in good agreement with experimental results.

  6. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents Evolutionary Programming (EP), proposed to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of the probability of hidden failure model based on historical data. The training parameters of the multilayer feed-forward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system has been tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
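    The optimization loop can be sketched as a minimal evolutionary-programming routine: Gaussian mutation of real-valued training parameters followed by truncation selection on the objective. The quadratic surrogate standing in for the network's training MSE is a placeholder assumption; a real run would train the ANN and return its validation MSE.

```python
import random

def evolutionary_programming(objective, init, sigma=0.1, pop=20, gens=50, seed=0):
    """Minimal EP: each generation, every individual spawns one Gaussian-mutated
    offspring, and the best `pop` of parents + offspring survive (elitist)."""
    rng = random.Random(seed)
    population = [[g + rng.gauss(0, sigma) for g in init] for _ in range(pop)]
    for _ in range(gens):
        offspring = [[g + rng.gauss(0, sigma) for g in parent]
                     for parent in population]
        population = sorted(population + offspring, key=objective)[:pop]
    return population[0]

# Placeholder for the training MSE as a function of (learning rate, momentum
# rate); the optimum (0.05, 0.9) is arbitrary, chosen only for illustration.
surrogate_mse = lambda p: (p[0] - 0.05) ** 2 + (p[1] - 0.9) ** 2
best = evolutionary_programming(surrogate_mse, init=[0.5, 0.5])
```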

  7. Long-term administration of tolvaptan increases myocardial remodeling and mortality via exacerbation of congestion in mice heart failure model after myocardial infarction.

    PubMed

    Eguchi, Akiyo; Iwasaku, Toshihiro; Okuhara, Yoshitaka; Naito, Yoshiro; Mano, Toshiaki; Masuyama, Tohru; Hirotani, Shinichi

    2016-10-15

    In contrast to loop diuretics, tolvaptan does not cause neurohormonal activation in several animal heart failure models. However, it remains unknown whether chronic vasopressin type 2 receptor blockade exerts beneficial effects on mortality in murine heart failure after myocardial infarction (MI). In an experimental heart failure model, we tested the hypothesis that tolvaptan reduces myocardial remodeling and mortality. MI was induced in 9-week-old male C57Bl6/J mice by ligation of the left coronary artery. In study 1, animals were randomly assigned to treatment with placebo or tolvaptan starting 14 days post-MI. In study 2, animals were randomized to tolvaptan or furosemide + tolvaptan starting 14 days post-MI. Interestingly, the results showed a lower survival rate in the tolvaptan group compared to placebo. The tolvaptan group had higher serum osmolality, heavier body weight, more severe myocardial remodeling, and more lung congestion at day 28 of drug administration compared to placebo. In study 2, the addition of furosemide significantly reduced the mortality rate seen with tolvaptan, with decreased osmolality, myocardial remodeling, and lung congestion compared to tolvaptan-treated mice. The increases in proximal tubular expression of aquaporin 1, angiotensin II, and vasopressin seen with tolvaptan treatment were normalized to basal levels, similar to levels in placebo-treated mice. Contrary to our hypothesis, tolvaptan was associated with increased mortality in murine heart failure after MI. The increases in lung congestion and myocardial remodeling could be prevented by co-administration of furosemide, which normalized serum osmolality, neurohormonal activation, and renal aquaporin 1 expression, and hence decreased mortality post-MI. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Using Roadside Billboard Posters to Increase Admission Rates to Problem Gambling Services: Reflections on Failure.

    PubMed

    Calderwood, Kimberly A; Wellington, William J

    2015-07-01

    Based on the stimulus-response model of advertising, this study sought to increase admission rates to a local problem gambling service (PGS) in Windsor, Ontario, Canada, by adding a series of locally based 10 foot by 20 foot roadside billboard posters to PGS's existing communications tools for a 24-week period. Using proof of performance reports, a pre-post survey of new callers to PGS, a website visit counter, and a media awareness survey, the findings showed that at least some individuals were influenced by billboard exposure, but admission rates continued to decline during the billboard campaign period. While one possible explanation for the communications failure was that the whole PGS communications campaign was below the minimal threshold for communications perception, another possible explanation is that the stimulus-response model of advertising used may not have been appropriate for such advertising that targets behavior change. Reflections on using an information-processing model instead of a stimulus-response model, and considerations of a two-step flow of communication, are provided. Recommendations are made regarding matching communications messages to stages of behavior change, use of online promotion, and strategies for future research. © 2015 Society for Public Health Education.

  9. LARC-1: a Los Alamos release calculation program for fission product transport in HTGRs during the LOFC accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carruthers, L.M.; Lee, C.E.

    1976-10-01

    The theoretical and numerical data base development of the LARC-1 code is described. Four analytical models of fission product release from an HTGR core during the loss of forced circulation accident are developed. Effects of diffusion, adsorption and evaporation of the metallics and precursors are neglected in this first LARC model. Comparison of the analytic models indicates that the constant release-renormalized model is adequate to describe the processes involved. The numerical data base for release constants, temperature modeling, fission product release rates, coated fuel particle failure fraction and aged coated fuel particle failure fractions is discussed. Analytic fits and graphic displays for these data are given for the Ft. St. Vrain and GASSAR models.

  10. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
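    The report's exact procedures are not reproduced here, but under the usual constant-failure-rate (exponential) assumption the basic quantities look like the following sketch; the counts and exposure times are invented for illustration.

```python
import math

def failure_rate(failures, exposure):
    """Maximum-likelihood estimate of a constant failure rate:
    observed failures divided by total operating (test) time."""
    return failures / exposure

def rate_confidence_interval(failures, exposure, z=1.96):
    """Approximate 95% CI on the log scale; for Poisson-distributed counts,
    Var(log(rate_hat)) is roughly 1/failures."""
    rate = failures / exposure
    half_width = z / math.sqrt(failures)
    return rate * math.exp(-half_width), rate * math.exp(half_width)

def rate_ratio(f_a, t_a, f_b, t_b):
    """Ratio of two groups' estimated failure rates; a value near 1
    supports the hypothesis that the groups perform similarly."""
    return (f_a / t_a) / (f_b / t_b)

# Hypothetical thermal-vacuum test data: group A, 12 failures in 30000
# component-hours; group B, 5 failures in 25000 component-hours.
lo, hi = rate_confidence_interval(12, 30000)
ratio = rate_ratio(12, 30000, 5, 25000)
```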

  11. California State University, Northridge: Hybrid Lab Courses

    ERIC Educational Resources Information Center

    EDUCAUSE, 2014

    2014-01-01

    California State University, Northridge's Hybrid Lab course model targets high failure rate, multisection, gateway courses in which prerequisite knowledge is a key to success. The Hybrid Lab course model components incorporate interventions and practices that have proven successful at CSUN and other campuses in supporting students, particularly…

  12. Improved Accuracy of Automated Estimation of Cardiac Output Using Circulation Time in Patients with Heart Failure.

    PubMed

    Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi

    2016-11-01

    Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Risk factors for failure of glass fiber-reinforced composite post restorations: a prospective observational clinical study.

    PubMed

    Naumann, Michael; Blankenstein, Felix; Kiessling, Saskia; Dietrich, Thomas

    2005-12-01

    Glass fiber-reinforced endodontic posts are considered to have favorable mechanical properties for the reconstruction of endodontically treated teeth. The aim of the present investigation was to evaluate the survival of two tapered and one parallel-sided glass fiber-reinforced endodontic post systems in teeth with different stages of hard tissue loss and to identify risk factors for restoration failure. One hundred forty-nine glass fiber-reinforced endodontic posts in 122 patients were followed up for 5-56 months [mean +/- standard deviation (SD): 39 +/- 11 months]. Glass fiber-reinforced endodontic posts were adhesively luted and the core was built up with a composite resin. Cox proportional hazards models were used to evaluate the association of clinical variables and failure rate. Higher failure rates were found for restorations of anterior teeth compared with posterior teeth [hazard ratio (HR): 3.1; 95% confidence interval (CI): 1.3-7.4], for restorations in teeth with no proximal contacts compared with at least one proximal contact (HR: 3.0; 95% CI: 1.0-9.0), and for teeth restored with single crowns compared with fixed bridges (HR: 4.3; 95% CI: 1.1-16.2). Tooth type, type of final restoration and the presence of adjacent teeth were found to be significant predictors of failure rates in endodontically treated teeth restored with glass fiber-reinforced endodontic posts.
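    Under the proportional-hazards assumption used above, a hazard ratio translates directly into survival: S_exposed(t) = S_baseline(t)^HR. A small sketch, where the baseline survival of 0.90 is an assumed illustrative figure, not a value from the study:

```python
def survival_under_hazard_ratio(baseline_survival, hazard_ratio):
    """Proportional hazards: h_exposed(t) = HR * h_baseline(t), which
    integrates to S_exposed(t) = S_baseline(t) ** HR."""
    return baseline_survival ** hazard_ratio

# If posterior-tooth restorations had, say, 90% survival at some time t,
# the reported HR of 3.1 for anterior teeth would imply roughly 72%
# survival for anterior restorations at the same time.
anterior = survival_under_hazard_ratio(0.90, 3.1)
```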

  14. Universal avalanche statistics and triggering close to failure in a mean-field model of rheological fracture

    NASA Astrophysics Data System (ADS)

    Baró, Jordi; Davidsen, Jörn

    2018-03-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined common universality classes for stick-slip processes and fracture. However, not all empirical failure processes exhibit the trademarks of criticality. The rheological properties of materials introduce dissipation, usually reproduced in conceptual models as a hardening of the coarse grained elements of the system. Here, we investigate the effects of transient hardening on (i) the activity rate and (ii) the statistical properties of avalanches. We find the explicit representation of transient hardening in the presence of generalized viscoelasticity and solve the corresponding mean-field model of fracture. In the quasistatic limit, the accelerated energy release is invariant with respect to rheology and the avalanche propagation can be reinterpreted in terms of a stochastic counting process. A single universality class can be defined from such analogy, and all statistical properties depend only on the distance to criticality. We also prove that interevent correlations emerge due to the hardening—even in the quasistatic limit—that can be interpreted as "aftershocks" and "foreshocks."

  15. Scalable Failure Masking for Stencil Computations using Ghost Region Expansion and Cell to Rank Remapping

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...

    2017-10-26

    In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.

  17. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and laser tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
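    The ranking logic can be sketched as a composite risk score per QA test. The numeric scores below are invented for illustration; the paper derives occurrence from nine linac-years of QA records and severity from plan-perturbation modeling.

```python
# Invented 1-10 occurrence/severity scores, for illustration only.
qa_tests = {
    "output":                   {"occurrence": 8, "severity": 5},
    "lasers":                   {"occurrence": 7, "severity": 4},
    "imaging_vs_treatment_iso": {"occurrence": 3, "severity": 9},
    "odi_and_jaws_vs_light":    {"occurrence": 2, "severity": 3},
}

def risk_priority_number(scores):
    """Classic FMEA RPN is occurrence x severity x detection; detection is
    omitted here since the QA test itself is the detection step."""
    return scores["occurrence"] * scores["severity"]

# Tests are ranked by RPN; higher-ranked tests warrant more frequent
# performance (e.g. daily rather than weekly or biweekly).
ranked = sorted(qa_tests,
                key=lambda name: risk_priority_number(qa_tests[name]),
                reverse=True)
```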

  18. Parasympathetic Nervous System Reactivity Moderates Associations Between Children's Executive Functioning and Social and Academic Competence.

    PubMed

    McQuade, Julia D; Penzel, Taylor E; Silk, Jennifer S; Lee, Kyung Hwa

    2017-10-01

    This study examined whether children with poor executive functioning (EF) evidenced less social and academic impairments, compared to other children, if they demonstrated adaptive parasympathetic nervous system (PNS) regulation during experiences of failure. Participants with and without clinical elevations in ADHD symptoms (N = 61; 9-13 years; 48% male; 85% Caucasian) were administered a battery of EF tests and completed manipulated social and cognitive failure tasks. While participants completed failure tasks, respiratory sinus arrhythmia reactivity (RSA-R) was measured as an indicator of PNS reactivity. Children's social and academic impairment in daily life was assessed based on parent and teacher report on multiple measures. RSA-R during social failure moderated the association between poor EF and adult-rated social impairment and RSA-R during cognitive failure moderated the association between poor EF and adult-rated academic impairment. Simple effects indicated that poor EF was significantly associated with impairment when children demonstrated RSA activation (increased PNS activity) but not when children demonstrated RSA withdrawal (decreases in PNS activity). Domain-crossed models (e.g., reactivity to social failure predicting academic impairment) were not significant, suggesting that the moderating effect of RSA-R was domain-specific. Results suggest that not all children with poor EF evidence social and academic impairment; RSA withdrawal during experiences of failure may be protective specifically for children with impaired EF skills.

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
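    The "simple and disconnected" exponential model the abstract contrasts against can itself be sketched in a few lines, which makes both its tractability and its limits clear. The component failure rates below are invented for illustration.

```python
import random

def simulated_series_mttf(rates, trials=20000, seed=42):
    """Monte Carlo MTTF of a series system of independent components with
    constant (exponential) failure rates: the system is down as soon as any
    one component fails.  Analytically, MTTF = 1 / sum(rates)."""
    rng = random.Random(seed)
    total_time = 0.0
    for _ in range(trials):
        # Time to first failure = min of per-component exponential lifetimes.
        total_time += min(rng.expovariate(lam) for lam in rates)
    return total_time / trials

# Hypothetical rates (failures per hour), e.g. a controller and an enclosure.
rates = [1 / 1000.0, 1 / 2000.0]
mttf = simulated_series_mttf(rates)
```

    Note what the closed form cannot express: correlated failures, repair queues, and failure propagation between subsystems, which is exactly the gap the proposed simulation framework targets.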

  20. A Comparative Study of Defibrillator Leads at a Large-Volume Implanting Hospital: Results From the Pacemaker and Implantable Defibrillator Leads Survival Study ("PAIDLESS").

    PubMed

    Cohen, Todd J; Asheld, Wilbur J; Germano, Joseph; Islam, Shahidul; Patel, Dhimesh

    2015-06-01

    The purpose of the study was to examine survival in the implantable defibrillator subset of implanted leads at a large-volume implanting hospital. Implantable lead survival has been the subject of many multicenter studies over the past decade. Fewer large implanting volume single-hospital studies have examined defibrillator lead failure as it relates to patient survival and lead construction. This investigator-initiated retrospective study examined defibrillator lead failure in those who underwent implantation of a defibrillator between February 1, 1996 and December 31, 2011. Lead failure was defined as: failure to capture/sense, abnormal pacing and/or defibrillator impedance, visual insulation defect or lead fracture, extracardiac stimulation, cardiac perforation, tricuspid valve entrapment, lead tip fracture and/or lead dislodgment. Patient characteristics, implant approach, lead manufacturers, lead models, recalled status, patient mortality, and core lead design elements were compared using methods that include Kaplan Meier analysis, univariate and multivariable Cox regression models. A total of 4078 defibrillator leads were implanted in 3802 patients (74% male; n = 2812) with a mean age of 70 ± 13 years at Winthrop University Hospital. Lead manufacturers included: Medtronic: [n = 1834; 801 recalled]; St. Jude Medical: [n = 1707; 703 recalled]; Boston Scientific: [n = 537; 0 recalled]. Kaplan-Meier analysis adjusted for multiple comparisons revealed that both Boston Scientific's and St. Jude Medical's leads had better survival than Medtronic's leads (P<.001 and P=.01, respectively). Lead survival was comparable between Boston Scientific and St. Jude Medical (P=.80). A total of 153 leads failed (3.5% of all leads) during the study. There were 99 lead failures from Medtronic (5.4% failure rate); 56 were recalled Sprint Fidelis leads. There were 36 lead failures from St. Jude (2.1% failure rate); 20 were recalled Riata or Riata ST leads. 
There were 18 lead failures from Boston Scientific (3.35% failure rate); none were recalled. Kaplan-Meier analysis also showed that lead failure occurred sooner in the recalled leads (P=.01). A total of 1493 patients died during the study (mechanism of death was largely unknown). There was a significant increase in mortality in the recalled lead group as compared with non-recalled leads (P=.01), but no significant difference in survival when comparing recalled leads from Medtronic with St. Jude Medical (P=.67). A multivariable Cox regression model revealed that younger age, history of percutaneous coronary intervention, baseline rhythm other than atrial fibrillation or atrial flutter, combination polyurethane and silicone lead insulation, a second defibrillation coil, and recalled lead status all contributed to lead failure. This study demonstrated significantly improved lead performance in the Boston Scientific and St. Jude leads as compared with Medtronic leads. Some lead construction variables (insulation and number of coils) also had a significant impact on lead failure, which was independent of the manufacturer. Recalled St. Jude leads performed better than recalled Medtronic leads in our study. Recalled St. Jude leads had no significant difference in lead failure when compared with the other manufacturers' non-recalled leads. Defibrillator recalled lead status was associated with increased mortality as compared with non-recalled leads. This correlation was independent of the lead manufacturer and clinically significant even when considering known mortality risk factors. These results must be tempered by the largely unknown mechanism of death in these patients.
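    The survival comparisons above rest on the Kaplan-Meier product-limit estimator. A minimal sketch of the estimator, using hypothetical follow-up times rather than the study's actual lead records:

```python
# Minimal Kaplan-Meier survival estimator (illustrative; the follow-up
# data below are hypothetical, not the study's lead-failure records).

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = lead failure, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, s = [], 1.0
    for i in order:
        if events[i]:                      # failure observed at this time
            s *= (at_risk - 1) / at_risk   # multiply conditional survival
            survival.append((times[i], s))
        at_risk -= 1                       # failures and censorings both leave the risk set
    return survival

# Hypothetical follow-up: 3 failures among 6 leads
curve = kaplan_meier([1.0, 2.5, 3.0, 4.0, 5.5, 6.0],
                     [1,   0,   1,   0,   1,   0])
for t, s in curve:
    print(f"t={t:.1f}y  S(t)={s:.3f}")
```

    Censored leads (still functioning at last follow-up) contribute to the risk set until they drop out, which is what distinguishes this estimate from a naive failure fraction.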

  1. Pharmacological heart rate lowering in patients with a preserved ejection fraction-review of a failing concept.

    PubMed

    Meyer, Markus; Rambod, Mehdi; LeWinter, Martin

    2018-07-01

    Epidemiological studies have demonstrated that high resting heart rates are associated with increased mortality. Clinical studies in patients with heart failure and reduced ejection fraction have shown that heart rate lowering with beta-blockers and ivabradine improves survival. It is therefore often assumed that heart rate lowering is beneficial in other patients as well. Here, we critically appraise the effects of pharmacological heart rate lowering in patients with both normal and reduced ejection fraction, with an emphasis on the effects of pharmacological heart rate lowering in hypertension and heart failure. Emerging evidence from recent clinical trials and meta-analyses suggests that pharmacological heart rate lowering is not beneficial in patients with a normal or preserved ejection fraction. This has just begun to be reflected in some but not all guideline recommendations. The detrimental effects of pharmacological heart rate lowering are due to an increase in central blood pressures, higher left ventricular systolic and diastolic pressures, and increased ventricular wall stress. Therefore, we propose that heart rate lowering per se reproduces the hemodynamic effects of diastolic dysfunction and imposes an increased arterial load on the left ventricle, which combine to increase the risk of heart failure and atrial fibrillation. Pharmacological heart rate lowering is clearly beneficial in patients with a dilated cardiomyopathy but not in patients with normal chamber dimensions and normal systolic function. These conflicting effects can be explained based on a model that considers the hemodynamic and ventricular structural effects of heart rate changes.

  2. A comprehensive analysis of the performance characteristics of the Mount Laguna solar photovoltaic installation

    NASA Technical Reports Server (NTRS)

    Shumka, A.; Sollock, S. G.

    1981-01-01

    This paper presents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis of the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space, and very different failure modes. A life-cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and shown to be in good agreement with the field data.

  3. Chest Wall Thickness and Decompression Failure: A Systematic Review and Meta-analysis Comparing Anatomic Locations in Needle Thoracostomy

    PubMed Central

    Laan, Danuel V.; Vu, Trang Diem N.; Thiels, Cornelius A.; Pandian, T. K.; Schiller, Henry J.; Murad, M. Hassan; Aho, Johnathon M.

    2015-01-01

    Introduction Current Advanced Trauma Life Support guidelines recommend decompression for thoracic tension physiology using a 5-cm angiocatheter at the second intercostal space (ICS) on the midclavicular line (MCL). High failure rates occur. Through systematic review and meta-analysis, we aimed to determine the chest wall thickness (CWT) of the 2nd ICS-MCL, the 4th/5th ICS at the anterior axillary line (AAL), the 4th/5th ICS mid axillary line (MAL) and needle thoracostomy failure rates using the currently recommended 5-cm angiocatheter. Methods A comprehensive search of several databases from their inception to July 24, 2014 was conducted. The search was limited to the English language, and all study populations were included. Studies were appraised by two independent reviewers according to a priori defined PRISMA inclusion and exclusion criteria. Continuous outcomes (CWT) were evaluated using weighted mean difference and binary outcomes (failure with 5-cm needle) were assessed using incidence rate. Outcomes were pooled using the random-effects model. Results The search resulted in 34,652 studies of which 15 were included for CWT analysis, 13 for NT effectiveness. Mean CWT was 42.79 mm (95% CI, 38.78–46.81) at 2nd ICS-MCL, 39.85 mm (95% CI, 28.70–51.00) at MAL, and 34.33 mm (95% CI, 28.20–40.47) at AAL (P=0.08). Mean failure rate was 38% (95% CI, 24–54) at 2nd ICS-MCL, 31% (95% CI, 10–64) at MAL, and 13% (95% CI, 8–22) at AAL (P=0.01). Conclusion Evidence from observational studies suggests that the 4th/5th ICS-AAL has the lowest predicted failure rate of needle decompression in multiple populations. PMID:26724173

  4. Chest wall thickness and decompression failure: A systematic review and meta-analysis comparing anatomic locations in needle thoracostomy.

    PubMed

    Laan, Danuel V; Vu, Trang Diem N; Thiels, Cornelius A; Pandian, T K; Schiller, Henry J; Murad, M Hassan; Aho, Johnathon M

    2016-04-01

    Current Advanced Trauma Life Support guidelines recommend decompression for thoracic tension physiology using a 5-cm angiocatheter at the second intercostal space (ICS) on the midclavicular line (MCL). High failure rates occur. Through systematic review and meta-analysis, we aimed to determine the chest wall thickness (CWT) of the 2nd ICS-MCL, the 4th/5th ICS at the anterior axillary line (AAL), the 4th/5th ICS mid axillary line (MAL) and needle thoracostomy failure rates using the currently recommended 5-cm angiocatheter. A comprehensive search of several databases from their inception to July 24, 2014 was conducted. The search was limited to the English language, and all study populations were included. Studies were appraised by two independent reviewers according to a priori defined PRISMA inclusion and exclusion criteria. Continuous outcomes (CWT) were evaluated using weighted mean difference and binary outcomes (failure with 5-cm needle) were assessed using incidence rate. Outcomes were pooled using the random-effects model. The search resulted in 34,652 studies of which 15 were included for CWT analysis, 13 for NT effectiveness. Mean CWT was 42.79 mm (95% CI, 38.78-46.81) at 2nd ICS-MCL, 39.85 mm (95% CI, 28.70-51.00) at MAL, and 34.33 mm (95% CI, 28.20-40.47) at AAL (P=.08). Mean failure rate was 38% (95% CI, 24-54) at 2nd ICS-MCL, 31% (95% CI, 10-64) at MAL, and 13% (95% CI, 8-22) at AAL (P=.01). Evidence from observational studies suggests that the 4th/5th ICS-AAL has the lowest predicted failure rate of needle decompression in multiple populations. Level 3 SR/MA with up to two negative criteria. Therapeutic. Copyright © 2015 Elsevier Ltd. All rights reserved.
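    The pooling step described in the abstract (a random-effects model over per-study estimates) is commonly done with the DerSimonian-Laird method. A minimal sketch with made-up study means and standard errors, not the review's extracted data:

```python
# DerSimonian-Laird random-effects pooling (illustrative; the per-study
# means and standard errors below are hypothetical chest wall thicknesses).

def dersimonian_laird(means, ses):
    w = [1 / se**2 for se in ses]                          # fixed-effect weights
    fe = sum(wi * m for wi, m in zip(w, means)) / sum(w)   # fixed-effect estimate
    q = sum(wi * (m - fe)**2 for wi, m in zip(w, means))   # Cochran's Q heterogeneity statistic
    df = len(means) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]              # random-effects weights
    pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
    se_pooled = (1 / sum(w_re)) ** 0.5
    return pooled, 1.96 * se_pooled                        # estimate, 95% CI half-width

est, hw = dersimonian_laird([41.0, 44.5, 43.2], [1.5, 2.0, 1.8])
print(f"pooled CWT = {est:.2f} mm (95% CI ±{hw:.2f})")
```

    When between-study heterogeneity is zero the weights collapse to the fixed-effect ones; otherwise the random-effects interval widens, which is why pooled CWT intervals in such reviews can be broad.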

  5. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  6. Multiscale Modeling of Fracture in an SiO2 Nanorod

    NASA Astrophysics Data System (ADS)

    Mallik, Aditi

    2005-11-01

    The fracture of a 108-particle SiO2 nanorod under uniaxial strain is described using an NDDO quantum mechanical method. The stress–strain curve to failure is calculated as a function of strain rate to show a domain that is independent of strain rate. A pair potential for use in classical MD is constructed such that the elastic portion of the quantum curve is reproduced. However, it is shown that the classical analysis does not accurately describe the large-strain behavior and failure. Finally, a composite rod is constructed with a small subsystem described by quantum mechanics and the remainder described by classical MD [1]. The stress–strain curves for the classical, quantum, and composite rods are compared and contrasted. [1] "Multiscale Modeling of Materials -- Concepts and Illustration", A. Mallik, K. Runge, J. Dufty, and H-P Cheng, cond-mat 0507558.

  7. Effects of hydromechanical loading history and antecedent soil mechanical damage on shallow landslide triggering

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2015-10-01

    Evidence suggests that the sudden triggering of rainfall-induced shallow landslides is preceded by accumulation of local internal failures in the soil mantle before their abrupt coalescence into a landslide failure plane. The mechanical status of a hillslope at any given time reflects competition between local damage accumulated during antecedent rainfall events and rates of mechanical healing (e.g., rebonding of microcracks and root regrowth). This dynamic interplay between damage accumulation and healing rates determines the initial mechanical state for landslide modeling. We evaluated the roles of these dynamic processes on landslide characteristics and patterns using a hydromechanical landslide-triggering model for a sequence of rainfall scenarios. The progressive nature of soil failure was represented by the fiber bundle model formalism that considers threshold strength of mechanical bonds linking adjacent soil columns and bedrock. The antecedent damage induced by prior rainfall events was expressed by the fraction of broken fibers that gradually regain strength or mechanically heal at rates specific to soil and roots. Results indicate that antecedent damage accelerates landslide initiation relative to pristine (undamaged) hillslopes. The volumes of first triggered landslides increase with increasing antecedent damage; however, for heavily damaged hillslopes, landslide volumes tend to decrease. Elapsed time between rainfall events allows mechanical healing that reduces the effects of antecedent damage. This study proposed a quantitative framework for systematically incorporating hydromechanical loading history and information on precursor events (e.g., such as recorded by acoustic emissions) into shallow landslide hazard assessment.
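    The fiber bundle formalism the model builds on can be illustrated with its simplest equal-load-sharing variant, where fibers fail at individual thresholds and shed load onto the survivors. The fiber count and uniform threshold distribution below are illustrative assumptions, not the paper's calibration:

```python
import random

# Equal-load-sharing fiber bundle model (illustrative sketch of the
# formalism; fiber count and threshold distribution are hypothetical).

def bundle_strength(n=10_000, seed=1):
    random.seed(seed)
    thresholds = sorted(random.random() for _ in range(n))  # individual fiber strengths
    best = 0.0
    for k, t in enumerate(thresholds):
        # At applied stress t, the n - k fibers with threshold >= t survive
        # and carry the load; force per original fiber is t * (n - k) / n.
        load = t * (n - k) / n
        best = max(best, load)
    return best  # macroscopic failure stress of the bundle

print(f"bundle strength ≈ {bundle_strength():.3f}")  # near the theoretical 0.25 for U(0,1) thresholds
```

    In the landslide setting, "antecedent damage" corresponds to starting with a fraction of fibers already broken, which lowers the remaining bundle strength until healing restores them.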

  8. A retrospective survey of the causes of bracket- and tube-bonding failures.

    PubMed

    Roelofs, Tom; Merkens, Nico; Roelofs, Jeroen; Bronkhorst, Ewald; Breuning, Hero

    2017-01-01

    To investigate the causes of bonding failures of orthodontic brackets and tubes and the effect of premedication for saliva reduction. Premedication with atropine sulfate was administered randomly. The failure rate of brackets and tubes placed in a group of 158 consecutive patients was evaluated after a mean period of 67 weeks after bonding. The failure rate in the group without atropine sulfate premedication was 2.4%. In the group with premedication, the failure rate was 2.7%. Cox regression analysis of these groups showed that atropine application did not lead to a reduction in bond failures. Statistically significant differences in the hazard ratio were found for the bracket regions and for the dental assistants who prepared for the bonding procedure. Premedication did not lead to fewer bracket failures. The roles of the dental assistant and the patient in preventing failures were relevant. A significantly higher failure rate for orthodontic appliances was found in the posterior regions.

  9. Effect of Coulomb stress on the Gutenberg-Richter law

    NASA Astrophysics Data System (ADS)

    Navas-Portella, V.; Corral, A.; Jimenez, A.

    2017-12-01

    Coulomb stress theory has been used for years in seismology to understand how earthquakes trigger each other. Whenever an earthquake occurs, the stress field changes in its neighbourhood: places with positive stress change are brought closer to failure, whereas places with negative stress change are moved away from failure. Earthquake models that relate rate changes and Coulomb stress after a main event, such as the rate-and-state model, assume that negative and positive stress values affect rate changes according to the same functional form. As a first-order approximation, under uniform background seismicity before the main event, different values of the b-exponent in the Gutenberg-Richter law would indicate different behaviour for positive and negative stress. In this work, we study the Gutenberg-Richter law in the aftershock sequence of the Landers earthquake (California, 1992, MW=7.3). By using a statistically based fitting method, we discuss whether the sign of the Coulomb stress and the distance to the fault have a significant effect on the value of the b-exponent.
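    The b-exponent discussed above is usually estimated with Aki's maximum-likelihood formula, b = log10(e) / (⟨M⟩ − Mc + ΔM/2), where Mc is the completeness magnitude and ΔM the binning width. A sketch on a synthetic catalogue, not the Landers data:

```python
from math import e, log10
import random

# Aki maximum-likelihood b-value estimator with binning correction
# (illustrative; the synthetic catalogue below is made up, not a real one).

def b_value(mags, m_min, dm=0.0):
    above = [m for m in mags if m >= m_min]
    mean_m = sum(above) / len(above)
    return log10(e) / (mean_m - m_min + dm / 2)

# Synthetic Gutenberg-Richter catalogue with true b = 1.0 and Mc = 2.0:
# P(M >= m) = 10**(-(m - 2.0)) is sampled via inverse transform.
random.seed(42)
catalogue = [2.0 - log10(random.random()) for _ in range(50_000)]
print(f"estimated b ≈ {b_value(catalogue, 2.0):.2f}")  # close to the true b = 1.0
```

    Comparing b estimated separately in positive- and negative-stress regions, as the study does, then reduces to running an estimator like this on two sub-catalogues.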

  10. Resting Heart Rate as Predictor for Left Ventricular Dysfunction and Heart Failure: The Multi-Ethnic Study of Atherosclerosis

    PubMed Central

    Opdahl, Anders; Venkatesh, Bharath Ambale; Fernandes, Veronica R. S.; Wu, Colin O.; Nasir, Khurram; Choi, Eui-Young; Almeida, Andre L. C.; Rosen, Boaz; Carvalho, Benilton; Edvardsen, Thor; Bluemke, David A.; Lima, Joao A. C.

    2014-01-01

    OBJECTIVE To investigate the relationship between baseline resting heart rate and incidence of heart failure (HF) and global and regional left ventricular (LV) dysfunction. BACKGROUND The association of resting heart rate to HF and LV function is not well described in an asymptomatic multi-ethnic population. METHODS Participants in the Multi-Ethnic Study of Atherosclerosis had resting heart rate measured at inclusion. Incident HF was registered (n=176) during follow-up (median 7 years) in those who underwent cardiac MRI (n=5000). Changes in ejection fraction (ΔEF) and peak circumferential strain (Δεcc) were measured as markers of developing global and regional LV dysfunction in 1056 participants imaged at baseline and 5 years later. Time to HF (Cox model) and Δεcc and ΔEF (multiple linear regression models) were adjusted for demographics, traditional cardiovascular risk factors, calcium score, LV end-diastolic volume and mass in addition to resting heart rate. RESULTS Cox analysis demonstrated that for 1 bpm increase in resting heart rate there was a 4% greater adjusted relative risk for incident HF (hazard ratio: 1.04; 95% CI, 1.02–1.06; P<0.001). Adjusted multiple regression models demonstrated that resting heart rate was positively associated with deteriorating εcc and decrease in EF, even in analyses when all coronary heart disease events were excluded from the model. CONCLUSION Elevated resting heart rate is associated with increased risk for incident HF in asymptomatic participants in MESA. Higher heart rate is related to development of regional and global LV dysfunction independent of subclinical atherosclerosis and coronary heart disease. PMID:24412444
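    Because the Cox model is linear on the log-hazard scale, the per-bpm hazard ratio reported above can be rescaled to other increments by exponentiation, a common way to present such results:

```python
# Rescaling a per-unit Cox hazard ratio (the abstract's HR of 1.04
# per 1-bpm increase in resting heart rate) to a 10-bpm increment.
hr_per_bpm = 1.04
hr_per_10bpm = hr_per_bpm ** 10      # log-hazard is linear in the covariate
print(f"HR per 10 bpm ≈ {hr_per_10bpm:.2f}")  # ≈ 1.48
```

    So a participant with a resting heart rate 10 bpm higher carries roughly a 48% greater adjusted hazard of incident HF under this model, before considering the confidence interval.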

  11. Recognition during recall failure: Semantic feature matching as a mechanism for recognition of semantic cues when recall fails.

    PubMed

    Cleary, Anne M; Ryals, Anthony J; Wagner, Samantha R

    2016-01-01

    Research suggests that a feature-matching process underlies cue familiarity-detection when cued recall with graphemic cues fails. When a test cue (e.g., potchbork) overlaps in graphemic features with multiple unrecalled studied items (e.g., patchwork, pitchfork, pocketbook, pullcork), higher cue familiarity ratings are given during recall failure of all of the targets than when the cue overlaps in graphemic features with only one studied target and that target fails to be recalled (e.g., patchwork). The present study used semantic feature production norms (McRae et al., Behavior Research Methods, Instruments, & Computers, 37, 547-559, 2005) to examine whether the same holds true when the cues are semantic in nature (e.g., jaguar is used to cue cheetah). Indeed, test cues (e.g., cedar) that overlapped in semantic features (e.g., a_tree, has_bark, etc.) with four unretrieved studied items (e.g., birch, oak, pine, willow) received higher cue familiarity ratings during recall failure than test cues that overlapped in semantic features with only two (also unretrieved) studied items (e.g., birch, oak), which in turn received higher familiarity ratings during recall failure than cues that did not overlap in semantic features with any studied items. These findings suggest that the feature-matching theory of recognition during recall failure can accommodate recognition of semantic cues during recall failure, providing a potential mechanism for conceptually-based forms of cue recognition during target retrieval failure. They also provide converging evidence for the existence of the semantic features envisaged in feature-based models of semantic knowledge representation and for those more concretely specified by the production norms of McRae et al. (Behavior Research Methods, Instruments, & Computers, 37, 547-559, 2005).

  12. Clinical Correlates and Prognostic Value of Proenkephalin in Acute and Chronic Heart Failure.

    PubMed

    Matsue, Yuya; Ter Maaten, Jozine M; Struck, Joachim; Metra, Marco; O'Connor, Christopher M; Ponikowski, Piotr; Teerlink, John R; Cotter, Gad; Davison, Beth; Cleland, John G; Givertz, Michael M; Bloomfield, Daniel M; Dittrich, Howard C; van Veldhuisen, Dirk J; van der Meer, Peter; Damman, Kevin; Voors, Adriaan A

    2017-03-01

    Proenkephalin (pro-ENK) has emerged as a novel biomarker associated with both renal function and cardiac function. However, its clinical and prognostic value have not been well evaluated in symptomatic patients with heart failure. The association between pro-ENK and markers of renal function was evaluated in 95 patients with chronic heart failure who underwent renal hemodynamic measurements, including renal blood flow (RBF) and glomerular filtration rate (GFR), with the use of 131I-Hippuran and 125I-iothalamate clearances, respectively. The association between pro-ENK and clinical outcome in acute heart failure was assessed in another 1589 patients. Pro-ENK was strongly correlated with both RBF (P < .001) and GFR (P < .001), but not with renal tubular markers. In the acute heart failure cohort, pro-ENK was a predictor of death through 180 days, heart failure rehospitalization through 60 days, and death or cardiovascular or renal rehospitalization through day 60 in univariable analyses, but its predictive value was lost in a multivariable model when other renal markers were entered in the model. In patients with chronic and acute heart failure, pro-ENK is strongly associated with glomerular function, but not with tubular damage. Pro-ENK provides limited prognostic information in patients with acute heart failure on top of established renal markers. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Effectiveness and predictors of failure of noninvasive mechanical ventilation in acute respiratory failure.

    PubMed

    Martín-González, F; González-Robledo, J; Sánchez-Hernández, F; Moreno-García, M N; Barreda-Mellado, I

    2016-01-01

    To assess the effectiveness and identify predictors of failure of noninvasive ventilation. A retrospective, longitudinal descriptive study was made. Adult patients with acute respiratory failure. A total of 410 consecutive patients with noninvasive ventilation treated in an Intensive Care Unit of a tertiary university hospital from 2006 to 2011. Noninvasive ventilation. Demographic variables and clinical and laboratory test parameters at the start and two hours after the start of noninvasive ventilation. Evolution during admission to the Unit and until hospital discharge. The failure rate was 50%, with an overall mortality rate of 33%. A total of 156 patients had hypoxemic respiratory failure, 87 postextubation respiratory failure, 78 exacerbation of chronic obstructive pulmonary disease, 61 hypercapnic respiratory failure without chronic obstructive pulmonary disease, and 28 had acute pulmonary edema. The failure rates were 74%, 54%, 27%, 31% and 21%, respectively. The etiology of respiratory failure, serum bilirubin at the start, APACHEII score, radiological findings, the need for sedation to tolerate noninvasive ventilation, changes in level of consciousness, PaO2/FIO2 ratio, respiratory rate and heart rate from the start and two hours after the start of noninvasive ventilation were independently associated to failure. The effectiveness of noninvasive ventilation varies according to the etiology of respiratory failure. Its use in hypoxemic respiratory failure and postextubation respiratory failure should be assessed individually. Predictors of failure could be useful to prevent delayed intubation. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.

  14. Flight test results of the strapdown ring laser gyro tetrad inertial navigation system

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.

    1983-01-01

    A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Offline parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the four years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.

  15. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the probability of failure. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful in arriving at a more realistic quantification of risk prior to acceptance by a program.

  16. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; and 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
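    For context, the simplest member of the NHPP SRGM family discussed above is the Goel-Okumoto model, with mean value function m(t) = a(1 − e^(−bt)) and failure intensity λ(t) = ab·e^(−bt). The parameters below are made up; the paper's model layers testing coverage, fault removal efficiency, and error generation on top of this kind of baseline:

```python
from math import exp

# Classic Goel-Okumoto NHPP baseline (illustrative; a = total expected
# faults and b = detection rate per fault are hypothetical parameters).

def mean_failures(t, a=100.0, b=0.05):
    """Expected cumulative failures observed by test time t."""
    return a * (1.0 - exp(-b * t))

def intensity(t, a=100.0, b=0.05):
    """Failure intensity lambda(t) = dm/dt, decreasing as faults are removed."""
    return a * b * exp(-b * t)

for t in (10, 50, 100):
    print(f"t={t:3d}  m(t)={mean_failures(t):6.1f}  lambda(t)={intensity(t):.3f}")
```

    Imperfect debugging modifies this picture: with fault removal efficiency below 100% and a nonzero introduction rate, the pool of remaining faults no longer shrinks as fast as m(t) grows, which is exactly the gap the proposed model targets.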

  17. Teacher Adoption of Moodle LMS: A K-12 Diffusion Study

    ERIC Educational Resources Information Center

    Gagnon, Daniel A.

    2012-01-01

    This paper describes the diffusion of Moodle within Cherokee County Schools. The diffusion is evaluated using the Bass Model and the RIPPLES model in order to evaluate relative success or failure. The Bass Model of Diffusion was calculated utilizing forecasting by analogy in order to analyze the adoption rates in a county high school. The adoption…

  18. Using Seismic Signals to Forecast Volcanic Processes

    NASA Astrophysics Data System (ADS)

    Salvage, R.; Neuberg, J. W.

    2012-04-01

    Understanding the seismic signals generated during volcanic unrest allows scientists to predict and understand active volcanoes more accurately, since these signals are intrinsically linked to rock failure at depth (Voight, 1988). In particular, low-frequency long-period signals (LP events) have been related to the movement of fluid and the brittle failure of magma at depth due to high strain rates (Hammer and Neuberg, 2009), which fundamentally relates to surface processes. However, there is currently no physical quantitative model for determining the likelihood of an eruption following precursory seismic signals, or the timing or type of eruption that will ensue (Benson et al., 2010). Since the beginning of its current eruptive phase, accelerating LP swarms (< 10 events per hour) have been a common feature at Soufriere Hills volcano, Montserrat, prior to surface expressions such as dome collapse or eruptions (Miller et al., 1998). The dynamical behaviour of such swarms can be related to accelerated magma ascent rates, since the seismicity is thought to be a consequence of magma deformation as it rises to the surface. In particular, acceleration rates can be used in combination with the inverse material failure law, a linear relationship of inverse rate against time (Voight, 1988), for the accurate prediction of volcanic eruption timings. Currently, this has only been investigated for retrospective events (Hammer and Neuberg, 2009). The identification of LP swarms on Montserrat and analysis of their dynamical characteristics allow a better understanding of the nature of the seismic signals themselves, as well as their relationship to surface processes such as magma extrusion rates. Acceleration and deceleration rates of seismic swarms provide insights into the plumbing system of the volcano at depth. 
The application of the material failure law to multiple LP swarms allows a critical evaluation of the accuracy of the method, which further refines current understanding of the relationship between seismic signals and volcanic eruptions. It is hoped that such analysis will assist the development of real-time forecasting models.
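    Voight's inverse material failure law can be sketched as a linear fit of 1/rate against time, extrapolated to its zero crossing to forecast the failure (eruption) time. The event rates below are synthetic, not a Montserrat swarm:

```python
# Inverse-rate forecast in the spirit of Voight's material failure law:
# for accelerating precursors, 1/rate falls roughly linearly with time,
# and its zero crossing estimates the failure time. Data are synthetic.

def forecast_failure_time(times, rates):
    inv = [1.0 / r for r in rates]            # inverse event rate
    n = len(times)
    tbar = sum(times) / n
    ibar = sum(inv) / n
    slope = sum((t - tbar) * (i - ibar) for t, i in zip(times, inv)) / \
            sum((t - tbar) ** 2 for t in times)   # least-squares slope
    intercept = ibar - slope * tbar
    return -intercept / slope                  # time where 1/rate reaches zero

# Synthetic swarm accelerating toward t = 10 (rate = 1/(10 - t))
times = [0, 2, 4, 6, 8]
rates = [1 / (10 - t) for t in times]
print(f"forecast failure time ≈ {forecast_failure_time(times, rates):.1f}")  # ≈ 10.0
```

    Real swarms are noisier than this idealized case, which is why the abstract stresses evaluating the method's accuracy across multiple swarms.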

  19. Failure of fertility therapy and subsequent adverse cardiovascular events

    PubMed Central

    Udell, Jacob A.; Lu, Hong; Redelmeier, Donald A.

    2017-01-01

    BACKGROUND: Infertility may indicate an underlying predisposition toward premature cardiovascular disease, yet little is known about potential long-term cardiovascular events following fertility therapy. We investigated whether failure of fertility therapy is associated with subsequent adverse cardiovascular events. METHODS: We performed a population-based cohort analysis of women who received gonadotropin-based fertility therapy between Apr. 1, 1993, and Mar. 31, 2011, distinguishing those who subsequently gave birth and those who did not. Using multivariable Poisson regression models, we estimated the relative rate ratio of adverse cardiovascular events associated with fertility therapy failure, accounting for age, year, baseline risk factors, health care history and number of fertility cycles. The primary outcome was subsequent treatment for nonfatal coronary ischemia, stroke, transient ischemic attack, heart failure or thromboembolism. RESULTS: Of 28 442 women who received fertility therapy, 9349 (32.9%) subsequently gave birth and 19 093 (67.1%) did not. The median number of fertility treatments was 3 (interquartile range 1–5). We identified 2686 cardiovascular events over a median 8.4 years of follow-up. The annual rate of cardiovascular events was 19% higher among women who did not give birth after fertility therapy than among those who did (1.08 v. 0.91 per 100 patient-years, p < 0.001), equivalent to a 21% relative increase in the annual rate (95% confidence interval 13%–30%). We observed no association between event rates and number of treatment cycles. INTERPRETATION: Fertility therapy failure was associated with an increased risk of long-term adverse cardiovascular events. These women merit surveillance for subsequent cardiovascular events. PMID:28385819
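    The rates quoted above are events per 100 patient-years; the underlying arithmetic is simple. The event counts and person-time below are hypothetical, chosen only to reproduce the abstract's two crude rates:

```python
# Crude event rates per 100 patient-years and their ratio (illustrative;
# the counts and person-time below are hypothetical, not the cohort data).

def rate_per_100py(events, patient_years):
    return 100.0 * events / patient_years

r_failed = rate_per_100py(1620, 150_000)   # hypothetical: no birth after fertility therapy
r_birth  = rate_per_100py(683,  75_000)    # hypothetical: gave birth after therapy
print(f"{r_failed:.2f} vs {r_birth:.2f} per 100 py; ratio {r_failed / r_birth:.2f}")
# → 1.08 vs 0.91 per 100 py; ratio 1.19
```

    The abstract's 21% figure is the adjusted relative increase from the Poisson regression, so it differs slightly from this crude ratio.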

  20. Mitochondrial reactive oxygen species production and respiratory complex activity in rats with pressure overload-induced heart failure

    PubMed Central

    Schwarzer, Michael; Osterholt, Moritz; Lunkenbein, Anne; Schrepper, Andrea; Amorim, Paulo; Doenst, Torsten

    2014-01-01

    We investigated the impact of cardiac reactive oxygen species (ROS) during the development of pressure overload-induced heart failure. We used our previously described rat model in which transverse aortic constriction (TAC) induces compensated hypertrophy after 2 weeks, heart failure with preserved ejection fraction at 6 and 10 weeks, and heart failure with systolic dysfunction after 20 weeks. We measured mitochondrial ROS production rates and ROS damage, and assessed the therapeutic potential of in vivo antioxidant therapies. In compensated hypertrophy (2 weeks of TAC), ROS production rates were normal at both mitochondrial ROS production sites (complexes I and III). Complex I ROS production rates increased with the appearance of diastolic dysfunction (6 weeks of TAC) and remained high thereafter. Surprisingly, maximal ROS production at complex III peaked at 6 weeks of pressure overload. Mitochondrial respiratory capacity (state 3 respiration) was elevated 2 and 6 weeks after TAC, decreased after this point, and was significantly impaired at 20 weeks, when contractile function was also impaired and ROS damage was evident as increased hydroxynonenal. Treatment with the ROS scavenger α-phenyl-N-tert-butyl nitrone or the uncoupling agent dinitrophenol significantly reduced ROS production rates at 6 weeks. Despite the decline in ROS production capacity, no differences in contractile function between treated and untreated animals were observed. Increased ROS production occurs early in the development of heart failure, with a peak at the onset of diastolic dysfunction. However, ROS production may not be related to the onset of contractile dysfunction. PMID:24951621

  1. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  2. Treatment Failure With Rhythm and Rate Control Strategies in Patients With Atrial Fibrillation and Congestive Heart Failure: An AF-CHF Substudy.

    PubMed

    Dyrda, Katia; Roy, Denis; Leduc, Hugues; Talajic, Mario; Stevenson, Lynne Warner; Guerra, Peter G; Andrade, Jason; Dubuc, Marc; Macle, Laurent; Thibault, Bernard; Rivard, Lena; Khairy, Paul

    2015-12-01

    Rate and rhythm control strategies for atrial fibrillation (AF) are not always effective or well tolerated in patients with congestive heart failure (CHF). We assessed reasons for treatment failure, associated characteristics, and effects on survival. A total of 1,376 patients enrolled in the AF-CHF trial were followed for 37 ± 19 months, 206 (15.0%) of whom failed initial therapy leading to crossover. Rhythm control was abandoned more frequently than rate control (21.0% vs. 9.1%, P < 0.0001). Crossovers from rhythm to rate control were driven by inefficacy, whereas worsening heart failure was the most common reason to cross over from rate to rhythm control. In multivariate analyses, failure of rhythm control was associated with female sex, higher serum creatinine, functional class III or IV symptoms, lack of digoxin, and oral anticoagulation. Factors independently associated with failure of rate control were paroxysmal (vs. persistent) AF, statin therapy, and presence of an implantable cardioverter-defibrillator. Crossovers were not associated with cardiovascular mortality (hazard ratio [HR] 1.11 from rhythm to rate control, 95% confidence interval [CI] 0.73-1.73, P = 0.6069; HR 1.29 from rate to rhythm control, 95% CI 0.73-2.25, P = 0.3793) or all-cause mortality (HR 1.16 from rhythm to rate control, 95% CI 0.79-1.72, P = 0.4444; HR 1.15 from rate to rhythm control, 95% CI 0.69-1.91, P = 0.5873). Rhythm control is abandoned more frequently than rate control in patients with AF and CHF. The most common reasons for treatment failure are inefficacy for rhythm control and worsening heart failure for rate control. Changing strategies does not impact survival. © 2015 Wiley Periodicals, Inc.

  3. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, the adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, complexity of network and high churn lead to high message loss rate. To reduce the impact on detection accuracy, baseline detection strategy based on retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient service of failure detection in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P network. PMID:25198005

  4. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation requires a large number of SPICE simulations, and these simulations account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speedups to the calculation of failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
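The expense that motivates the surrogate model is easy to see in a plain Monte Carlo estimate of a low failure rate: rare failures demand very many samples for a stable estimate. The sketch below is a toy illustration only, with a Gaussian threshold test standing in for a SPICE run; it is not the authors' method.

```python
import random

def mc_failure_rate(simulate_fail, n_samples: int, rng: random.Random) -> float:
    """Plain Monte Carlo estimate of a failure probability:
    fraction of sampled process conditions for which the cell fails."""
    fails = sum(1 for _ in range(n_samples) if simulate_fail(rng))
    return fails / n_samples

def make_cell(threshold_sigma: float):
    """Toy stand-in for a SPICE run: the cell 'fails' when a Gaussian
    process variable exceeds a design margin (in units of sigma)."""
    return lambda rng: rng.gauss(0.0, 1.0) > threshold_sigma

rate_loose = mc_failure_rate(make_cell(2.0), 100_000, random.Random(0))
rate_tight = mc_failure_rate(make_cell(3.5), 100_000, random.Random(1))
print(rate_loose, rate_tight)  # the tighter margin fails far less often
```

With true failure probabilities near 1e-3 or below, resolving the rate to useful precision this way takes millions of "simulations", which is why importance sampling and surrogate models are used instead.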

  5. Numerical simulations of SHPB experiments for the dynamic compressive strength and failure of ceramics

    NASA Astrophysics Data System (ADS)

    Anderson, Charles E., Jr.; O'Donoghue, Padraic E.; Lankford, James; Walker, James D.

    1992-06-01

    Complementary to a study of the compressive strength of ceramic as a function of strain rate and confinement, numerical simulations of the split-Hopkinson pressure bar (SHPB) experiments have been performed using the two-dimensional wave propagation computer program HEMP. The numerical effort had two main thrusts. Firstly, the interpretation of the experimental data relies on several assumptions. The numerical simulations were used to investigate the validity of these assumptions. The second part of the effort focused on computing the idealized constitutive response of a ceramic within the SHPB experiment. These numerical results were then compared against experimental data. Idealized models examined included a perfectly elastic material, an elastic-perfectly plastic material, and an elastic material with failure. Post-failure material was modeled as having either no strength, or a strength proportional to the mean stress. The effects of confinement were also studied. Conclusions concerning the dynamic behavior of a ceramic up to and after failure are drawn from the numerical study.

  6. Yield and Failure Behavior Investigated for Cross-Linked Phenolic Resins Using Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Monk, Joshua D.; Lawson, John W.

    2016-01-01

    Molecular dynamics simulations were conducted to fundamentally evaluate the yield and failure behavior of cross-linked phenolic resins at temperatures below the glass transition. Yield stress was investigated at various temperatures, strain rates, and degrees of cross-linking. The onset of non-linear behavior in the cross-linked phenolic structures was caused by localized irreversible molecular rearrangements through the rotation of methylene linkers followed by the formation or annihilation of neighboring hydrogen bonds. The yield stress results, with respect to temperature and strain rate, could be fit by existing models used to describe yield behavior of amorphous glasses. The degree of cross-linking only indirectly influences the maximum yield stress through its influence on glass transition temperature (Tg), however there is a strong relationship between the degree of cross-linking and the failure mechanism. Low cross-linked samples were able to separate through void formation, whereas the highly cross-linked structures exhibited bond scission.

  7. Budget impact analysis of 8 hormonal contraceptive options.

    PubMed

    Crespi, Simone; Kerrigan, Matthew; Sood, Vipan

    2013-07-01

    To develop a model comparing costs of 8 hormonal contraceptives and determine whether acquisition costs for implants and intrauterine devices (IUDs) were offset by decreased pregnancy-related costs over a 3-year time horizon from a managed care perspective. A model was developed to assess the budget impact of branded or generic oral contraceptives (OCs), quarterly intramuscular depot medroxyprogesterone, etonogestrel/ethinyl estradiol vaginal ring, etonogestrel implant, levonorgestrel IUD, norelgestromin/ethinyl estradiol transdermal contraceptive, and ethinyl estradiol/levonorgestrel extended-cycle OC. Major variables included drug costs, typical use failure rates, discontinuation rates, and pregnancy costs. The base case assessed costs for 1000 women initiating each of the hormonal contraceptives. The etonogestrel implant and levonorgestrel IUD resulted in the fewest pregnancies, 63 and 85, respectively, and the least cost, $1.75 million and $2.0 million, respectively. In comparison, generic OC users accounted for a total of 243 pregnancies and $3.4 million in costs. At the end of year 1, costs for the etonogestrel implant ($800,471) and levonorgestrel IUD ($949,721) were already lower than those for generic OCs ($1,146,890). Sensitivity analysis showed that the cost of pregnancies, not product acquisition cost, was the primary cost driver. Higher initial acquisition costs for the etonogestrel implant and levonorgestrel IUD were offset within 1 year by lower contraceptive failure rates and consequent pregnancy costs. Thus, after accounting for typical use failure rates of contraceptive products, the etonogestrel implant and levonorgestrel IUD emerged as the least expensive hormonal contraceptives.
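The skeleton of such a budget-impact model is simple: expected pregnancies follow from the typical-use failure rate, and total cost is acquisition cost plus pregnancy-related cost. The sketch below uses hypothetical inputs (failure rates, amortized annual costs, cost per pregnancy) and ignores discontinuation and switching, which the actual model includes.

```python
def expected_pregnancies(n_women: int, annual_failure_rate: float, years: int) -> float:
    """Approximate count of women with at least one pregnancy over the
    horizon, ignoring discontinuation (a simplification of the model)."""
    return n_women * (1.0 - (1.0 - annual_failure_rate) ** years)

def budget_impact(n_women: int, annual_failure_rate: float, years: int,
                  annual_method_cost: float, cost_per_pregnancy: float) -> float:
    """Total cost = method acquisition cost + pregnancy-related cost."""
    pregnancies = expected_pregnancies(n_women, annual_failure_rate, years)
    return n_women * annual_method_cost * years + pregnancies * cost_per_pregnancy

# HYPOTHETICAL inputs, not the paper's: typical-use failure ~9%/yr for OCs
# vs. ~0.2%/yr for a long-acting method; amortized annual costs; $10k/pregnancy.
oc = budget_impact(1000, 0.09, 3, 180, 10_000)
larc = budget_impact(1000, 0.002, 3, 250, 10_000)
print(larc < oc)  # the long-acting method's higher acquisition cost is offset
```

Even with these made-up numbers, the structure reproduces the paper's qualitative finding: pregnancy cost, not product acquisition cost, is the dominant cost driver.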

  8. Comparison of Sprint Fidelis and Riata defibrillator lead failure rates.

    PubMed

    Fazal, Iftikhar A; Shepherd, Ewen J; Tynan, Margaret; Plummer, Christopher J; McComb, Janet M

    2013-09-30

    Sprint Fidelis and Riata defibrillator leads are prone to early failure. Few data exist on the comparative failure rates and mortality related to lead failure. The aims of this study were to determine the failure rate of Sprint Fidelis and Riata leads, and to compare failure rates and mortality rates in both groups. Patients implanted with Sprint Fidelis leads and Riata leads at a single centre were identified and in July 2012, records were reviewed to ascertain lead failures, deaths, and relationship to device/lead problems. 113 patients had Sprint Fidelis leads implanted between June 2005 and September 2007; Riata leads were implanted in 106 patients between January 2003 and February 2008. During 53.0 ± 22.3 months of follow-up there were 13 Sprint Fidelis lead failures (11.5%, 2.60% per year) and 25 deaths. Mean time to failure was 45.1 ± 15.5 months. In the Riata lead cohort there were 32 deaths, and 13 lead failures (11.3%, 2.71% per year) over 54.8 ± 26.3 months follow-up with a mean time to failure of 53.5 ± 24.5 months. There were no significant differences in the lead failure-free Kaplan-Meier survival curve (p=0.77), deaths overall (p=0.17), or deaths categorised as sudden/cause unknown (p=0.54). Sprint Fidelis and Riata leads have a significant but comparable failure rate at 2.60% per year and 2.71% per year of follow-up respectively. The number of deaths in both groups is similar and no deaths have been identified as being related to lead failure in either cohort. Copyright © 2012. Published by Elsevier Ireland Ltd.
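The annualized rates quoted above follow from dividing the cumulative failure fraction by the follow-up in years. The sketch below reproduces the Sprint Fidelis figure using cohort-mean follow-up as an approximation of person-time; the exact published figures would require per-lead follow-up.

```python
def annualized_failure_rate(failures: int, n_leads: int,
                            mean_followup_months: float) -> float:
    """Percent of leads failing per year, approximating total lead-years
    as n_leads * mean follow-up (exact rates need per-lead follow-up)."""
    lead_years = n_leads * mean_followup_months / 12.0
    return 100.0 * failures / lead_years

# Sprint Fidelis cohort from the abstract: 13 failures among 113 leads
# over a mean 53.0 months of follow-up.
print(round(annualized_failure_rate(13, 113, 53.0), 2))  # ~2.60 %/yr
```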

  9. The Obesity and Heart Failure Epidemics Among African Americans: Insights From the Jackson Heart Study.

    PubMed

    Krishnamoorthy, Arun; Greiner, Melissa A; Bertoni, Alain G; Eapen, Zubin J; O'Brien, Emily C; Curtis, Lesley H; Hernandez, Adrian F; Mentz, Robert J

    2016-08-01

    Higher rates of obesity and heart failure have been observed in African Americans, but associations with mortality are not well-described. We examined intermediate and long-term clinical implications of obesity in African Americans and associations between obesity and all-cause mortality, heart failure, and heart failure hospitalization. We conducted a retrospective analysis of a community sample of 5292 African Americans participating in the Jackson Heart Study between September 2000 and January 2013. The main outcomes were associations between body mass index (BMI) and all-cause mortality at 9 years and heart failure hospitalization at 7 years using Cox proportional hazards models and interval development of heart failure (median 8 years' follow-up) using a modified Poisson model. At baseline, 1406 (27%) participants were obese and 1416 (27%) were morbidly obese. With increasing BMI, the cumulative incidence of mortality decreased (P= .007), whereas heart failure increased (P < .001). Heart failure hospitalization was more common among morbidly obese participants (9.0%; 95% confidence interval [CI] 7.6-11.7) than among normal-weight patients (6.3%; 95% CI 4.7-8.4). After risk adjustment, BMI was not associated with mortality. Each 1-point increase in BMI was associated with a 5% increase in the risk of heart failure (hazard ratio 1.05; 95% CI 1.03-1.06; P < .001) and the risk of heart failure hospitalization for BMI greater than 32 kg/m(2) (hazard ratio 1.05; 95% CI 1.03-1.07; P < .001). Obesity and morbid obesity were common in a community sample of African Americans, and both were associated with increased heart failure and heart failure hospitalization. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Computer modeling of tank track elastomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesuer, D.R.; Goldberg, A.; Patt, J.

    Computer models of the T142, T156, and British Chieftain tank tracks have been studied as part of a program to examine the tank-track-pad failure problem. The modeling is based on the finite element method, with two different models used to evaluate the thermal and mechanical response of the tracks. Modeling has enabled us to evaluate the influence of track design, elastomer formulation, and operating scenario on the response of the track. The results of these analyses have been evaluated with experimental tests that quantify the extent of damage development in elastomers and thus indicate the likelihood of pad failure due to "cutting and chunking." The primary characteristics influencing the temperatures achieved in the track are the heat-generation rate and the track geometry. The heat-generation rate is related to the viscoelastic material properties of the elastomer, track design, and loading/operating scenario. For all designs and materials studied, stresses produced during contact with a flat roadway surface were not considered large enough to damage the pad. Operating scenarios were studied in which the track pad contacts rigid bars representing idealized obstacles in cross-country terrain. A highly localized obstacle showed the possibility of subsurface mechanical damage to the track pad due to obstacle contact. Contact with a flat rigid bar produced higher tensile stresses that were near the damage thresholds for this material and thus capable of producing cutting and chunking failures.

  11. Long-Term Overexpression of Hsp70 Does Not Protect against Cardiac Dysfunction and Adverse Remodeling in a MURC Transgenic Mouse Model with Chronic Heart Failure and Atrial Fibrillation

    PubMed Central

    Bernardo, Bianca C.; Sapra, Geeta; Patterson, Natalie L.; Cemerlang, Nelly; Kiriazis, Helen; Ueyama, Tomomi; Febbraio, Mark A.; McMullen, Julie R.

    2015-01-01

    Previous animal studies had shown that increasing heat shock protein 70 (Hsp70) using a transgenic, gene therapy or pharmacological approach provided cardiac protection in models of acute cardiac stress. Furthermore, clinical studies had reported associations between Hsp70 levels and protection against atrial fibrillation (AF). AF is the most common cardiac arrhythmia presenting in cardiology clinics and is associated with increased rates of heart failure and stroke. Improved therapies for AF and heart failure are urgently required. Despite promising observations in animal studies which targeted Hsp70, we recently reported that increasing Hsp70 was unable to attenuate cardiac dysfunction and pathology in a mouse model which develops heart failure and intermittent AF. Given our somewhat unexpected finding and the extensive literature suggesting Hsp70 provides cardiac protection, it was considered important to assess whether Hsp70 could provide protection in another mouse model of heart failure and AF. The aim of the current study was to determine whether increasing Hsp70 could attenuate adverse cardiac remodeling, cardiac dysfunction and episodes of arrhythmia in a mouse model of heart failure and AF due to overexpression of Muscle-Restricted Coiled-Coil (MURC). Cardiac function and pathology were assessed in mice at approximately 12 months of age. We report here, that chronic overexpression of Hsp70 was unable to provide protection against cardiac dysfunction, conduction abnormalities, fibrosis or characteristic molecular markers of the failing heart. In summary, elevated Hsp70 may provide protection in acute cardiac stress settings, but appears insufficient to protect the heart under chronic cardiac disease conditions. PMID:26660322

  13. Relationship between Sponsorship and Failure Rate of Dental Implants: A Systematic Approach

    PubMed Central

    Popelut, Antoine; Valet, Fabien; Fromentin, Olivier; Thomas, Aurélie; Bouchard, Philippe

    2010-01-01

    Background The number of dental implant treatments increases annually. Dental implants are manufactured by competing companies. Systematic reviews and meta-analyses have shown a clear association between pharmaceutical industry funding of clinical trials and pro-industry results. So far, the impact of industry sponsorship on the outcomes and conclusions of dental implant clinical trials has never been explored. The aim of the present study was to examine financial sponsorship of dental implant trials, and to evaluate whether research funding sources may affect the annual failure rate. Methods and Findings A systematic approach was used to identify systematic reviews published between January 1993 and December 2008 that specifically deal with the length of survival of dental implants. Primary articles were extracted from these reviews. The failure rate of the dental implants included in the trials was calculated. Data on publication year, Impact Factor, prosthetic design, periodontal status reporting, number of dental implants included in the trials, methodological quality of the studies, presence of a statistical advisor, and financial sponsorship were extracted by two independent reviewers (kappa = 0.90; 95% CI [0.77–1.00]). Univariate quasi-Poisson regression models and multivariate analysis were used to identify variables that were significantly associated with failure rates. Five systematic reviews were identified, from which 41 analyzable trials were extracted. The mean annual failure rate estimate was 1.09% (95% CI [0.84–1.42]). The funding source was not reported in 63% of the trials (26/41). Sixty-six percent of the trials were considered as having a risk of bias (27/41). Given study age, both industry-associated trials (OR = 0.21; 95% CI [0.12–0.38]) and trials with an unknown funding source (OR = 0.33; 95% CI [0.21–0.51]) had lower annual failure rates compared with non-industry-associated trials. A conflict of interest statement was disclosed in 2 trials. 
Conclusions When controlling for other factors, the probability of annual failure for industry associated trials is significantly lower compared with non-industry associated trials. This bias may have significant implications on tooth extraction decision making, research on tooth preservation, and governmental health care policies. PMID:20422000
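The pooled "annual failure rate" outcome can be computed crudely as total failures over total implant-years across trials; the paper itself fits quasi-Poisson regressions to such counts. The per-trial counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
def annual_failure_rate_pct(failures: int, implant_years: float) -> float:
    """Crude annual failure rate in percent: failures per implant-year * 100."""
    return 100.0 * failures / implant_years

# HYPOTHETICAL per-trial (failures, implant-years) pairs, not the review's data.
trials = [(4, 380.0), (2, 150.0), (7, 620.0)]

pooled = annual_failure_rate_pct(sum(f for f, _ in trials),
                                 sum(iy for _, iy in trials))
print(round(pooled, 2))  # pooled crude annual failure rate, in percent
```

A quasi-Poisson model generalizes this by regressing the counts on covariates (sponsorship, study age, etc.) with an offset for implant-years, while allowing overdispersion.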

  14. The use of test structures for reliability prediction and process control of integrated circuits and photovoltaics

    NASA Astrophysics Data System (ADS)

    Trachtenberg, I.

    How a reliability model might be developed with new data from accelerated stress testing, failure mechanisms, process control monitoring, and test structure evaluations is illustrated. The effects of temperature acceleration on operating life are discussed, as are test structures that further accelerate the failure rate. Corrosion testing is addressed: the uncoated structure is encapsulated in a variety of mold compounds and subjected to pressure-cooker testing.

  15. Effects of antithyroid drugs on radioiodine treatment: systematic review and meta-analysis of randomised controlled trials.

    PubMed

    Walter, Martin A; Briel, Matthias; Christ-Crain, Mirjam; Bonnema, Steen J; Connell, John; Cooper, David S; Bucher, Heiner C; Müller-Brand, Jan; Müller, Beat

    2007-03-10

    To determine the effect of adjunctive antithyroid drugs on the risk of treatment failure, hypothyroidism, and adverse events after radioiodine treatment. Meta-analysis. Electronic databases (Cochrane central register of controlled trials, Medline, Embase) were searched to August 2006, supplemented by contact with experts. Review methods: Three reviewers independently assessed trial eligibility and quality. Pooled relative risks for treatment failure and hypothyroidism after radioiodine treatment with and without adjunctive antithyroid drugs were calculated with a random effects model. We identified 14 relevant randomised controlled trials with a total of 1306 participants. Adjunctive antithyroid medication was associated with an increased risk of treatment failure (relative risk 1.28, 95% confidence interval 1.07 to 1.52; P=0.006) and a reduced risk of hypothyroidism (0.68, 0.53 to 0.87; P=0.006) after radioiodine treatment. We found no difference in summary estimates for the different antithyroid drugs or for whether antithyroid drugs were given before or after radioiodine treatment. Antithyroid drugs potentially increase rates of failure and reduce rates of hypothyroidism if they are given in the week before or after radioiodine treatment, respectively.
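Random-effects pooling of relative risks, as used in this review, combines per-trial log relative risks with inverse-variance weights inflated by a between-trial variance. A compact DerSimonian-Laird sketch with hypothetical trial inputs (the log RRs and variances below are illustrative, not the review's data):

```python
import math

def pool_relative_risks(log_rrs, variances):
    """DerSimonian-Laird random-effects pooled RR with a 95% CI.
    Inputs: per-trial log relative risks and their variances."""
    k = len(log_rrs)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-trial variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_rrs)) / sum(w_re)
    se = 1.0 / math.sqrt(sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# HYPOTHETICAL per-trial log RRs and variances, not the review's data
rr, lo, hi = pool_relative_risks([0.30, 0.18, 0.26], [0.02, 0.03, 0.05])
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

When the between-trial variance estimate is zero, the random-effects result collapses to the fixed-effect (inverse-variance) pooled estimate.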

  16. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  17. Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics

    PubMed Central

    Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.

    2017-01-01

    This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. 
While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637

  18. Evaluation of possible prognostic factors for the success, survival, and failure of dental implants.

    PubMed

    Geckili, Onur; Bilhan, Hakan; Geckili, Esma; Cilingir, Altug; Mumcu, Emre; Bural, Canan

    2014-02-01

To analyze the prognostic factors that are associated with the success, survival, and failure rates of dental implants. Data including implant sizes, insertion time, implant location, and prosthetic treatment of 1656 implants were collected, and the association of these factors with the success, survival, and failure of the implants was analyzed. The success rate was lower for short implants and for maxillary implants. The failure rate of maxillary implants exceeded that of mandibular implants, and the failure rate of implants placed in the maxillary anterior region was significantly higher than in other regions. The failure rates of implants placed 5 years ago or more were higher than those of implants placed later. The anterior maxilla is more critical for implant loss than other sites. Implants in the anterior mandible show better success than those in other locations, and longer implants show better success rates. The learning curve of the clinician influences the survival and success rates of dental implants.

  19. Association between bilirubin and mode of death in severe systolic heart failure.

    PubMed

    Wu, Audrey H; Levy, Wayne C; Welch, Kathleen B; Neuberg, Gerald W; O'Connor, Christopher M; Carson, Peter E; Miller, Alan B; Ghali, Jalal K

    2013-04-15

The bilirubin level has been associated with worse outcomes, but it has not been studied as a predictor of the mode of death in patients with systolic heart failure. The Prospective Randomized Amlodipine Evaluation Study (PRAISE) cohort (including New York Heart Association class IIIB-IV patients with left ventricular ejection fraction <30%, n = 1,135) was analyzed, divided by bilirubin level: ≤0.6 mg/dl, group 1; >0.6 to 1.2 mg/dl, group 2; and >1.2 mg/dl, group 3. Multivariate Cox proportional hazards models were used to determine the association of bilirubin with the risk of sudden or pump failure death. Total bilirubin was entered as a base 2 log-transformed variable (log2 bilirubin), so that each one-unit increase in the variable corresponds to a doubling of the bilirubin level. The higher bilirubin groups had a lower ejection fraction (range 19% to 21%), sodium (range 138 to 139 mmol/L), and systolic blood pressure (range 111 to 120 mm Hg), a greater heart rate (range 79 to 81 beats/min), and greater diuretic dosages (range 86 to 110 mg furosemide-equivalent total daily dose). Overall survival declined with increasing bilirubin (24.3, 31.3, and 44.3 deaths per 100 person-years for groups 1, 2, and 3, respectively). Although a positive relation was seen between log2 bilirubin and both pump failure risk and sudden death risk, the relation in multivariate modeling was significant only for pump failure mortality (hazard ratio 1.47, 95% confidence interval 1.19 to 1.82, p = 0.0004), not for sudden death mortality (hazard ratio 1.21, 95% confidence interval 0.98 to 1.49, p = 0.08). In conclusion, an increasing bilirubin level was significantly associated with the risk of pump failure death but not with sudden death in patients with severe systolic heart failure. Copyright © 2013 Elsevier Inc. All rights reserved.
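Because bilirubin enters the model log2-transformed, the reported hazard ratios are per doubling of bilirubin; converting to another fold change is a simple exponent operation (a generic sketch, not code from the study):

```python
import math

def hr_for_fold_change(hr_per_doubling, fold_change):
    """Scale a per-doubling Cox hazard ratio to an arbitrary fold change.

    With log2(bilirubin) as the covariate, a k-fold rise in bilirubin
    shifts the covariate by log2(k), so HR = HR_doubling ** log2(k).
    """
    return hr_per_doubling ** math.log2(fold_change)

# Pump-failure HR of 1.47 per doubling implies, for a 4-fold rise:
hr_4x = hr_for_fold_change(1.47, 4.0)  # 1.47 squared
```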

  20. A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data

    DOE PAGES

    Yue, Meng; Toto, Tami; Jensen, Michael P.; ...

    2017-05-18

Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.
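One standard way to blend a weather-dependent failure rate model with partially reported outages, shown here as a generic Gamma-Poisson sketch rather than the paper's BOP algorithm, is a precision-weighted update in which reporting confidence controls how far the estimate moves from the model prediction toward the reports:

```python
def posterior_outages(prior_mean, prior_strength, reported, report_weight):
    """Gamma-Poisson style blend of model-predicted and reported outages.

    prior_mean     - FRM-predicted outage count for the area (Gamma mean)
    prior_strength - pseudo-observations behind the prior (Gamma rate)
    reported       - outages reported so far in the area
    report_weight  - confidence in reporting completeness, 0..1

    Posterior mean is a precision-weighted average; with no confidence in
    reports (weight 0) it returns the model prediction unchanged.
    """
    alpha = prior_mean * prior_strength + report_weight * reported
    beta = prior_strength + report_weight
    return alpha / beta

model_only = posterior_outages(12.0, 2.0, 30, 0.0)  # stays at the FRM value
blended = posterior_outages(12.0, 2.0, 30, 1.0)     # pulled toward the reports
```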

  1. A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Meng; Toto, Tami; Jensen, Michael P.

Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.

  2. High-sensitivity cardiac troponin I and risk of heart failure in patients with suspected acute coronary syndrome: a cohort study.

    PubMed

    Stelzle, Dominik; Shah, Anoop S V; Anand, Atul; Strachan, Fiona E; Chapman, Andrew R; Denvir, Martin A; Mills, Nicholas L; McAllister, David A

    2018-01-01

    Heart failure may occur following acute myocardial infarction, but with the use of high-sensitivity cardiac troponin assays we increasingly diagnose patients with minor myocardial injury. Whether troponin concentrations remain a useful predictor of heart failure in patients with acute coronary syndrome is uncertain. We identified all consecutive patients (n = 4748) with suspected acute coronary syndrome (61 ± 16 years, 57% male) presenting to three secondary and tertiary care hospitals. Cox-regression models were used to evaluate the association between high-sensitivity cardiac troponin I concentration and subsequent heart failure hospitalization. C-statistics were estimated to evaluate the predictive value of troponin for heart failure hospitalization. Over 2071 years of follow-up there were 83 heart failure hospitalizations. Patients with troponin concentrations above the upper reference limit (URL) were more likely to be hospitalized with heart failure than patients below the URL (118/1000 vs. 17/1000 person years, adjusted hazard ratio: 7.0). Among patients with troponin concentrations

  3. Simulated effects of the 2003 permitted withdrawals and water-management alternatives on reservoir storage and firm yields of three surface-water supplies, Ipswich River Basin, Massachusetts

    USGS Publications Warehouse

    Zarriello, Phillip J.

    2004-01-01

The Hydrologic Simulation Program-FORTRAN (HSPF) model of the Ipswich River Basin previously developed by the U.S. Geological Survey was modified to evaluate the effects of the 2003 withdrawal permits and water-management alternatives on reservoir storage and yields of the Lynn, Peabody, and Salem-Beverly water-supply systems. These systems obtain all or part of their water from the Ipswich River Basin. The HSPF model simulated the complex water budgets of the three supply systems, including effects of regulations that restrict withdrawals by the time of year, minimum streamflow thresholds, and the capacity of each system to pump water from the river. The 2003 permits restrict withdrawals from the Ipswich River between November 1 and May 31 to streamflows above a 1.0 cubic foot per second per square mile (ft3/s/mi2) threshold, to high flows between June 1 and October 31, and to a maximum annual volume. Yields and changes in reservoir storage over the 35-year simulation period (1961-95) were also evaluated for each system with a hypothetical low-capacity pump, alternative seasonal streamflow thresholds, and withdrawals that result in successive failures (depleted storage). The firm yields, the maximum yields that can be met during a severe drought, calculated for each water-supply system under the 2003 permitted withdrawals were 7.31 million gallons per day (Mgal/d) for the Lynn, 3.01 Mgal/d for the Peabody, and 7.98 Mgal/d for the Salem-Beverly systems; these yields are 31, 49, and 21 percent less than their average 1998-2000 demands, respectively. The simulations with the same permit restrictions and a hypothetical low-capacity pump for each system resulted in slightly increased yields for the Lynn and Salem-Beverly systems, but a slightly decreased yield for the Peabody system. 
Simulations to evaluate the effects of alternative streamflow thresholds on water supply indicated that firm yields were generally about twice as sensitive to decreases in the November-February or March-May thresholds as to increases in these thresholds. Firm yields were also generally slightly less sensitive to changes in the November-February thresholds than to changes in the March-May thresholds in the Peabody and Salem-Beverly water-supply systems. Decreases in the June-October streamflow threshold did not affect any system's firm yield. Simulations of withdrawal rates that resulted in successive near failures during the 1961-95 period indicated the tradeoff between increased yield and risk. The Lynn and Peabody systems were allowed to approach near failure up to six times. At the sixth near failure, yields of these systems increased to 10.18 and 4.43 Mgal/d, respectively; these rates increased the amount of water obtained from the Ipswich River Basin (relative to the firm-yield rate), as a percentage of average 1998-2000 demands, from 68 to 96 percent and from 51 to 75 percent, respectively. The Salem-Beverly system was able to meet demands after the third near failure. Reservoir storage was depleted about 6 percent of the time at the withdrawal rate that caused the sixth near failure in the Lynn and Peabody systems and about 3 percent of the time at the withdrawal rate that caused the third near failure in the Salem-Beverly system. Supply systems are at greatest risk of failure from persistent droughts (lasting more than 1 year), but short-term droughts also present risks during the fall and winter, when the supply systems are most vulnerable. Uncertainties in model performance, simplification of the reservoir systems and their management, and the possibility of droughts more severe than those simulated in this investigation underscore the fact that the firm yield calculated for each system cannot be considered an absolutely fail-safe withdrawal rate. 
Thus, the consequences of failure are an important consideration in the planning and management of these systems.

  4. Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2007-01-01

This paper describes the performance of a simplified dynamic inversion controller with neural network supplementation. This 6 DOF (Degree-of-Freedom) simulation study focuses on results with and without neural network adaptation, using a simulation of the NASA modified F-15, which has canards. One area of interest is the performance during a simulated surface failure while attempting to minimize the inertial cross coupling effect of a [B] matrix failure (a control derivative anomaly associated with a jammed or missing control surface). Another area of interest presented is simulated aerodynamic ([A] matrix) failures such as a canard failure. The controller uses explicit models to produce desired angular rate commands. The dynamic inversion calculates the necessary surface commands to achieve the desired rates. The simplified dynamic inversion uses approximate short period and roll axis dynamics. Initial results indicated that the transient response for a [B] matrix failure using a Neural Network (NN) improved the control behavior compared with not using a neural network for a given failure; however, after changes were made to the controller, further evaluation showed comparable performance, with remaining concerns about the cross coupling effects. This paper describes the methods employed to reduce the cross coupling effect and maintain adequate tracking errors. The [A] matrix failure results show that control of the aircraft without adaptation is more difficult (less damped) than with active neural networks. Simulation results show that neural network augmentation of the controller improves performance in terms of tracking error and cross coupling reduction, and improves performance with aerodynamic-type failures.

  5. Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations

    NASA Technical Reports Server (NTRS)

    Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor

    2014-01-01

One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, the elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth orbiting laboratory has been in orbit since 1998. Some elements launched later in the assembly sequence had not yet been built when the first elements were placed in orbit. Hence, some of the early modules launched at the inception of the program were already nearing the end of their design life when the ISS was finally complete and operational. To maximize the return on global investments in the ISS, it is essential for the valuable research on the ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions for a specified period of time. As we are interested in finding out how reliable a system will be in the future, reliability expressed as a function of time provides valuable insight. 
In a hypothetical bathtub shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during infant-mortality period, stays nearly constant during the service life and increases at the end when the design service life ends and wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to its failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. 
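The bathtub-shaped failure rate described above is commonly modeled with a Weibull hazard whose shape parameter selects the regime; a minimal illustrative sketch (the parameter values are arbitrary, not ISS component data):

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1).

    beta < 1: decreasing hazard (infant mortality)
    beta = 1: constant hazard (useful-life plateau)
    beta > 1: increasing hazard (wear-out)
    """
    return (beta / eta) * (t / eta) ** (beta - 1.0)

infant = weibull_hazard(10.0, 0.5, 1000.0)          # falls with time
plateau = weibull_hazard(10.0, 1.0, 1000.0)         # 1/eta everywhere
wearout_early = weibull_hazard(10.0, 3.0, 1000.0)   # rises with time
wearout_late = weibull_hazard(500.0, 3.0, 1000.0)
```

Stitching the three regimes together with different beta values over successive time intervals reproduces the full bathtub curve.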
An innovative approach is required to evaluate the entire system without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components; a 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, by way of cherry picking candidate components based on failure mode effects analysis, system level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, the life cycle margin provided by the original certification, the operational and environmental stresses encountered, etc. When the specific failure mechanism being considered, and the underlying relationship of that mechanism to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which applies to chemically driven failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between the time to failure at a high test temperature and the time to failure at the temperatures expected in actual use. 
As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
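The Arrhenius acceleration factor discussed above has a standard closed form; a minimal sketch (the 0.7 eV activation energy and the temperatures below are illustrative placeholders, not ISS program values):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_k, t_test_k):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_test)].

    Testing hotter than use (T_test > T_use) gives AF > 1: one hour at
    the test temperature consumes roughly AF hours of life at use
    conditions, for a mechanism with activation energy Ea (in eV).
    """
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# e.g. a 0.7 eV mechanism, 25 C use versus an 85 C accelerated test:
af = arrhenius_af(0.7, 298.15, 358.15)
```

This is why a relatively short hot test can stand in for years of exposure, provided the elevated temperature does not introduce failure mechanisms absent at use conditions.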

  6. Cycle life test and failure model of nickel-hydrogen cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1983-01-01

Six-ampere-hour individual pressure vessel nickel-hydrogen cells of two designs (COMSAT and Air Force/Hughes) were charge/discharge cycled to failure. Failure, as used here, is defined to occur when the end-of-discharge voltage degrades to 0.9 volts. The cells were cycled under a low earth orbit regime to a deep depth of discharge (80 percent of rated ampere-hour capacity). Both cell designs were fabricated by the same manufacturer and represent the current state of the art. A failure model was advanced which suggests both cell designs have inadequate volume tolerance characteristics. The limited existing database at deep depth of discharge (DOD) was expanded. Two cells of each design were cycled. One COMSAT cell failed at cycle 1712 and the other at cycle 1875. For the Air Force/Hughes cells, one failed at cycle 2250 and the other at cycle 2638. All cells of both designs failed due to low end-of-discharge voltage (0.9 volts); no cell failed due to electrical shorts. After cell failure, three reconditioning tests (deep discharge, physical reorientation, and open circuit voltage stand) were conducted on all cells of each design, and a fourth (electrolyte addition) on one cell of each design. In addition, post-cycle teardown and failure analysis were performed on the one cell of each design that did not have electrolyte added after failure.

  7. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    NASA Astrophysics Data System (ADS)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-01

    Small scale characterization experiments using only 1-5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.
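For context, ignition and growth reactive flow models are usually written in the Lee-Tarver form; the generic three-term rate law (the coefficients are material-specific calibration constants, and the ANFO values fitted in this work are not reproduced here) is:

```latex
\frac{dF}{dt} = I\,(1-F)^{b}\left(\frac{\rho}{\rho_0}-1-a\right)^{x}
  + G_1\,(1-F)^{c}F^{d}p^{y}
  + G_2\,(1-F)^{e}F^{g}p^{z}
```

where F is the reaction progress variable, rho/rho_0 the compression, and p the pressure; the first term models ignition at hot spots, and the latter two model the growth and completion of the reaction.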

  8. Reactive flow modeling of small scale detonation failure experiments for a baseline non-ideal explosive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kittell, David E.; Cummock, Nick R.; Son, Steven F.

    2016-08-14

Small scale characterization experiments using only 1–5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.

  9. Experience with the artificial urinary sphincter model AS800 in 148 patients.

    PubMed

    Fishman, I J; Shabsigh, R; Scott, F B

    1989-02-01

The latest version of the artificial urinary sphincter, AS800, was used in 148 patients with urinary incontinence of different etiologies. Followup ranged from 3 to 37 months, with an average of 20.8 months. There were 112 (76 per cent) male and 36 (24 per cent) female patients. The cuff was implanted around the bladder neck in 78 patients (53 per cent) and around the bulbar urethra in 70 (47 per cent). Socially acceptable urinary control was achieved in 90 per cent of the 139 patients with active devices in place. It was necessary to remove the sphincter in 11 patients (7.4 per cent). The reasons for removal were infection and erosion in 8 patients (5.4 per cent), infection without erosion in 2 (1.3 per cent), and erosion due to excess pressure and poor tissues in 1 (0.7 per cent). Comparison of success and failure rates associated with incontinence of different etiologies revealed that patients with incontinence after failure of a conventional antistress incontinence operation and those with incontinence after transurethral resection or radical prostatectomy had the highest success rates, and that patients with incontinence secondary to pelvic fracture or exstrophy and epispadias had the highest failure rates. The deactivation feature (the lock) of the new artificial sphincter model was beneficial for primary deactivation, urethral catheterization or cystoscopy, and elective nocturnal decompression of the bladder neck or urethral tissues.

  10. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
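For context, the traditional RPN that the fuzzy approach revises is the plain product of severity, occurrence, and detectability scores; a minimal sketch (the failure modes and scores below are hypothetical illustrations, not from the paper's brachytherapy analysis):

```python
def rpn(severity, occurrence, detectability):
    """Traditional FMECA risk priority number: RPN = S * O * D.

    Each factor is scored on a 1-10 ordinal scale; detectability is
    scored so that 10 means the failure is hardest to detect.
    """
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMECA scores must lie in 1..10")
    return severity * occurrence * detectability

# Rank hypothetical failure modes, highest risk first:
modes = {"source stuck in applicator": (9, 3, 4),
         "timer set incorrectly": (8, 2, 6),
         "cable kink": (5, 4, 3)}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

A common criticism, which motivates fuzzy variants like the one in this record, is that very different (S, O, D) triples can yield the same product.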

  11. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    PubMed Central

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

Background: Failure Mode and Effects Analysis (FMEA) is an important risk assessment tool and an accreditation requirement for many organizations. For prioritizing failures, the "risk priority number (RPN)" index is used, valued especially for its ease and its subjective ratings of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a combined quantitative and qualitative approach. In the qualitative domain, focus group discussions were used to collect data; a quantitative approach was used to calculate RPN scores. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined by (1) defining inclusion criteria as severity of the incident (clinical effect, claim consequence, waste of time, and financial loss), occurrence of the incident (time-unit occurrence and degree of exposure to risk), and preventability (degree of preventability and defensive barriers); and then (2) quantifying the risk priority criteria using the RPN index (361 for the highest-rated failure). Reassessment of the improved RPN scores by root cause analysis showed some variation. Conclusions: We concluded that standard criteria should be developed in keeping with clinical language and the specific scientific field. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184

  12. Long-Term Exposure to Road Traffic Noise and Nitrogen Dioxide and Risk of Heart Failure: A Cohort Study

    PubMed Central

    Wendelboe Nielsen, Olav; Sajadieh, Ahmad; Ketzel, Matthias; Tjønneland, Anne; Overvad, Kim; Raaschou-Nielsen, Ole

    2017-01-01

Background: Although air pollution and road traffic noise have been associated with higher risk of cardiovascular diseases, associations with heart failure have received little attention. Objectives: We aimed to investigate whether long-term exposure to road traffic noise and nitrogen dioxide (NO2) was associated with incident heart failure. Methods: In a cohort of 57,053 people 50–64 y of age at enrollment in the period 1993–1997, we identified 2,550 cases of first-ever hospital admission for heart failure during a mean follow-up time of 13.4 y. Present and historical residential addresses from 1987 to 2011 were found in national registers, and road traffic noise (Lden) and NO2 were modeled for all addresses. Analyses were done using Cox proportional hazards models. Results: An interquartile-range higher 10-y time-weighted mean exposure was associated with incidence rate ratios (IRRs) for heart failure of 1.14 (1.08–1.21) for Lden and 1.11 (1.07–1.16) for NO2, in models adjusted for gender, lifestyle, and socioeconomic status. In models with mutual exposure adjustment, IRRs were 1.08 (1.00–1.16) for Lden and 1.07 (1.01–1.14) for NO2. We found statistically significant modification of the NO2–heart failure association by gender (strongest association among men), baseline hypertension (strongest among hypertensive participants), and diabetes (strongest among diabetic participants). The same tendencies were seen for noise, but the interactions were not statistically significant. Conclusions: Long-term exposure to NO2 and road traffic noise was associated with higher risk of heart failure, mainly among men, in both single- and two-pollutant models. High exposure to both pollutants was associated with the highest risk. https://doi.org/10.1289/EHP1272 PMID:28953453
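Because the reported IRRs are per interquartile-range increment in exposure, rescaling to another increment is an exponent operation under the log-linear Cox model; a generic sketch (the IQR value below is a placeholder, not the cohort's actual IQR):

```python
def rescale_irr(irr_per_iqr, iqr, delta):
    """Rescale an incidence rate ratio from one exposure increment to another.

    Under a log-linear model, log(rate) is linear in exposure, so an
    increment of `delta` multiplies the rate by irr_per_iqr ** (delta/iqr).
    """
    return irr_per_iqr ** (delta / iqr)

# e.g. Lden IRR of 1.14 per IQR; a doubled increment squares the ratio:
irr_2iqr = rescale_irr(1.14, 10.0, 20.0)
```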

  13. Death of the bee hive: understanding the failure of an insect society.

    PubMed

    Barron, Andrew B

    2015-08-01

Since 2007, overwinter honey bee colony failure rates have averaged about 30% across much of North America. In addition, cases of extremely rapid colony failure have been reported, a phenomenon termed colony collapse disorder. Both phenomena result from an increase in the frequency and intensity of chronic diseases and environmental stressors. Colonies are often challenged by multiple stressors, which can interact: for example, pesticides can enhance disease transmission in colonies. Colonies may be particularly vulnerable to sublethal effects of pathogens and pesticides, since colony functions are compromised whether a stressor kills workers or causes them to fail at foraging. Modelling provides a way to understand the processes of colony failure by relating the impacts of stressors to colony-level functions. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

    Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual/automated arm × expected/unexpected failure × monitoring/failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty, and salience that were self-rated by participants, to predict allocation. An unweighted model based on the attributes of difficulty, interest, and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
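    The unweighted attribute model described above can be sketched minimally: each task's attractiveness is the plain (unweighted) sum of its self-rated attribute values, and the predicted attention allocation is each task's share of the total. The data layout and task names below are illustrative assumptions, not the authors' code.

```python
def stom_allocation(tasks):
    """Unweighted STOM-style sketch: sum each task's self-rated attributes
    (e.g. interest, difficulty, salience) and normalize across tasks to get
    a predicted share of visual attention."""
    scores = {name: sum(attrs.values()) for name, attrs in tasks.items()}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}
```

    For example, a robotic-arm task rated (4, 3, 2) time-shared with an environmental-control task rated (2, 1, 2) would be predicted to receive 9/14 of attention.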

  15. Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.

    2015-03-01

    Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta’s various failure options, which include aleatory uncertainty and scale effects, are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and the bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.

  16. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with nonnegative and right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space, and an appropriate theorem is presented. Linear systems of equations for the corresponding Laplace transforms allow the reliability functions to be found for the alternating, Poisson, and Furry-Yule failure rate processes.
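    The paper's results are analytical (via Laplace transforms); as a hedged numerical companion, the reliability function R(t) = E[exp(-∫ λ(s) ds)] under a two-state alternating failure-rate process can be estimated by Monte Carlo. The exponential sojourn times and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_reliability(t, lam=(0.2, 1.0), mean_sojourn=(5.0, 2.0),
                         n_paths=20000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral of lambda(s) ds)] when
    the failure rate alternates between lam[0] and lam[1], holding each state
    for an exponentially distributed sojourn time (an alternating process)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, state, integral = 0.0, 0, 0.0
        while s < t:
            stay = rng.expovariate(1.0 / mean_sojourn[state])
            dt = min(stay, t - s)         # accumulate rate only up to t
            integral += lam[state] * dt
            s += dt
            state = 1 - state             # alternate 0 -> 1 -> 0 -> ...
        total += math.exp(-integral)
    return total / n_paths
```

    When both states share the same rate, every path has the same integral, so the estimate collapses to the constant-rate reliability exp(-λt), which is a convenient sanity check.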

  17. Spacecraft Parachute Recovery System Testing from a Failure Rate Perspective

    NASA Technical Reports Server (NTRS)

    Stewart, Christine E.

    2013-01-01

    Spacecraft parachute recovery systems, especially those with a parachute cluster, require testing to identify and reduce failures. This is especially important when the spacecraft in question is human-rated. Due to the recent effort to make spaceflight affordable, the importance of determining a minimum testing requirement has increased. The number of tests required to achieve a mature design, with a relatively constant failure rate, can be estimated from a review of previous complex spacecraft recovery systems. Examination of Apollo parachute testing and of Shuttle Solid Rocket Booster recovery chute operation clarifies at which point in those programs each system reached maturity, and also the risks inherent in not performing a sufficient number of tests prior to operation with humans on board. When looking at complex parachute systems used in spaceflight landing systems, a pattern emerges: a minimum amount of testing is required to wring out the failure modes and reduce the failure rate of the parachute system to a level acceptable for human spaceflight. Driving the failure rate down requires not only sufficient system-level testing but also the ability to update the design as failure modes are found. In addition, sufficient data and images are necessary to identify incipient failure modes or to identify failure causes when a system failure occurs. To demonstrate the need for sufficient system-level testing, the Apollo Earth Landing System (ELS) test program and the Shuttle Solid Rocket Booster Recovery System failure history are examined, and some experiences from the Orion Capsule Parachute Assembly System are noted.

  18. A morphologic characterisation of the 1963 Vajont Slide, Italy, using long-range terrestrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Wolter, Andrea; Stead, Doug; Clague, John J.

    2014-02-01

    The 1963 Vajont Slide in northeast Italy is an important engineering and geological event. Although the landslide has been extensively studied, new insights can be derived by applying modern techniques such as remote sensing and numerical modelling. This paper presents the first digital terrestrial photogrammetric analyses of the failure scar, landslide deposits, and the area surrounding the failure, with a focus on the scar. We processed photogrammetric models to produce discontinuity stereonets, residual maps and profiles, and slope and aspect maps, all of which provide information on the failure scar morphology. Our analyses enabled the creation of a preliminary semi-quantitative morphologic classification of the Vajont failure scar based on the large-scale tectonic folds and step-paths that define it. The analyses and morphologic classification have implications for the kinematics, dynamics, and mechanism of the slide. Metre- and decametre-scale features affected the initiation, direction, and displacement rate of sliding. The most complexly folded and stepped areas occur close to the intersection of orthogonal synclinal features related to the Dinaric and Neoalpine deformation events. Our analyses also highlight, for the first time, the evolution of the Vajont failure scar from 1963 to the present.

  19. Subsidence and well failure in the South Belridge Diatomite field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouffignac, E.P. de; Bondor, P.L.; Karanikas, J.M.; Hara, S.K.

    1995-12-31

    Withdrawal of fluids from shallow, thick and low strength rock can cause substantial reservoir compaction leading to surface subsidence and well failure. This is the case for the Diatomite reservoir, where over 10 ft of subsidence have occurred in some areas. Well failure rates have averaged over 3% per year, resulting in several million dollars per year in well replacement and repair costs in the South Belridge Diatomite alone. A program has been underway to address this issue, including experimental, modeling and field monitoring work. An updated elastoplastic rock law based on laboratory data has been generated which includes not only standard shear failure mechanisms but also irreversible pore collapse occurring at low effective stresses (<150 psi). This law was incorporated into a commercial finite element geomechanics simulator. Since the late 1980s, a network of level survey monuments has been used to monitor subsidence at Belridge. Model predictions of subsidence in Section 33 compare very well with field measured data, which show that water injection reduces subsidence from 7--8 inches per year to 1--2 inches per year, but does not abate well failure.

  20. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is related to a fault re-introduction rate. Few SRGMs in the literature, however, differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model is developed that incorporates the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
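    As a minimal, hedged illustration of NHPP-based reliability growth fitting (this is the classical Goel-Okumoto model, not the proposed testing-coverage model, whose mean value function is not given in the abstract), the mean value function m(t) = a(1 - e^(-bt)) can be fitted to cumulative failure counts by least squares. For a fixed b the optimal a has a closed form, so only b needs a grid search.

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - e^(-b t)):
    expected cumulative number of failures detected by test time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, cum_faults, b_grid=None):
    """Least-squares fit of (a, b). For fixed b, minimizing
    sum (y_i - a*g_i)^2 with g_i = 1 - e^(-b t_i) gives
    a = sum(g_i y_i) / sum(g_i^2), so we grid-search b only."""
    if b_grid is None:
        b_grid = [0.01 * k for k in range(1, 301)]
    best = None
    for b in b_grid:
        g = [1.0 - math.exp(-b * t) for t in times]
        a = sum(gi * yi for gi, yi in zip(g, cum_faults)) / sum(gi * gi for gi in g)
        sse = sum((yi - a * gi) ** 2 for gi, yi in zip(g, cum_faults))
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best  # (a, b, sse)
```

    The proposed model would replace m(t) with a function of testing coverage and discount removals by the removal-efficiency factor; the fitting skeleton stays the same.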

  1. Family Caregiver Contribution to Self-care of Heart Failure: An Application of the Information-Motivation-Behavioral Skills Model.

    PubMed

    Chen, Yuxia; Zou, Huijing; Zhang, Yanting; Fang, Wenjie; Fan, Xiuzhen

    Adherence to self-care behaviors improves outcomes of patients with heart failure (HF). Caregivers play an important role in contributing to self-care. We aimed to explore the relationships among HF knowledge, perceived control, social support, and family caregiver contribution to self-care of HF, based on the Information-Motivation-Behavioral Skills Model. Two hundred forty-seven dyads of eligible patients with HF and family caregivers were recruited from a general hospital in China. Structural equation modeling was used to analyze the data obtained with the Caregiver Contribution to Self-care of Heart Failure Index, the Heart Failure Knowledge Test, the Control Attitudes Scale, and the Social Support Rating Scale. In this model, caregiver contribution to self-care maintenance was positively affected by perceived control (β = .148, P = .015) and caregiver confidence in contribution to self-care (β = .293, P < .001). Caregiver contribution to self-care management was positively affected by HF knowledge (β = .270, P < .001), perceived control (β = .140, P = .007), social support (β = .123, P = .019), caregiver confidence in contribution to self-care (β = .328, P < .001), and caregiver contribution to self-care maintenance (β = .148, P = .006). Caregiver confidence in contribution to self-care was positively affected by HF knowledge (β = .334, P < .001). Heart failure knowledge, perceived control, and social support facilitated family caregiver contribution to self-care of HF. Targeted interventions that consider these variables may effectively improve family caregiver contributions to self-care.

  2. The Prognostic Accuracy of Suggested Predictors of Failure of Medical Management in Patients With Nontuberculous Spinal Epidural Abscess.

    PubMed

    Stratton, Alexandra; Faris, Peter; Thomas, Kenneth

    2018-05-01

    Retrospective cohort study. To test the external validity of the 2 published prediction criteria for failure of medical management in patients with spinal epidural abscess (SEA). Patients with SEA over a 10-year period at a tertiary care center were identified using ICD-10 (International Classification of Diseases, 10th Revision) diagnostic codes; electronic and paper charts were reviewed. The incidence of SEA and the proportion of patients with SEA that were treated medically were calculated. The rate of failure of medical management was determined. The published prediction models were applied to our data to determine how predictive they were of failure in our cohort. A total of 550 patients were identified using ICD-10 codes, 160 of whom had a magnetic resonance imaging-confirmed diagnosis of SEA. The incidence of SEA was 16 patients per year. Seventy-five patients were found to be intentionally managed medically and were included in the analysis. Thirteen of these 75 patients failed medical management (17%). Based on the published prediction criteria, 26% (Kim et al) and 45% (Patel et al) of our patients were expected to fail. Published prediction models for failure of medical management of SEA were not valid in our cohort. However, once calibrated to our cohort, Patel's model consisting of positive blood culture, presence of diabetes, white blood cells >12.5, and C-reactive protein >115 was the better model for our data.

  3. Anger, hostility, and hospitalizations in patients with heart failure.

    PubMed

    Keith, Felicia; Krantz, David S; Chen, Rusan; Harris, Kristie M; Ware, Catherine M; Lee, Amy K; Bellini, Paula G; Gottlieb, Stephen S

    2017-09-01

    Heart failure patients have a high hospitalization rate, and anger and hostility are associated with coronary heart disease morbidity and mortality. Using structural equation modeling, this prospective study assessed the predictive validity of anger and hostility traits for cardiovascular and all-cause rehospitalizations in patients with heart failure. 146 heart failure patients were administered the STAXI and Cook-Medley Hostility Inventory to measure anger, hostility, and their component traits. Hospitalizations were recorded for up to 3 years following baseline. Causes of hospitalizations were categorized as heart failure, total cardiac, noncardiac, and all-cause (sum of cardiac and noncardiac). Measurement models were separately fit for Anger and Hostility, followed by a Confirmatory Factor Analysis to estimate the relationship between the Anger and Hostility constructs. An Anger model consisted of State Anger, Trait Anger, Anger Expression Out, and Anger Expression In, and a Hostility model included Cynicism, Hostile Affect, Aggressive Responding, and Hostile Attribution. The latent construct of Anger did not predict any of the hospitalization outcomes, but Hostility significantly predicted all-cause hospitalizations. Analyses of individual trait components of each of the 2 models indicated that Anger Expression Out predicted all-cause and noncardiac hospitalizations, and Trait Anger predicted noncardiac hospitalizations. None of the individual components of Hostility were related to rehospitalizations or death. The construct of Hostility and several components of Anger are predictive of hospitalizations that were not specific to cardiac causes. Mechanisms common to a variety of health problems, such as self-care and risky health behaviors, may be involved in these associations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Retrospective cohort study of the clinical performance of 1-stage dental implants.

    PubMed

    Carr, Alan B; Choi, Yong-Geun; Eckert, Steven E; Desjardins, Ronald P

    2003-01-01

    To evaluate long-term clinical performance of 1-stage dental implant prostheses at a single clinic, emphasizing clinical and demographic characteristics that affect implant survival. Dental records of all 308 patients (674 implants) treated with 1-stage implants at Mayo Clinic from October 1993 through May 2000 were reviewed from implant placement to last visit. Exposure and outcome variables affecting performance were collected separately to control bias in the data collection process. Additional confounding factors (age and sex) were adjusted with the stratified Cox proportional hazards model. Implant survival was determined by means of a Kaplan-Meier survival estimate. The log-rank test was used to determine the role of clinical and demographic variables in implant survival. The relative risk associated with the possible effect of clinical and demographic variables on implant survival was estimated with the Cox proportional hazards model. The implant survival rate (n = 654 implants) was 97% (mean +/- SD follow-up, 21.0 +/- 18.8 months; range, 1 to 78 months). Performance bias was limited because nearly all patients were treated by 1 prosthodontist. Two implants failed after loading (6 and 9 months). The incidence of complications was less than 4%. Among the implant failures, use of heterogeneous bone graft was associated with 4.8 times more failures than was use of autogenous bone graft (P = .04). After augmentation, delaying implant placement for 5 to 6 months resulted in 8.6 times more failures than the rate after earlier placement (P < .001). Retrospective review of the clinical performance of a 1-stage dental implant system yielded a 97% survival rate, with no failures noted after 13 months. Prosthetic complications were low, especially for fixed implant prostheses. Clinical performance of 1-stage dental implant prostheses between 1993 and 2000 demonstrated a high level of predictability.
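    The Kaplan-Meier survival estimate used in the study above can be sketched from first principles; the input layout (one follow-up time plus a failure/censoring flag per implant) is an assumption for illustration.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times:  follow-up time for each implant/subject
    events: 1 = failure observed at that time, 0 = censored
    Returns [(t, S(t))] evaluated at each distinct failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # failures at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving the risk set at t
        if d > 0:
            surv *= 1.0 - d / n_at_risk           # step down at each failure
            out.append((t, surv))
        n_at_risk -= m
        i += m
    return out
```

    Censored implants (e.g. still in function at last visit) reduce the risk set without stepping the curve down, which is what lets short follow-up coexist with a high survival estimate.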

  5. Post-exercise heart rate recovery independently predicts mortality risk in patients with chronic heart failure.

    PubMed

    Tang, Yi-Da; Dewland, Thomas A; Wencker, Detlef; Katz, Stuart D

    2009-12-01

    Post-exercise heart rate recovery (HRR) is an index of parasympathetic function associated with clinical outcomes in populations with and without documented coronary heart disease. Decreased parasympathetic activity is thought to be associated with disease progression in chronic heart failure (HF), but an independent association between post-exercise HRR and clinical outcomes among such patients has not been established. We measured HRR (calculated as the difference between heart rate at peak exercise and after 1 minute of recovery) in 202 HF subjects and recorded 17 mortality and 15 urgent transplantation outcome events over 624 days of follow-up. Reduced post-exercise HRR was independently associated with increased event risk after adjusting for other exercise-derived variables (peak oxygen uptake and change in minute ventilation per change in carbon dioxide production slope), for the Heart Failure Survival Score (adjusted HR 1.09 for 1 beat/min reduction, 95% CI 1.05-1.13, P < .0001), and the Seattle Heart Failure Model score (adjusted HR 1.08 for one beat/min reduction, 95% CI 1.05-1.12, P < .0001). Subjects in the lowest risk tertile based on post-exercise HRR (>or=30 beats/min) had low risk of events irrespective of the risk predicted by the survival scores. In a subgroup of 15 subjects, reduced post-exercise HRR was associated with increased serum markers of inflammation (interleukin-6, r = 0.58, P = .024; high-sensitivity C-reactive protein, r = 0.66, P = .007). Post-exercise HRR predicts mortality risk in patients with HF and provides prognostic information independent of previously described survival models. Pathophysiologic links between autonomic function and inflammation may be mediators of this association.
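    HRR as defined above is simply peak-exercise heart rate minus heart rate after 1 minute of recovery. The second function is our own hedged illustration of what the reported adjusted HR of about 1.09 per 1 beat/min reduction implies relative to the low-risk cut-off of 30 beats/min; the choice of reference point is ours, not the paper's.

```python
def heart_rate_recovery(hr_peak, hr_1min):
    """HRR = heart rate at peak exercise minus heart rate after 1 minute
    of recovery (beats/min)."""
    return hr_peak - hr_1min

def relative_hazard(hrr, hrr_ref=30, hr_per_beat=1.09):
    """Illustrative only: compound the reported per-beat hazard ratio
    (~1.09 per 1 beat/min *reduction* in HRR) over the shortfall from a
    reference HRR, here the low-risk tertile cut-off of >=30 beats/min."""
    return hr_per_beat ** (hrr_ref - hrr)
```

    For example, an HRR of 20 beats/min would correspond to roughly 1.09^10, i.e. a bit over twice the reference hazard, under this simplistic compounding.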

  6. Evaluation of Left Ventricular Diastolic Dysfunction with Early Systolic Dysfunction Using Two-Dimensional Speckle Tracking Echocardiography in Canine Heart Failure Model.

    PubMed

    Wu, Wei-Chun; Ma, Hong; Xie, Rong-Ai; Gao, Li-Jian; Tang, Yue; Wang, Hao

    2016-04-01

    This study evaluated the role of two-dimensional speckle tracking echocardiography (2DSTE) for predicting left ventricular (LV) diastolic dysfunction in pacing-induced canine heart failure. Pacing systems were implanted in 8 adult mongrel dogs, and continuous rapid right ventricular pacing (RVP, 240 beats/min) was maintained for 2 weeks. The measurements obtained from 2DSTE included global strain rate during early diastole (SRe) and during late diastole (SRa) in the longitudinal (L-SRe, L-SRa), circumferential (C-SRe, C-SRa), and radial directions (R-SRe, R-SRa). Changes in heart morphology were observed by light microscopy and transmission electron microscopy at 2 weeks. The onset of LV diastolic dysfunction with early systolic dysfunction occurred 3 days after RVP initiation. Most of the strain rate imaging indices were altered at 1 or 3 days after RVP onset and continued to worsen until heart failure developed. Light and transmission electron microscopy showed myocardial vacuolar degeneration and mitochondrial swelling in the left ventricle at 2 weeks after RVP onset. Pearson's correlation analysis revealed that parameters of conventional echocardiography and 2DSTE showed moderate correlation with LV pressure parameters, including E/Esep' (r = 0.58, P < 0.01), L-SRe (r = -0.58, P < 0.01), E/L-SRe (r = 0.65, P < 0.01), and R-SRe (r = 0.53, P < 0.01). ROC curve analysis showed that these indices of conventional echocardiography and strain rate imaging could effectively predict LV diastolic dysfunction (area under the curve: E/Esep' 0.78; L-SRe 0.84; E/L-SRe 0.80; R-SRe 0.80). 2DSTE was a sensitive and accurate technique that could be used for predicting LV diastolic dysfunction in a canine heart failure model. © 2015, Wiley Periodicals, Inc.

  7. Prediction of Fatigue Crack Growth in Rail Steels.

    DOT National Transportation Integrated Search

    1981-10-01

    Measures to prevent derailments due to fatigue failures of rails require adequate knowledge of the rate of propagation of fatigue cracks under service loading. The report presents a computational model for the prediction of crack growth in rails. The...

  8. Adjusting survival estimates for premature transmitter failure: A case study from the Sacramento-San Joaquin Delta

    USGS Publications Warehouse

    Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.

    2013-01-01

    In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
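    The core of the bias adjustment is simple: because tag failure is read as fish mortality, apparent survival is roughly the product of true fish survival and the probability that the tag stays operational over the travel time, so dividing by a tag-survival probability yields an adjusted estimate. The sketch below rests on that assumption; the function names and the clipping at 1.0 are our choices, not the authors' code.

```python
def adjusted_fish_survival(apparent_survival, tag_survival_prob):
    """Adjust an apparent (telemetry-based) survival estimate for premature
    tag failure: apparent ~= S_fish * P(tag alive), so S_fish ~= apparent / P."""
    if not 0.0 < tag_survival_prob <= 1.0:
        raise ValueError("tag survival probability must be in (0, 1]")
    return min(1.0, apparent_survival / tag_survival_prob)

def sensitivity_sweep(apparent_survival, probs):
    """Sensitivity analysis across a range of plausible tag-survival
    probabilities, in the spirit of the study's approach when the observed
    travel-time distribution is itself biased."""
    return [(p, adjusted_fish_survival(apparent_survival, p)) for p in probs]
```

    Sweeping the plausible tag-survival range (0.48 to 1.00 in the study) shows directly how robust a low fish-survival conclusion is to the tag-life assumption.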

  9. Length-scale and strain rate-dependent mechanism of defect formation and fracture in carbon nanotubes under tensile loading

    NASA Astrophysics Data System (ADS)

    Javvaji, Brahmanandam; Raha, S.; Mahapatra, D. Roy

    2017-02-01

    Electromagnetic and thermo-mechanical forces play a major role in nanotube-based materials and devices. Under high-energy electron transport or high current densities, carbon nanotubes fail via sequential fracture. The failure sequence is governed by certain length scales and the flow of current. We report a unified phenomenological model derived from molecular dynamics simulation data, which successfully captures the important physics of the complex failure process. Length-scale and strain rate-dependent defect nucleation, growth, and fracture in single-walled carbon nanotubes with diameters in the range of 0.47 to 2.03 nm and lengths in the range of 6.17 to 26.45 nm are simulated. Nanotubes with long length and small diameter show brittle fracture, while those with short length and large diameter show a transition from ductile to brittle fracture. In short nanotubes with small diameters, we observe several structural transitions: Stone-Wales defect initiation, its propagation to larger void nucleation, formation of multiple chains of atoms, conversion to a monatomic chain of atoms, and finally complete fracture of the carbon nanotube. The hybridization state of carbon-carbon bonds near the end cap evolves, leading to the formation of a monatomic chain in short nanotubes with small diameter. A transition from ductile to brittle fracture is also observed when the strain rate exceeds a critical value. A generalized analytical model of failure is established, which correlates the defect energy during the formation of the atomic chain with the aspect ratio of the nanotube and the strain rate. Variation in mechanical properties such as elastic modulus, tensile strength, and fracture strain with size and strain rate has important implications for mitigating force fields and for enhancing the life of electronic devices and nanomaterial conversion via fracture in manufacturing.

  10. Strain Rate Dependent Deformation and Strength Modeling of a Polymer Matrix Composite Utilizing a Micromechanics Approach. Degree awarded by Cincinnati Univ.

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    1999-01-01

    Potential gas turbine applications will expose polymer matrix composites to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under extreme conditions. Specifically, analytical methods designed for these applications must have the capability of properly capturing the strain rate sensitivities and nonlinearities that are present in the material response. The Ramaswamy-Stouffer constitutive equations, originally developed to analyze the viscoplastic deformation of metals, have been modified to simulate the nonlinear deformation response of ductile, crystalline polymers. The constitutive model is characterized and correlated for two representative ductile polymers, Fiberite 977-2 and PEEK, and the computed results correlate well with experimental values. The polymer constitutive equations are implemented in a mechanics-of-materials-based composite micromechanics model to predict the nonlinear, rate dependent deformation response of a composite ply. Uniform stress and uniform strain assumptions are applied to compute the effective stresses of a composite unit cell from the applied strains. The micromechanics equations are successfully verified for two polymer matrix composites, IM7/977-2 and AS4/PEEK. The ultimate strength of a composite ply is predicted with the Hashin failure criteria, which were implemented in the composite micromechanics model. The failure stresses of the two composite material systems are accurately predicted for a variety of fiber orientations and strain rates. The composite deformation model is implemented in LS-DYNA, a commercially available transient dynamic explicit finite element code. The matrix constitutive equations are converted into an incremental form, and the model is implemented into LS-DYNA through a user-defined material subroutine. The deformation response of a bulk polymer and of a polymer matrix composite is predicted by finite element analyses. The results compare reasonably well to experimental values, with some discrepancies, which are at least partially caused by the method used to integrate the rate equations in the polymer constitutive model.
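    The abstract names the Hashin failure criteria for ply strength; one common plane-stress form is sketched below. The exact variant used in the thesis is not given in the abstract, and the use of a single shear allowable S is a simplifying assumption of this sketch.

```python
def hashin_2d(s11, s22, s12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Hashin failure indices for a unidirectional ply.
    s11, s22, s12: fiber-direction, transverse, and shear ply stresses
    Xt, Xc: fiber-direction tensile/compressive strengths
    Yt, Yc: transverse tensile/compressive strengths
    S: shear strength (one allowable used for both modes here, a
       simplification of the full criteria)
    Returns (fiber_index, matrix_index); an index >= 1 indicates failure."""
    if s11 >= 0:                                   # fiber tension
        fiber = (s11 / Xt) ** 2 + (s12 / S) ** 2
    else:                                          # fiber compression
        fiber = (s11 / Xc) ** 2
    if s22 >= 0:                                   # matrix tension
        matrix = (s22 / Yt) ** 2 + (s12 / S) ** 2
    else:                                          # matrix compression
        matrix = ((s22 / (2 * S)) ** 2
                  + ((Yc / (2 * S)) ** 2 - 1) * (s22 / Yc)
                  + (s12 / S) ** 2)
    return fiber, matrix
```

    In a micromechanics model these indices would be evaluated on the effective ply stresses at each load increment, with the first index reaching 1 defining the predicted failure stress for that fiber orientation and strain rate.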

  11. A bivariate model for analyzing recurrent multi-type automobile failures

    NASA Astrophysics Data System (ADS)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur via various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of the failure serve as response variables. These two response variables are highly correlated with each other, since the timing of failures has an association with the mode of the failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models; a bivariate model of time and type of failure therefore becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, the data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. This study therefore proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data, respectively. It is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions found suitable for the two responses, time to failure and type of failure, were the Weibull distribution and the multinomial distribution, respectively. The bivariate model was implemented in the SAS procedure PROC NLMIXED by programming appropriate likelihood functions. Its performance was compared with separate univariate models fitted for the two responses, and the bivariate model performed better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.

  12. Creep Damage Analysis of a Lattice Truss Panel Structure

    NASA Astrophysics Data System (ADS)

    Jiang, Wenchun; Li, Shaohua; Luo, Yun; Xu, Shugen

    2017-01-01

    The creep failure of a lattice truss sandwich panel structure has been predicted by the finite element method (FEM). The creep damage is calculated from three kinds of stresses: as-brazed residual stress, operating thermal stress, and mechanical load. The creep damage under tensile and compressive loads has been calculated and compared, and the creep rates calculated by FEM and by the Gibson-Ashby and Hodge-Dunand models have been compared. The results show that creep failure is located at the fillet under both tensile and compressive loads. The damage rate at the fillet under tensile load is 50 times that under compressive load. The lattice truss panel structure has better creep resistance to compressive load than to tensile load, because the creep strain and stress triaxiality at the fillet are decreased under compressive load. The maximum creep strain at the fillet and the equivalent creep strain of the panel structure increase with increasing applied load. Of the Gibson-Ashby and Hodge-Dunand models, the modified Gibson-Ashby model agrees well with the FEM predictions. However, a more accurate model considering the size effect of the structure still needs to be developed.

  13. AT1 receptor blocker azilsartan medoxomil normalizes plasma miR-146a and miR-342-3p in a murine heart failure model.

    PubMed

    Kaneko, Manami; Satomi, Tomoko; Fujiwara, Shuji; Uchiyama, Hidefumi; Kusumoto, Keiji; Nishimoto, Tomoyuki

    Our study measured circulating microRNA (miRNA) levels in the plasma of calsequestrin (CSQ)-tg mice, a severe heart failure model, and evaluated by miRNA array analysis whether treatment with the angiotensin II type 1 receptor blocker azilsartan medoxomil (AZL-M) influenced those levels. MiR-146a, miR-149, miR-150, and miR-342-3p were reproducibly reduced in the plasma of CSQ-tg mice. Among them, miR-146a and miR-342-3p were significantly restored by AZL-M, a change associated with improved survival and reduced congestion. These results suggest that miRNAs, especially miR-146a and miR-342-3p, could serve as biomarkers for evaluating the efficacy of anti-heart-failure drugs.

  14. Risk of Sprint Fidelis defibrillator lead failure is highly dependent on age.

    PubMed

    Girerd, Nicolas; Nonin, Emilie; Pinot, Julien; Morel, Elodie; Flys, Carine; Scridon, Alina; Chevalier, Philippe

    2011-01-01

    In 2007, Medtronic Sprint Fidelis defibrillator leads were taken off the market due to a high rate of lead failure. Current data do not allow for risk stratification of patients with regard to lead failure. We sought to determine predictors of Sprint Fidelis lead failure. Between 2004 and 2007, 269 Sprint Fidelis leads were implanted in 258 patients in our centre. Variables associated with lead failure were assessed by the Kaplan-Meier method and a Cox survival model. During a median follow-up of 2.80 years (maximum 5.32), we observed 33 (12.3%) Sprint Fidelis lead failures (5-year survival, 65.6% ± 7.5%). In univariate analysis, age was the only predictor of lead failure (hazard ratio [HR] for 1-year increase 0.97; 95% confidence interval [CI] 0.95-0.99; p=0.009). Patients aged <62.5 years (the median) had a significantly increased risk of lead failure compared with patients aged >62.5 years (HR 2.80; CI 1.30-6.02; p=0.009). Survival without Sprint Fidelis lead failure was 55.6% ± 10.4% in patients aged <62.5 years (24/134 leads) vs 78.6% ± 8.8% in patients aged >62.5 years (9/135 leads). The annual incidence of lead failure in patients aged <62.5 years was 11.6% ± 4.9% during the fourth year after implantation and 22.9% ± 13.2% during the fifth year. Overall, we found a higher rate of Sprint Fidelis lead dysfunction than previously described. Lead failure was much more frequent in younger patients. Our results emphasize the need for close follow-up of younger patients with Sprint Fidelis leads and suggest that, in these patients, implantation of a new implantable cardioverter defibrillator lead at the time of generator replacement might be reasonable. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
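The survival estimates above come from the Kaplan-Meier method; a minimal sketch of that estimator (with made-up illustration data, not the Sprint Fidelis cohort) is:

```python
import numpy as np

def kaplan_meier(times, events):
    """Return (distinct event times, survival estimates S(t)).
    events[i] = 1 for an observed failure, 0 for censoring."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = np.sum((times == t) & (events == 1))  # failures at t
        n = np.sum(times >= t)                    # at risk just before t
        if d > 0:
            surv *= 1 - d / n
            out_t.append(float(t)); out_s.append(surv)
    return out_t, out_s

# toy data: 6 leads, events at t = 1, 2, 3, 5; censoring at t = 2 and 4
t, s = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

Each observed failure multiplies the running survival estimate by (1 - d/n), which is exactly how censored follow-up (as in this lead cohort) is accommodated.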

  15. Origin of Slope Failure in the Ursa Region, Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Stigall, J.; Dugan, B.

    2008-12-01

    We use one-dimensional fluid flow and stability models to predict the evolution of overpressure and stability conditions of IODP Expedition Sites U1322 and U1324 in the Ursa region, northern Gulf of Mexico. Simulations of homogenous mud deposited at 3 and 12 mm/yr for Sites U1322 and U1324, with permeability (k) on the order of 10-17m2 and bulk compressibility of .4 /MPa, predict overpressures up to .45MPa and 1MPa in shallow sediments (<200m below sea floor). With limit equilibrium calculations for an infinite slope, these overpressures equate to a factor of safety (FS) greater than 10 and 4.5 for a internal friction angle of 26° and a seafloor slope of 2°. This implies stability throughout the last 50,000 years. Seismic and core observations, however, document major slope failures that span the entire Ursa region. Permeability in this region is well constrained by laboratory experiments, so we investigate how pulsed (high-to-low) sedimentation rates could have created unstable conditions, FS <1. Models with periods of high sedimentation generate overpressure that create unstable conditions while maintaining the time-averaged sedimentation rates. Other factors which are not possible to simulate in one dimension, such as a complex basin geometry, also influence the conditions that caused the past failures. A two-dimensional model linking lateral flow between the sites with the interpreted geometry from seismic stratigraphy gives a better picture of the flow field and instability within the basin. Asymmetrical loading of permeable sediments could have created a lateral difference in pore pressures which would have driven lateral flow from Site U1324 to Site U1322 where overpressures are higher than our one-dimensional models suggest. We anticipate that two-dimensional models with transient sedimentation patterns will enhance our understanding of flow in marginally stable environments and triggers of slope failures in passive margin systems.
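The limit-equilibrium calculation behind the quoted factors of safety can be sketched for an infinite slope with excess pore pressure. The buoyant unit weight of 8 kN/m^3 below is an assumed typical value for shallow marine mud, not a number quoted in the abstract; with it, Site U1322-like inputs reproduce FS near 10.

```python
import math

def infinite_slope_fs(depth_m, overpressure_kpa, phi_deg, slope_deg,
                      gamma_buoyant=8.0, cohesion_kpa=0.0):
    """Factor of safety for an infinite slope with excess pore pressure.
    gamma_buoyant is an assumed buoyant unit weight in kN/m^3."""
    theta, phi = math.radians(slope_deg), math.radians(phi_deg)
    # effective normal stress on the slip plane under hydrostatic conditions
    sigma_v = gamma_buoyant * depth_m * math.cos(theta) ** 2
    resisting = cohesion_kpa + (sigma_v - overpressure_kpa) * math.tan(phi)
    driving = gamma_buoyant * depth_m * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Site U1322-like inputs: 200 m depth, 0.45 MPa overpressure, phi = 26 deg, 2 deg slope
fs = infinite_slope_fs(200.0, 450.0, 26.0, 2.0)
```

Setting the overpressure high enough that the effective stress term approaches zero drives FS below 1, which is the pulsed-sedimentation mechanism the abstract explores.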

  16. Identification of the actual state and entity availability forecasting in power engineering using neural-network technologies

    NASA Astrophysics Data System (ADS)

    Protalinsky, O. M.; Shcherbatov, I. A.; Stepanov, P. V.

    2017-11-01

    A growing number of severe accidents in the Russian Federation calls for a system that could prevent emergency situations. In a number of cases the accident rate stems from careless inspections and neglect in developing repair programs; across the country, accident rates are growing because of the so-called "human factor". The problem of identifying the actual state of technological facilities in power engineering from engineering process data, using artificial intelligence methods, has therefore become urgent. The present work considers four model states of the manufacturing equipment of engineering companies: defect, failure, pre-emergency situation, and accident. Defect evaluation uses both data from SCADA and ASEPCR systems and qualitative information (verbal assessments by subject-matter experts, and photo and video survey materials processed with pattern recognition methods). Early identification of defects makes it possible to predict the failure of manufacturing equipment using artificial neural network techniques; in turn, this allows predicted reliability characteristics of engineering facilities to be calculated using methods of reliability theory. Calculation of these parameters provides a real-time estimate of the remaining service life of manufacturing equipment over the whole operation period. The neural network model evaluates the possibility of failure of a piece of equipment consistent with the types of actual defects and their prior causes. The article presents the grounds for the choice of training and testing samples for the developed neural network, evaluates the adequacy of the neural network model, and shows how the model can be used to forecast equipment failure. Simulation experiments were carried out using retrospective samples of actual values from power engineering companies. The efficiency of the developed model for different types of manufacturing equipment has been demonstrated, and further research directions on this subject are proposed.

  17. Computational Simulation of the High Strain Rate Tensile Response of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2002-01-01

    A research program is underway to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Under these types of loading conditions, the material response can be highly strain rate dependent and nonlinear. State variable constitutive equations based on a viscoplasticity approach have been developed to model the deformation of the polymer matrix. The constitutive equations are then combined with a mechanics of materials based micromechanics model which utilizes fiber substructuring to predict the effective mechanical and thermal response of the composite. To verify the analytical model, tensile stress-strain curves are predicted for a representative composite over strain rates ranging from around 1 x 10(exp -5)/sec to approximately 400/sec. The analytical predictions compare favorably to experimentally obtained values both qualitatively and quantitatively. Effective elastic and thermal constants are predicted for another composite, and compared to finite element results.

  18. Stress corrosion crack initiation of Zircaloy-4 cladding tubes in an iodine vapor environment during creep, relaxation, and constant strain rate tests

    NASA Astrophysics Data System (ADS)

    Jezequel, T.; Auzoux, Q.; Le Boulch, D.; Bono, M.; Andrieu, E.; Blanc, C.; Chabretou, V.; Mozzani, N.; Rautenberg, M.

    2018-02-01

    During accidental power transient conditions with Pellet Cladding Interaction (PCI), the synergistic effect of the stress and strain imposed on the cladding by thermal expansion of the fuel, and corrosion by iodine released as a fission product, may lead to cladding failure by Stress Corrosion Cracking (SCC). In this study, internal pressure tests were conducted on unirradiated cold-worked stress-relieved Zircaloy-4 cladding tubes in an iodine vapor environment. The goal was to investigate the influence of loading type (constant pressure tests, constant circumferential strain rate tests, or constant circumferential strain tests) and test temperature (320, 350, or 380 °C) on iodine-induced stress corrosion cracking (I-SCC). The experimental results obtained with different loading types were consistent with each other. The apparent threshold hoop stress for I-SCC was found to be independent of the test temperature. SEM micrographs of the tested samples showed many pits distributed over the inner surface, which tended to coalesce into large pits in which a microcrack could initiate. A model for the time-to-failure of a cladding tube was developed using finite element simulations of the viscoplastic mechanical behavior of the material and a modified Kachanov's damage growth model. The times-to-failure predicted by this model are consistent with the experimental data.

  19. Frequency of pacemaker malfunction associated with monopolar electrosurgery during pulse generator replacement or upgrade surgery.

    PubMed

    Lin, Yun; Melby, Daniel P; Krishnan, Balaji; Adabag, Selcuk; Tholakanahalli, Venkatakrishna; Li, Jian-Ming

    2017-08-01

    The aim of this study was to investigate the frequency of electrosurgery-related pacemaker malfunction. A retrospective study was conducted of electrosurgery-related pacemaker malfunction in consecutive patients undergoing pulse generator (PG) replacement or upgrade at two large hospitals in Minneapolis, MN between January 2011 and January 2014. The occurrence of this malfunction was then studied using the MAUDE database for all four major device vendors. A total of 1398 consecutive patients from the 2 large tertiary referral centers undergoing PG replacement or upgrade surgery were retrospectively studied. Four patients (0.3% of all patients), all with pacemakers from St Jude Medical (2.8%, 4 of 142), had output failure or an inappropriately low pacing rate below 30 bpm during electrosurgery, despite being programmed in an asynchronous mode. During the same period, 1174 pacemaker malfunctions were reported for the same models in the MAUDE database, 37 of which (3.2%) were electrosurgery-related. Twenty-four of these cases (65%) involved output failure or an inappropriately low pacing rate. The distribution of adverse events was loss of pacing (59.5%), reversion to backup pacing (32.4%), inappropriately low pacing rate (5.4%), and ventricular fibrillation (2.7%). The majority (78.5%) occurred during PG replacement at ERI or during upgrade surgery. No electrosurgery-related malfunction was found in the MAUDE database among 862 pacemaker malfunction cases reported for other vendors during the same period. Electrosurgery during PG replacement or upgrade surgery can trigger output failure or an inappropriately low pacing rate in certain models of modern pacemakers. Caution should be taken for pacemaker-dependent patients.

  20. Supplemental Instruction Online: As Effective as the Traditional Face-to-Face Model?

    ERIC Educational Resources Information Center

    Hizer, Suzanne E.; Schultz, P. W.; Bray, Richard

    2017-01-01

    Supplemental Instruction (SI) is a well-recognized model of academic assistance with a history of empirical evidence demonstrating increases in student grades and decreases in failure rates across many higher education institutions. However, as college students become more accustomed to learning in online venues, what is not known is whether an SI…

  1. Projectile Impact Evaluation on Ballistic Gelatin

    DTIC Science & Technology

    2011-06-13

    Briefing fragments (OCR-damaged slide text): gelatin formulation; impact modeling with a damage model [DuBois, Kolling, LSTC]; methods - gelatin calibration (10%/4°C...); FE simulation (LS-DYNA v971) with LS-OPT; high strain rate material properties with damage/failure; Lagrangian, ALE, and EFG formulations; loading [impact].

  2. Aeronautical Engineering. A Continuing Bibliography with Indexes

    DTIC Science & Technology

    1987-09-01

    Index fragments (OCR-damaged): aircraft equipped with turbine engines; rate adaptive control with applications to lateral ...; statistics on aircraft gas turbine engine rotor failures; unified model for the calculation of blade ...; test data for a composite prop-fan model (A87-35669); gas turbine combustor and engine augmentor tube; general aviation aircraft.

  3. Modeling the roles of damage accumulation and mechanical healing on rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2014-05-01

    The abrupt release of rainfall-induced shallow landslides is preceded by local failures that may abruptly coalesce and form a continuous failure plane within a hillslope. The mechanical status of a hillslope reflects a competition between the severity of local damage accumulated during prior rainfall events and the rate of mechanical healing (i.e., regaining of strength) by closure of micro-cracks, regrowth of roots, etc. The interplay of these processes sets the initial conditions for landslide modeling and shapes potential failure patterns during future rainfall events. We incorporated these competing mechanical processes into a hydro-mechanical landslide triggering model subjected to a sequence of rainfall scenarios. The model employs the Fiber Bundle Model (FBM), with bundles of fibers of prescribed strength thresholds linking adjacent soil columns and soil to bedrock. Prior damage was represented by a fraction of fibers broken during previous rainfall events, and the healing of broken fibers was described by strength-regaining models for soil and roots with different characteristic time scales. Results show that prior damage and healing introduce a highly nonlinear response to landslide triggering. For small prior damage, mechanical bonds at the soil-bedrock interface may fail early in the next rainfall event but induce only small perturbations on lateral bonds without triggering a landslide. For more severe damage that weakens lateral bonds, excess load due to failure at the soil-bedrock interface accumulates in downslope soil columns, resulting in early soil failure with patterns strongly correlated with the prior damage distribution. Increasing prior damage over the hillslope decreases the volume of the first landslide and prolongs the time needed to trigger the second landslide, due to mechanical relaxation of the system. The mechanical healing of fibers diminishes the effects of prior damage on the time of failure and shortens the waiting time between the first and second landslides. These findings highlight the need for better-defined initial conditions and the shortcomings of assuming pristine hillslopes.
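The load-redistribution mechanism of the Fiber Bundle Model can be illustrated with a toy equal-load-sharing version. This is only a sketch: the paper's model couples FBM bonds to hillslope hydro-mechanics, whereas here "prior damage" is mimicked simply by pre-breaking the weakest fibers.

```python
import random

def fbm_failure_load(n_fibers=1000, prior_damage=0.0, seed=1):
    """Quasi-statically load an equal-load-sharing fiber bundle with
    uniform random strength thresholds; return the maximum sustained
    load per original fiber (the critical load)."""
    rng = random.Random(seed)
    thresholds = sorted(rng.random() for _ in range(n_fibers))
    surviving = thresholds[int(prior_damage * n_fibers):]  # weakest already broken
    max_load = 0.0
    for k, th in enumerate(surviving):
        # just before the k-th surviving fiber breaks, the bundle carries
        # its threshold times the number of fibers still intact
        max_load = max(max_load, th * (len(surviving) - k))
    return max_load / n_fibers

intact = fbm_failure_load()
damaged = fbm_failure_load(prior_damage=0.7)
```

For uniformly distributed thresholds the intact bundle's critical load per fiber is close to the classical value 1/4; sufficiently severe prior damage lowers it, echoing the weakening effect described above.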

  4. How and why of orthodontic bond failures: An in vivo study

    PubMed Central

    Vijayakumar, R. K.; Jagadeep, Raju; Ahamed, Fayyaz; Kanna, Aprose; Suresh, K.

    2014-01-01

    Introduction: The bonding of orthodontic brackets and their failure rates with both direct and indirect procedures are well documented in the orthodontic literature. Over the years, different adhesive materials and various indirect bonding transfer procedures have been compared and evaluated for bond failure rates. The aim of our study was to highlight the use of a simple, inexpensive, easily manipulated single thermoplastic transfer tray and of a single light-cured adhesive, and to evaluate the bond failure rates in clinical situations. Materials and Methods: A total of 30 patients were randomly divided into two groups (Group A and Group B). A split-mouth study design was used for both groups so that they were distributed equally without bias. After initial prophylaxis, both procedures were performed as per the manufacturer's instructions. All patients were initially motivated and were reviewed for bond failure rates for 6 months. Results: Bond failure rates were assessed for the direct and indirect procedures overall, for the anterior and posterior arches, and for each individual tooth. The Z-test was used to statistically analyze the normal distribution of the sample in a split-mouth study. The results of the two groups were compared, and the P value was calculated using the Z-proportion test to assess the significance of bond failure. Conclusion: Overall bond failure was higher for direct bonding. Anterior bracket failures were more frequent with direct bonding, whereas the indirect procedure showed more posterior bracket failures. For individual teeth, mandibular incisor and premolar brackets showed the most failures, followed by maxillary premolars and canines. PMID:25210392

  5. Does the United States economy affect heart failure readmissions? A single metropolitan center analysis.

    PubMed

    Thompson, Keith A; Morrissey, Ryan P; Phan, Anita; Schwarz, Ernst R

    2012-08-01

    To determine the effects of the US economy on heart failure hospitalization rates. We hypothesized that the recession was associated with worsening unemployment, loss of private insurance and prescription medication benefits, and medication nonadherence, and ultimately with increased rates of hospitalization for heart failure. We compared hospitalization rates at a large, single, academic medical center from July 1, 2006 to February 28, 2007, a time of economic stability, and from July 1, 2008 to February 28, 2009, a time of economic recession in the United States. Significantly fewer patients had private medical insurance during the economic recession than during the control period (36.5% vs 46%; P = 0.04). Despite this, there were no differences in heart failure hospitalization or readmission rates, length of hospitalization, need for admission to an intensive care unit, in-hospital mortality, or use of guideline-recommended heart failure medications between the 2 study periods. We conclude that despite significant effects on medical insurance coverage, rates of heart failure hospitalization at our institution were not significantly affected by the recession. Additional large-scale population-based research is needed to better understand the effects of fluctuations in the US economy on heart failure hospitalization rates. © 2012 Wiley Periodicals, Inc.

  6. Surveillance of in vivo resistance of Plasmodium falciparum to antimalarial drugs from 1992 to 1999 in Malabo (Equatorial Guinea).

    PubMed

    Roche, Jesús; Guerra-Neira, Ana; Raso, José; Benito, Agustîn

    2003-05-01

    From 1992 to 1999, we assessed the therapeutic efficacy of three malaria treatment regimens (chloroquine 25 mg/kg over three days, pyrimethamine/sulfadoxine 1.25/25 mg/kg in one dose, and quinine 25-30 mg/kg daily in three oral doses over a four-, five-, or seven-day period) in 1,189 children under age 10 at Malabo Regional Hospital in Equatorial Guinea. Of those children, 958 were followed up clinically and parasitologically for 14 days. With chloroquine, the failure rate varied from 55% in 1996 to 40% in 1999; the early treatment failure rate increased progressively over the years, from 6% in 1992 to 30% in 1999. With pyrimethamine/sulfadoxine, the failure rate varied from 0% in 1996 to 16% in 1995. The short quinine regimens used in 1992 and 1993 (four and five days, respectively) resulted in significantly higher failure rates (19% and 22%, respectively) than the seven-day regimen (3-5.5%). We conclude that: a) failure rates for chloroquine are in the change period (>25%), and urgent action is needed; b) pyrimethamine/sulfadoxine failure rates are in the alert period (6-15%), and surveillance must be continued; and c) quinine failure rates are in the grace period (<6%), so quinine can be recommended.

  7. Heart Rate at Hospital Discharge in Patients With Heart Failure Is Associated With Mortality and Rehospitalization

    PubMed Central

    Laskey, Warren K.; Alomari, Ihab; Cox, Margueritte; Schulte, Phillip J.; Zhao, Xin; Hernandez, Adrian F.; Heidenreich, Paul A.; Eapen, Zubin J.; Yancy, Clyde; Bhatt, Deepak L.; Fonarow, Gregg C.

    2015-01-01

    Background Whether heart rate upon discharge following hospitalization for heart failure is associated with long‐term adverse outcomes and whether this association differs between patients with sinus rhythm (SR) and atrial fibrillation (AF) have not been well studied. Methods and Results We conducted a retrospective cohort study from clinical registry data linked to Medicare claims for 46 217 patients participating in Get With The Guidelines®–Heart Failure. Cox proportional‐hazards models were used to estimate the association between discharge heart rate and all‐cause mortality, all‐cause readmission, and the composite outcome of mortality/readmission through 1 year. For SR and AF patients with heart rate ≥75, the association between heart rate and mortality (expressed as hazard ratio [HR] per 10 beats‐per‐minute increment) was significant at 0 to 30 days (SR: HR 1.30, 95% CI 1.22 to 1.39; AF: HR 1.23, 95% CI 1.16 to 1.29) and 31 to 365 days (SR: HR 1.15, 95% CI 1.12 to 1.20; AF: HR 1.05, 95% CI 1.01 to 1.08). Similar associations between heart rate and all‐cause readmission and the composite outcome were obtained for SR and AF patients from 0 to 30 days but only in the composite outcome for SR patients over the longer term. The HR from 0 to 30 days exceeded that from 31 to 365 days for both SR and AF patients. At heart rates <75, an association was significant for mortality only for both SR and AF patients. Conclusions Among older patients hospitalized with heart failure, higher discharge heart rate was associated with increased risks of death and rehospitalization, with higher risk in the first 30 days and for SR compared with AF. PMID:25904590
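The hazard ratios above are expressed per 10-bpm increment; rescaling such a Cox-model ratio to other increments is purely multiplicative, as this small sketch (using the reported 0-30-day SR estimate, HR 1.30) shows:

```python
import math

hr_per_10 = 1.30                    # reported 0-30-day hazard ratio, SR patients
beta = math.log(hr_per_10) / 10     # implied Cox log-hazard coefficient per 1 bpm
hr_per_20 = math.exp(20 * beta)     # a 20-bpm increment compounds multiplicatively
```

That is, a patient discharged 20 bpm higher carries a hazard of 1.30 squared (about 1.69) relative to baseline under this model, not 2 x 1.30.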

  8. Determining 30-day readmission risk for heart failure patients: the Readmission After Heart Failure scale

    PubMed Central

    Chamberlain, Ronald S; Sond, Jaswinder; Mahendraraj, Krishnaraj; Lau, Christine SM; Siracuse, Brianna L

    2018-01-01

    Background Chronic heart failure (CHF), which affects >5 million Americans, accounts for >1 million hospitalizations annually. As a part of the Hospital Readmission Reduction Program, the Affordable Care Act requires that the Centers for Medicare and Medicaid Services reduce payments to hospitals with excess readmissions. This study sought to develop a scale that reliably predicts readmission rates among patients with CHF. Methods The State Inpatient Database (2006–2011) was utilized, and discharge data including demographic and clinical characteristics on 642,448 patients with CHF from California and New York (derivation cohort) and 365,359 patients with CHF from Florida and Washington (validation cohort) were extracted. The Readmission After Heart Failure (RAHF) scale was developed to predict readmission risk. Results The 30-day readmission rates were 9.42% and 9.17% (derivation and validation cohorts, respectively). Age <65 years, male gender, first income quartile, African American race, race other than African American or Caucasian, Medicare, Medicaid, self-pay/no insurance, drug abuse, renal failure, chronic pulmonary disorder, diabetes, depression, and fluid and electrolyte disorder were associated with higher readmission risk after hospitalization for CHF. The RAHF scale was created and explained 95% of the readmission variability in the validation cohort. The RAHF scale was then used to define the following three levels of risk for readmission: low (RAHF score <12; 7.58% readmission rate), moderate (RAHF score 12–15; 9.78% readmission rate), and high (RAHF score >15; 12.04% readmission rate). The relative risk of readmission was 1.67 for the high-risk group compared with the low-risk group. Conclusion The RAHF scale reliably predicts a patient’s 30-day CHF readmission risk based on demographic and clinical factors present upon initial admission. 
By risk-stratifying patients, using models such as the RAHF scale, strategies tailored to each patient can be implemented to improve patient outcomes and reduce health care costs. PMID:29670391
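The reported cut-points can be turned directly into a risk-stratification helper. This sketch uses only the thresholds and rates quoted in the abstract; the published point weights needed to compute a RAHF score itself are not given here.

```python
def rahf_risk_group(score):
    """Map a RAHF score to the risk strata reported in the abstract."""
    if score < 12:
        return "low"       # 7.58% observed 30-day readmission rate
    elif score <= 15:
        return "moderate"  # 9.78%
    return "high"          # 12.04%

groups = [rahf_risk_group(s) for s in (10, 13, 18)]
```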

  9. A study of Mariner 10 flight experiences and some flight piece part failure rate computations

    NASA Technical Reports Server (NTRS)

    Paul, F. A.

    1976-01-01

    The problems and failures encountered in the Mariner 10 flight are discussed, and the data available through a quantitative accounting of all electronic piece parts on the spacecraft are summarized. Computed failure rates for the electronic piece parts are also presented. It is intended that these computed data be used in the continued updating of the failure rate base used for trade-off studies and predictions for future JPL space missions.
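The piece-part failure rate computation referred to here is, in its simplest form, failures divided by cumulative part-hours, often quoted in FITs (failures per 10^9 hours). A sketch with hypothetical counts, not Mariner 10 data:

```python
def failure_rate_fits(n_failures, n_parts, hours_per_part):
    """Point-estimate failure rate in FITs (failures per 1e9 part-hours)."""
    return n_failures / (n_parts * hours_per_part) * 1e9

# hypothetical: 2 failures over 5000 parts flown for 12,000 hours each
rate = failure_rate_fits(n_failures=2, n_parts=5000, hours_per_part=12000)
```

Accumulating such rates per part type is what lets a flight experience base feed trade-off studies for later missions.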

  10. Heart Rate Dynamics During A Treadmill Cardiopulmonary Exercise Test in Optimized Beta-Blocked Heart Failure Patients

    PubMed Central

    Carvalho, Vitor Oliveira; Guimarães, Guilherme Veiga; Ciolac, Emmanuel Gomes; Bocchi, Edimar Alcides

    2008-01-01

    BACKGROUND Calculating the maximum heart rate for age is one method to characterize the maximum effort of an individual. Although this method is commonly used, little is known about heart rate dynamics in optimized beta-blocked heart failure patients. AIM The aim of this study was to evaluate heart rate dynamics (basal, peak and % heart rate increase) in optimized beta-blocked heart failure patients compared to sedentary, normal individuals (controls) during a treadmill cardiopulmonary exercise test. METHODS Twenty-five heart failure patients (49±11 years, 76% male), with an average LVEF of 30±7%, and fourteen controls were included in the study. Patients with atrial fibrillation, a pacemaker or noncardiovascular functional limitations or whose drug therapy was not optimized were excluded. Optimization was considered to be 50 mg/day or more of carvedilol, with a basal heart rate between 50 to 60 bpm that was maintained for 3 months. RESULTS Basal heart rate was lower in heart failure patients (57±3 bpm) compared to controls (89±14 bpm; p<0.0001). Similarly, the peak heart rate (% maximum predicted for age) was lower in HF patients (65.4±11.1%) compared to controls (98.6±2.2; p<0.0001). Maximum respiratory exchange ratio did not differ between the groups (1.2±0.5 for controls and 1.15±1 for heart failure patients; p=0.42). All controls reached the maximum heart rate for their age, while no patients in the heart failure group reached the maximum. Moreover, the % increase of heart rate from rest to peak exercise between heart failure (48±9%) and control (53±8%) was not different (p=0.157). CONCLUSION No patient in the heart failure group reached the maximum heart rate for their age during a treadmill cardiopulmonary exercise test, despite the fact that the percentage increase of heart rate was similar to sedentary normal subjects. 
A heart rate increase in optimized beta-blocked heart failure patients during cardiopulmonary exercise test over 65% of the maximum age-adjusted value should be considered an effort near the maximum. This information may be useful in rehabilitation programs and ischemic tests, although further studies are required. PMID:18719758

  11. Failure rate of single-unit restorations on posterior vital teeth: A systematic review.

    PubMed

    Afrashtehfar, Kelvin I; Emami, Elham; Ahmadi, Motahareh; Eilayyan, Owis; Abi-Nader, Samer; Tamimi, Faleh

    2017-03-01

    No knowledge synthesis exists concerning when to use a direct restoration versus a complete-coverage indirect restoration in posterior vital teeth. The purpose of this systematic review was to identify the failure rate of conventional single-unit tooth-supported restorations in posterior permanent vital teeth as a function of remaining tooth structure. Four databases were searched electronically, and 8 selected journals were searched manually up to February 2015. Clinical studies of tooth-supported single-unit restorative treatments with a mean follow-up period of at least 3 years were selected. The outcome measured was the restorations' clinical or radiological failure. Following the Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines, the Cochrane Collaboration procedures for randomized control trials, the Strengthening the Reporting of Observational Studies in Epidemiology criteria for observational studies, 2 reviewers independently applied eligibility criteria, extracted data, and assessed the quality of the evidence of the included studies using the American Association of Critical Care Nurses' system. The weighted-mean group 5-year failure rates of the restorations were reported according to the type of treatment and remaining tooth structure. A metaregression model was used to assess the correlation between the number of remaining tooth walls and the weighted-mean 5-year failure rates. Five randomized controlled trials and 9 observational studies were included and their quality ranged from low to moderate. These studies included a total of 358 crowns, 4804 composite resins, and 303582 amalgams. Data obtained from the randomized controlled trials showed that, regardless of the amount of remaining tooth structure, amalgams presented better outcomes than composite resins. Furthermore, in teeth with fewer than 2 remaining walls, high-quality observational studies demonstrated that crowns were better than amalgams. 
A clear inverse correlation was found between the amount of remaining tooth structure and restoration failure. Insufficient high-quality data are available to support one restorative treatment or material over another for the restoration of vital posterior teeth. However, the current evidence suggests that the failure rates of treatments may depend on the amount of remaining tooth structure and types of treatment. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  12. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
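The "fractionally integrated" part of an ARFIMA model filters the series with the binomial expansion of (1 - B)^d. A short sketch of those weights and the filter follows; d = 1.0 in the example is chosen only because it reduces to an ordinary first difference, making the sketch easy to check, and is not an estimate from the HRV data.

```python
import numpy as np

def fracdiff_weights(d, n):
    """First n coefficients of the (1 - B)^d filter:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply fractional differencing: y_t = sum_{k<=t} w_k * x_{t-k}."""
    w = fracdiff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

y = fracdiff(np.arange(4.0), d=1.0)  # d = 1 gives an ordinary first difference
```

With a fractional 0 < d < 0.5 the weights decay slowly, which is how the long memory of HRV series is captured before a GARCH model handles the conditional variance.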

  13. Reliability of a k-out-of-n:G System with Identical Repairable Elements

    NASA Astrophysics Data System (ADS)

    Sharifi, M.; Nia, A. Torabi; Shafie, P.; Norozi-Zare, F.; Sabet-Ghadam, A.

    2009-09-01

    k-out-of-n models are among the most useful models for calculating the reliability of complex systems such as electrical and mechanical devices. In this paper, we consider a k-out-of-n:G system with identical elements. The failure rate of each element is constant. The elements are repairable, and the repair rate of each element is constant. The system works when at least k elements work. The system of equations is established and solved for parameters such as the MTTF in a real-time situation. It appears that this model can handle more realistic situations.
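The repairable k-out-of-n:G system described here is a birth-death Markov chain on the number of failed elements. The sketch below computes the MTTF by solving the first-step equations, assuming each failed element is repaired independently (the paper's exact repair discipline is not specified):

```python
def mttf_k_out_of_n(k, n, lam, mu):
    """Mean time to failure of a k-out-of-n:G system with constant
    per-element failure rate lam and per-element repair rate mu.
    States count failed elements; the system fails at n - k + 1 failures."""
    m = n - k + 1                      # working states: 0 .. m-1 failed
    # First-step equations: (fail + rep) T_i - fail T_{i+1} - rep T_{i-1} = 1,
    # with T_m = 0 at the absorbing (system-failed) state.
    A = [[0.0] * m for _ in range(m)]
    b = [1.0] * m
    for i in range(m):
        fail = (n - i) * lam           # transition i -> i+1
        rep = i * mu                   # transition i -> i-1
        A[i][i] = fail + rep
        if i + 1 < m:
            A[i][i + 1] = -fail
        if i - 1 >= 0:
            A[i][i - 1] = -rep
    # Gauss-Jordan elimination with partial pivoting (small dense system)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                for cc in range(c, m):
                    A[r][cc] -= f * A[c][cc]
                b[r] -= f * b[c]
    return b[0] / A[0][0]              # MTTF starting from 0 failed elements

# Sanity check: with no repair, a 2-out-of-3 system gives 1/(3λ) + 1/(2λ)
print(mttf_k_out_of_n(2, 3, 1.0, 0.0))  # ≈ 0.8333
```

Adding repair (mu > 0) raises the MTTF, since the chain can step back toward the fully working state before absorption.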

  14. Application of Single Crystal Failure Criteria: Theory and Turbine Blade Case Study

    NASA Technical Reports Server (NTRS)

    Sayyah, Tarek; Swanson, Gregory R.; Schonberg, W. P.

    1999-01-01

    The orientation of the single crystal material within a structural component is known to affect the strength and life of the part. The first stage blade of the High Pressure Fuel Turbopump (HPFTP)/Alternative Turbopump Development (ATD) of the Space Shuttle Main Engine (SSME) was used to study the effects of secondary axis orientation angles on the failure rate of the blade. A new failure criterion was developed based on normal and shear strains on the primary crystallographic planes. The criterion was verified using low cycle fatigue (LCF) specimen data and a finite element model of the test specimens. The criterion was then used to study ATD/HPFTP first stage blade failure events. A detailed ANSYS finite element model of the blade was used to calculate the failure parameter for the different crystallographic orientations. A total of 297 cases were run to cover a wide range of acceptable orientations within the blade. Those orientations are related to the base crystallographic coordinate system that was created in the ANSYS finite element model. Contour plots of the criterion as a function of orientation for the blade tip and attachment were obtained. Results of the analysis revealed a 40% increase in the failure parameter due to changing of the primary and secondary axes of material orientations. A comparison between failure criterion predictions and actual engine test data was then conducted. The engine test data comes from two ATD/HPFTP builds (units F3-4B and F6-5D), which were ground tested on the SSME at the Stennis Space Center in Mississippi. Both units experienced cracking of the airfoil tips in multiple blades, but only a few cracks grew all the way across the wall of the hollow core airfoil.

  15. Submarine landslides triggered by destabilization of high-saturation hydrate anomalies

    NASA Astrophysics Data System (ADS)

    Handwerger, Alexander L.; Rempel, Alan W.; Skarbek, Rob M.

    2017-07-01

    Submarine landslides occur along continental margins at depths that often intersect the gas hydrate stability zone, prompting suggestions that slope stability may be affected by perturbations that arise from changes in hydrate stability. Here we develop a numerical model to identify the conditions under which the destabilization of hydrates results in slope failure. Specifically, we focus on high-saturation hydrate anomalies at fine-grained to coarse-grained stratigraphic boundaries that can transmit bridging stresses that decrease the effective stress at sediment contacts and disrupt normal sediment consolidation. We evaluate slope stability before and after hydrate destabilization. Hydrate anomalies act to significantly increase the overall slope stability due to large increases in effective cohesion. However, when hydrate anomalies destabilize there is a loss of cohesion and increase in effective stress that causes the sediment grains to rapidly consolidate and generate pore pressures that can either trigger immediate slope failure or weaken the surrounding sediment until the pore pressure diffuses away. In cases where failure does not occur, the sediment can remain weakened for months. In cases where failure does occur, we quantify landslide dynamics using a rate and state frictional model and find that landslides can display either slow or dynamic (i.e., catastrophic) motion depending on the rate-dependent properties, size of the stress perturbation, and the size of the slip patch relative to a critical nucleation length scale. Our results illustrate the fundamental mechanisms through which the destabilization of gas hydrates can pose a significant geohazard.

  16. Nonequilibrium shock-heated nitrogen flows using a rovibrational state-to-state method

    NASA Astrophysics Data System (ADS)

    Panesi, M.; Munafò, A.; Magin, T. E.; Jaffe, R. L.

    2014-07-01

    A rovibrational collisional model is developed to study the internal energy excitation and dissociation processes behind a strong shock wave in a nitrogen flow. The reaction rate coefficients are obtained from the ab initio database of the NASA Ames Research Center. The master equation is coupled with a one-dimensional flow solver to study the nonequilibrium phenomena encountered in the gas during a hyperbolic reentry into Earth's atmosphere. The analysis of the populations of the rovibrational levels demonstrates how rotational and vibrational relaxation proceed at the same rate. This contrasts with the common misconception that translational and rotational relaxation occur concurrently. A significant part of the relaxation process occurs in non-quasi-steady-state conditions. Exchange processes are found to have a significant impact on the relaxation of the gas, while predissociation has a negligible effect. The results obtained by means of the full rovibrational collisional model are used to assess the validity of reduced order models (vibrational collisional and multitemperature) which are based on the same kinetic database. It is found that thermalization and dissociation are drastically overestimated by the reduced order models. The reasons of the failure differ in the two cases. In the vibrational collisional model the overestimation of the dissociation is a consequence of the assumption of equilibrium between the rotational energy and the translational energy. The multitemperature model fails to predict the correct thermochemical relaxation due to the failure of the quasi-steady-state assumption, used to derive the phenomenological rate coefficient for dissociation.

  17. Bruxism and dental implant failures: a multilevel mixed effects parametric survival analysis approach.

    PubMed

    Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A

    2016-11-01

    Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affected the implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13·0% (24/185) for bruxers and 4·6% (155/3364) for non-bruxers (P < 0·001). The statistical model showed that bruxism was a statistically significant risk factor for implant failure (HR 3·396; 95% CI 1·314, 8·777; P = 0·012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.
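The raw failure proportions quoted in the abstract, and the crude risk ratio they imply, can be reproduced directly. Note that the published HR of 3.396 is the adjusted estimate from the multilevel survival model, so it differs from this crude ratio:

```python
# Counts quoted in the abstract
bruxer_failures, bruxer_total = 24, 185
non_failures, non_total = 155, 3364

p_bruxer = bruxer_failures / bruxer_total
p_non = non_failures / non_total
crude_rr = p_bruxer / p_non

print(f"bruxers: {100 * p_bruxer:.1f}%")      # 13.0%
print(f"non-bruxers: {100 * p_non:.1f}%")     # 4.6%
print(f"crude risk ratio: {crude_rr:.2f}")    # 2.82
```

The gap between the crude ratio (about 2.8) and the adjusted HR (3.4) reflects the confounders, clustering of implants within patients, and time-to-event information that the survival model accounts for.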

  18. Dynamic mechanical characterization of aluminum: analysis of strain-rate-dependent behavior

    NASA Astrophysics Data System (ADS)

    Rahmat, Meysam

    2018-05-01

    A significant number of materials show different mechanical behavior under dynamic loads compared to quasi-static (Salvado et al. in Prog. Mater. Sci. 88:186-231, 2017). Therefore, a comprehensive study of material dynamic behavior is essential for applications in which dynamic loads are dominant (Li et al. in J. Mater. Process. Technol. 255:373-386, 2018). In this work, aluminum 6061-T6, as an example of ductile alloys with numerous applications including in the aerospace industry, has been studied under quasi-static and dynamic tensile tests with strain rates of up to 156 s^{-1}. Dogbone specimens were designed, instrumented and tested with a high speed servo-hydraulic load frame, and the results were validated with the literature. It was observed that at a strain rate of 156 s^{-1} the yield and ultimate strength increased by 31% and 33% from their quasi-static values, respectively. Moreover, the failure elongation and fracture energy per unit volume also increased by 18% and 52%, respectively. A Johnson-Cook model was used to capture the behavior of the material at different strain rates, and a modified version of this model was presented to enhance the capabilities of the original model, especially in predicting material properties close to the failure point. Finally, the fracture surfaces of specimens tested under quasi-static and dynamic loads were compared and conclusions about the differences were drawn.
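The Johnson-Cook model referred to here multiplies a strain-hardening term, a logarithmic rate term, and a thermal-softening term. A minimal sketch with illustrative 6061-T6-style parameter values, not the calibration fitted in the paper:

```python
import math

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_room=293.0, T_melt=925.0):
    """Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps ** n) * (1 + C * math.log(eps_rate / eps_rate0)) \
           * (1 - T_star ** m)

# Illustrative parameters only; flow stress at 5% strain, 156 1/s, room temp
sigma = johnson_cook_stress(0.05, 156.0, 293.0,
                            A=324.0, B=114.0, n=0.42, C=0.002, m=1.34)
print(round(sigma, 1))  # MPa
```

With a small C, the predicted rate strengthening over the quasi-static-to-156 s^{-1} range is only a few percent; the modified model mentioned in the abstract presumably adjusts such terms to better capture behaviour near failure.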

  19. Sterilization failures in Singapore: an examination of ligation techniques and failure rates.

    PubMed

    Cheng, M C; Wong, Y M; Rochat, R W; Ratnam, S S

    1977-04-01

    The University Department of Obstetrics and Gynecology, Kandang Kerbau Hospital in Singapore, initiated a study in early 1974 of failure rates for various methods of sterilization and the factors responsible for the failures. During the period January 1974 to March 1976, 51 cases of first pregnancy following ligation were discovered. Cumulative failure rates at 24 months were 0.34 per 100 women for abdominal sterilization, 1.67 for culdoscopic, 3.12 for vaginal, and 4.49 for laparoscopic procedures. Findings for 35 patients who underwent religation showed that recanalization and the establishment of a fistulous opening caused the majority of failures. Clearly, more effective methods of tubal occlusion in sterilization are needed.

  20. Cost-Effectiveness of Sacubitril-Valsartan Combination Therapy Compared With Enalapril for the Treatment of Heart Failure With Reduced Ejection Fraction.

    PubMed

    King, Jordan B; Shah, Rashmee U; Bress, Adam P; Nelson, Richard E; Bellows, Brandon K

    2016-05-01

    The objective of this study was to determine the cost-effectiveness and cost per quality-adjusted life year (QALY) gained of sacubitril-valsartan relative to enalapril for treatment of heart failure with reduced ejection fraction (HFrEF). Compared with enalapril, combination angiotensin receptor-neprilysin inhibition (ARNI), as is found in sacubitril-valsartan, reduces cardiovascular death and heart failure hospitalization rates in patients with HFrEF. Using a Markov model, costs, effects, and cost-effectiveness were estimated for sacubitril-valsartan and enalapril therapies for the treatment of HFrEF. Patients were 60 years of age at model entry and were modeled over a lifetime (40 years) from a third-party payer perspective. Clinical probabilities were derived predominantly from PARADIGM-HF (Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure). All costs and effects were discounted at a 3% rate annually and are presented in 2015 U.S. dollars. In the base case, sacubitril-valsartan, compared with enalapril, was more costly ($60,391 vs. $21,758) and more effective (6.49 vs. 5.74 QALYs) over a lifetime. The cost-effectiveness of sacubitril-valsartan was highly dependent on duration of treatment, ranging from $249,411 per QALY at 3 years to $50,959 per QALY gained over a lifetime. Sacubitril-valsartan may be a cost-effective treatment option depending on the willingness-to-pay threshold. Future investigations should incorporate real-world evidence with sacubitril-valsartan to further inform decision making. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
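The Markov cost-effectiveness logic can be sketched in a few lines. The transition probabilities, costs, and utilities below are hypothetical placeholders, not PARADIGM-HF-derived inputs:

```python
def discounted_totals(p_death, annual_cost, utility, years, rate=0.03):
    """Lifetime discounted cost and QALYs for one patient in a minimal
    alive/dead Markov cohort model with a constant annual death probability."""
    alive, cost, qalys = 1.0, 0.0, 0.0
    for t in range(years):
        disc = 1.0 / (1.0 + rate) ** t   # 3% annual discounting
        cost += alive * annual_cost * disc
        qalys += alive * utility * disc
        alive *= 1.0 - p_death           # cohort fraction surviving the cycle
    return cost, qalys

# Hypothetical inputs: the new therapy costs more per year but lowers
# annual mortality; 40-year (lifetime) horizon as in the study design.
c_old, q_old = discounted_totals(0.10, 1500.0, 0.70, 40)
c_new, q_new = discounted_totals(0.08, 5000.0, 0.70, 40)
icer = (c_new - c_old) / (q_new - q_old)
print(f"ICER: ${icer:,.0f} per QALY gained")
```

The real model adds states such as heart failure hospitalization and applies treatment-specific event rates, but the incremental cost-effectiveness ratio is computed exactly this way: incremental discounted cost divided by incremental discounted QALYs.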

  1. An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Orient, George Edgar

    The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al. (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.
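The Johnson-Cook failure model expresses the equivalent plastic strain at failure as a product of triaxiality, rate, and temperature terms. A minimal sketch with illustrative coefficients, not the calibrated values from this project:

```python
import math

def jc_failure_strain(triax, eps_rate, T_star, D1, D2, D3, D4, D5,
                      eps_rate0=1.0):
    """Johnson-Cook equivalent plastic strain at failure:
    eps_f = (D1 + D2*exp(D3*triax)) * (1 + D4*ln(rate/rate0)) * (1 + D5*T*),
    where triax is the stress triaxiality and T* the homologous temperature."""
    return (D1 + D2 * math.exp(D3 * triax)) \
        * (1 + D4 * math.log(eps_rate / eps_rate0)) \
        * (1 + D5 * T_star)

# Illustrative coefficients: failure strain drops as triaxiality rises,
# which is the relation the sensitivity study recalibrated.
lo = jc_failure_strain(0.33, 1.0, 0.0, D1=0.07, D2=0.5, D3=-1.5, D4=0.01, D5=0.0)
hi = jc_failure_strain(1.00, 1.0, 0.0, D1=0.07, D2=0.5, D3=-1.5, D4=0.01, D5=0.0)
print(round(lo, 3), round(hi, 3))
```

Uniaxial tension sits near a triaxiality of 1/3; puncture loading reaches higher triaxialities where the failure strain is much lower, which is why the triaxiality dependence dominates the threshold-velocity prediction.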

  2. Investigating Brittle Rock Failure and Associated Seismicity Using Laboratory Experiments and Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Zhao, Qi

    The rock failure process is a complex phenomenon that involves elastic and plastic deformation, microscopic cracking, macroscopic fracturing, and frictional slipping of fractures. Understanding this complex behaviour has been the focus of a significant amount of research. In this work, the combined finite-discrete element method (FDEM) was first employed to study (1) the influence of rock discontinuities on hydraulic fracturing and associated seismicity and (2) the influence of in-situ stress on seismic behaviour. Simulated seismic events were analyzed using post-processing tools including frequency-magnitude distribution (b-value), spatial fractal dimension (D-value), seismic rate, and fracture clustering. These simulations demonstrated that at the local scale, fractures tended to propagate following the rock mass discontinuities, while at the reservoir scale, they developed in the direction parallel to the maximum in-situ stress. Moreover, the seismic signature (i.e., b-value, D-value, and seismic rate) can help to distinguish different phases of the failure process. The FDEM modelling technique and developed analysis tools were then coupled with laboratory experiments to further investigate the different phases of the progressive rock failure process. Firstly, a uniaxial compression experiment, monitored using a time-lapse ultrasonic tomography method, was carried out and reproduced by the numerical model. Using this combination of technologies, the entire deformation and failure processes were studied at macroscopic and microscopic scales. The results not only illustrated the rock failure and seismic behaviours at different stress levels, but also suggested several precursory behaviours indicating the catastrophic failure of the rock. Secondly, rotary shear experiments were conducted using a newly developed rock physics experimental apparatus (ERDmu-T) that was paired with X-ray micro-computed tomography (muCT). 
This combination of technologies has significant advantages over conventional rotary shear experiments since it allowed for the direct observation of how two rough surfaces interact and deform without perturbing the experimental conditions. Some intriguing observations were made pertaining to key areas of the study of fault evolution, making possible a more comprehensive interpretation of the frictional sliding behaviour. Lastly, a carefully calibrated FDEM model that was built based on the rotary experiment was utilized to investigate facets that the experiment was not able to resolve, for example, the time-continuous stress condition and the seismic activity on the shear surface. The model reproduced the mechanical behaviour observed in the laboratory experiment, shedding light on the understanding of fault evolution.
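The frequency-magnitude b-value used throughout this analysis is conventionally estimated with Aki's maximum-likelihood formula. A minimal sketch; the catalogue below is synthetic, not the simulation data:

```python
import math

def b_value(magnitudes, m_c):
    """Aki's maximum-likelihood b-value estimate for events at or above
    the completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    m = [x for x in magnitudes if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - m_c)

# Synthetic event magnitudes with completeness magnitude 1.0
mags = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5, 3.0]
print(round(b_value(mags, 1.0), 3))
```

A falling b-value (relatively more large events) is one of the precursory signatures often tracked as a rock sample approaches catastrophic failure.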

  3. Canonical failure modes of real-time control systems: insights from cognitive theory

    NASA Astrophysics Data System (ADS)

    Wallace, Rodrick

    2016-04-01

    Newly developed necessary-conditions statistical models from cognitive theory are applied to a generalisation of the data-rate theorem for real-time control systems. Rather than degrading gracefully under stress, automatons and man/machine cockpits appear prone to characteristic sudden failure under demanding fog-of-war conditions. Critical dysfunctions span a spectrum of phase-transition analogues, ranging from a ground state of 'all targets are enemies' to more standard data-rate instabilities. Insidious pathologies also appear possible, akin to inattentional blindness consequent on overfocus on an expected pattern. Via no-free-lunch constraints, different equivalence classes of systems, having structure and function determined by 'market pressures' in a large sense, will be inherently unreliable under different but characteristic canonical stress landscapes, suggesting that deliberate induction of failure may often be relatively straightforward. Focusing on two recent military case histories, these results provide a caveat emptor against blind faith in the current path-dependent evolutionary trajectory of automation for critical real-time processes.
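The data-rate theorem invoked here has a compact quantitative core: a noiseless feedback channel can stabilise a linear plant only if its rate exceeds the sum of log2 of the unstable open-loop eigenvalue magnitudes. A minimal sketch of that threshold condition only, not the paper's cognitive extension:

```python
import math

def min_data_rate(eigenvalues):
    """Minimum channel rate (bits per sample) needed to stabilise a
    discrete-time linear plant, per the data-rate theorem: the sum of
    log2|lambda| over unstable (|lambda| > 1) open-loop eigenvalues."""
    return sum(math.log2(abs(l)) for l in eigenvalues if abs(l) > 1.0)

# A plant with unstable eigenvalues 2 and 1.5 (the 0.5 mode is stable
# and contributes nothing) needs more than log2(2) + log2(1.5) bits/sample.
print(round(min_data_rate([2.0, 1.5, 0.5]), 2))
```

Below this rate no coder-controller pair can stabilise the plant, which is the "standard data-rate instability" end of the failure spectrum the abstract describes.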

  4. Interdependence theory of tissue failure: bulk and boundary effects.

    PubMed

    Suma, Daniel; Acun, Aylin; Zorlutuna, Pinar; Vural, Dervis Can

    2018-02-01

    The mortality rate of many complex multicellular organisms increases with age, which suggests that net ageing damage is cumulative, despite remodelling processes. But how exactly do these little mishaps at the cellular level accumulate and spread to become a systemic catastrophe? To address this question we present experiments with synthetic tissues, an analytical model consistent with experiments, and a number of implications that follow from the analytical model. Our theoretical framework describes how shape, curvature and density influence the propagation of failure in a tissue subjected to oxidative damage. We propose that ageing is an emergent property governed by interaction between cells, and that intercellular processes play a role that is at least as important as intracellular ones.

  5. Interdependence theory of tissue failure: bulk and boundary effects

    NASA Astrophysics Data System (ADS)

    Suma, Daniel; Acun, Aylin; Zorlutuna, Pinar; Vural, Dervis Can

    2018-02-01

    The mortality rate of many complex multicellular organisms increases with age, which suggests that net ageing damage is cumulative, despite remodelling processes. But how exactly do these little mishaps at the cellular level accumulate and spread to become a systemic catastrophe? To address this question we present experiments with synthetic tissues, an analytical model consistent with experiments, and a number of implications that follow from the analytical model. Our theoretical framework describes how shape, curvature and density influence the propagation of failure in a tissue subjected to oxidative damage. We propose that ageing is an emergent property governed by interaction between cells, and that intercellular processes play a role that is at least as important as intracellular ones.

  6. Modeling of ductile fragmentation that includes void interactions

    NASA Astrophysics Data System (ADS)

    Meulbroek Fick, J. P.; Ramesh, K. T.; Swaminathan, P. K.

    2015-12-01

    The failure and fragmentation of ductile materials through the nucleation, growth, and coalescence of voids is important to the understanding of key structural materials. In this model-development effort, ductile fragmentation of an elastic-viscoplastic material is studied through a computational approach that couples these key stages of ductile failure with nucleation-site distributions and wave propagation, and predicts fragment spacing within a uniaxial strain approximation. This tool is used to investigate the mechanical and thermal response of OFHC copper at a strain rate of 10^5 s^{-1}. Once the response of the material is understood, the fragmentation of this test material is considered. The average fragment size, as well as the fragment size distribution, is formulated.

  7. Rate of change of heart size before congestive heart failure in dogs with mitral regurgitation.

    PubMed

    Lord, P; Hansson, K; Kvart, C; Häggström, J

    2010-04-01

    The objective of the study was to examine the changes in vertebral heart scale, and left atrial and ventricular dimensions before and at onset of congestive heart failure in cavalier King Charles spaniels with mitral regurgitation. Records and radiographs from 24 cavalier King Charles spaniels with mitral regurgitation were used. Vertebral heart scale (24 dogs), and left atrial dimension and left ventricular end diastolic and end systolic diameters (18 dogs) and their rate of increase were measured at intervals over years to the onset of congestive heart failure. They were plotted against time to onset of congestive heart failure. Dimensions and rates of change of all parameters were highest at onset of congestive heart failure, the difference between observed and chance outcome being highly significant using a two-tailed chi-square test (P<0.001). The left heart chambers increase in size rapidly only in the last year before the onset of congestive heart failure. Increasing left ventricular end systolic dimension is suggestive of myocardial failure before the onset of congestive heart failure. Rate of increase of heart dimensions may be a useful indicator of impending congestive heart failure.

  8. Effects of antithyroid drugs on radioiodine treatment: systematic review and meta-analysis of randomised controlled trials

    PubMed Central

    Briel, Matthias; Christ-Crain, Mirjam; Bonnema, Steen J; Connell, John; Cooper, David S; Bucher, Heiner C; Müller-Brand, Jan; Müller, Beat

    2007-01-01

    Objective To determine the effect of adjunctive antithyroid drugs on the risk of treatment failure, hypothyroidism, and adverse events after radioiodine treatment. Design Meta-analysis. Data sources Electronic databases (Cochrane central register of controlled trials, Medline, Embase) searched to August 2006 and contact with experts. Review methods Three reviewers independently assessed trial eligibility and quality. Pooled relative risks for treatment failure and hypothyroidism after radioiodine treatment with and without adjunctive antithyroid drugs were calculated with a random effects model. Results We identified 14 relevant randomised controlled trials with a total of 1306 participants. Adjunctive antithyroid medication was associated with an increased risk of treatment failure (relative risk 1.28, 95% confidence interval 1.07 to 1.52; P=0.006) and a reduced risk for hypothyroidism (0.68, 0.53 to 0.87; P=0.006) after radioiodine treatment. We found no difference in summary estimates for the different antithyroid drugs or for whether antithyroid drugs were given before or after radioiodine treatment. Conclusions Antithyroid drugs potentially increase rates of failure and reduce rates of hypothyroidism if they are given in the week before or after radioiodine treatment, respectively. PMID:17309884
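The random-effects pooling step can be sketched with the DerSimonian-Laird estimator. The log relative risks and variances below are hypothetical study-level inputs, not the trial data from this review:

```python
import math

def pool_random_effects(log_rr, var):
    """DerSimonian-Laird random-effects pooling of log relative risks.
    Returns the pooled RR and the between-study variance tau^2."""
    w = [1.0 / v for v in var]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
    df = len(log_rr) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # method-of-moments estimate
    w_star = [1.0 / (v + tau2) for v in var]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
    return math.exp(pooled), tau2

# Hypothetical per-trial log RRs and variances, each favouring a modestly
# increased failure risk with adjunctive antithyroid drugs
rr, tau2 = pool_random_effects([0.30, 0.20, 0.25], [0.02, 0.03, 0.025])
print(round(rr, 3), round(tau2, 4))
```

When the heterogeneity statistic Q does not exceed its degrees of freedom (as in this toy example), tau^2 is truncated to zero and the random-effects estimate coincides with the fixed-effect one.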

  9. Disparity between online and offline tests in accelerated aging tests of LED lamps under electric stress.

    PubMed

    Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun

    2016-09-20

    The accelerated aging tests under electric stress for one type of LED lamp are conducted, and the differences between online and offline tests of the degradation of luminous flux are studied in this paper. The transformation of the two test modes is achieved with an adjustable AC voltage stabilized power source. Experimental results show that the exponential fitting of the luminous flux degradation in online tests possesses a higher fitting degree for most lamps, and the degradation rate of the luminous flux by online tests is always lower than that by offline tests. Bayes estimation and Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and then the reliability of the lamps under rated voltage of 220 V is estimated by use of the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5%, at the failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
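The inverse power law extrapolation from accelerated voltages to the 220 V rating can be sketched as follows; the two accelerated lifetimes are hypothetical, not the paper's measurements:

```python
import math

def inverse_power_life(v1, life1, v2, life2, v_use):
    """Inverse power law L(V) = K / V^n: fit the exponent n and constant K
    from two accelerated-stress lifetimes, then extrapolate to v_use."""
    n = math.log(life1 / life2) / math.log(v2 / v1)
    K = life1 * v1 ** n
    return K / v_use ** n

# Hypothetical lifetimes (hours) estimated at accelerated voltages 300 V
# and 340 V, extrapolated to the rated 220 V
print(round(inverse_power_life(300.0, 8000.0, 340.0, 5000.0, 220.0)))
```

In practice the lifetimes at each accelerated voltage are themselves read off a fitted Weibull distribution at a chosen failure probability, which is why the abstract's relative errors depend on that probability level.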

  10. Heart failure and atrial fibrillation: current concepts and controversies.

    PubMed Central

    Van den Berg, M. P.; Tuinenburg, A. E.; Crijns, H. J.; Van Gelder, I. C.; Gosselink, A. T.; Lie, K. I.

    1997-01-01

    Heart failure and atrial fibrillation are very common, particularly in the elderly. Owing to common risk factors both disorders are often present in the same patient. In addition, there is increasing evidence of a complex, reciprocal relation between heart failure and atrial fibrillation. Thus heart failure may cause atrial fibrillation, with electromechanical feedback and neurohumoral activation playing an important mediating role. In addition, atrial fibrillation may promote heart failure; in particular, when there is an uncontrolled ventricular rate, tachycardiomyopathy may develop and thereby heart failure. Eventually, a vicious circle between heart failure and atrial fibrillation may form, in which neurohumoral activation and subtle derangement of rate control are involved. Treatment should aim at unloading of the heart, adequate control of ventricular rate, and correction of neurohumoral activation. Angiotensin converting enzyme inhibitors may help to achieve these goals. Treatment should also include an attempt to restore sinus rhythm through electrical cardioversion, though appropriate timing of cardioversion is difficult. His bundle ablation may be used to achieve adequate rate control in drug refractory cases. PMID:9155607

  11. Application of the health belief model in promotion of self-care in heart failure patients.

    PubMed

    Baghianimoghadam, Mohammad Hosein; Shogafard, Golamreza; Sanati, Hamid Reza; Baghianimoghadam, Behnam; Mazloomy, Seyed Saeed; Askarshahi, Mohsen

    2013-01-01

    Heart failure (HF) is a condition in which a problem with the structure or function of the heart impairs its ability to supply sufficient blood flow to meet the body's needs. In developing countries, around 2% of adults suffer from heart failure, but in people over the age of 65, this rate increases to 6-10%. In Iran, around 3.3% of adults suffer from heart failure. The Health Belief Model (HBM) is one of the most widely used theoretical frameworks in public health. This was a cohort experimental study, in which education as the intervention factor was presented to the case group. 180 heart failure patients were randomly selected from patients who were referred to the Shahid Rajaee center of Heart Research in Tehran and allocated to two groups (90 patients in the case group and 90 in the control group). HBM was used to compare health behaviors. The questionnaire included 69 questions. All data were collected before and 2 months after the intervention. About 38% of participants did not know what heart failure is, and 43% did not know that using salt is not suitable for them. More than 40% of participants had never weighed themselves. There were significant differences between the mean scores of the variables (perceived susceptibility, perceived threat, knowledge, perceived benefits, perceived severity, self-efficacy, perceived barriers, cues to action, self-behavior) in the case and control groups after the intervention that were not significant before it. Based on our study and also many other studies, HBM has the potential to be used as a tool to establish educational programs for individuals and communities. Therefore, this model can be used effectively to prevent different diseases and their complications including heart failure. © 2013 Tehran University of Medical Sciences. All rights reserved.

  12. Postbuckling and Growth of Delaminations in Composite Plates Subjected to Axial Compression

    NASA Technical Reports Server (NTRS)

    Reeder, James R.; Chunchu, Prasad B.; Song, Kyongchan; Ambur, Damodar R.

    2002-01-01

    The postbuckling response and growth of circular delaminations in flat and curved plates are investigated as part of a study to identify the criticality of delamination locations through the laminate thickness. The experimental results from tests on delaminated plates are compared with finite element analysis results generated using shell models. The analytical prediction of delamination growth is obtained by assessing the strain energy release rate results from the finite element model and comparing them to a mixed-mode fracture toughness failure criterion. The analytical results for onset of delamination growth compare well with experimental results generated using a 3-dimensional displacement visualization system. The record of delamination progression measured in this study has resulted in a fully 3-dimensional test case with which progressive failure models can be validated.
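The growth assessment compares computed strain energy release rates with a mixed-mode fracture toughness. The abstract does not name the specific criterion, so the sketch below uses the common Benzeggagh-Kenane form with illustrative toughness values:

```python
def bk_growth_onset(g_i, g_ii, g_ic, g_iic, eta):
    """Benzeggagh-Kenane mixed-mode criterion (a common choice, not
    necessarily the one used in this study): delamination growth is
    predicted when the total energy release rate G_T = G_I + G_II
    reaches the mode-mix-dependent toughness G_c."""
    g_t = g_i + g_ii
    g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta
    return g_t >= g_c, g_c

# Illustrative toughness values for a carbon/epoxy-like laminate (J/m^2)
grows, g_c = bk_growth_onset(g_i=150.0, g_ii=150.0,
                             g_ic=200.0, g_iic=800.0, eta=2.0)
print(grows, round(g_c, 1))
```

Because G_IIc typically far exceeds G_Ic, the same total energy release rate that would grow a mode-I-dominated delamination can be safely below the toughness at a shear-dominated mode mix.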

  13. Does Technical Success of Angioplasty in Dysfunctional Hemodialysis Accesses Correlate with Access Patency?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidhu, Arshdeep; Tan, Kong T.; Noel-Lamy, Maxime

    2016-10-15

    Purpose: To study if <30 % residual stenosis post angioplasty (PTA) correlates with primary access circuit patency, and if any variables predict technical success. Materials and Methods: A prospective observational study was performed between January 2009 and December 2012, wherein 76 patients underwent 154 PTA events in 56 prosthetic grafts (AVG) and 98 autogenous fistulas (AVF). Data collected included patient age, gender, lesion location and laterality, access type and location, number of prior interventions, and transonic flow rates pre- and postintervention. Impact of technical outcome on access patency was assessed. Univariate logistic regression was used to assess the impact of variables on technical success with significant factors assessed with a multiple variable model. Results: Technical success rates of PTA in AVFs and AVGs were 79.6 and 76.7 %, respectively. Technical failures of PTA were associated with an increased risk of patency loss among circuits with AVFs (p < 0.05), but not with AVGs (p = 0.7). In AVFs, primary access patency rates between technical successes and failures at three and 6 months were 74.4 versus 61.9 % (p = 0.3) and 53.8 versus 23.8 % (p < 0.05), respectively. In AVGs, primary access patency rates between technical successes and failures at three and six months were 72.1 versus 53.9 % (p = 0.5) and 33.6 versus 38.5 % (p = 0.8), respectively. Transonic flow rates did not significantly differ among technically successful or failed outcomes at one or three months. Conclusion: Technical failures of PTA had a significant impact on access patency among AVFs with a trend toward poorer access patency within AVGs.

  14. Does Technical Success of Angioplasty in Dysfunctional Hemodialysis Accesses Correlate with Access Patency?

    PubMed

    Sidhu, Arshdeep; Tan, Kong T; Noel-Lamy, Maxime; Simons, Martin E; Rajan, Dheeraj K

    2016-10-01

    To study if <30 % residual stenosis post angioplasty (PTA) correlates with primary access circuit patency, and if any variables predict technical success. A prospective observational study was performed between January 2009 and December 2012, wherein 76 patients underwent 154 PTA events in 56 prosthetic grafts (AVG) and 98 autogenous fistulas (AVF). Data collected included patient age, gender, lesion location and laterality, access type and location, number of prior interventions, and transonic flow rates pre- and postintervention. Impact of technical outcome on access patency was assessed. Univariate logistic regression was used to assess the impact of variables on technical success, with significant factors assessed in a multiple-variable model. Technical success rates of PTA in AVFs and AVGs were 79.6 and 76.7 %, respectively. Technical failures of PTA were associated with an increased risk of patency loss among circuits with AVFs (p < 0.05), but not with AVGs (p = 0.7). In AVFs, primary access patency rates between technical successes and failures at three and six months were 74.4 versus 61.9 % (p = 0.3) and 53.8 versus 23.8 % (p < 0.05), respectively. In AVGs, primary access patency rates between technical successes and failures at three and six months were 72.1 versus 53.9 % (p = 0.5) and 33.6 versus 38.5 % (p = 0.8), respectively. Transonic flow rates did not significantly differ between technically successful and failed outcomes at one or three months. Technical failures of PTA had a significant impact on access patency among AVFs, with a trend toward poorer access patency within AVGs.
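The univariate logistic-regression step described above can be sketched in a few lines; the toy data and the single predictor below are illustrative assumptions, not the study's data.

```python
import math

def fit_logistic(x, y, lr=0.1, iters=5000):
    """Minimal univariate logistic regression (intercept b0, slope b1)
    fitted by gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Illustrative (not study) data: standardized predictor vs. technical success
x = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(x, y)
p_hi = 1.0 / (1.0 + math.exp(-(b0 + b1 * 2.0)))  # fitted success probability at x = 2
```

In practice each candidate variable would be fit this way, with significant predictors carried into a multiple-variable model.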

  15. Verification of the Multi-Axial, Temperature and Time Dependent (MATT) Failure Criterion

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Macon, David J.

    2005-01-01

    An extensive test and analytical effort has been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to characterize the failure behavior of two epoxy adhesives (TIGA 321 and EA946). As part of this effort, a general failure model, the "Multi-Axial, Temperature, and Time Dependent" or MATT failure criterion, was developed. In the initial development of this failure criterion, tests were conducted to provide validation of the theory under a wide range of test conditions. The purpose of this paper is to present additional verification of the MATT failure criterion under new loading conditions for the adhesives TIGA 321 and EA946. In many cases, the loading conditions involve an extrapolation from the conditions under which the material models were originally developed. Testing was conducted using three loading conditions: multi-axial tension, torsional shear, and non-uniform tension in a bondline condition. Tests were conducted at constant and cyclic loading rates ranging over four orders of magnitude. Tests were conducted under environmental conditions of primary interest to the RSRM program. The temperature range was not extreme, but the loading rates were extreme (varying by four orders of magnitude). It should be noted that the testing was conducted at temperatures below the glass transition temperature of the TIGA 321 adhesive. However, for EA946, the testing was conducted at temperatures that bracketed the glass transition temperature.

  16. Irreversible entropy model for damage diagnosis in resistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuadras, Angel, E-mail: angel.cuadras@upc.edu; Crisóstomo, Javier; Ovejas, Victoria J.

    2015-10-28

    We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive in virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, well above the 0.25 W nominal power, in order to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
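One way to infer the entropy generated by Joule heating from a power and temperature record, as described above, is to integrate S_irr ≈ ∫ P(t)/T(t) dt numerically; the trace below is an invented illustration, not the paper's measurements.

```python
def entropy_generated(t, power, temp_k):
    """Trapezoidal estimate of irreversible entropy (J/K) generated by
    dissipating electrical power P(t) into heat at temperature T(t) (kelvin)."""
    s = 0.0
    for i in range(1, len(t)):
        f0 = power[i - 1] / temp_k[i - 1]
        f1 = power[i] / temp_k[i]
        s += 0.5 * (f0 + f1) * (t[i] - t[i - 1])
    return s

# Illustrative trace: 6 W dissipated while the resistor heats from 300 K to 420 K
t = [0.0, 15.0, 30.0, 45.0, 60.0]        # s
P = [6.0, 6.0, 6.0, 6.0, 6.0]            # W
T = [300.0, 340.0, 380.0, 405.0, 420.0]  # K
dS = entropy_generated(t, P, T)          # strictly positive, per the second law
```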

  17. Construct validity of the Heart Failure Screening Tool (Heart-FaST) to identify heart failure patients at risk of poor self-care: Rasch analysis.

    PubMed

    Reynolds, Nicholas A; Ski, Chantal F; McEvedy, Samantha M; Thompson, David R; Cameron, Jan

    2018-02-14

    The aim of this study was to psychometrically evaluate the Heart Failure Screening Tool (Heart-FaST) via: (1) examination of internal construct validity; (2) testing of scale function in accordance with design; and (3) recommendation of changes, where items are not well adjusted, to improve its psychometric credentials. Self-care is vital to the management of heart failure. The Heart-FaST may provide a prospective assessment of risk regarding the likelihood that patients with heart failure will engage in self-care. Psychometric validation of the Heart-FaST was performed using Rasch analysis. The Heart-FaST was administered to 135 patients (median age = 68, IQR = 59-78 years; 105 males) enrolled in a multidisciplinary heart failure management program. The Heart-FaST is a nurse-administered tool for screening patients with HF at risk of poor self-care. A Rasch analysis of responses was conducted, testing the data against Rasch model expectations, including whether items serve as unbiased, non-redundant indicators of risk that measure a single construct, and whether rating scales operate as intended. The results showed that the data met Rasch model expectations after rescoring or deleting items due to poor discrimination, disordered thresholds, differential item functioning, or response dependence. There was no evidence of multidimensionality, which supports the use of total scores from the Heart-FaST as indicators of risk. Aggregate scores from this modified screening tool rank heart failure patients according to their "risk of poor self-care", demonstrating that the Heart-FaST items constitute a meaningful scale to identify heart failure patients at risk of poor engagement in heart failure self-care. © 2018 John Wiley & Sons Ltd.
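The dichotomous Rasch model underlying this kind of analysis has a simple closed form: the probability of endorsing an item depends only on the difference between person location θ and item difficulty δ.

```python
import math

def rasch_p(theta, delta):
    """Dichotomous Rasch model: P(X = 1 | theta, delta)
    = exp(theta - delta) / (1 + exp(theta - delta))."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

p_mid = rasch_p(0.0, 0.0)   # person at item difficulty: probability 0.5
p_hi = rasch_p(1.5, 0.0)    # person above item difficulty: probability > 0.5
```

Fit statistics, threshold ordering, and DIF tests (as performed in the study) are diagnostics layered on top of this basic response function.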

  18. Long‐term Cardiovascular Risks Associated With an Elevated Heart Rate: The Framingham Heart Study

    PubMed Central

    Ho, Jennifer E.; Larson, Martin G.; Ghorbani, Anahita; Cheng, Susan; Coglianese, Erin E.; Vasan, Ramachandran S.; Wang, Thomas J.

    2014-01-01

    Background: Higher heart rate has been associated with an adverse prognosis, but most prior studies focused on individuals with known cardiovascular disease or examined a limited number of outcomes. We sought to examine the association of baseline heart rate with both fatal and nonfatal outcomes during 2 decades of follow-up. Methods and Results: Our study included 4058 Framingham Heart Study participants (mean age 55 years, 56% women). Cox models were performed with multivariable adjustment for clinical risk factors and physical activity. A total of 708 participants developed incident cardiovascular disease (303 heart failure, 343 coronary heart disease, and 216 stroke events), 48 received a permanent pacemaker, and 1186 died. Baseline heart rate was associated with incident cardiovascular disease (hazard ratio [HR] 1.15 per 1 SD [11 bpm] increase in heart rate, 95% CI 1.07 to 1.24, P=0.0002), particularly heart failure (HR 1.32, 95% CI 1.18 to 1.48, P<0.0001). Higher heart rate was also associated with higher all-cause (HR 1.17, 95% CI 1.11 to 1.24, P<0.0001) and cardiovascular mortality (HR 1.18, 95% CI 1.04 to 1.33, P=0.01). Spline analyses did not suggest a lower threshold beyond which the benefit of a lower heart rate abated or increased. In contrast, individuals with a higher heart rate had a lower risk of requiring permanent pacemaker placement (HR 0.55, 95% CI 0.38 to 0.79, P=0.001). Conclusions: Individuals with a higher heart rate are at elevated long-term risk for cardiovascular events, in particular, heart failure, and all-cause death. On the other hand, a higher heart rate is associated with a lower risk of future permanent pacemaker implantation. PMID:24811610
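Under the proportional-hazards assumption, a hazard ratio reported per 1 SD compounds multiplicatively over larger differences; applied to the figures above, an HR of 1.15 per 11 bpm implies roughly a 32% higher hazard for a 22 bpm difference.

```python
def hazard_scale(hr_per_sd, delta, sd):
    """Scale the hazard for a covariate difference `delta`, given a Cox
    hazard ratio per one standard deviation `sd` of that covariate."""
    return hr_per_sd ** (delta / sd)

# HR 1.15 per 1 SD (11 bpm) of baseline heart rate, as reported above:
scale_22bpm = hazard_scale(1.15, 22.0, 11.0)  # 1.15**2 = 1.3225
```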

  19. Age differences in learning emerge from an insufficient representation of uncertainty in older adults

    PubMed Central

    Nassar, Matthew R.; Bruckner, Rasmus; Gold, Joshua I.; Li, Shu-Chen; Heekeren, Hauke R.; Eppinger, Ben

    2016-01-01

    Healthy aging can lead to impairments in learning that affect many laboratory and real-life tasks. These tasks often involve the acquisition of dynamic contingencies, which requires adjusting the rate of learning to environmental statistics. For example, learning rate should increase when expectations are uncertain (uncertainty), outcomes are surprising (surprise) or contingencies are more likely to change (hazard rate). In this study, we combine computational modelling with an age-comparative behavioural study to test whether age-related learning deficits emerge from a failure to optimize learning according to the three factors mentioned above. Our results suggest that learning deficits observed in healthy older adults are driven by a diminished capacity to represent and use uncertainty to guide learning. These findings provide insight into age-related cognitive changes and demonstrate how learning deficits can emerge from a failure to accurately assess how much should be learned. PMID:27282467
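A reduced learner in the spirit of normative change-point models can make the three factors above concrete: the learning rate rises with change-point probability (driven by hazard rate and surprise) and with uncertainty about the current prediction. This is a sketch under assumed update rules and invented parameters, not the paper's model code.

```python
import math, random

def simulate_learner(outcomes, hazard=0.1, sigma=5.0, lo=0.0, hi=100.0):
    """Track a noisy quantity; per trial, learning rate = cpp + (1 - cpp) * tau,
    where cpp is the change-point probability and tau the relative uncertainty."""
    belief, tau = (lo + hi) / 2.0, 0.5
    rates = []
    for x in outcomes:
        err = x - belief
        var = sigma ** 2 * (1.0 + tau / max(1e-9, 1.0 - tau))   # predictive variance
        p_stay = math.exp(-err ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        p_change = 1.0 / (hi - lo)                               # flat post-change likelihood
        cpp = hazard * p_change / (hazard * p_change + (1 - hazard) * p_stay)
        alpha = cpp + (1 - cpp) * tau                            # adaptive learning rate
        belief += alpha * err
        tau = cpp * 0.5 + (1 - cpp) * tau * 0.9                  # crude uncertainty update
        rates.append(alpha)
    return rates

random.seed(0)
outcomes = [random.gauss(30, 5) for _ in range(20)] + [random.gauss(80, 5) for _ in range(20)]
rates = simulate_learner(outcomes)   # learning rate spikes at the change point (trial 20)
```

An "insufficient representation of uncertainty," in these terms, would flatten tau and hence blunt the adaptive component of alpha.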

  20. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  1. Sudden cardiac death and pump failure death prediction in chronic heart failure by combining ECG and clinical markers in an integrated risk model

    PubMed Central

    Orini, Michele; Mincholé, Ana; Monasterio, Violeta; Cygankiewicz, Iwona; Bayés de Luna, Antonio; Martínez, Juan Pablo

    2017-01-01

    Background Sudden cardiac death (SCD) and pump failure death (PFD) are common endpoints in chronic heart failure (CHF) patients, but prevention strategies are different. Currently used tools to specifically predict these endpoints are limited. We developed risk models to specifically assess SCD and PFD risk in CHF by combining ECG markers and clinical variables. Methods The relation of clinical and ECG markers with SCD and PFD risk was assessed in 597 patients enrolled in the MUSIC (MUerte Súbita en Insuficiencia Cardiaca) study. ECG indices included: turbulence slope (TS), reflecting autonomic dysfunction; T-wave alternans (TWA), reflecting ventricular repolarization instability; and T-peak-to-end restitution (ΔαTpe) and T-wave morphology restitution (TMR), both reflecting changes in dispersion of repolarization due to heart rate changes. Standard clinical indices were also included. Results The indices with the greatest SCD prognostic impact were gender, New York Heart Association (NYHA) class, left ventricular ejection fraction, TWA, ΔαTpe and TMR. For PFD, the indices were diabetes, NYHA class, ΔαTpe and TS. Using a model with only clinical variables, the hazard ratios (HRs) for SCD and PFD for patients in the high-risk group (fifth quintile of risk score) with respect to patients in the low-risk group (first and second quintiles of risk score) were both greater than 4. HRs for SCD and PFD increased to 9 and 11 when using a model including only ECG markers, and to 14 and 13, when combining clinical and ECG markers. Conclusion The inclusion of ECG markers capturing complementary pro-arrhythmic and pump failure mechanisms into risk models based only on standard clinical variables substantially improves prediction of SCD and PFD in CHF patients. PMID:29020031

  2. Sudden cardiac death and pump failure death prediction in chronic heart failure by combining ECG and clinical markers in an integrated risk model.

    PubMed

    Ramírez, Julia; Orini, Michele; Mincholé, Ana; Monasterio, Violeta; Cygankiewicz, Iwona; Bayés de Luna, Antonio; Martínez, Juan Pablo; Laguna, Pablo; Pueyo, Esther

    2017-01-01

    Sudden cardiac death (SCD) and pump failure death (PFD) are common endpoints in chronic heart failure (CHF) patients, but prevention strategies are different. Currently used tools to specifically predict these endpoints are limited. We developed risk models to specifically assess SCD and PFD risk in CHF by combining ECG markers and clinical variables. The relation of clinical and ECG markers with SCD and PFD risk was assessed in 597 patients enrolled in the MUSIC (MUerte Súbita en Insuficiencia Cardiaca) study. ECG indices included: turbulence slope (TS), reflecting autonomic dysfunction; T-wave alternans (TWA), reflecting ventricular repolarization instability; and T-peak-to-end restitution (ΔαTpe) and T-wave morphology restitution (TMR), both reflecting changes in dispersion of repolarization due to heart rate changes. Standard clinical indices were also included. The indices with the greatest SCD prognostic impact were gender, New York Heart Association (NYHA) class, left ventricular ejection fraction, TWA, ΔαTpe and TMR. For PFD, the indices were diabetes, NYHA class, ΔαTpe and TS. Using a model with only clinical variables, the hazard ratios (HRs) for SCD and PFD for patients in the high-risk group (fifth quintile of risk score) with respect to patients in the low-risk group (first and second quintiles of risk score) were both greater than 4. HRs for SCD and PFD increased to 9 and 11 when using a model including only ECG markers, and to 14 and 13, when combining clinical and ECG markers. The inclusion of ECG markers capturing complementary pro-arrhythmic and pump failure mechanisms into risk models based only on standard clinical variables substantially improves prediction of SCD and PFD in CHF patients.

  3. Predicting spatio-temporal failure in large scale observational and micro scale experimental systems

    NASA Astrophysics Data System (ADS)

    de las Heras, Alejandro; Hu, Yong

    2006-10-01

    Forecasting has become an essential part of modern thought, but its practical limitations remain manifold. We addressed future rates of change by comparing models that take time into account with models that focus more on space. Cox regression confirmed that linear change can be safely assumed in the short term. Spatially explicit Poisson regression provided a ceiling value for the number of deforestation spots. With several observed and estimated rates available, we chose to forecast using the more robust assumptions. A Markov-chain cellular automaton thus projected 5-year deforestation in the Amazonian Arc of Deforestation, showing that even a stable rate of change would largely deplete the forest area. More generally, the resolution and implementation of the existing models could explain many of the modelling difficulties still affecting forecasting.
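A Markov-chain cellular automaton of the kind named above can be sketched as a grid whose cells transition forest → cleared with a probability that grows with already-cleared neighbours; the grid size and transition probabilities below are invented for illustration.

```python
import random

def step(grid, p_spont=0.001, p_neigh=0.15):
    """One annual Markov transition: a forest cell (1) is cleared (0) with a
    small spontaneous probability plus an increment per cleared 4-neighbour."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1:
                cleared = sum(1 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= i + di < n and 0 <= j + dj < n
                              and grid[i + di][j + dj] == 0)
                if random.random() < min(1.0, p_spont + p_neigh * cleared):
                    new[i][j] = 0
    return new

random.seed(1)
n = 30
grid = [[1] * n for _ in range(n)]
grid[n // 2][n // 2] = 0                 # one initial deforestation spot
for _ in range(5):                       # 5-year projection
    grid = step(grid)
forest_frac = sum(map(sum, grid)) / (n * n)
```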

  4. Coaching behaviors associated with changes in fear of failure: changes in self-talk and need satisfaction as potential mechanisms.

    PubMed

    Conroy, David E; Coatsworth, J Douglas

    2007-04-01

    Cognitive-interpersonal and motivational mechanisms may regulate relations between youth perceptions of interpersonal aspects of the social ecology and their fear-of-failure (FF) levels. Youth (N=165) registered for a summer swim league rated their fear of failure at the beginning, middle, and end of the season. Extensive model comparisons indicated that youths' end-of-season ratings of coach behaviors could be reduced to three factors (affiliation, control, blame). Perceived control and blame from coaches predicted residualized change in corresponding aspects of youths' self-talk, but only changes in self-blame positively predicted changes in FF levels during the season. Perceived affiliation from coaches predicted autonomy need satisfaction which, in turn, negatively predicted the rate of change in FF levels during the season. These findings indicate that (a) youth perceptions of coaches were directly and indirectly related to acute socialization of FF and (b) both cognitive-interpersonal and motivational mechanisms contributed to this socialization process. Further research is needed to test for developmental differences in these mechanisms to determine whether findings generalize to more heterogeneous and at-risk populations and to investigate other potential social-ecological influences on socialization.

  5. Analysis of Composite Panel-Stiffener Debonding Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James; Minguet, Pierre J.

    2007-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used successfully primarily to investigate onset in fracture toughness specimens and laboratory size coupon type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires the successful demonstration of the methodology on the structural level. For this purpose, a panel was selected that is reinforced with stiffeners. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stiffener separation at the location of an embedded defect. A small section of the stiffener foot, web and noodle as well as the panel skin in the vicinity of the delamination front were modeled with a local 3D solid model. Across the width of the stiffener foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. Computed failure indices were compared to corresponding results where the entire web was modeled with shell elements and only a small section of the stiffener foot and panel were modeled locally with solid elements. Including the stiffener web in the local 3D solid model increased the computed failure index. Further including the noodle and transition radius in the local 3D solid model changed the local distribution across the width. The magnitude of the failure index decreased with increasing transition radius and noodle area. For the transition radii modeled, the material properties used for the noodle area had a negligible effect on the results. The results of this study are intended to be used as a guide for conducting finite element and fracture mechanics analyses of delamination and debonding in complex structures such as integrally stiffened panels.

  6. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.

  7. Development of a Generic Model Aimed at Building Self-Love among Para-Alcoholic Native American Children. A Practicum Report.

    ERIC Educational Resources Information Center

    Merino, Claralynn

    Many Native American communities have high rates of alcoholism. Children growing up in alcoholic families often exhibit co-dependent or para-alcoholic behaviors, which place them at high risk of educational failure. The Love Bug model was designed to encourage culturally appropriate self-expression and to promote self-love and detachment from…

  8. Cost Effectiveness of Contraceptives in the United States

    PubMed Central

    Trussell, James; Lalla, Anjana M.; Doan, Quan V.; Reyes, Eileen; Pinto, Lionel; Gricar, Joseph

    2013-01-01

    Background: The study was conducted to estimate the relative cost effectiveness of contraceptives in the United States from a payer’s perspective. Methods: A Markov model was constructed to simulate costs for 16 contraceptive methods and no method over a 5-year period. Failure rates, adverse event rates, and resource utilization were derived from the literature. Sensitivity analyses were performed on costs and failure rates. Results: Any contraceptive method is superior to “no method”. The three least expensive methods were the copper-T IUD ($647), vasectomy ($713) and LNG-20 IUS ($930). Results were sensitive to the cost of contraceptive methods, the cost of an unintended pregnancy, and plan disenrollment rates. Conclusion: The copper-T IUD, vasectomy, and the LNG-20 IUS are the most cost-effective contraceptive methods available in the United States. Differences in method costs, the cost of an unintended pregnancy, and time horizon are influential factors that determine the overall value of a contraceptive method. PMID:19041435
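The Markov structure described above can be illustrated with a toy annual-cycle model (the paper's model is richer, with 16 methods, adverse events, and literature-derived inputs); all costs and rates below are invented assumptions.

```python
def expected_cost(method_cost_year, failure_rate_year, disenroll_rate_year,
                  pregnancy_cost, years=5):
    """Toy annual-cycle Markov model: while on-method, the payer incurs the
    method cost plus the expected cost of unintended pregnancy; the on-method
    probability decays through failure and plan disenrollment."""
    p_on, cost = 1.0, 0.0
    for _ in range(years):
        cost += p_on * method_cost_year
        cost += p_on * failure_rate_year * pregnancy_cost
        p_on *= (1 - failure_rate_year) * (1 - disenroll_rate_year)
    return cost

# Illustrative inputs (not the study's): an IUD-like method vs. no method
iud = expected_cost(60.0, 0.008, 0.05, 10000.0)
none = expected_cost(0.0, 0.85, 0.05, 10000.0)   # dominated by pregnancy costs
```

Even in this stripped-down form, "no method" is far costlier than an effective method, mirroring the Results above.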

  9. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. So, the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4-DOF (degrees of freedom) of the drilling string system plus PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools due to the severer stick-slip vibration. Moreover, reducing WOB (weight on bit) and increasing driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
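The stick-slip mechanism itself can be shown with a reduced 1-DOF torsional sketch (the paper uses 4-DOF models): a top drive turning at constant speed twists a spring that drives the bit against velocity-weakening bit-rock friction (static torque above kinetic). All parameter values are invented for illustration.

```python
def simulate_bit(J=400.0, k=500.0, c=50.0, omega_top=2.0,
                 t_static=9000.0, t_kinetic=6000.0, dt=1e-3, steps=60000):
    """Explicit-Euler 1-DOF torsional stick-slip sketch. The bit sticks
    while static friction can hold it, then slips and overshoots the
    top-drive speed once the accumulated spring torque breaks it free."""
    omega, twist = 0.0, 0.0          # bit angular speed (rad/s), spring twist (rad)
    speeds = []
    for _ in range(steps):
        drive = k * twist + c * (omega_top - omega)
        if abs(omega) < 0.01 and abs(drive) < t_static:
            omega = 0.0              # stick phase: static friction holds the bit
        else:
            friction = t_kinetic if omega >= 0.0 else -t_kinetic
            omega += (drive - friction) / J * dt
        twist += (omega_top - omega) * dt
        speeds.append(omega)
    return speeds

speeds = simulate_bit()
peak = max(speeds)                   # slip-phase overshoot well above omega_top
```

The alternation between long sticks (speed pinned at zero) and fast slips (speed several times the drive speed) is the loading pattern blamed above for tooth fracture and tooth loss.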

  10. Multicentre analysis of second-line antiretroviral treatment in HIV-infected children: adolescents at high risk of failure.

    PubMed

    Boerma, Ragna S; Bunupuradah, Torsak; Dow, Dorothy; Fokam, Joseph; Kariminia, Azar; Lehman, Dara; Kityo, Cissy; Musiime, Victor; Palumbo, Paul; Schoffelen, Annelot; Sophan, Sam; Zanoni, Brian; Rinke de Wit, Tobias F; Calis, Job C J; Sigaloff, Kim C E

    2017-09-15

    The number of HIV-infected children and adolescents requiring second-line antiretroviral treatment (ART) is increasing in low- and middle-income countries (LMIC). However, the effectiveness of paediatric second-line ART and potential risk factors for virologic failure are poorly characterized. We performed an aggregate analysis of second-line ART outcomes for children and assessed the need for paediatric third-line ART. We performed a multicentre analysis by systematically reviewing the literature to identify cohorts of children and adolescents receiving second-line ART in LMIC, contacting the corresponding study groups and including patient-level data on virologic and clinical outcomes. Kaplan-Meier survival estimates and Cox proportional hazard models were used to describe cumulative rates and predictors of virologic failure. Virologic failure was defined as two consecutive viral load measurements >1000 copies/ml after at least six months of second-line treatment. We included 12 cohorts representing 928 children on second-line protease inhibitor (PI)-based ART in 14 countries in Asia and sub-Saharan Africa. After 24 months, 16.4% (95% confidence interval (CI): 13.9-19.4) of children experienced virologic failure. Adolescents (10-18 years) had failure rates of 14.5 (95% CI 11.9-17.6) per 100 person-years compared to 4.5 (95% CI 3.4-5.8) for younger children (3-9 years). Risk factors for virologic failure were adolescence (adjusted hazard ratio [aHR] 3.93, p < 0.001) and short duration of first-line ART before treatment switch (aHR 0.64 and 0.53, p = 0.008, for 24-48 months and >48 months, respectively, compared to <24 months). In LMIC, paediatric PI-based second-line ART was associated with relatively low virologic failure rates. However, adolescents showed exceptionally poor virologic outcomes in LMIC, and optimizing their HIV care requires urgent attention. In addition, 16% of children and adolescents failed PI-based treatment and will require integrase inhibitors to construct salvage regimens. These drugs are currently not available in LMIC.
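The Kaplan-Meier estimate named above is the product-limit estimator; a minimal self-contained sketch (the follow-up data below are illustrative, not the cohort data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimator; events[i] = 1 for virologic failure,
    0 for censoring. Returns (time, S(t)) pairs at each failure time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, curve, at_risk, i = 1.0, [], len(times), 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]       # failures at time t
            n_t += 1                    # subjects leaving the risk set at t
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= n_t
    return curve

# Illustrative follow-up (months, failure flag), not study data
times = [3, 6, 6, 9, 12, 15, 18, 24, 24, 24]
events = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
curve = kaplan_meier(times, events)     # non-increasing survival curve
```

Cox models would then relate covariates such as age group to the hazard underlying this curve.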

  11. Characterization of Aftershock Sequences from Large Strike-Slip Earthquakes Along Geometrically Complex Faults

    NASA Astrophysics Data System (ADS)

    Sexton, E.; Thomas, A.; Delbridge, B. G.

    2017-12-01

    Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities on aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of these aftershock sequences. In strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers in strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults which have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El-Mayor Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence Model (ETAS). In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \sum_{i:t_i<t} K e^{α(M_i − M_c)} / (t − t_i + c)^p, i.e., a constant background rate plus a magnitude-scaled, Omori-Utsu-decaying contribution from each prior event i.
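The ETAS conditional intensity can be evaluated directly from an event catalogue; the parameter values and the toy catalogue below are illustrative, not fitted to any of the sequences named above.

```python
import math

def etas_rate(t, events, mu=0.02, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """ETAS conditional intensity: background rate mu plus, for each prior
    event (t_i, M_i), a magnitude-scaled Omori-Utsu term
    K * exp(alpha * (M_i - m_c)) / (t - t_i + c)**p."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return rate

events = [(0.0, 7.0), (1.0, 4.5), (2.5, 5.0)]   # (days, magnitude), illustrative
r_early = etas_rate(3.0, events)                 # elevated shortly after the events
r_late = etas_rate(10.0, events)                 # decays toward the background rate
```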

  12. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

    Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a low frequency of failures, and cost reduction in a production process. However, several aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, whereas statistical restrictions must also be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant cost reductions when PM is performed on processes with high failure rates, along with reductions in the sampling frequency of units for testing under SPC. PMID:23527082
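The joint X-bar-S chart rests on standard Shewhart 3-sigma limits built from the unbiasing constant c4; a minimal sketch of the chart-limit formulas (the economic-statistical optimization itself is not reproduced here):

```python
import math

def xbar_s_limits(xbar_bar, s_bar, n):
    """3-sigma limits for joint X-bar and S charts from subgroup size n,
    grand mean xbar_bar, and average subgroup standard deviation s_bar,
    using the unbiasing constant c4 (standard Shewhart formulas)."""
    c4 = math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)
    a3 = 3.0 / (c4 * math.sqrt(n))                       # X-bar chart factor
    b3 = max(0.0, 1.0 - 3.0 * math.sqrt(1.0 - c4 ** 2) / c4)
    b4 = 1.0 + 3.0 * math.sqrt(1.0 - c4 ** 2) / c4       # S chart factors
    return (xbar_bar - a3 * s_bar, xbar_bar + a3 * s_bar), (b3 * s_bar, b4 * s_bar)

# Example: subgroups of n = 5, grand mean 50, average s of 2
(xlo, xhi), (slo, shi) = xbar_s_limits(50.0, 2.0, 5)
```

An ESD then tunes sample size, sampling interval, and limit width against cost while enforcing statistical constraints such as false-alarm probability.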

  13. The influence of mandibular skeletal characteristics on inferior alveolar nerve block anesthesia.

    PubMed

    You, Tae Min; Kim, Kee-Deog; Huh, Jisun; Woo, Eun-Jung; Park, Wonse

    2015-09-01

    The inferior alveolar nerve block (IANB) is one of the most common anesthetic techniques in dentistry; however, its success rate is low. The purpose of this study was to determine the correlation between IANB failure and mandibular skeletal characteristics. In total, 693 cases of lower third molar extraction (n = 575 patients) were examined in this study. The ratio of the condylar and coronoid distances from the mandibular foramen (condyle-coronoid ratio [CC ratio]) was calculated, and the mandibular skeleton was then classified as normal, retrognathic, or prognathic. The correlation between IANB failure and sex, treatment side, and the CC ratio was assessed. The IANB failure rates for normal, retrognathic, and prognathic mandibles were 7.3%, 14.5%, and 9.5%, respectively, and the failure rate was highest among those with a CC ratio < 0.8 (severely retrognathic mandible). The failure rate was significantly higher in the retrognathic group than in the normal group (P = 0.019), and there was no statistically significant difference between the other two groups. IANB failure could be attributable, in part, to the skeletal characteristics of the mandible. In addition, the failure rate was found to be significantly higher in the retrognathic group.

  14. The influence of mandibular skeletal characteristics on inferior alveolar nerve block anesthesia

    PubMed Central

    You, Tae Min; Kim, Kee-Deog; Huh, Jisun; Woo, Eun-Jung

    2015-01-01

    Background: The inferior alveolar nerve block (IANB) is one of the most common anesthetic techniques in dentistry; however, its success rate is low. The purpose of this study was to determine the correlation between IANB failure and mandibular skeletal characteristics. Methods: In total, 693 cases of lower third molar extraction (n = 575 patients) were examined in this study. The ratio of the condylar and coronoid distances from the mandibular foramen (condyle-coronoid ratio [CC ratio]) was calculated, and the mandibular skeleton was then classified as normal, retrognathic, or prognathic. The correlation between IANB failure and sex, treatment side, and the CC ratio was assessed. Results: The IANB failure rates for normal, retrognathic, and prognathic mandibles were 7.3%, 14.5%, and 9.5%, respectively, and the failure rate was highest among those with a CC ratio < 0.8 (severely retrognathic mandible). The failure rate was significantly higher in the retrognathic group than in the normal group (P = 0.019), and there was no statistically significant difference between the other two groups. Conclusions: IANB failure could be attributable, in part, to the skeletal characteristics of the mandible. In addition, the failure rate was found to be significantly higher in the retrognathic group. PMID:28879267

  15. A Predictive Framework for Thermomechanical Fatigue Life of High Silicon Molybdenum Ductile Cast Iron Based on Considerations of Strain Energy Dissipation

    NASA Astrophysics Data System (ADS)

    Avery, Katherine R.

    Isothermal low cycle fatigue (LCF) and anisothermal thermomechanical fatigue (TMF) tests were conducted on a high silicon molybdenum (HiSiMo) cast iron for temperatures up to 1073K. LCF and out-of-phase (OP) TMF lives were significantly reduced when the temperature was near 673K due to an embrittlement phenomenon which decreases the ductility of HiSiMo at this temperature. In this case, intergranular fracture was predominant, and magnesium was observed at the fracture surface. When the thermal cycle did not include 673K, the failure mode was predominantly transgranular, and magnesium was not present on the fracture surface. The in-phase (IP) TMF lives were unaffected when the thermal cycle included 673K, and the predominant failure mode was found to be transgranular fracture, regardless of the temperature. No magnesium was present on the IP TMF fracture surfaces. Thus, the embrittlement phenomenon was found to contribute to fatigue damage only when the temperature was near 673K and a tensile stress was present. To account for the temperature- and stress-dependence of the embrittlement phenomenon on the TMF life of HiSiMo cast iron, an original model based on the cyclic inelastic energy dissipation is proposed which accounts for temperature-dependent differences in the rate of fatigue damage accumulation in tension and compression. The proposed model has few empirical parameters. Despite the simplicity of the model, the predicted fatigue life shows good agreement with more than 130 uniaxial low cycle and thermomechanical fatigue tests, cyclic creep tests, and tests conducted at slow strain rates and with hold times. The proposed model was implemented in a multiaxial formulation and applied to the fatigue life prediction of an exhaust manifold subjected to severe thermal cycles. The simulation results show good agreement with the failure locations and number of cycles to failure observed in a component-level experiment.
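The life criterion described above, fatigue failure when accumulated inelastic dissipation reaches a critical value, with tensile dissipation weighted more heavily near the embrittlement temperature, can be sketched as follows. All numbers (critical energy, weighting function, peak width around 673 K) are illustrative placeholders, not the thesis's calibrated constants:

```python
import math

def cycles_to_failure(dw_tension, dw_compression, temp_k,
                      w_crit=50.0, t_embrittle=673.0, width=50.0):
    """Estimate fatigue life from inelastic energy dissipated per cycle.

    A temperature-dependent weight amplifies damage from tensile
    dissipation near the embrittlement temperature; all parameter
    values here are illustrative, not fitted to HiSiMo data.
    """
    # Gaussian weight peaking at the embrittlement temperature:
    # tension does extra damage only near t_embrittle.
    w = 1.0 + 2.0 * math.exp(-((temp_k - t_embrittle) / width) ** 2)
    damage_per_cycle = w * dw_tension + dw_compression
    return w_crit / damage_per_cycle
```

With equal per-cycle dissipation, a cycle reaching 673 K is predicted to be far more damaging than one at room temperature, mirroring the reduced OP TMF lives reported above.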

  16. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  17. Schooling as a Lottery: Racial Differences in School Advancement in Urban South Africa

    PubMed Central

    Lam, David; Ardington, Cally; Leibbrandt, Murray

    2010-01-01

    This paper analyzes the large racial differences in progress through secondary school in South Africa. Using recently collected longitudinal data we find that grade advancement is strongly associated with scores on a baseline literacy and numeracy test. In grades 8-11 the effect of these scores on grade progression is much stronger for white and coloured students than for African students, while there is no racial difference in the impact of the scores on passing the nationally standardized grade 12 matriculation exam. We develop a stochastic model of grade repetition that generates predictions consistent with these results. The model predicts that a larger stochastic component in the link between learning and measured performance will generate higher enrollment, higher failure rates, and a weaker link between ability and grade progression. The results suggest that grade progression in African schools is poorly linked to actual ability and learning. The results point to the importance of considering the stochastic component of grade repetition in analyzing school systems with high failure rates. PMID:21499515
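The stochastic grade-repetition mechanism can be illustrated with a toy simulation: each student passes when measured performance (ability plus noise) clears a fixed threshold. The distributions and threshold below are assumptions for illustration, not the paper's estimated model; the point is only to reproduce the qualitative prediction that a larger stochastic component raises failure rates and weakens the ability-progression link.

```python
import random

def _corr(xs, ys):
    # Pearson correlation, written out to avoid version dependencies.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate(n_students=20000, noise_sd=0.0, seed=1):
    """Toy stochastic grade-progression model (illustrative parameters).

    A student passes when ability + noise exceeds a threshold of 0;
    abilities are drawn from N(1, 1).
    """
    rng = random.Random(seed)
    abilities = [rng.gauss(1.0, 1.0) for _ in range(n_students)]
    passed = [1.0 if a + rng.gauss(0.0, noise_sd) > 0.0 else 0.0
              for a in abilities]
    failure_rate = 1.0 - sum(passed) / n_students
    link = _corr(abilities, passed)  # ability-progression link
    return failure_rate, link
```

Running the simulation with and without noise shows the predicted pattern: the noisy regime fails more students while correlating less with underlying ability.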

  18. Failure modes and effects criticality analysis and accelerated life testing of LEDs for medical applications

    NASA Astrophysics Data System (ADS)

    Sawant, M.; Christou, A.

    2012-12-01

    While use of LEDs in Fiber Optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: active layer (where electron-hole recombination occurs to emit light), electrodes (provides electrical contact to the semiconductor chip), Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), plastic encapsulation (protective polymer layer) and packaging failures (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), failure mode ratio (α), failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generation of prior time to failure distribution and comparing with our accelerated life test data. To generate the prior distributions, data and results from previous investigations were utilized [5-33] where reliability test results of similar LEDs were reported. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), the Semiconductor Structures (DH, MQW) and the mode of testing (DC, Pulsed) was carried out. 
The data were categorized according to the materials system and LED structure, such as AlGaInP-DH-DC, AlGaInP-MQW-DC, GaN-DH-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperatures and currents, the reported data were converted to the present application conditions of the medical environment. Comparisons between the model data and the accelerated test results carried out in the present work are reported. The use of accelerating-agent modeling and regression analysis was also carried out. We used the Inverse Power Law model with the current density J as the accelerating agent, and the Arrhenius model with temperature as the accelerating agent. Finally, our methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
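As a rough sketch of the two accelerating-agent models named above, the following combines an Inverse Power Law term in current density J with an Arrhenius term in temperature to form a single acceleration factor. The exponent n and activation energy Ea are placeholders; the study fits its own values from the LED data.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(j_test, j_use, t_test_k, t_use_k, n=2.0, ea_ev=0.7):
    """Combined acceleration factor for an accelerated LED life test.

    Inverse Power Law in current density times Arrhenius in absolute
    temperature; n and ea_ev are illustrative, not fitted values.
    Life at use conditions ~ accelerated-test life * AF.
    """
    ipl = (j_test / j_use) ** n
    arrhenius = math.exp((ea_ev / BOLTZMANN_EV)
                         * (1.0 / t_use_k - 1.0 / t_test_k))
    return ipl * arrhenius
```

Doubling the stress current and raising the junction temperature by 50 K already gives a large factor, which is why modest overstress suffices to compress multi-year degradation (time to 80% of initial optical power) into a short test.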

  19. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) on the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase of spalling life in the non-oxidizing environments.

  20. A review of economic evaluation models for cardiac resynchronization therapy with implantable cardioverter defibrillators in patients with heart failure.

    PubMed

    Tomini, F; Prinzen, F; van Asselt, A D I

    2016-12-01

    Cardiac resynchronization therapy with a biventricular pacemaker (CRT-P) is an effective treatment for dyssynchronous heart failure (DHF). Adding an implantable cardioverter defibrillator (CRT-D) may further reduce the risk of sudden cardiac death (SCD). However, if the majority of patients do not require shock therapy, the cost-effectiveness ratio of CRT-D compared to CRT-P may be high. The objective of this study was to systematically review decision models evaluating the cost-effectiveness of CRT-D for patients with DHF, compare the structure and inputs of these models and identify the main factors influencing the ICERs for CRT-D. A comprehensive search strategy of Medline (Ovid), Embase (Ovid) and EconLit identified eight cost-effectiveness models evaluating CRT-D against optimal pharmacological therapy (OPT) and/or CRT-P. The selected economic studies differed in terms of model structure, treatment path, time horizons, and sources of efficacy data. CRT-D was found cost-effective when compared to OPT but its cost-effectiveness became questionable when compared to CRT-P. Cost-effectiveness of CRT-D may increase depending on improvement of all-cause mortality rates and HF mortality rates in patients who receive CRT-D, costs of the device, and battery life. In particular, future studies need to investigate longer-term mortality rates and identify CRT-P patients that will gain the most, in terms of life expectancy, from being treated with a CRT-D.
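The headline quantity these economic models report, the incremental cost-effectiveness ratio (ICER), is just incremental cost divided by incremental effect (e.g. quality-adjusted life years). The figures in the usage comment are invented for illustration, not taken from the reviewed studies.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.

    A small ICER means the costlier therapy buys health gains cheaply;
    the ratio is compared against a willingness-to-pay threshold.
    """
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical CRT-D vs CRT-P comparison:
# icer(60000, 35000, 6.0, 5.5) -> 50000.0 (cost units per QALY gained)
```

This is also why the review's sensitivity drivers (mortality rates, device cost, battery life) matter: each shifts the numerator or denominator directly.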

  1. Skin-Stiffener Debond Prediction Based on Computational Fracture Analysis

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Gates, Tom (Technical Monitor)

    2005-01-01

    Interlaminar fracture mechanics has proven useful for characterizing the onset of delaminations in composites and has been used, with limited success, primarily to investigate onset in fracture toughness specimens and laboratory-size coupon-type specimens. Future acceptance of the methodology by industry and certification authorities, however, requires its successful demonstration at the structural level. For this purpose, a panel reinforced with stringers was selected. Shear loading causes the panel to buckle, and the resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. For finite element analysis, the panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot and the panel in the vicinity of the embedded defect were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. For small applied loads, the failure index is well below one across the entire width. With increasing load, the failure index first approaches one near the edge of the stringer foot, from which delamination is expected to grow. With increasing delamination length, the buckling pattern of the panel changes and the failure index increases, which suggests that rapid delamination growth from the initial defect is to be expected.

  2. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.

  3. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. With weak models (i.e., low-precision data), however, the success rate of AR may be low, which can introduce large errors into the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) is therefore proposed, such that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset so as to maximize the expected precision gain among a set of pre-selected subsets, while at the same time controlling the failure rate. These pre-selected subsets are chosen to have the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection; in case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
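A minimal sketch of the success-rate machinery behind subset pre-selection: the standard integer-bootstrapping success rate, computed from the conditional standard deviations of the (decorrelated) ambiguities, plus a greedy selection that keeps the rate above a target. This is a generic illustration of PAR-style subset selection, not the TSRC algorithm itself.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Integer-bootstrapping success rate from conditional std devs.

    P_s = prod_i (2*Phi(1/(2*sigma_i)) - 1); a standard lower bound
    on the integer least-squares success rate.
    """
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def select_subset(cond_stds_sorted, min_success=0.999):
    """Greedily grow the subset of most precise ambiguities while the
    bootstrapped success rate stays above the target (illustrative)."""
    subset = []
    for s in cond_stds_sorted:
        if bootstrap_success_rate(subset + [s]) < min_success:
            break
        subset.append(s)
    return subset
```

In this toy setting, a poorly determined ambiguity (large conditional standard deviation) is simply left unfixed rather than allowed to drag the overall success rate below the criterion.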

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubin, M. B.; Vorobiev, O.; Vitali, E.

    Here, a large deformation thermomechanical model is developed for shock loading of a material that can exhibit elastic and inelastic anisotropy. Use is made of evolution equations for a triad of microstructural vectors m_i (i = 1, 2, 3) which model elastic deformations and directions of anisotropy. Specific constitutive equations are presented for a material with orthotropic elastic response. The rate of inelasticity depends on an orthotropic yield function that can be used to model weak fault planes with failure in shear and which exhibits a smooth transition to isotropic response at high compression. Moreover, a robust, strongly objective numerical algorithm is proposed for both rate-independent and rate-dependent response. The predictions of the continuum model are examined by comparison with exact steady-state solutions. Also, the constitutive equations are used to obtain a simplified continuum model of jointed rock which is compared with high-fidelity numerical solutions that model a persistent system of joints explicitly in the rock medium.

  5. Modelling of Dynamic Rock Fracture Process with a Rate-Dependent Combined Continuum Damage-Embedded Discontinuity Model Incorporating Microstructure

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2016-10-01

    This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of the tensile strength of Laurentian granite.

  6. Management of heart failure in the new era: the role of scores.

    PubMed

    Mantegazza, Valentina; Badagliacca, Roberto; Nodari, Savina; Parati, Gianfranco; Lombardi, Carolina; Di Somma, Salvatore; Carluccio, Erberto; Dini, Frank Lloyd; Correale, Michele; Magrì, Damiano; Agostoni, Piergiuseppe

    2016-08-01

    Heart failure is a widespread syndrome involving several organs, still characterized by high mortality and morbidity, and whose clinical course is heterogeneous and hardly predictable. In this scenario, the assessment of heart failure prognosis represents a fundamental step in clinical practice. A single parameter is never able to provide a very precise prognosis. Therefore, risk scores based on multiple parameters have been introduced, but their clinical utility is still modest. In this review, we evaluated several prognostic models for acute, right, chronic, and end-stage heart failure based on multiple parameters. In particular, for chronic heart failure we considered risk scores essentially based on clinical evaluation, comorbidity analysis, baroreflex sensitivity, heart rate variability, sleep disorders, laboratory tests, echocardiographic imaging, and cardiopulmonary exercise test parameters. What is at present established is that a single parameter is not sufficient for an accurate prediction of prognosis in heart failure because of the complex nature of the disease. However, none of the scoring systems available is widely used, being in some cases complex, not user-friendly, or based on expensive or not easily available parameters. We believe that multiparametric scores for risk assessment in heart failure are promising, but their widespread adoption remains to be achieved.

  7. Wide-range simulation of elastoplastic wave fronts and failure of solids under high-speed loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saveleva, Natalia, E-mail: saveleva@icmm.ru; Bayandin, Yuriy, E-mail: buv@icmm.ru; Naimark, Oleg, E-mail: naimark@icmm.ru

    2015-10-27

    The aim of this paper is the numerical study of deformation and failure processes in vanadium under shock-wave loading. According to the developed statistical theory of solids with mesoscopic defects, constitutive equations were proposed in terms of two structural variables characterizing the behavior of defect ensembles: the defect density tensor and the structural scaling parameter. On the basis of these wide-range constitutive equations, a mathematical model of the deformation behavior and failure of vanadium was developed, taking into account the bond relaxation mechanisms, the multistage character of fracture, and the nonlinear kinetics of defects. The results of numerical simulation allow the description of the major effects of shock wave propagation (elastic precursor decay, growth of spall strength with increasing strain rate).

  8. Risk factors for eye bank preparation failure of Descemet membrane endothelial keratoplasty tissue.

    PubMed

    Vianna, Lucas M M; Stoeger, Christopher G; Galloway, Joshua D; Terry, Mark; Cope, Leslie; Belfort, Rubens; Jun, Albert S

    2015-05-01

    To assess the results of a single eye bank preparing a high volume of Descemet membrane endothelial keratoplasty (DMEK) tissues using multiple technicians, to provide an overview of the experience, and to identify possible risk factors for DMEK preparation failure. Cross-sectional study. Setting: Lions VisionGift and Wilmer Eye Institute at Johns Hopkins Hospital. All 563 corneal tissues processed by technicians at Lions VisionGift for DMEK between October 2011 and May 2014 inclusive. Tissues were divided into 2 groups: DMEK preparation success and DMEK preparation failure. We compared donor characteristics, including past medical history. The overall tissue preparation failure rate was 5.2%. Univariate analysis showed diabetes mellitus (P = .000028) and its duration (P = .023), hypertension (P = .021), and hyperlipidemia or obesity (P = .0004) were more common in the failure group. Multivariate analysis showed diabetes mellitus (P = .0001) and hyperlipidemia or obesity (P = .0142) were more common in the failure group. Elimination of tissues from donors either with diabetes or with hyperlipidemia or obesity reduced the failure rate from 5.2% to 2.2%. Trends toward lower failure rates with increased technician experience were also found. Our work showed that tissues from donors with diabetes mellitus (especially with longer disease duration) and hyperlipidemia or obesity were associated with higher failure rates in DMEK preparation. Elimination of tissues from donors either with diabetes mellitus or with hyperlipidemia or obesity reduced the failure rate. In addition, our data may provide useful initial guidelines and benchmark values for eye banks seeking to establish and maintain DMEK programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Orbiter post-tire failure and skid testing results

    NASA Technical Reports Server (NTRS)

    Daugherty, Robert H.; Stubbs, Sandy M.

    1989-01-01

    An investigation was conducted at the NASA Langley Research Center's Aircraft Landing Dynamics Facility (ALDF) to define the post-tire failure drag characteristics of the Space Shuttle Orbiter main tire and wheel assembly. Skid tests on various materials were also conducted to define their friction and wear rate characteristics under higher speed and bearing pressures than any previous tests. The skid tests were conducted to support a feasibility study of adding a skid to the orbiter strut between the main tires to protect an intact tire from failure due to overload should one of the tires fail. Roll-on-rim tests were conducted to define the ability of a standard and a modified orbiter main wheel to roll without a tire. Results of the investigation are combined into a generic model of strut drag versus time under failure conditions for inclusion into rollout simulators used to train the shuttle astronauts.

  10. Seismic precursory patterns before a cliff collapse and critical point phenomena

    USGS Publications Warehouse

    Amitrano, D.; Grasso, J.-R.; Senfaute, G.

    2005-01-01

    We analyse the statistical pattern of seismicity before a 1-2 × 10³ m³ chalk cliff collapse on the Normandie ocean shore, Western France. We show that a power law acceleration of seismicity rate and energy, in both the 40 Hz-1.5 kHz and 2 Hz-10 kHz frequency ranges, is defined over 3 orders of magnitude within 2 hours of the collapse time. Simultaneously, the average size of the seismic events increases toward the time to failure. These in situ results are derived from the only station located within one rupture length of the rock fall rupture plane. They mimic the "critical point" like behavior recovered from physical and numerical experiments before brittle failures and tertiary creep failures. Our analysis of this first seismic monitoring data of a cliff collapse suggests that the thermodynamic phase transition models for failure may apply for cliff collapse. Copyright 2005 by the American Geophysical Union.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Li, Yun; Levitt, Karl N.

    Consensus is a fundamental approach to implementing fault-tolerant services through replication where there exists a tradeoff between the cost and the resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.
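The switching idea can be caricatured in a few lines: given an IDS-derived estimate of Byzantine threat, pick the cheapest protocol whose resilience covers it. The protocol list, costs, and threshold below are invented stand-ins for the paper's formal cost model, not its actual parameters.

```python
def choose_protocol(ids_byzantine_score, protocols=None):
    """Pick the cheapest consensus protocol that tolerates the threat
    level inferred from IDS signals (illustrative cost model).

    ids_byzantine_score: estimated probability of Byzantine behavior,
    in [0, 1]; the 0.5 threshold is a placeholder.
    """
    if protocols is None:
        # (name, relative cost, tolerates Byzantine failures)
        protocols = [("CFT", 1.0, False),
                     ("Hybrid", 2.0, True),
                     ("BFT", 3.0, True)]
    need_bft = ids_byzantine_score > 0.5
    # Under low threat every protocol is eligible; under high threat
    # only Byzantine-tolerant ones are.
    eligible = [p for p in protocols if p[2] or not need_bft]
    return min(eligible, key=lambda p: p[1])[0]
```

Re-evaluating this choice as IDS signals change gives the "moving target" behavior: the running protocol is always the cheapest one deemed safe at the moment.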

  12. Key variables influencing patterns of lava dome growth and collapse

    NASA Astrophysics Data System (ADS)

    Husain, T.; Elsworth, D.; Voight, B.; Mattioli, G. S.; Jansma, P. E.

    2013-12-01

    Lava domes are conical structures that grow by the infusion of viscous silicic or intermediate composition magma from a central volcanic conduit. Dome growth can be characterized by repeated cycles of growth punctuated by collapse, as the structure becomes oversized for its composite strength. Within these cycles, deformation ranges from slow long-term deformation to sudden deep-seated collapses. Collapses may range from small raveling failures to voluminous and fast-moving pyroclastic flows with rapid, long downslope reach from the edifice. Infusion rate and magma rheology, together with crystallization temperature and volatile content, govern the spatial distribution of strength in the structure. Solidification, driven by degassing-induced crystallization of magma, leads to the formation of a continuously evolving frictional talus as a hard outer shell. This shell encapsulates the cohesion-dominated soft ductile core. Here we explore the mechanics of lava dome growth and failure using a two-dimensional particle-dynamics model. This meshless model follows the natural evolution of a brittle carapace formed by loss of volatiles and rheological stiffening, and avoids the difficulties of hour-glassing and mesh entanglement typical of meshed models. We test the fidelity of the model against existing experimental and observational models of lava dome growth. The particle-dynamics model follows the natural development of dome growth and collapse, which is infeasible using simple analytical models. The model provides insight into the triggers that lead to the transition in collapse mechanism from shallow flank collapse to deep-seated sector collapse. An increase in material stiffness due to a decrease in infusion rate results in the transition of the growth pattern from endogenous to exogenous. The material stiffness and strength are strongly controlled by the magma infusion rate.
An increase in infusion rate decreases the time available for degassing-induced crystallization, leading to a transition in the growth pattern, while a decrease in infusion rate results in larger crystals, causing the material to stiffen and leading to the formation of spines. Material stiffness controls the growth direction of the viscous plug in the lava dome interior. Material strength and stiffness, controlled by the rate of infusion, influence lava dome growth more significantly than the friction coefficient of the talus.

  13. SURFplus Model Calibration for PBX 9502

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-12-06

    The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  14. Stochastic Model of Clogging in a Microfluidic Cell Sorter

    NASA Astrophysics Data System (ADS)

    Fai, Thomas; Rycroft, Chris

    2016-11-01

    Microfluidic devices for sorting cells by deformability show promise for various medical purposes, e.g. detecting sickle cell anemia and circulating tumor cells. One class of such devices consists of a two-dimensional array of narrow channels, each column containing several identical channels in parallel. Cells are driven through the device by an applied pressure or flow rate. Such devices allow many cells to be sorted simultaneously, but cells eventually clog individual channels and change the device properties in an unpredictable manner. In this talk, we propose a stochastic model for the failure of such microfluidic devices by clogging and present preliminary theoretical and computational results. The model can be recast as an ODE that exhibits finite-time blow-up under certain conditions. The failure time distribution is investigated analytically in certain limiting cases, and more realistic versions of the model are solved by computer simulation.
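A toy version of such a clogging model can be simulated directly: under a fixed total flow, every clog raises the per-channel flow through the surviving channels, which in turn accelerates their clogging. All parameters below are illustrative, not taken from the talk.

```python
import random

def time_to_failure(n_channels=100, total_flow=100.0, clog_coeff=0.001,
                    failed_fraction=0.5, seed=0):
    """Stochastic sketch of clogging in a parallel-channel sorter.

    Fixed total flow is split evenly among open channels; each open
    channel clogs in a time step with probability proportional to the
    flow it carries.  Device 'failure' is declared when half the
    channels have clogged.  Parameters are illustrative.
    """
    rng = random.Random(seed)
    open_ch = n_channels
    t = 0
    while open_ch > n_channels * (1 - failed_fraction):
        t += 1
        per_channel_flow = total_flow / open_ch
        p_clog = min(1.0, clog_coeff * per_channel_flow)
        open_ch -= sum(1 for _ in range(open_ch) if rng.random() < p_clog)
    return t
```

Raising the clogging coefficient (or the applied flow) shortens the failure time, and repeated runs with different seeds trace out the failure time distribution the abstract refers to.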

  15. Shuttle data book: SRM fragment velocity model. Presented to the SRB Fragment Model Review Panel

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This study was undertaken to determine the velocity of fragments generated by the range safety destruction (RSD) or random failure of a Space Transportation System (STS) Solid Rocket Motor (SRM). The specific requirement was to provide a fragment model for use in those Galileo and Ulysses RTG safety analyses concerned with possible fragment impact on the spacecraft radioisotope thermoelectric generators (RTGs). Good agreement was obtained between predictions and observations for fragment velocity, velocity distributions, azimuths, and rotation rates. Based on this agreement with the entire data base, the model was used to predict the probable fragment environments which would occur in the event of an STS-SRM RSD or random failure at 10, 74, 84 and 110 seconds. The results of these predictions are the basis of the fragment environments presented in the Shuttle Data Book (NSTS-08116). The information presented here is in viewgraph form.

  16. A Mixed Methods Explanatory Study of the Failure/Drop Rate for Freshman STEM Calculus Students

    ERIC Educational Resources Information Center

    Worthley, Mary

    2013-01-01

    In a national context of high failure rates in freshman calculus courses, the purpose of this study was to understand who is struggling, and why. High failure rates are especially alarming given a local environment where students have access to a variety of academic and personal assistance. The sample consists of students at Colorado State…

  17. Training of residents in laparoscopic tubal sterilization: Long-term failure rates

    PubMed Central

    Rackow, Beth W.; Rhee, Maria C.; Taylor, Hugh S.

    2011-01-01

    Objectives Laparoscopic tubal sterilization with bipolar coagulation is a common and effective method of contraception, and a procedure much used to teach laparoscopic surgical skills to Obstetrics and Gynaecology residents (trainees); but it has an inherent risk of failure. This study investigated the long-term failure rate of this procedure when performed by Obstetrics and Gynaecology residents on women treated in their teaching clinics. Methods From 1991 to 1994, Obstetrics and Gynaecology residents carried out 386 laparoscopic tubal sterilizations with bipolar coagulation at Yale-New Haven Hospital. Six to nine years after the procedure, the women concerned were contacted by telephone and data were collected about sterilization failure. Results Two failures of laparoscopic tubal sterilization with bipolar coagulation were identified: an ectopic pregnancy and a spontaneous abortion. For this time period, the long-term sterilization failure rate was 1.9% (0–4.4%). Conclusions The long-term sterilization failure rate for laparoscopic tubal sterilization with bipolar coagulation performed by residents is comparable to the results of prior studies. These findings can be used to properly counsel women at a teaching clinic about the risks of sterilization failure with this procedure, and attest to the adequacy of residents’ training and supervision. PMID:18465476

  18. Rehospitalization in a national population of home health care patients with heart failure.

    PubMed

    Madigan, Elizabeth A; Gordon, Nahida H; Fortinsky, Richard H; Koroukian, Siran M; Piña, Ileana; Riggs, Jennifer S

    2012-12-01

    Patients with heart failure (HF) have high rates of rehospitalization. Home health care (HHC) patients with HF are not well studied in this regard. The objectives of this study were to determine patient, HHC agency, and geographic (i.e., area variation) factors related to 30-day rehospitalization in a national population of HHC patients with HF, and to describe the extent to which rehospitalizations were potentially avoidable. Chronic Condition Warehouse data from the Centers for Medicare & Medicaid Services. Retrospective cohort design. The 2005 national population of HHC patients was matched with hospital and HHC claims, the Provider of Service file, and the Area Resource File. The 30-day rehospitalization rate was 26 percent with 42 percent of patients having cardiac-related diagnoses for the rehospitalization. Factors with the strongest association with rehospitalization were consistent between the multilevel model and Cox proportional hazard models: number of prior hospital stays, higher HHC visit intensity category, and dyspnea severity at HHC admission. Substantial numbers of rehospitalizations were judged to be potentially avoidable. The persistently high rates of rehospitalization have been difficult to address. There are health care-specific actions and policy implications that are worth examining to improve rehospitalization rates. © Health Research and Educational Trust.

  19. Numerical modeling of injection, stress and permeability enhancement during shear stimulation at the Desert Peak Enhanced Geothermal System

    USGS Publications Warehouse

    Dempsey, David; Kelkar, Sharad; Davatzes, Nick; Hickman, Stephen H.; Moos, Daniel

    2015-01-01

    Creation of an Enhanced Geothermal System relies on stimulation of fracture permeability through self-propping shear failure that creates a complex fracture network with high surface area for efficient heat transfer. In 2010, shear stimulation was carried out in well 27-15 at Desert Peak geothermal field, Nevada, by injecting cold water at pressure less than the minimum principal stress. An order-of-magnitude improvement in well injectivity was recorded. Here, we describe a numerical model that accounts for injection-induced stress changes and permeability enhancement during this stimulation. In a two-part study, we use the coupled thermo-hydrological-mechanical simulator FEHM to: (i) construct a wellbore model for non-steady bottom-hole temperature and pressure conditions during the injection, and (ii) apply these pressures and temperatures as a source term in a numerical model of the stimulation. In this model, a Mohr-Coulomb failure criterion and an empirical fracture permeability relation are used to describe the permeability evolution of the fractured rock. The numerical model is calibrated using laboratory measurements of material properties on representative core samples and wellhead records of injection pressure and mass flow during the shear stimulation. The model captures both the absence of stimulation at low wellhead pressure (WHP ≤1.7 and ≤2.4 MPa) as well as the timing and magnitude of injectivity rise at medium WHP (3.1 MPa). Results indicate that thermoelastic effects near the wellbore and the associated non-local stresses further from the well combine to propagate a failure front away from the injection well. Elevated WHP promotes failure, increases the injection rate, and cools the wellbore; however, as the overpressure drops off with distance, thermal and non-local stresses play an ongoing role in promoting shear failure at increasing distance from the well.
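
For reference, a minimal sketch of the Mohr-Coulomb check used to flag shear failure on a plane; the cohesion and friction angle below are hypothetical, not the calibrated Desert Peak values.

```python
import math

def coulomb_failure(sigma_n_eff, tau, cohesion, friction_angle_deg):
    """Return True if shear stress reaches the Mohr-Coulomb strength.

    sigma_n_eff : effective normal stress on the plane (MPa, compression positive)
    tau         : shear stress on the plane (MPa)
    """
    mu = math.tan(math.radians(friction_angle_deg))
    return tau >= cohesion + mu * sigma_n_eff

# Raising pore pressure lowers the effective normal stress and promotes slip.
# Hypothetical numbers: cohesion 1 MPa, friction angle 30 degrees.
stable = coulomb_failure(sigma_n_eff=20.0, tau=10.0,
                         cohesion=1.0, friction_angle_deg=30.0)
slips = coulomb_failure(sigma_n_eff=20.0 - 8.0, tau=10.0,
                        cohesion=1.0, friction_angle_deg=30.0)
```

The same plane that is stable at 20 MPa effective normal stress fails once 8 MPa of overpressure is added, the basic mechanism behind injection-induced shear stimulation.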

  20. An analysis of the value of spermicides in contraception.

    PubMed

    1979-11-01

    Development of the so-called modern methods of contraception has somewhat eclipsed interest in traditional methods. However, spermicides are still important for many couples and their use appears to be increasing. A brief history of the use of and research into spermicidal contraceptives is presented. The limitations of spermicides are: the necessity for use at the time of intercourse, and their high failure rate. Estimates of the failure rates of spermicides have ranged from 0.3 pregnancies per 100 woman-years of use to nearly 40, depending on the product used and the population tested. Just as their use depends on various social factors, so does their failure rate. Characteristics of the user determine failure rates. Motivation is important in lowering failure rates as is education, the intracouple relationship, and previous experience with spermicides. Method failure is also caused by defects in the product, either in the active ingredient of the spermicide or in the base carrier. The main advantage of spermicidal contraception is its safety. Limited research is currently being conducted on spermicides. Areas for improvement in existing spermicides and areas for possible innovation are mentioned.

  1. Cost-Effectiveness Analysis of Intensity Modulated Radiation Therapy Versus 3-Dimensional Conformal Radiation Therapy for Anal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodges, Joseph C., E-mail: joseph.hodges@utsouthwestern.edu; Beg, Muhammad S.; Das, Prajnan

    2014-07-15

    Purpose: To compare the cost-effectiveness of intensity modulated radiation therapy (IMRT) and 3-dimensional conformal radiation therapy (3D-CRT) for anal cancer and determine disease, patient, and treatment parameters that influence the result. Methods and Materials: A Markov decision model was designed with the various disease states for the base case of a 65-year-old patient with anal cancer treated with either IMRT or 3D-CRT and concurrent chemotherapy. Health states accounting for rates of local failure, colostomy failure, treatment breaks, patient prognosis, acute and late toxicities, and the utility of toxicities were informed by existing literature and analyzed with deterministic and probabilistic sensitivity analysis. Results: In the base case, mean costs and quality-adjusted life expectancy in years (QALY) for IMRT and 3D-CRT were $32,291 (4.81) and $28,444 (4.78), respectively, resulting in an incremental cost-effectiveness ratio of $128,233/QALY for IMRT compared with 3D-CRT. Probabilistic sensitivity analysis found that IMRT was cost-effective in 22%, 47%, and 65% of iterations at willingness-to-pay thresholds of $50,000, $100,000, and $150,000 per QALY, respectively. Conclusions: In our base model, IMRT was a cost-ineffective strategy despite the reduced acute treatment toxicities and their associated costs of management. The model outcome was sensitive to variations in local and colostomy failure rates, as well as patient-reported utilities relating to acute toxicities.
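
The incremental cost-effectiveness ratio follows directly from the base-case numbers quoted above; a quick check:

```python
# Incremental cost-effectiveness ratio (ICER) from the base-case values
# in the abstract: incremental cost divided by incremental QALYs.
cost_imrt, qaly_imrt = 32291.0, 4.81
cost_3dcrt, qaly_3dcrt = 28444.0, 4.78

icer = (cost_imrt - cost_3dcrt) / (qaly_imrt - qaly_3dcrt)  # $/QALY

def cost_effective(icer, willingness_to_pay):
    """A strategy is deemed cost-effective if its ICER is at or below
    the willingness-to-pay threshold ($/QALY)."""
    return icer <= willingness_to_pay
```

This reproduces the reported $128,233/QALY, above the conventional $100,000/QALY threshold but below $150,000/QALY, consistent with the probabilistic results.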

  2. Impact of ductility on hydraulic fracturing in shales

    NASA Astrophysics Data System (ADS)

    MacMinn, Chris; Auton, Lucy

    2016-04-01

    Hydraulic fracturing is a method for extracting natural gas and oil from low-permeability rocks such as shale via the high-pressure injection of fluid into the bulk of the rock. The goal is to initiate and propagate fractures that will provide hydraulic access deeper into the reservoir, enabling gas or oil to be collected from a larger region of the rock. Fracture is the tensile failure of a brittle material upon reaching a threshold tensile stress, but some shales have a high clay content and may yield plastically before fracturing. Plastic deformation is the shear failure of a ductile material, during which stress relaxes through irreversible rearrangements of the particles of the material. Here, we investigate the impact of the ductility of shales on hydraulic fracturing. We first consider a simple, axisymmetric model for radially outward fluid injection from a wellbore into a ductile porous rock. We use this model to show that plastic deformation greatly reduces the maximum tensile stress, and that this maximum stress does not always occur at the wellbore. We then complement these results with laboratory experiments in an analogue system, and with numerical simulations based on the discrete element method (DEM), both of which suggest that ductile failure can indeed dramatically change the resulting deformation pattern. These results imply that hydraulic fracturing may fail in ductile rocks, or that the required injection rate for fracking may be much larger than the rate predicted from models that assume purely elastic mechanical behavior.

  3. Intraventricular filling under increasing left ventricular wall stiffness and heart rates

    NASA Astrophysics Data System (ADS)

    Samaee, Milad; Lai, Hong Kuan; Schovanec, Joseph; Santhanakrishnan, Arvind; Nagueh, Sherif

    2015-11-01

    Heart failure with normal ejection fraction (HFNEF) is a clinical syndrome that is prevalent in over 50% of heart failure patients. HFNEF patients show increased left ventricle (LV) wall stiffness and clinical diagnosis is difficult using ejection fraction (EF) measurements. We hypothesized that filling vortex circulation strength would decrease with increasing LV stiffness irrespective of heart rate (HR). 2D PIV and hemodynamic measurements were acquired on LV physical models of varying wall stiffness under resting and exercise HRs. The LV models were comparatively tested in an in vitro flow circuit consisting of a two-element Windkessel model driven by a piston pump. The stiffer LV models were tested in comparison with the least stiff baseline model without changing pump amplitude, circuit compliance and resistance. Increasing stiffness at resting HR resulted in diminishing cardiac output without lowering EF below 50% as in HFNEF. Increasing HR to 110 bpm in addition to stiffness resulted in lowering EF to less than 50%. The circulation strength of the intraventricular filling vortex diminished with increasing stiffness and HR. The results suggest that filling vortex circulation strength could be potentially used as a surrogate measure of LV stiffness. This research was supported by the Oklahoma Center for Advancement of Science and Technology (HR14-022).
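
A minimal sketch of the two-element Windkessel afterload used in such flow circuits (the parameter values are illustrative, not those of this rig): compliance C and peripheral resistance R give C dP/dt = Q_in(t) - P/R.

```python
import math

# Two-element Windkessel sketch with hypothetical parameter values.
R, C = 1.0, 1.5          # peripheral resistance (mmHg*s/mL), compliance (mL/mmHg)
T, systole = 0.857, 0.3  # beat period (s) for ~70 bpm, ejection duration (s)

def q_in(t):
    """Half-sine inflow during systole, zero during diastole (mL/s)."""
    tau = t % T
    return 300.0 * math.sin(math.pi * tau / systole) if tau < systole else 0.0

dt, P, trace = 1e-4, 80.0, []
for i in range(int(10 * T / dt)):        # simulate 10 beats
    t = i * dt
    P += dt * (q_in(t) - P / R) / C      # explicit Euler step of C dP/dt = Q - P/R
    trace.append(P)
```

The pressure settles into a periodic waveform: it rises during ejection and decays exponentially (time constant RC) in diastole, the basic behavior the physical circuit reproduces.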

  4. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines; earlier treatments, however, could describe them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
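
In the spirit of the failure-unit idea, a weakest-link Monte Carlo sketch: a line modeled as failure units in series fails when its first unit fails, so the MTTF drops as the line lengthens. The lognormal parameters are hypothetical, and the parallel-redundancy part of the model is omitted.

```python
import math
import random
import statistics

random.seed(0)

def line_ttf(n_units, mu=4.0, sigma=0.6):
    """Failure time of a line of n_units failure units in series:
    the line fails when its weakest unit fails (lognormal unit lifetimes,
    hypothetical parameters)."""
    return min(random.lognormvariate(mu, sigma) for _ in range(n_units))

def mttf_dttf(n_units, samples=2000):
    """Median time to failure and spread of log failure times."""
    times = [line_ttf(n_units) for _ in range(samples)]
    return statistics.median(times), statistics.stdev(math.log(t) for t in times)

mttf_short, _ = mttf_dttf(5)    # short line: few failure units in series
mttf_long, _ = mttf_dttf(50)    # long line: many failure units in series
```

The simulated MTTF for the 50-unit line is well below that of the 5-unit line, illustrating the line-length dependence the model quantifies.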

  5. Intermediate Temperature Stress Rupture of Woven SiC Fiber, BN Interphase, SiC Matrix Composites in Air

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.; Levine, Stanley (Technical Monitor)

    2000-01-01

    Tensile stress-rupture experiments were performed on woven Hi-Nicalon reinforced SiC matrix composites with BN interphases in air. Modal acoustic emission (AE) was used to monitor the damage accumulation in the composites during the tests, and microstructural analysis was performed to determine the amount of matrix cracking that occurred for each sample. Fiber fractography was also performed for individual fiber failures at the specimen fracture surface to determine the strengths at which fibers failed. The rupture strengths were significantly worse than what would have been expected from the inherent degradation of the fibers themselves when subjected to similar rupture conditions. At higher applied stresses the rate of rupture was larger than at lower applied stresses. It was observed that the change in rupture rate corresponded to the onset of through-thickness cracking in the composites themselves. The primary cause of the severe degradation was the ease with which fibers would bond to one another at their closest separation distances, less than 100 nanometers, when exposed to the environment. The near fiber-to-fiber contact in the woven tows enabled premature fiber failure over large areas of matrix cracks due to the stress concentrations created by fibers bonded to one another after one or a few fibers fail, i.e. the loss of global load sharing. Any improvement in fiber-to-fiber separation of this composite system should result in improved stress-rupture properties. A model was developed in order to predict the rupture lifetime for these composites based on the probabilistic nature of individual fiber failure at temperature, the matrix cracking state during the rupture test, and the rate of oxidation into a matrix crack. Also incorporated into the model were estimates of the stress concentration that would occur between the outer rim of fibers in a load-bearing bundle and the unbridged region of a matrix crack, after Xia et al. For the lower stresses, this source of stress concentration was the likely cause for initial fiber failure that would trigger catastrophic failure of the composite.

  6. Meta-Analysis

    PubMed Central

    Kale-Pradhan, Pramodini B.; Mariani, Nicholas P.; Wilhelm, Sheila M.; Johnson, Leonard B.

    2015-01-01

    Background: Vancomycin is used to treat serious infections caused by methicillin-resistant Staphylococcus aureus (MRSA). It is unclear whether MRSA isolates with minimum inhibitory concentration (MIC) 1.5 to 2 µg/mL are successfully treated with vancomycin. Objective: Evaluate vancomycin failure rates in MRSA bacteremia with an MIC <1.5 versus ≥1.5 µg/mL, and MIC ≤1 versus ≥2 µg/mL. Methods: A literature search was conducted using MeSH terms vancomycin, MRSA, bacteremia, MIC, treatment and vancomycin failure to identify human studies published in English. All studies of patients with MRSA bacteremia treated with vancomycin were included if they evaluated vancomycin failures, defined as mortality, and reported associated MICs determined by E-test. Study sample size, vancomycin failure rates, and corresponding MIC values were extracted and analyzed using RevMan 5.2.5. Results: Thirteen studies including 2955 patients met all criteria. Twelve studies including 2861 patients evaluated outcomes using an MIC cutoff of 1.5 µg/mL. A total of 413 of 1186 (34.8%) patients with an MIC <1.5 and 531 of 1675 (31.7%) patients with an MIC of ≥1.5 µg/mL experienced treatment failure (odds ratio = 0.72, 95% confidence interval = 0.49-1.04, P = .08). Six studies evaluated 728 patients using the cutoffs of ≤1 and ≥2 µg/mL. A total of 384 patients had isolates with MIC ≤1 µg/mL, 344 had an MIC ≥2 µg/mL. Therapeutic failure occurred in 87 and 102 patients, respectively (odds ratio = 0.61, 95% confidence interval = 0.34-1.10, P = .10). As heterogeneity between the studies was high, a random-effects model was used. Conclusion: Vancomycin MIC may not be an optimal sole indicator of vancomycin treatment failure in MRSA bacteremia.
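
For illustration of the pooling step, a fixed-effect inverse-variance combination of log odds ratios (the abstract's analysis used a random-effects model in RevMan, and the 2x2 counts below are hypothetical, not the thirteen included studies):

```python
import math

# Hypothetical per-study 2x2 counts: (failures, non-failures) in the
# low-MIC arm, then the high-MIC arm. NOT the studies in this meta-analysis.
studies = [
    (20, 80, 30, 70),
    (15, 45, 25, 55),
    (40, 160, 45, 135),
]

num = den = 0.0
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of the log odds ratio
    w = 1.0 / var                          # inverse-variance weight
    num += w * log_or
    den += w

pooled_or = math.exp(num / den)
half_width = 1.96 / math.sqrt(den)         # 95% CI on the log scale
ci_low = math.exp(num / den - half_width)
ci_high = math.exp(num / den + half_width)
```

A random-effects analysis, appropriate when heterogeneity is high as here, would widen the weights with a between-study variance term; the inverse-variance machinery is otherwise the same.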

  7. Survival among older adults with kidney failure is better in the first three years with chronic dialysis treatment than not.

    PubMed

    Tam-Tham, Helen; Quinn, Robert R; Weaver, Robert G; Zhang, Jianguo; Ravani, Pietro; Liu, Ping; Thomas, Chandra; King-Shier, Kathryn; Fruetel, Karen; James, Matt T; Manns, Braden J; Tonelli, Marcello; Murtagh, Fliss E M; Hemmelgarn, Brenda R

    2018-05-23

    Comparisons of survival between dialysis and nondialysis care for older adults with kidney failure have been limited to those managed by nephrologists, and are vulnerable to lead-time and immortal-time biases. We therefore compared time to all-cause mortality among older adults with kidney failure treated vs. not treated with chronic dialysis. Our retrospective cohort study used linked administrative and laboratory data to identify adults aged 65 years or older in Alberta, Canada, with kidney failure (2002-2012), defined by two or more consecutive outpatient estimated glomerular filtration rates less than 10 mL/min/1.73 m², spanning 90 or more days. We used marginal structural Cox models to assess the association between receipt of dialysis and all-cause mortality by allowing control for both time-varying and baseline confounders. Overall, 838 patients met inclusion criteria (mean age 79.1; 48.6% male; mean estimated glomerular filtration rate 7.8 mL/min/1.73 m²). Dialysis treatment (vs. no dialysis) was associated with a significantly lower risk of death for the first three years of follow-up (hazard ratio 0.59 [95% confidence interval 0.46-0.77]), but not thereafter (1.22 [0.69-2.17]). However, dialysis was associated with a significantly higher risk of hospitalization (1.40 [1.16-1.69]). Thus, among older adults with kidney failure, treatment with dialysis was associated with longer survival up to three years after reaching kidney failure, though with a higher risk of hospital admissions. These findings may assist shared decision-making about treatment of kidney failure. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  8. Complications of short versus long cephalomedullary nail for intertrochanteric femur fractures, minimum 1 year follow-up.

    PubMed

    Vaughn, Josh; Cohen, Eric; Vopat, Bryan G; Kane, Patrick; Abbood, Emily; Born, Christopher

    2015-05-01

    Hip fractures are becoming increasingly common, resulting in significant morbidity and mortality and raising healthcare costs. Both short and long cephalomedullary devices are currently employed to treat intertrochanteric hip fractures. However, which device is optimal continues to be debated, as each implant has unique characteristics and theoretical advantages. This study sought to identify rates of complications associated with both long and short cephalomedullary nails for the treatment of intertrochanteric hip fractures. We retrospectively reviewed charts from 2006 to 2011 and identified 256 patients with AO class 31.1-32.3 fractures. Sixty were treated with short nails and 196 with long nails. Radiographs and charts were then analysed for failures and hardware complications. Catastrophic failure and hardware complication rates were not statistically different between short and long cephalomedullary nails. The overall catastrophic failure rate was 3.1%; there was a 5% failure rate in the short-nail group compared with a 2.6% failure rate in the long-nail group (p = 0.191). There was a 3.33% secondary femur fracture rate in the short-nail group, compared with none in the long-nail cohort (p = 0.054). The rate of proximal fixation failure was 1.67% for the short-nail group and 2.0% in the long-nail group (p = 0.406). Our data suggest comparable outcomes as measured by the similar catastrophic failure rates of short and long cephalomedullary nails for intertrochanteric femur fractures. However, there was an increased risk of secondary femur fracture with short cephalomedullary nails when compared to long nails that approached statistical significance.
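
A pooled two-proportion z-test sketch for comparing such failure rates. The counts are reconstructed approximately from the reported percentages (3/60 short vs 5/196 long), and the normal approximation here is not the authors' test, so the p-value need not match the paper's.

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for H0: p1 == p2 (pooled z-test, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))               # = 2 * (1 - Phi(z))

# Approximate counts implied by the reported 5% (of 60) and 2.6% (of 196)
# catastrophic failure rates; hypothetical reconstruction, not the source data.
p_value = two_proportion_p(3, 60, 5, 196)
```

With samples this small a Fisher exact test would normally be preferred; either way the difference is far from significant, consistent with the abstract's conclusion.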

  9. Failure Forecasting in Triaxially Stressed Sandstones

    NASA Astrophysics Data System (ADS)

    Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.

    2017-12-01

    Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation of a geophysical precursor in order to perform 'hindcasts' by solving for failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened and when it happened, and we can search data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc = 25 MPa, Pp = 5 MPa, strain rate = 1.0E-5 s⁻¹) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current 'hindcast' methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
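
The Voight-style hindcast can be sketched as an inverse-rate method: if the precursor rate accelerates as A/(t_f - t), then 1/rate is linear in time and the extrapolated zero crossing of a fitted line estimates the failure time. The synthetic, noise-free data below are illustrative only.

```python
# Synthetic precursor following rate(t) = A / (t_f - t), so that
# 1/rate = (t_f - t) / A is a straight line hitting zero at t = t_f.
A, t_f_true = 2.0, 100.0
times = [float(t) for t in range(0, 90, 5)]          # observe up to t = 85
inv_rate = [(t_f_true - t) / A for t in times]

# Ordinary least-squares line through (t, 1/rate)
n = len(times)
mt = sum(times) / n
mi = sum(inv_rate) / n
slope = sum((t - mt) * (y - mi) for t, y in zip(times, inv_rate)) / \
        sum((t - mt) ** 2 for t in times)
intercept = mi - slope * mt
t_f_est = -intercept / slope   # forecast: where the fitted line crosses zero
```

On real AE data the inverse rate is noisy, and forecasting from partial records (as the study does with increasing data fractions) shows how the estimate of t_f converges, or fails to, as failure approaches.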

  10. Dropout Prevention: An Intervention Model for Today's High Schools.

    ERIC Educational Resources Information Center

    Maurer, Richard D.

    1982-01-01

    Describes Project Intercept in Ossining (New York), which cut the high school's dropout, absence, and failure rates by involving teachers, students, and families. The program uses four major strategies--teacher/staff inservice training, alternative academic programs, student training in social and interpersonal skills, and family intervention…

  11. Is neonatal head circumference related to caesarean section for failure to progress?

    PubMed

    de Vries, Bradley; Bryce, Bianca; Zandanova, Tatiana; Ting, Jason; Kelly, Patrick; Phipps, Hala; Hyett, Jon A

    2016-12-01

    There is global concern about rising caesarean section rates. Identification of risk factors could lead to preventative measures. To describe the association between neonatal head circumference and (i) caesarean section for failure to progress, (ii) intrapartum caesarean section overall. This was a retrospective cohort study of 11 687 singleton live births with cephalic presentation, attempted vaginal birth and at least 37 completed weeks gestation from January 2005 to June 2009. Neonatal head circumference was grouped into quartiles and multiple logistic regressions performed. The rates of caesarean section for failure to progress were 4.1, 6.4, 8.8 and 14.3% in successive head circumference quartiles. Rates of intrapartum caesarean section overall were 8.7, 12.1, 15.8 and 21.5%. The odds ratios for caesarean section for failure to progress were: 1.00, 1.33 (95% CI 1.02-1.73), 1.54 (1.18-2.02) and 1.93 (1.44-2.57) for successive head circumference quartiles after adjusting for multiple demographic and clinical factors. The adjusted odds ratios for intrapartum caesarean section for any indication were: 1.00, 1.52 (95% CI 1.24-1.87), 1.99 (1.62-2.46) and 2.38 (1.89-3.00), respectively. There is a strong positive relationship between head circumference quartile and both caesarean section for failure to progress and caesarean for any indication. If this finding is confirmed using ultrasound measurements, there is potential for head circumference to be incorporated into predictive models for intrapartum caesarean section with a view to offering interventions to reduce the risk of caesarean section. © 2016 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
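
Crude odds ratios can be recovered from the quartile rates quoted above; note that these unadjusted values differ from the adjusted odds ratios reported in the abstract, which control for demographic and clinical factors.

```python
# Caesarean-for-failure-to-progress rates by head-circumference quartile,
# as reported in the abstract (4.1%, 6.4%, 8.8%, 14.3%).
rates = [0.041, 0.064, 0.088, 0.143]

def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

# Crude (unadjusted) odds ratios relative to the first quartile.
odds_ratios = [odds(p) / odds(rates[0]) for p in rates]
```

The crude ratios rise monotonically across quartiles, the same qualitative trend the adjusted model confirms.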

  12. A new model of reaction-driven cracking: fluid volume consumption and tensile failure during serpentinization

    NASA Astrophysics Data System (ADS)

    Eichenbaum-Pikser, J. M.; Spiegelman, M. W.; Kelemen, P. B.; Wilson, C. R.

    2013-12-01

    Reactive fluid flow plays an important role in a wide range of geodynamic processes, such as melt migration, formation of hydrous minerals on fault surfaces, and chemical weathering. These processes are governed by the complex coupling between fluid transport, reaction, and solid deformation. Reaction-driven cracking is a potentially critical feedback mechanism, by which volume change associated with chemical reaction drives fracture in the surrounding rock. It has been proposed to play a role in both serpentinization and carbonation of peridotite, motivating consideration of its application to mineral carbon sequestration. Previous studies of reactive cracking have focused on the increase in solid volume, and as such, have considered failure in compression. However, if the consumption of fluid is considered in the overall volume budget, the reaction can be net volume reducing, potentially leading to failure in tension. To explore these problems, we have formulated and solved a 2-D model of coupled porous flow, reaction kinetics, and elastic deformation using the finite element model assembler TerraFERMA (Wilson et al., G3 2013, submitted). The model is applied to the serpentinization of peridotite, which can be reasonably approximated as the transfer of a single reactive component (H2O) between fluid and solid phases, making it a simple test case to explore the process. The behavior of the system is controlled by the competition between the rate of volume consumption by the reaction, and the rate of volume replacement by fluid transport, as characterized by a nondimensional parameter χ, which depends on permeability, reaction rate, and the bulk modulus of the solid. Large values of χ correspond to fast fluid transport relative to reaction rate, resulting in a low stress, volume replacing regime. At smaller values of χ, fluid transport cannot keep up with the reaction, resulting in pore fluid under-pressure and tensile solid stresses. For the range of χ relevant to the serpentinization of peridotite, these stresses can reach hundreds of MPa, exceeding the tensile strength of peridotite.

  13. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
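
As one example of the failure initiation criteria listed, a minimal maximum-stress check for a single ply; the strength values used below are illustrative, not from the paper.

```python
def max_stress_failure_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Maximum stress criterion for an orthotropic ply: failure when index >= 1.

    s1, s2 : in-plane normal stresses (fiber, transverse directions)
    t12    : in-plane shear stress
    Xt/Xc  : fiber-direction tensile/compressive strengths
    Yt/Yc  : transverse tensile/compressive strengths
    S      : in-plane shear strength
    """
    r1 = s1 / Xt if s1 >= 0 else -s1 / Xc
    r2 = s2 / Yt if s2 >= 0 else -s2 / Yc
    return max(r1, r2, abs(t12) / S)

# Hypothetical carbon/epoxy-like ply strengths in MPa (illustrative only).
idx_ok = max_stress_failure_index(600, 20, 30,
                                  Xt=1500, Xc=1200, Yt=50, Yc=200, S=70)
idx_fail = max_stress_failure_index(600, 60, 30,
                                    Xt=1500, Xc=1200, Yt=50, Yc=200, S=70)
```

In a ply-discounting scheme, a ply whose index reaches 1 has the stiffness coefficients associated with the violated mode degraded, and the analysis continues with the weakened laminate.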

  14. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
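
The NPV part of such an analysis reduces to discounting the projected cash flows; a minimal sketch with illustrative numbers (the paper's depreciation, CIT and inflation adjustments are omitted):

```python
def npv(cashflows, rate, initial_cost):
    """Net present value: discounted yearly cash flows minus up-front cost."""
    return sum(cf / (1.0 + rate) ** t
               for t, cf in enumerate(cashflows, start=1)) - initial_cost

# Illustrative scenario (not from the paper): a 100k improvement returning
# 30k per year for five years, discounted at 8%.
value = npv([30_000.0] * 5, rate=0.08, initial_cost=100_000.0)
roi = value / 100_000.0
```

A positive NPV at the chosen discount rate is the usual acceptance criterion; comparing NPV and ROI across the two simulated scenarios then ranks the improvement options.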

  15. Interdependence theory of tissue failure: bulk and boundary effects

    PubMed Central

    Suma, Daniel; Acun, Aylin; Zorlutuna, Pinar

    2018-01-01

    The mortality rate of many complex multicellular organisms increases with age, which suggests that net ageing damage is accumulative, despite remodelling processes. But how exactly do these little mishaps at the cellular level accumulate and spread to become a systemic catastrophe? To address this question we present experiments with synthetic tissues, an analytical model consistent with the experiments, and a number of implications that follow from the analytical model. Our theoretical framework describes how shape, curvature and density influence the propagation of failure in a tissue subjected to oxidative damage. We propose that ageing is an emergent property governed by interactions between cells, and that intercellular processes play a role that is at least as important as intracellular ones. PMID:29515857

  16. Working group session report: Neutron beam line shielding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, G. J.; Ikedo, Y.

    2001-01-01

    We have examined the differences between a 2-D model and a 3-D model for designing the beam-line shield for the HIPPO instrument at the Lujan Center at the Los Alamos National Laboratory. We have also calculated the total (neutron and gamma-ray) dose equivalent rate coming out of the personnel access ports of the HIPPO instrument experiment cave. For this calculation, we investigated two possible worst-case scenarios: (a) failure of the T{sub 0}-chopper with no sample at the sample position; and (b) failure of the T{sub 0}-chopper with a thick sample (a piece of Inconel-718, 10 cm diam by 30 cm long) at the sample position.

  17. CO2 storage and potential fault instability in the St. Lawrence Lowlands sedimentary basin (Quebec, Canada): Insights from coupled reservoir-geomechanical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinovskaya, E.; Rutqvist, J.; Malo, M.

    2014-01-21

    In this paper, coupled reservoir-geomechanical (TOUGH-FLAC) modeling is applied for the first time to the St. Lawrence Lowlands region to evaluate the potential for shear failure along pre-existing high-angle normal faults, as well as the potential for tensile failure in the caprock units (Utica Shale and Lorraine Group). This activity is part of a general assessment of the potential for safe CO2 injection into a sandstone reservoir (the Covey Hill Formation) within an Early Paleozoic sedimentary basin. Field and subsurface data are used to estimate the sealing properties of two reservoir-bounding faults (Yamaska and Champlain faults). The spatial variations in fluid pressure, effective minimum horizontal stress, and shear strain are calculated for different injection rates, using a simplified 2D geological model of the Becancour area, located ~110 km southwest of Quebec City. The simulation results show that initial fault permeability affects the timing, localization, rate, and length of fault shear slip. Contrary to the conventional view, our results suggest that shear failure may start earlier for a permeable fault than for a sealing fault, depending on the site-specific geologic setting. In simulations of a permeable fault, shear slip is nucleated along a 60 m long fault segment in a thin and brittle caprock unit (Utica Shale) trapped below a thicker and more ductile caprock unit (Lorraine Group), and then subsequently progresses up to the surface. In the case of a sealing fault, shear failure occurs later in time and is localized along a fault segment (300 m) below the caprock units. The presence of the inclined low-permeability Yamaska Fault close to the injection well causes asymmetric fluid-pressure buildup and lateral migration of the CO2 plume away from the fault, reducing the overall risk of CO2 leakage along faults. Finally, fluid-pressure-induced tensile fracturing occurs only under extremely high injection rates and is localized below the caprock units, which remain intact, preventing upward CO2 migration.

  18. An experimental and computational investigation of dynamic ductile fracture in stainless steel welds

    NASA Astrophysics Data System (ADS)

    Kothnur, Vasanth Srinivasa

    The high strain rate viscoplastic flow and fracture behavior of NITRONIC-50 and AL6XN stainless steel weldments are studied under dynamic loading conditions. The study is primarily motivated by interest in modeling the micromechanics of dynamic ductile failure in heterogeneous weldments. The high strain rate response of specimens machined from the parent, weld and heat-affected zones of NITRONIC-50 and AL6XN weldments is reported here on the basis of experiments conducted in a compression Kolsky bar configuration. The failure response of specimens prepared from the various material zones is investigated under high rate loading conditions in a tension Kolsky bar set-up. The microstructure of voided fracture process zones in these weldments is studied using X-ray computed microtomography. To model the preferential evolution of damage near the heat-affected zone, a finite deformation elastic-viscoplastic constitutive model for porous materials is developed. The evolution of the macroscopic flow response and the porous microstructure has been analyzed in two distinctive regimes: pre-coalescence and post-coalescence. The onset of void coalescence is analyzed on the basis of upper-bound models to obtain the limit loads needed to sustain a localized mode of plastic flow in the inter-void ligament. A finite element framework for the integration of the porous material response under high rate loading conditions is implemented as a user subroutine in ABAQUS/Explicit. To address the mesh sensitivity of numerical simulations of ductile fracture, a microstructural length scale is used to discretize finite element models of test specimens. Results from a detailed finite element study of the deformation and damage evolution in AL6XN weldments are compared with experimental observations.

  19. Break-even analysis revisited: the need to adjust for profitability, the collection rate and autonomous income.

    PubMed

    Broyles, R W; Narine, L; Khaliq, A

    2003-08-01

    This paper modifies traditional break-even analysis and develops a model that reflects the influence of variation in payer mix, the collection rate, profitability and autonomous income on the desired volume alternative. The augmented model indicates that a failure to adjust for uncollectibles and the net surplus results in a systematic understatement of the desired volume alternative. Conversely, a failure to adjust for autonomous income derived from the operation of cafeterias, gift shops or an organization's investment in marketable securities produces an overstatement of the desired volume. In addition, this paper uses Microsoft Excel to develop a spreadsheet that constructs a pro forma income statement, expressed in terms of the contribution margin. The spreadsheet also relies on the percentage-of-sales (revenue) approach to prepare a balance sheet from which indicators of fiscal performance are calculated. Hence, the analysis enables the organization to perform a sensitivity analysis of potential changes in the desired volume, the operating margin, the current ratio, the debt-to-equity ratio and the amount of cash derived from operations that are associated with expected variation in payer mix, the payer-specific collection rate, the net surplus and autonomous income.
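
    The adjustment this record argues for can be sketched with a simple contribution-margin model (all dollar figures and rates below are invented for illustration): collected revenue per unit is the collection rate times price, and the numerator also absorbs the target net surplus less autonomous income.

```python
def desired_volume(fixed_cost, price, variable_cost, collection_rate,
                   target_surplus=0.0, autonomous_income=0.0):
    """Volume at which collected revenue covers costs plus the net surplus."""
    contribution = collection_rate * price - variable_cost
    return (fixed_cost + target_surplus - autonomous_income) / contribution

naive = desired_volume(500_000, 100, 40, 1.0)       # classic break-even volume
adjusted = desired_volume(500_000, 100, 40, 0.85,   # 85% collection rate
                          target_surplus=60_000,
                          autonomous_income=20_000)
print(naive, adjusted)  # the unadjusted figure understates the required volume
```

    As the paper notes, the naive figure understates the volume needed once uncollectibles and the target surplus are recognized, while autonomous income pulls the requirement back down.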

  20. Evaluation of hawthorn extract on immunomodulatory biomarkers in a pressure overload model of heart failure.

    PubMed

    Bleske, Barry E; Zineh, Issam; Hwang, Hyun Seok; Welder, Gregory J; Ghannam, Michael M J; Boluyt, Marvin O

    2007-12-01

    Hawthorn extract (Crataegus sp.), a botanical complementary and alternative medicine, is often used to treat heart failure. The mechanism(s) by which hawthorn extract may treat heart failure is unknown but may theoretically include immunological effects. Therefore, the purpose of this study was to determine the effect of hawthorn extract on the immunomodulatory response in a pressure overload model of heart failure. A total of 62 male Sprague-Dawley rats were randomized to aortic constriction + vehicle (AC; n=15), aortic constriction + hawthorn 1.3 mg/kg (HL, n=17), aortic constriction + hawthorn 13 mg/kg (HM, n=15), or aortic constriction + hawthorn 130 mg/kg (HH, n=15). Six months after the surgical procedure, animals were sacrificed and plasma samples obtained for the measurement of the following immunomodulatory markers: interleukin (IL)-1β, IL-2, IL-6, IL-10; and leptin. The mortality rate following 6 months of aortic constriction was 40% in the AC group compared to 41%, 60%, and 53% for the HL, HM, and HH groups, respectively (P>0.05 compared to AC). Aortic constriction produced a similar increase in the left ventricle/body weight ratio for all groups. Hawthorn extract had no effect on the immunomodulatory markers measured in this study, although there appeared to be a trend suggesting suppression of IL-2 plasma concentrations. In this animal model of heart failure, hawthorn extract failed to significantly affect the immunomodulatory response characterized after 6 months of pressure overload, at a time when approximately 50% mortality was exhibited. Mechanisms other than immunological ones may better define hawthorn's effect in treating heart failure.

  1. Molecular dynamics simulations showing 1-palmitoyl-2-oleoyl-phosphatidylcholine (POPC) membrane mechanoporation damage under different strain paths.

    PubMed

    Murphy, M A; Mun, Sungkwang; Horstemeyer, M F; Baskes, M I; Bakhtiary, A; LaPlaca, Michelle C; Gwaltney, Steven R; Williams, Lakiesha N; Prabhu, R K

    2018-04-09

    Continuum finite element material models used for traumatic brain injury lack local injury parameters, necessitating that nanoscale mechanical injury mechanisms be incorporated. One such mechanism is membrane mechanoporation, which can occur during physical insults and can be devastating to cells, depending on the level of disruption. The current study investigates the strain state dependence of phospholipid bilayer mechanoporation and failure. Using molecular dynamics, a simplified membrane, consisting of 72 1-palmitoyl-2-oleoyl-phosphatidylcholine (POPC) phospholipids, was subjected to equibiaxial, 2:1 non-equibiaxial, 4:1 non-equibiaxial, strip biaxial, and uniaxial tensile deformations at a von Mises strain rate of 5.45 × 10⁸ s⁻¹, resulting in velocities in the range of 1 to 4.6 m·s⁻¹. A water bridge forming through both phospholipid bilayer leaflets was used to determine structural failure. The stress magnitude, failure strain, headgroup clustering, and damage responses were found to be strain state-dependent. The strain state order of detrimentality, in descending order, was equibiaxial, 2:1 non-equibiaxial, 4:1 non-equibiaxial, strip biaxial, and uniaxial. The phospholipid bilayer failed at von Mises strains of 0.46, 0.47, 0.53, 0.77, and 1.67 during these respective strain path simulations. Additionally, a Membrane Failure Limit Diagram (MFLD) was created using the pore nucleation, growth, and failure strains to demonstrate safe and unsafe membrane deformation regions. This MFLD allowed representative equations to be derived to predict membrane failure from in-plane strains. These results provide the basis to implement a more accurate mechano-physiological internal state variable continuum model that captures lower length scale damage and will aid in developing higher fidelity injury models.
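
    For illustration only, the failure strains reported in this record can be tabulated to classify a membrane deformation as safe or unsafe by strain path; the lookup function is our own sketch, not the authors' MFLD equations, which interpolate between in-plane strain states.

```python
# Von Mises failure strains per strain path, as reported in the abstract.
FAILURE_STRAIN = {
    "equibiaxial": 0.46,
    "2:1 non-equibiaxial": 0.47,
    "4:1 non-equibiaxial": 0.53,
    "strip biaxial": 0.77,
    "uniaxial": 1.67,
}

def membrane_fails(strain_path, von_mises_strain):
    """True if the applied strain meets or exceeds the reported failure strain."""
    return von_mises_strain >= FAILURE_STRAIN[strain_path]

print(membrane_fails("equibiaxial", 0.50))  # True: 0.50 exceeds the 0.46 limit
print(membrane_fails("uniaxial", 0.50))     # False: well below the 1.67 limit
```

    The same 0.50 von Mises strain is fatal under equibiaxial stretch but harmless under uniaxial stretch, which is exactly the strain-state dependence the MFLD is meant to capture.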

  2. The fluoroscopy time, door to balloon time, contrast volume use and prevalence of vascular access site failure with transradial versus transfemoral approach in ST segment elevation myocardial infarction: A systematic review & meta-analysis.

    PubMed

    Singh, Sukhchain; Singh, Mukesh; Grewal, Navsheen; Khosla, Sandeep

    2015-12-01

    The authors aimed to conduct the first systematic review and meta-analysis in STEMI patients evaluating vascular access site failure rate, fluoroscopy time, door-to-balloon time and contrast volume used with the transradial versus transfemoral approach (TRA vs. TFA) for PCI. The PubMed, CINAHL, clinicaltrials.gov, Embase and CENTRAL databases were searched for randomized trials comparing TRA versus TFA. Random effect models were used to conduct this meta-analysis. Fourteen randomized trials comprising 3758 patients met the inclusion criteria. The access site failure rate was significantly higher with TRA compared to TFA (RR 3.30, CI 2.16-5.03; P=0.000). Random effect inverse variance weighted prevalence rate meta-analysis showed that the access site failure rate was predicted to be 4% (95% CI 3.0-6.0%) with TRA versus 1% (95% CI 0.0-1.0%) with TFA. Door-to-balloon time (standardized mean difference [SMD] 0.30 min, 95% CI 0.23-0.37 min; P=0.000) and fluoroscopy time (SMD 0.14 min, 95% CI 0.06-0.23 min; P=0.001) were also significantly higher with TRA. There was no difference in the amount of contrast volume used with TRA versus TFA (SMD -0.05 ml, 95% CI -0.14 to 0.04 ml; P=0.275). Statistical heterogeneity was low in the cross-over rate and contrast volume comparisons, moderate in fluoroscopy time, but high in the door-to-balloon time comparison. Operators need to consider the higher cross-over rate with TRA compared to TFA in STEMI patients while attempting PCI. Fluoroscopy and door-to-balloon times are negligibly higher with TRA, but there is no difference in terms of contrast volume use. Copyright © 2015 Elsevier Inc. All rights reserved.
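
    A minimal sketch of the random-effects pooling used in such meta-analyses, here the DerSimonian-Laird estimator applied to log risk ratios; the three trial effect sizes and variances below are hypothetical, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate of study effects (e.g. log risk ratios)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-trial variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical log risk ratios and variances for three trials
log_rr = [math.log(3.1), math.log(2.5), math.log(4.0)]
var = [0.10, 0.15, 0.20]
pooled, se, tau2 = dersimonian_laird(log_rr, var)
print(round(math.exp(pooled), 2))  # pooled RR, back on the original scale
```

    When the between-trial variance estimate is zero the random-effects result collapses to the fixed-effect one; otherwise the extra variance widens the confidence interval, which is how heterogeneity across trials is acknowledged.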

  3. Artificial Immune System for Flight Envelope Estimation and Protection

    DTIC Science & Technology

    2014-12-31

    [Record excerpt: the available text is a fragment of the report's table of contents, covering estimation algorithms for roll rate sensor bias, reference feature patterns for a roll rate sensor under low-severity failure, and roll and pitch responses under stabilator and throttle failures.]

  4. Adaptive Control in the Presence of Simultaneous Sensor Bias and Actuator Failures

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    The problem of simultaneously accommodating unknown sensor biases and unknown actuator failures in uncertain systems is considered in a direct model reference adaptive control (MRAC) setting for state tracking using state feedback. Sensor biases and actuator faults may be present at the outset or may occur at unknown instants of time during operation. A modified MRAC law is proposed, which combines sensor bias estimation with control gain adaptation for accommodation of sensor biases and actuator failures. This control law is shown to provide signal boundedness in the resulting system. For the case when an external asymptotically stable sensor bias estimator is available, an MRAC law is developed to accomplish asymptotic state tracking and signal boundedness. For a special case wherein biases are only present in the rate measurements and bias-free position measurements are available, an MRAC law is developed using a model-independent bias estimator, and is shown to provide asymptotic state tracking with signal boundedness.
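
    As a toy illustration of the general MRAC idea only (this is not the paper's modified law, which additionally estimates sensor bias; the plant, reference model, and gains below are all invented), a scalar model reference adaptive controller can be simulated directly:

```python
# Plant xdot = a*x + b*u with a, b unknown to the controller;
# reference model xmdot = am*xm + bm*r chosen stable (am < 0).
a, b = 1.0, 2.0
am, bm = -4.0, 4.0
gamma, dt = 2.0, 1e-3      # adaptation gain and Euler time step

x = xm = 0.0
kx = kr = 0.0              # adaptive feedback and feedforward gains
for _ in range(200_000):   # 200 s of simulated time
    r = 1.0                            # constant reference command
    u = kx * x + kr * r                # control law u = kx*x + kr*r
    e = x - xm                         # model-tracking error
    # Lyapunov-based adaptation laws (sign of b assumed known and positive)
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt          # Euler-integrate plant
    xm += (am * xm + bm * r) * dt      # Euler-integrate reference model
print(round(x, 3), round(xm, 3))       # plant tracks the reference model
```

    The adaptation drives the tracking error toward zero even though the true plant parameters never appear in the controller; with a constant reference the gains need not converge to their ideal values, but the state tracking still holds.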

  5. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
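
    The engine-out driver can be illustrated with a toy binomial model (our own sketch, not the paper's model; all parameter values are invented): the stage succeeds if every engine works, or if no more than the tolerated number fail benignly and each shutdown-and-compensate action succeeds.

```python
from math import comb

def stage_reliability(n_engines, engine_rel, catastrophic_frac=0.1,
                      switch_rel=0.99, engines_out_tolerated=1):
    """P(stage success) when up to `engines_out_tolerated` benign engine
    failures can be absorbed by an engine-out design."""
    f = 1.0 - engine_rel
    benign = f * (1.0 - catastrophic_frac)   # failure that can be safely shut down
    total = 0.0
    for k in range(engines_out_tolerated + 1):
        p_k = comb(n_engines, k) * engine_rel ** (n_engines - k) * benign ** k
        total += p_k * switch_rel ** k       # each engine-out switch must succeed
    return total

print(stage_reliability(4, 0.995))                          # with engine-out
print(stage_reliability(4, 0.995, engines_out_tolerated=0)) # all engines must run
```

    Even this crude model reproduces the qualitative trade the paper studies: adding engines raises the chance that some engine fails, but an engine-out design converts most of those failures into survivable events, provided the catastrophic fraction and switching unreliability stay small.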

  6. Analysis of Machine Learning Techniques for Heart Failure Readmissions.

    PubMed

    Mortazavi, Bobak J; Downing, Nicholas S; Bucholz, Emily M; Dharmarajan, Kumar; Manhapra, Ajay; Li, Shu-Xia; Negahban, Sahand N; Krumholz, Harlan M

    2016-11-01

    The current ability to predict readmissions in patients with heart failure is modest at best. It is unclear whether machine learning techniques that address higher dimensional, nonlinear relationships among variables would enhance prediction. We sought to compare the effectiveness of several machine learning algorithms for predicting readmissions. Using data from the Telemonitoring to Improve Heart Failure Outcomes trial, we compared the effectiveness of random forests, boosting, random forests combined hierarchically with support vector machines or logistic regression (LR), and Poisson regression against traditional LR to predict 30- and 180-day all-cause readmissions and readmissions because of heart failure. We randomly selected 50% of patients for a derivation set, and the validation set comprised the remaining patients, validated using 100 bootstrapped iterations. We compared C statistics for discrimination and distributions of observed outcomes in risk deciles for predictive range. In 30-day all-cause readmission prediction, the best performing machine learning model, random forests, provided a 17.8% improvement over LR (mean C statistics, 0.628 and 0.533, respectively). For readmissions because of heart failure, boosting improved the C statistic by 24.9% over LR (mean C statistics, 0.678 and 0.543, respectively). For 30-day all-cause readmission, the observed readmission rates in the lowest and highest deciles of predicted risk with random forests (7.8% and 26.2%, respectively) showed a much wider separation than with LR (14.2% and 16.4%, respectively). Machine learning methods improved the prediction of readmission after hospitalization for heart failure compared with LR and provided the greatest predictive range in observed readmission rates. © 2016 American Heart Association, Inc.
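
    The C statistic used throughout this record is just the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted one. A self-contained sketch (the score vectors are invented, standing in for two models of different quality):

```python
def c_statistic(scores, labels):
    """Concordance (AUC): P(score_pos > score_neg), with ties counted as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

labels  = [1, 1, 1, 0, 0, 0]                 # 1 = readmitted
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]     # stronger model (random-forest-like)
model_b = [0.6, 0.4, 0.5, 0.5, 0.6, 0.3]     # weaker baseline (LR-like)
print(c_statistic(model_a, labels))          # closer to 1.0: better discrimination
print(c_statistic(model_b, labels))          # closer to 0.5: near chance
```

    A C statistic of 0.5 is chance-level ranking, which is why the LR values near 0.53 in the trial data represent very modest discrimination.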

  7. Resting Heart Rate and Outcomes in Patients with Cardiovascular Disease: Where Do We Currently Stand?

    PubMed Central

    Menown, Ian BA; Davies, Simon; Gupta, Sandeep; Kalra, Paul R; Lang, Chim C; Morley, Chris; Padmanabhan, Sandosh

    2013-01-01

    Background Data from large epidemiological studies suggest that elevated heart rate is independently associated with cardiovascular and all-cause mortality in patients with hypertension and in those with established cardiovascular disease. Clinical trial findings also suggest that the favorable effects of beta-blockers and other heart rate–lowering agents in patients with acute myocardial infarction and congestive heart failure may be, at least in part, due to their heart rate–lowering effects. Contemporary clinical outcome prediction models such as the Global Registry of Acute Coronary Events (GRACE) score include admission heart rate as an independent risk factor. Aims This article critically reviews the key epidemiology concerning heart rate and cardiovascular risk, potential mechanisms through which an elevated resting heart rate may be disadvantageous and evaluates clinical trial outcomes associated with pharmacological reduction in resting heart rate. Conclusions Prospective randomised data from patients with significant coronary heart disease or heart failure suggest that intervention to reduce heart rate in those with a resting heart rate >70 bpm may reduce cardiovascular risk. Given the established observational data and randomised trial evidence, it now appears appropriate to include reduction of elevated resting heart rate by lifestyle +/− pharmacological therapy as part of a secondary prevention strategy in patients with cardiovascular disease. PMID:22954325

  8. Strategy for the management of uncomplicated retinal detachments: the European vitreo-retinal society retinal detachment study report 1.

    PubMed

    Adelman, Ron A; Parnes, Aaron J; Ducournau, Didier

    2013-09-01

    To study success and failure in the treatment of uncomplicated rhegmatogenous retinal detachments (RRDs). Nonrandomized, multicenter retrospective study. One hundred seventy-six surgeons from 48 countries spanning 5 continents provided information on the primary procedures for 7678 cases of RRDs including 4179 patients with uncomplicated RRDs. Reported data included specific clinical findings, the method of repair, and the outcome after intervention. Final failure of retinal detachment repair (level 1 failure rate), remaining silicone oil at the study's conclusion (level 2 failure rate), and need for additional procedures to repair the detachment (level 3 failure rate). Four thousand one hundred seventy-nine uncomplicated cases of RRD were included. Combining phakic, pseudophakic, and aphakic groups, those treated with scleral buckle alone (n = 1341) had a significantly lower final failure rate than those treated with vitrectomy, with or without a supplemental buckle (n = 2723; P = 0.04). In phakic patients, final failure rate was lower in the scleral buckle group compared with those who had vitrectomy, with or without a supplemental buckle (P = 0.028). In pseudophakic patients, the failure rate of the initial procedure was lower in the vitrectomy group compared with the scleral buckle group (P = 3×10(-8)). There was no statistically significant difference in failure rate between segmental (n = 721) and encircling (n = 351) buckles (P = 0.5). Those who underwent vitrectomy with a supplemental scleral buckle (n = 488) had an increased failure rate compared with those who underwent vitrectomy alone (n = 2235; P = 0.048). Pneumatic retinopexy was found to be comparable with scleral buckle when a retinal hole was present (P = 0.65), but not in cases with a flap tear (P = 0.034). In the treatment of uncomplicated phakic retinal detachments, repair using scleral buckle may be a good option. 
    There was no significant difference between segmental and 360-degree buckles. For pseudophakic uncomplicated retinal detachments, the surgeon should balance the risks and benefits of vitrectomy versus scleral buckle and keep in mind that the single-surgery reattachment rate may be higher with vitrectomy. However, if a vitrectomy is to be performed, these data suggest that a supplemental buckle is not helpful. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  9. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    PubMed

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient, 12.9% for inpatient, and 39.4% for disability exams. Failure rates were the highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and frequency and number of SPVT use. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. Frequency of the use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.
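
    The one-failure versus two-failure disagreement in this record has a simple probabilistic core: with several independent validity tests, each carrying some false-positive rate, the chance that a fully valid examinee fails at least one test by accident is substantial. A sketch under an illustrative 10% per-test false-positive rate and an independence assumption (both ours, not the survey's):

```python
from math import comb

def p_at_least_k_failures(n_tests, k, false_pos_rate=0.1):
    """Chance a fully valid examinee fails >= k of n independent SPVTs,
    each with the given per-test false-positive rate (illustrative values)."""
    p = false_pos_rate
    return sum(comb(n_tests, j) * p ** j * (1 - p) ** (n_tests - j)
               for j in range(k, n_tests + 1))

print(round(p_at_least_k_failures(5, 1), 3))  # one-failure criterion
print(round(p_at_least_k_failures(5, 2), 3))  # two-failure criterion
```

    Under these assumptions the single-failure criterion flags a large share of valid examinees purely by chance, while requiring two failures cuts that rate sharply, which is consistent with the survey's observation that practitioners administering more measures adopt the more stringent criterion.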

  10. Scoping review: Hospital nursing factors associated with 30-day readmission rates of patients with heart failure.

    PubMed

    Jun, Jin; Faulkner, Kenneth M

    2018-04-01

    To review the current literature on hospital nursing factors associated with 30-day readmission rates of patients with heart failure. Heart failure is a common yet debilitating chronic illness with high mortality and morbidity. One in five patients with heart failure will experience an unplanned readmission to a hospital within 30 days. Given the significance of heart failure to individuals, families and the healthcare system, the Center for Medicare and Medicaid Services has made reducing 30-day readmission rates a priority. A scoping review, which maps the key concepts of a research area, was used. Primary studies assessing factors related to nurses in hospitals and readmission of patients with heart failure were included, provided they were written in English and published in peer-reviewed journals. The search resulted in 2,782 articles. After removing duplicates and applying the inclusion and exclusion criteria, five articles were selected. Three nursing workforce factors emerged: (i) nurse staffing, (ii) nursing care and work environment, and (iii) nurses' knowledge of heart failure. This is the first scoping review examining the association between hospital nursing factors and 30-day readmission rates of patients with heart failure. Further studies examining the extent to which nursing structural and process factors influence the outcomes of patients with heart failure are needed. Nurses are an integral part of the healthcare system, and identifying the factors related to nurses in hospitals is important to ensure comprehensive delivery of care to this chronically ill population. Hospital administrators, managers and policymakers can use the findings from this review to implement strategies to reduce 30-day readmission rates of patients with heart failure. © 2018 John Wiley & Sons Ltd.

  11. A Generalized Orthotropic Elasto-Plastic Material Model for Impact Analysis

    NASA Astrophysics Data System (ADS)

    Hoffarth, Canio

    Composite materials are now beginning to see uses hitherto reserved for metals in structural systems such as airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. These structural systems are often subjected to impact loads, and there is a pressing need for accurate prediction of deformation, damage and failure. Numerous material models have been developed to analyze the dynamic impact response of polymer matrix composites. However, key features are missing from those models that prevent them from providing accurate predictive capabilities. In this dissertation, a general-purpose orthotropic elasto-plastic computational constitutive material model has been developed to predict the response of composites subjected to high velocity impacts. The constitutive model is divided into three components: a deformation model, a damage model and a failure model, with failure to be added at a later date. The deformation model generalizes the Tsai-Wu failure criterion and extends it using a strain-hardening-based orthotropic yield function with a non-associative flow rule. A strain equivalent formulation is utilized in the damage model that permits plastic and damage calculations to be uncoupled and captures the nonlinear unloading and local softening of the stress-strain response. A diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions.
    The overall framework is driven by experimental tabulated temperature- and rate-dependent stress-strain data, as well as data that characterize the damage matrix and failure. The developed theory has been implemented in a commercial explicit finite element analysis code, LS-DYNA, as MAT213. Several verification and validation tests using a commonly available carbon-fiber composite, Toyobo's T800/F3900, have been carried out, and the results show that the theory and implementation are efficient, robust and accurate.

  12. Racial differences in virologic failure associated with adherence and quality of life on efavirenz-containing regimens for initial HIV therapy: results of ACTG A5095.

    PubMed

    Schackman, Bruce R; Ribaudo, Heather J; Krambrink, Amy; Hughes, Valery; Kuritzkes, Daniel R; Gulick, Roy M

    2007-12-15

    Blacks had higher rates of virologic failure than whites on efavirenz-containing regimens in the AIDS Clinical Trials Group (ACTG) A5095 study; preliminary analyses also suggested an association with adherence. We rigorously examined associations over time among race, virologic failure, 4 self-reported adherence metrics, and quality of life (QOL). ACTG A5095 was a double-blind placebo-controlled study of treatment-naive HIV-positive patients randomized to zidovudine/lamivudine/abacavir versus zidovudine/lamivudine plus efavirenz versus zidovudine/lamivudine/abacavir plus efavirenz. Virologic failure was defined as confirmed HIV-1 RNA ≥200 copies/mL at ≥16 weeks on study. The zidovudine/lamivudine/abacavir arm was discontinued early because of virologic inferiority. We examined virologic failure differences for efavirenz-containing arms according to missing 0 (adherent) versus at least 1 dose (nonadherent) during the past 4 days, alternative self-reported adherence metrics, and QOL. Analyses used the Fisher exact, log rank tests, and Cox proportional hazards models. The study population included white (n = 299), black (n = 260), and Hispanic (n = 156) patients with ≥1 adherence evaluation. Virologic failure was associated with week 12 nonadherence during the past 4 days for blacks (53% nonadherent failed vs. 25% adherent; P < 0.001) but not for whites (20% nonadherent failed vs. 20% adherent; P = 0.91). After adjustment for baseline covariates and treatment, there was a significant interaction between race and week 12 adherence (P = 0.02). In time-dependent Cox models using self-reports over time to reflect recent adherence, there was a significantly higher failure risk for nonadherent subjects (hazard ratio [HR] = 2.07; P < 0.001). Significant race-adherence interactions were seen in additional models of adherence: missing at least 1 medication dose ever (P = 0.04), past month (P < 0.01), or past weekend (P = 0.05).
Lower QOL was significantly associated with virologic failure (P < 0.001); there was no evidence of an interaction between QOL and race (P = 0.39) or adherence (P = 0.51) in predicting virologic failure. There was a greater effect of nonadherence on virologic failure in blacks given efavirenz-containing regimens than in whites. Self-reported adherence and QOL are independent predictors of virologic failure.

  13. Geo-mechanical modeling and selection of suitable layer for hydraulic fracturing operation in an oil reservoir (south west of Iran)

    NASA Astrophysics Data System (ADS)

    Darvish, Hoda; Nouri-Taleghani, Morteza; Shokrollahi, Amin; Tatar, Afshin

    2015-11-01

Given the growing demand for oil, increasing the rate of oil production seems necessary. However, oil production declines with time as a result of pressure drop in the reservoir as well as sealing of microscopic cracks and pores in the reservoir rock. Hydraulic fracturing is one of the common methods with high performance, which is widely applied to oil and gas reservoirs. In this study, wells in three sections of the east, center, and west sides of a field are compared regarding the suitable layer for hydraulic fracturing operation. Firstly, elastic moduli were obtained in both dynamic and static conditions; then uniaxial compressive strength (UCS), the types of shear and tensile failures, the most accurate failure model in the wells, the safe and stable mud window, the best zone and layers, and finally reference pressures are determined as candidates for hydraulic fracturing. Types of shear failure in the minimum and maximum range of the model and in the tensile model were determined to be "Shear failure wide breakout (SWBO)", "Shear narrow breakout (SNBO)", and "Tensile vertical failure (TVER)", respectively. The safe mud window (SMW) in the studied wells was almost the same in all three parts of the field. This range was determined as 5200-8800 psi and 5800-10100 psi for the Ilam and Sarvak zones, respectively. Initial fracture pressure ranges for selected layers were determined as 11,759-14,722, 11,910-14,164, and 11,848-14,953 psi for the eastern, central, and western wells. Thus, the western wells are in the best situation for hydraulic fracturing operation. Finally, it was concluded that the operation is more economic in the Sarvak zone and the western wells.

  14. Durability, value, and reliability of selected electric powered wheelchairs.

    PubMed

    Fass, Megan V; Cooper, Rory A; Fitzgerald, Shirley G; Schmeler, Mark; Boninger, Michael L; Algood, S David; Ammer, William A; Rentschler, Andrew J; Duncan, John

    2004-05-01

    To compare the durability, value, and reliability of selected electric powered wheelchairs (EPWs), purchased in 1998. Engineering standards tests of quality and performance. A rehabilitation engineering center. Fifteen EPWs: 3 each of the Jazzy, Quickie, Lancer, Arrow, and Chairman models. Not applicable. Wheelchairs were evaluated for durability (lifespan), value (durability, cost), and reliability (rate of repairs) using 2-drum and curb-drop machines in accordance with the standards of the American National Standards Institute and Rehabilitation Engineering and Assistive Technology Society of North America. The 5 brands differed significantly (P

  15. Failure factors in non-life insurance companies in United Kingdom

    NASA Astrophysics Data System (ADS)

    Samsudin, Humaida Banu

    2013-04-01

Failure in an insurance company is a condition of financial distress in which the company has difficulty paying off its financial obligations to its creditors. This study continues earlier research identifying the determinants of run-off for non-life insurance companies in the United Kingdom. The analysis identifies further variables that could lead companies to financial distress, namely macroeconomic factors (GDP rate, inflation rate, and interest rate), the number of companies that failed in the previous year, and the average size of failed companies. The results indicate that the inflation rate, the interest rate, the number of companies that failed in the previous year, and the average size of failed companies are the best predictors. Early detection of failure can prevent companies from bankruptcy and allow management to take action to reduce failure costs.

  16. Development of an Input Suite for an Orthotropic Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 into a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely-used aerospace composite: the T800S3900-2B[P2352W-19] BMS8-276 Rev-H-Unitape fiber resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.
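The 12 deformation-model experiments named above can be tallied as a simple checklist; the labels below are illustrative, not MAT213 input keywords:

```python
# The 12 deformation-model characterization tests named in the abstract,
# enumerated as a checklist. Labels are illustrative, not MAT213 keywords.
DIRECTIONS = ["1", "2", "3"]   # principal material directions
PLANES = ["12", "23", "13"]    # principal material planes

deformation_tests = (
    ["tension-" + d for d in DIRECTIONS]          # 3 tension tests
    + ["compression-" + d for d in DIRECTIONS]    # 3 compression tests
    + ["shear-" + p for p in PLANES]              # 3 shear tests
    + ["off-axis-" + p for p in PLANES]           # 3 off-axis tests
)
assert len(deformation_tests) == 12  # 6 normal + 3 shear + 3 off-axis
```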

  17. Analysis of silicon stress/strain relationships

    NASA Technical Reports Server (NTRS)

    Dillon, O.

    1985-01-01

    In the study of stress-strain relationships in silicon ribbon, numerous solutions were calculated for stresses, strain rates, and dislocation densities through the use of the Sumino model. It was concluded that many cases of failure of computer solutions to converge are analytical manifestations of shear bands (Luder's band) observed in experiments.

  18. A Model for Teaching an Introductory Programming Course Using ADRI

    ERIC Educational Resources Information Center

    Malik, Sohail Iqbal; Coldwell-Neilson, Jo

    2017-01-01

    High failure and drop-out rates from introductory programming courses continue to be of significant concern to computer science disciplines despite extensive research attempting to address the issue. In this study, we include the three entities of the didactic triangle, instructors, students and curriculum, to explore the learning difficulties…

  19. Failure in laboratory fault models in triaxial tests

    USGS Publications Warehouse

    Savage, J.C.; Lockner, D.A.; Byerlee, J.D.

    1996-01-01

A model of a fault in the Earth is a sand-filled saw cut in a granite cylinder subjected to a triaxial test. The saw cut is inclined at an angle α to the cylinder axis, and the sand filling is intended to represent gouge. The triaxial test subjects the granite cylinder to a constant confining pressure and increasing axial stress to maintain a constant rate of shortening of the cylinder. The required axial stress increases at a decreasing rate to a maximum, beyond which a roughly constant axial stress is sufficient to maintain the constant rate of shortening. Such triaxial tests were run for saw cuts inclined at angles α of 20°, 25°, 30°, 35°, 40°, 45°, and 50° to the cylinder axis, and the apparent coefficient of friction μa (ratio of the shear stress to the normal stress, both stresses resolved onto the saw cut) at failure was determined. Subject to the assumption that the observed failure involves slip on Coulomb shears (orientation unspecified), the orientation of the principal compression axis within the gouge can be calculated as a function of μa for a given value of the coefficient of internal friction μi. The rotation of the principal stress axes within the gouge in a triaxial test can then be followed as the shear strain across the gouge layer increases. For μi ≈ 0.8, an appropriate value for highly sheared sand, the observed values of μa imply that the principal axis of compression within the gouge rotates so as to approach being parallel to the cylinder axis for all saw cut angles (20° < α < 50°). In the limiting state (principal compression axis parallel to the cylinder axis) the stress state in the gouge layer would be the same as that in the granite cylinder, and the failure criterion would be independent of the saw cut angle.
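The resolution of axial and confining stresses onto the saw cut that defines μa is a standard Mohr-circle calculation and can be sketched as follows; the function name and sample values are illustrative, not the authors' code:

```python
import math

def apparent_friction(sigma_axial, sigma_confining, alpha_deg):
    """Resolve the axial stress and confining pressure onto a saw cut
    inclined at alpha_deg degrees to the cylinder axis, and return the
    apparent coefficient of friction mu_a = tau / sigma_n."""
    a = math.radians(alpha_deg)
    # shear and normal stress on a plane at angle alpha to the sigma_1 axis
    tau = 0.5 * (sigma_axial - sigma_confining) * math.sin(2 * a)
    sigma_n = (0.5 * (sigma_axial + sigma_confining)
               - 0.5 * (sigma_axial - sigma_confining) * math.cos(2 * a))
    return tau / sigma_n

# e.g. axial stress 300 MPa, confining 100 MPa, 45-degree cut -> mu_a = 0.5
```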

  20. [Analysis of the failures of a cemented constrained liner model in patients with a high dislocation risk].

    PubMed

    Gallart, X; Gomez, J C; Fernández-Valencia, J A; Combalía, A; Bori, G; García, S; Rios, J; Riba, J

    2014-01-01

To evaluate the short-term results of an ultra-high molecular weight polyethylene retentive cup in patients at high risk of dislocation, in either primary or revision surgery. Retrospective review of 38 cases to determine the survival rate and failure analysis of a constrained cemented cup, with a mean follow-up of 27 months. We studied demographic data and complications, especially re-dislocations of the prosthesis, and also analyzed the likely causes of system failure. Eight cases (21.05%) were primary surgery and 30 cases (78.95%) were revision surgery. Overall survival by the Kaplan-Meier method was 70.7 months. During follow-up, 3 patients died of causes unrelated to surgery and 2 infections occurred. Twelve hips had undergone at least two previous surgeries. There was no case of aseptic loosening. Four patients presented dislocation, all with a 22 mm head (P=.008). Our statistical analysis did not find a relationship between the cup abduction angle and implant failure (P=.22). The ultra-high molecular weight polyethylene retentive cup evaluated in this series has provided satisfactory short-term results in hip arthroplasty patients at high risk of dislocation. Copyright © 2014 SECOT. Published by Elsevier España. All rights reserved.

  1. Ephaptic coupling rescues conduction failure in weakly coupled cardiac tissue with voltage-gated gap junctions

    NASA Astrophysics Data System (ADS)

    Weinberg, S. H.

    2017-09-01

    Electrical conduction in cardiac tissue is usually considered to be primarily facilitated by gap junctions, providing a pathway between the intracellular spaces of neighboring cells. However, recent studies have highlighted the role of coupling via extracellular electric fields, also known as ephaptic coupling, particularly in the setting of reduced gap junction expression. Further, in the setting of reduced gap junctional coupling, voltage-dependent gating of gap junctions, an oft-neglected biophysical property in computational studies, produces a positive feedback that promotes conduction failure. We hypothesized that ephaptic coupling can break the positive feedback loop and rescue conduction failure in weakly coupled cardiac tissue. In a computational tissue model incorporating voltage-gated gap junctions and ephaptic coupling, we demonstrate that ephaptic coupling can rescue conduction failure in weakly coupled tissue. Further, ephaptic coupling increased conduction velocity in weakly coupled tissue, and importantly, reduced the minimum gap junctional coupling necessary for conduction, most prominently at fast pacing rates. Finally, we find that, although neglecting gap junction voltage-gating results in negligible differences in well coupled tissue, more significant differences occur in weakly coupled tissue, greatly underestimating the minimal gap junctional coupling that can maintain conduction. Our study suggests that ephaptic coupling plays a conduction-preserving role, particularly at rapid heart rates.

  2. Parameter estimation in Cox models with missing failure indicators and the OPPERA study.

    PubMed

    Brownstein, Naomi C; Cai, Jianwen; Slade, Gary D; Bair, Eric

    2015-12-30

    In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the "gold standard" for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the "gold standard" examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the "gold standard" examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a "gold standard" examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.
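The imputation step can be sketched as below. This is a minimal illustration of the idea with stated simplifications: case probabilities are taken as given rather than fitted by logistic regression, the outcome is a crude incidence rate rather than a Cox model, and the within-imputation variance term of Rubin's rules is omitted:

```python
import random

def impute_incidence_rate(obs_events, obs_pt, miss_probs, miss_pt, m=50, seed=1):
    """Multiple imputation of missing failure indicators (sketch).

    obs_events, obs_pt : 0/1 event indicator and person-time for
        participants with a completed "gold standard" examination.
    miss_probs, miss_pt: model-estimated case probability and person-time
        for participants whose examination is missing.
    Returns the pooled incidence rate and the between-imputation variance
    (Rubin's total variance would be W + (1 + 1/m) * B; W is omitted here).
    """
    rng = random.Random(seed)
    total_pt = sum(obs_pt) + sum(miss_pt)
    base_events = sum(obs_events)
    rates = []
    for _ in range(m):
        # draw a Bernoulli case status for each missing examination
        imputed = sum(1 for p in miss_probs if rng.random() < p)
        rates.append((base_events + imputed) / total_pt)
    mean = sum(rates) / m
    between = sum((r - mean) ** 2 for r in rates) / (m - 1)
    return mean, between
```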

  3. Waitlist Outcomes for Patients Relisted Following Failed Donation After Cardiac Death Liver Transplant: Implications for Awarding Model for End-Stage Liver Disease Exception Scores.

    PubMed

    Croome, K P; Lee, D D; Nguyen, J H; Keaveny, A P; Taner, C B

    2017-09-01

    Understanding of outcomes for patients relisted for ischemic cholangiopathy following a donation after cardiac death (DCD) liver transplant (LT) will help standardization of a Model for End-Stage Liver Disease exception scheme for retransplantation. Early relisting (E-RL) for DCD graft failure caused by primary nonfunction (PNF) or hepatic artery thrombosis (HAT) was defined as relisting ≤14 days after DCD LT, and late relisting (L-RL) due to biliary complications was defined as relisting 14 days to 3 years after DCD LT. Of 3908 DCD LTs performed nationally between 2002 and 2016, 540 (13.8%) patients were relisted within 3 years of transplant (168 [4.3%] in the E-RL group, 372 [9.5%] in the L-RL group). The E-RL and L-RL groups had waitlist mortality rates of 15.4% and 10.5%, respectively, at 3 mo and 16.1% and 14.3%, respectively, at 1 year. Waitlist mortality in the L-RL group was higher than mortality and delisted rates for patients with exception points for both hepatocellular carcinoma (HCC) and hepatopulmonary syndrome (HPS) at 3- to 12-mo time points (p < 0.001). Waitlist outcomes differed in patients with early DCD graft failure caused by PNF or HAT compared with those with late DCD graft failure attributed to biliary complications. In L-RL, higher rates of waitlist mortality were noted compared with patients listed with exception points for HCC or HPS. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.

  4. Rates of Initial Virological Suppression and Subsequent Virological Failure After Initiating Highly Active Antiretroviral Therapy: The Impact of Aboriginal Ethnicity and Injection Drug Use

    PubMed Central

    Martin, L.J.; Houston, S.; Yasui, Y.; Wild, T.C.; Saunders, L.D.

    2010-01-01

    Objectives: To compare rates of initial virological suppression and subsequent virological failure by Aboriginal ethnicity after starting highly active antiretroviral therapy (HAART). Methods: We conducted a retrospective cohort study of antiretroviral-naïve HIV-patients starting HAART in January 1999-June 2005 (baseline), followed until December 31, 2005 in Alberta, Canada. We compared the odds of achieving initial virological suppression (viral load <500 copies/mL) by Aboriginal ethnicity using logistic regression and, among those achieving suppression, rates of virological failure (the first of two consecutive viral loads > 1000 copies/mL) by Aboriginal ethnicity using cumulative incidence curves and Cox proportional hazards models. Sex, injection drug use as an HIV exposure category (IDU), baseline age, CD4 cell count, viral load, calendar year, and HAART regimen were considered as potential confounders. Results: Of 461 study patients, 37% were Aboriginal and 48% were IDUs; 71% achieved initial virological suppression and were followed for 730.4 person-years. After adjusting for confounding variables, compared to non-Aboriginals with other exposures, the odds of achieving initial virological suppression were lower for Aboriginal IDUs (odds ratio (OR)=0.33, 95% CI=0.19-0.60, p=0.0002), non-Aboriginal IDUs (OR=0.30, 95% CI=0.15-0.60, p=0.0006), and Aboriginals with other exposures (OR=0.38, 95% CI=0.21-0.67, p=0.0009). Among those achieving suppression, Aboriginals experienced higher virological failure rates ≥1 year after suppression (hazard ratio=3.35, 95% CI=1.68-6.65, p=0.0006). Conclusions: Future research should investigate adherence among Aboriginals and IDUs treated with HAART and explore their treatment experiences to assess ways to improve outcomes. PMID:21187007

  5. Creep and intergranular cracking of Ni-Cr-Fe-C in 360 °C argon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angeliu, T.M.; Was, G.S.

    1994-06-01

The influence of carbon and chromium on the creep and intergranular (IG) cracking behavior of controlled-purity Ni-xCr-9Fe-yC alloys in 360 °C argon was investigated using constant extension rate tension (CERT) and constant load tension (CLT) testing. The CERT test results at 360 °C show that the degree of IG cracking increases with decreasing bulk chromium or carbon content. The CLT test results at 360 °C and 430 °C reveal that, as the amounts of chromium and carbon in solution decrease, the steady-state creep rate increases. The occurrence of severe IG cracking correlates with a high steady-state creep rate, suggesting that creep plays a role in the IG cracking behavior in argon at 360 °C. The failure mode of IG cracking and the deformation mode of creep are coupled through the formation of grain boundary voids that interlink to form grain boundary cavities, resulting in eventual failure by IG cavitation and ductile overload of the remaining ligaments. Grain boundary sliding may be enhancing grain boundary cavitation by redistributing the stress from inclined to more perpendicular boundaries and concentrating stress at discontinuities for the boundaries oriented 45° with respect to the tensile axis. Additions of carbon or chromium, which reduce the creep rate over all stress levels, also reduce the amount of IG fracture in CERT experiments. A damage accumulation model was formulated and applied to CERT tests to determine whether creep damage during a CERT test controls failure. Results show that, while creep plays a significant role in CERT experiments, failure is likely controlled by ductile overload caused by reduction in area resulting from grain boundary void formation and interlinkage.
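The damage accumulation idea can be sketched as a linear life-fraction rule; the linear form and the stress-to-rupture-time function are generic assumptions for illustration, not the paper's calibrated model:

```python
def life_fraction_damage(stress_history, dt, t_fail):
    """Linear life-fraction (Robinson-type) damage accumulation sketch.

    Damage increments by dt / t_f(sigma) at each time step of a CERT
    stress history; failure is declared when D reaches 1. `t_fail` maps
    stress to a creep-rupture time and is supplied by the caller."""
    D = 0.0
    for sigma in stress_history:
        D += dt / t_fail(sigma)
        if D >= 1.0:
            return D, True   # creep damage alone would control failure
    return D, False          # test ends before creep damage saturates
```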

  6. The effect of incident tuberculosis on immunological response of HIV patients on highly active anti-retroviral therapy at the university of Gondar hospital, northwest Ethiopia: a retrospective follow-up study.

    PubMed

    Assefa, Abate; Gelaw, Baye; Getnet, Gebeyaw; Yitayew, Gashaw

    2014-08-27

Human immunodeficiency virus (HIV) infection is usually complicated by high rates of tuberculosis (TB) co-infection. Impaired immune response has been reported during HIV/TB co-infection and may have a significant effect on anti-retroviral therapy (ART). TB/HIV co-infection is a major public health problem in Ethiopia. Therefore, the aim of the study was to assess the effect of TB incidence on the immunological response of HIV patients during ART. A retrospective follow-up study was conducted among adult HIV patients who started ART at the University of Gondar Hospital. Changes in CD4+ T-lymphocyte count and incident TB episodes occurring during 42 months of follow-up on ART were assessed. A life table was used to estimate cumulative immunologic failure. Kaplan-Meier curves were used to compare survival between the different categories. A Cox proportional hazards model was employed to examine predictors of immunological failure. Among 400 HIV patients, 89 (22.2%) were found to have immunological failure, at a rate of 8.5 per 100 person-years (PY) of follow-up. Incident TB developed in 26 (6.5%) of patients, an incidence rate of 2.2 cases per 100 PY. The immunological failure rate was high (20.1/100 PY) in the first year of treatment. In multivariate Cox regression analysis, baseline CD4+ T-cell count <100 cells/mm3 (adjusted hazard ratio (AHR) 1.8; 95%CI: 1.10-2.92, p = 0.023) and male sex (AHR 1.6; 95%CI: 1.01-2.68, p = 0.046) were found to be significant predictors of immunological failure. There was a borderline significant association with incident TB (AHR 2.2; 95%CI: 0.94-5.09, p = 0.06). The risk of immunological failure was significantly higher (38.5%) among those with incident TB compared with the TB-free (21.1%) (log rank p = 0.036). A high incidence of immunological failure occurred within the first year of initiating ART.
The proportion of patients with impaired immune restoration was higher among patients with incident TB. Lower baseline CD4+ T-cell count (<100 cells/mm3) and male sex were significant predictors of immunological failure. The results highlight the beneficial effect of earlier initiation of ART on CD4+ T-cell count recovery.

  7. A Dynamic Model of the Initial Spares Support List Development Process

    DTIC Science & Technology

    1979-06-01

PUSER.KL = (NREI.K)(QPEI)(PUSERF.K) + OPUR, where PUSER = parts use rate, NREI = not ready end items, QPEI = quantity of parts per end item, PUSERF = parts use rate factor, and OPUR = other parts use rate.

  8. Failure on the American Board of Surgery Examinations of General Surgery Residency Graduates Correlates Positively with States' Malpractice Risk.

    PubMed

    Dent, Daniel L; Al Fayyadh, Mohammed J; Rawlings, Jeremy A; Hassan, Ramy A; Kempenich, Jason W; Willis, Ross E; Stewart, Ronald M

    2018-03-01

    It has been suggested that in environments where there is greater fear of litigation, resident autonomy and education is compromised. Our aim was to examine failure rates on American Board of Surgery (ABS) examinations in comparison with medical malpractice payments in 47 US states/territories that have general surgery residency programs. We hypothesized higher ABS examination failure rates for general surgery residents who graduate from residencies in states with higher malpractice risk. We conducted a retrospective review of five-year (2010-2014) pass rates of first-time examinees of the ABS examinations. States' malpractice data were adjusted based on population. ABS examinations failure rates for programs in states with above and below median malpractice payments per capita were 31 and 24 per cent (P < 0.01) respectively. This difference was seen in university and independent programs regardless of size. Pearson correlation confirmed a significant positive correlation between board failure rates and malpractice payments per capita for Qualifying Examination (P < 0.02), Certifying Examination (P < 0.02), and Qualifying and Certifying combined index (P < 0.01). Malpractice risk correlates positively with graduates' failure rates on ABS examinations regardless of program size or type. We encourage further examination of training environments and their relationship to surgical residency graduate performance.

  9. Statistical forecasting of repetitious dome failures during the waning eruption of Redoubt Volcano, Alaska, February-April 1990

    USGS Publications Warehouse

    Page, R.A.; Lahr, J.C.; Chouet, B.A.; Power, J.A.; Stephens, C.D.

    1994-01-01

    The waning phase of the 1989-1990 eruption of Redoubt Volcano in the Cook Inlet region of south-central Alaska comprised a quasi-regular pattern of repetitious dome growth and destruction that lasted from February 15 to late April 1990. The dome failures produced ash plumes hazardous to airline traffic. In response to this hazard, the Alaska Volcano Observatory sought to forecast these ash-producing events using two approaches. One approach built on early successes in issuing warnings before major eruptions on December 14, 1989 and January 2, 1990. These warnings were based largely on changes in seismic activity related to the occurrence of precursory swarms of long-period seismic events. The search for precursory swarms of long-period seismicity was continued through the waning phase of the eruption and led to warnings before tephra eruptions on March 23 and April 6. The observed regularity of dome failures after February 15 suggested that a statistical forecasting method based on a constant-rate failure model might also be successful. The first statistical forecast was issued on March 16 after seven events had occurred, at an average interval of 4.5 days. At this time, the interval between dome failures abruptly lengthened. Accordingly, the forecast was unsuccessful and further forecasting was suspended until the regularity of subsequent failures could be confirmed. Statistical forecasting resumed on April 12, after four dome failure episodes separated by an average of 7.8 days. One dome failure (April 15) was successfully forecast using a 70% confidence window, and a second event (April 21) was narrowly missed before the end of the activity. The cessation of dome failures after April 21 resulted in a concluding false alarm. 
Although forecasting success during the eruption was limited, retrospective analysis shows that early and consistent application of the statistical method using a constant-rate failure model and a 90% confidence window could have yielded five successful forecasts and two false alarms; no events would have been missed. On closer examination, the intervals between successive dome failures are not uniform but tend to increase with time. This increase attests to the continuous, slowly decreasing supply of magma to the surface vent during the waning phase of the eruption. The domes formed in a precarious position in a breach in the summit crater rim where they were susceptible to gravitational collapse. The instability of the February 15-April 21 domes relative to the earlier domes is attributed to reaming the lip of the vent by a laterally directed explosion during the major dome-destroying eruption of February 15, a process which would leave a less secure foundation for subsequent domes. © 1994.

  10. Final Report: Multi-Scale Analysis of Deformation and Failure in Polycrystalline Titanium Alloys Under High Strain-Rates

    DTIC Science & Technology

    2015-12-28

Masoud Anahid, Mahendra K. Samal, and Somnath Ghosh. Dwell fatigue crack nucleation model based on crystal plasticity finite element simulations of...induced crack nucleation in polycrystals. Model. Simul. Mater. Sci. Eng., 17, 064009. 19. Anahid, M., Samal, M. K. & Ghosh, S. (2011). Dwell fatigue...Jour. Plas., 24:428-454, 2008. 4. M. Anahid, M. K. Samal, and S. Ghosh. Dwell fatigue crack nucleation model based on crystal plasticity finite

  11. Warrior Injury Assessment Manikin (WIAMan) Lumbar Spine Model Validation: Development, Testing, and Analysis of Physical and Computational Models of the WIAMan Lumbar Spine Materials Demonstrator

    DTIC Science & Technology

    2016-08-01

load. The 1 and 10 s⁻¹ rate tests were run on a hydraulic high-rate Instron MTS (8821S), placed in a custom-designed tension fixture (Fig. 8...lateral compression prior to shear testing. The sides of the coupon rest on blocks at the bottom of the vice jaw to allow for travel of the center post ...mode of failure based on the lap shear testing. However, since the pretest spine survived all hits at the BRC speeds, it was decided to proceed with

  12. User-Defined Material Model for Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

    2006-01-01

An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
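The maximum stress initiation check and ply-discounting degradation can be sketched as follows; the dictionary keys, signed allowables, and discount factor are illustrative, not the UMAT's actual variables:

```python
def max_stress_failed(stress, allowables):
    """Maximum-stress failure initiation for a ply in plane stress.

    The ply fails when any stress component leaves its signed allowable
    range (compression limits Xc, Yc are negative; S is the shear limit).
    Keys are illustrative placeholders, not the UMAT's variable names."""
    checks = {
        "s11": (allowables["Xc"], allowables["Xt"]),   # fiber direction
        "s22": (allowables["Yc"], allowables["Yt"]),   # transverse direction
        "s12": (-allowables["S"], allowables["S"]),    # in-plane shear
    }
    return any(not (lo <= stress[k] <= hi) for k, (lo, hi) in checks.items())

def discount_ply(props, factor=1e-3):
    """Ply-discounting degradation: knock down the failed ply's moduli
    by a small factor (the factor here is a typical choice, not MAT-specific)."""
    return {k: v * factor for k, v in props.items()}
```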

  13. The role of bank collapse on tidal creek ontogeny: A novel process-based model for bank retreat

    NASA Astrophysics Data System (ADS)

    Gong, Zheng; Zhao, Kun; Zhang, Changkuan; Dai, Weiqi; Coco, Giovanni; Zhou, Zeng

    2018-06-01

    Bank retreat in coastal tidal flats plays a primary role on the planimetric shape of tidal creeks and is commonly driven by both flow-induced bank erosion and gravity-induced bank collapse. However, existing modelling studies largely focus on bank erosion and overlook bank collapse. We build a bank retreat model coupling hydrodynamics, bank erosion and bank collapse. To simulate the process of bank collapse, a stress-deformation model is utilized to calculate the stress variation of bank soil after bank erosion, and the Mohr-Coulomb failure criterion is then applied to evaluate the stability of the tidal creek bank. Results show that the bank failure process can be categorized into three stages, i.e., shear failure at the bank toe (stage I), tensile failure on the bank top (stage II), and sectional cracking from the bank top to the toe (stage III). With only bank erosion, the planimetric shapes of tidal creeks are funneled due to the gradually seaward increasing discharge. In contrast to bank erosion, bank collapse is discontinuous, and the contribution of bank collapse to bank retreat can reach 85%, highlighting that the expansion of tidal creeks can be dominated by bank collapse process. The planimetric shapes of tidal creeks are funneled with a much faster expansion rate when bank collapse is considered. Overall, this study makes a further step toward more physical and realistic simulation of bank retreat in estuarine and coastal settings and the developed bank collapse module can be readily included in other morphodynamic models.
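The Mohr-Coulomb stability evaluation applied to the bank soil can be sketched as a factor-of-safety check; all inputs below are hypothetical:

```python
import math

def mohr_coulomb_fos(sigma_n, tau, cohesion, phi_deg):
    """Mohr-Coulomb stability check for a soil element.

    The available shear strength is tau_f = c + sigma_n * tan(phi);
    the element is stable while the acting shear stress tau stays below
    it. Returns the factor of safety tau_f / tau (> 1 means stable)."""
    tau_f = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return tau_f / tau

# e.g. sigma_n = 50 kPa, tau = 20 kPa, c = 10 kPa, phi = 30 deg -> stable
```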

  14. Gas gun driven dynamic fracture and fragmentation of Ti-6Al-4V cylinders

    NASA Astrophysics Data System (ADS)

    Jones, D. R.; Chapman, D. J.; Eakins, D. E.

    2014-05-01

    The dynamic fracture and fragmentation of a material is a complex late stage phenomenon occurring in many shock loading scenarios. Improving our predictive capability depends upon exercising our current failure models against new loading schemes and data. We present axially-symmetric high strain rate (10^4 s^-1) expansion of Ti-6Al-4V cylinders using a single stage light gas gun technique. A steel ogive insert was located inside the target cylinder, into which a polycarbonate rod was launched. Deformation of this rod around the insert drives the cylinder into rapid expansion. This technique we have developed facilitates repeatable loading, independent of the temperature of the sample cylinder, with straightforward adjustment of the radial strain rate. Expansion velocity was measured with multiple channels of photon Doppler velocimetry. High speed imaging was used to track the overall expansion process and record strain to failure and crack growth. Results from a cylinder at a temperature of 150 K are compared with work at room temperature, examining the deformation, failure mechanisms and differences in fragmentation.

  15. Impact of Hyperuricemia on Long-term Outcomes of Kidney Transplantation: Analysis of the FAVORIT Study.

    PubMed

    Kalil, Roberto S; Carpenter, Myra A; Ivanova, Anastasia; Gravens-Mueller, Lisa; John, Alin A; Weir, Matthew R; Pesavento, Todd; Bostom, Andrew G; Pfeffer, Marc A; Hunsicker, Lawrence G

    2017-12-01

    Elevated uric acid concentration is associated with higher rates of cardiovascular (CV) morbidity and mortality in the general population. It is not known whether hyperuricemia increases the risk for CV death or transplant failure in kidney transplant recipients. This study was a post hoc cohort analysis of the FAVORIT Study, a randomized controlled trial that examined the effect of homocysteine-lowering vitamins on CV disease in kidney transplantation. Participants were adult recipients of kidney transplants in the United States, Canada, or Brazil enrolled in the FAVORIT Study, with hyperhomocysteinemia, stable kidney function, and no known history of CV disease. The predictor was uric acid concentration. The primary end point was a composite of CV events; secondary end points were all-cause mortality and transplant failure. Risk factors included in statistical models were age, sex, race, country, treatment assignment, smoking history, body mass index, presence of diabetes mellitus, history of CV disease, blood pressure, estimated glomerular filtration rate (eGFR), donor type, transplant vintage, lipid concentrations, albumin-creatinine ratio, and uric acid concentration. Cox proportional hazards models were fit to examine the association of uric acid concentration with study end points after risk adjustment. 3,512 of 4,110 FAVORIT participants with baseline uric acid concentrations were studied. Median follow-up was 3.9 (IQR, 3.0-5.3) years. 503 patients had a primary CV event, 401 died, and 287 had transplant failure. In unadjusted analyses, uric acid concentration was significantly related to each outcome. Uric acid concentration was also strongly associated with eGFR. The relationship between uric acid concentration and study end points was no longer significant in fully adjusted multivariable models (P=0.5 for CV events, P=0.09 for death, and P=0.1 for transplant failure). A limitation is the unknown use of uric acid-lowering agents among study participants.
Following kidney transplantation, uric acid concentrations are not independently associated with CV events, mortality, or transplant failure. The strong association between uric acid concentrations with traditional risk factors and eGFR is a possible explanation. Copyright © 2017 National Kidney Foundation, Inc. All rights reserved.
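    The risk-adjustment machinery above is the Cox partial likelihood. A toy version for a single covariate (not the FAVORIT analysis itself; `fit_beta` maximizes the concave partial log-likelihood by simple ternary search) can be sketched as:

```python
import math

def cox_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate, assuming
    distinct event times (no tie handling)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if events[i]:
            # risk set: everyone still under observation at time t_i
            risk = sum(math.exp(beta * x[j]) for j in order[k:])
            ll += beta * x[i] - math.log(risk)
    return ll

def fit_beta(times, events, x, lo=-5.0, hi=5.0, tol=1e-6):
    """Maximize the concave partial log-likelihood by ternary search."""
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cox_loglik(m1, times, events, x) < cox_loglik(m2, times, events, x):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)
```

    The fitted hazard ratio is exp(beta); adding the adjustment covariates listed in the abstract (eGFR and the rest) is what attenuates the uric acid association in the full model.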

  16. Predictions and Experimental Microstructural Characterization of High Strain Rate Failure Modes in Layered Aluminum Composites

    NASA Astrophysics Data System (ADS)

    Khanikar, Prasenjit

    Different aluminum alloys can be combined, as composites, for tailored dynamic applications. Most investigations pertaining to metallic alloy layered composites, however, have been based on quasi-static approaches. The dynamic failure of layered metallic composites, therefore, needs to be characterized in terms of strength, toughness, and fracture response. A dislocation-density based crystalline plasticity formulation, finite-element techniques, rational crystallographic orientation relations and a new fracture methodology were used to predict the failure modes associated with the high strain rate behavior of aluminum layered composites. Two alloy layers, a high strength alloy, aluminum 2195, and an aluminum alloy 2139, with high toughness, were modeled with representative microstructures that included precipitates, dispersed particles, and different grain boundary (GB) distributions. The new fracture methodology, based on an overlap method and phantom nodes, is used with fracture criteria specialized for fracture on different cleavage planes. One of the objectives of this investigation, therefore, was to determine the optimal arrangements of the 2139 and 2195 aluminum alloys for a metallic layered composite that would combine strength, toughness and fracture resistance for high strain-rate applications. Different layer arrangements were investigated for high strain-rate applications, and the optimal arrangement was with the high toughness 2139 layer on the bottom, which provided extensive shear strain localization, and the high strength 2195 layer on the top for high strength resistance. The layer thickness of the bottom high toughness layer also affected the bending behavior of the roll-bonded interface and the potential delamination of the layers. 
Shear strain localization, dynamic cracking and delamination were the mutually competing failure mechanisms for the layered metallic composite, and control of these failure modes can be optimized for high strain-rate applications. The second major objective of this investigation was the use of recently developed dynamic fracture formulations to model and analyze the crack nucleation and propagation of aluminum layered composites subjected to high strain rate loading conditions and how microstructural effects, such as precipitates, dispersed particles, and GB orientations, affect failure evolution. This dynamic fracture approach is used to investigate crack nucleation and crack growth as a function of the different microstructural characteristics of each alloy in layered composites with and without pre-existing cracks. The zigzag nature of the crack paths was mainly due to microstructural features, such as the distributions and orientations of precipitates and dispersed particles ahead of the crack front, and it underscored the capabilities of the fracture methodology. The evolution of dislocation density and the formation of localized shear slip contributed to the blunting of the propagating crack. Extensive geometrical and thermal softening due to the localized plastic slip also affected crack path orientations and directions. These softening mechanisms resulted in the switching of cleavage planes, which affected crack path orientations. Interface delamination can also have an important role in the failure and toughening of the layered composites. Different scenarios of delamination were investigated, such as planar crack growth and crack penetration into the layers. The presence of brittle surface oxide platelets in the interface region also significantly influenced the interface delamination process. 
Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM) and Optical Microscopy (OM) characterization provided further physical insights and validation of the predictive capabilities. The inherent microstructural features of each alloy play a significant role in the dynamic fracture, shear strain localization, and interface delamination of the layered metallic composite. These microstructural features, such as precipitates, dispersed particles, and GB orientations and distributions can be optimized for desired behavior of metallic composites.

  17. A Nuclear Interaction Model for Understanding Results of Single Event Testing with High Energy Protons

    NASA Technical Reports Server (NTRS)

    Culpepper, William X.; ONeill, Pat; Nicholson, Leonard L.

    2000-01-01

    An internuclear cascade and evaporation model has been adapted to estimate the LET spectrum generated during testing with 200 MeV protons. The model-generated heavy ion LET spectrum is compared to the heavy ion LET spectrum seen on orbit. This comparison is the basis for predicting single event failure rates from heavy ions using results from a single proton test. Of equal importance, this spectral comparison also establishes an estimate of the risk of encountering a failure mode on orbit that was not detected during proton testing. Verification of the general results of the model is presented based on experiments, individual part test results, and flight data. Acceptance of this model and its estimate of remaining risk opens the hardware verification philosophy to the consideration of radiation testing with high energy protons at the board and box level instead of the more standard method of individual part testing with low energy heavy ions.

  18. Mechanical Properties of Transgenic Silkworm Silk Under High Strain Rate Tensile Loading

    NASA Astrophysics Data System (ADS)

    Chu, J.-M.; Claus, B.; Chen, W.

    2017-12-01

    Studies have shown that transgenic silkworm silk may be capable of having similar properties of spider silk while being mass-producible. In this research, the tensile stress-strain response of transgenic silkworm silk fiber is systematically characterized using a quasi-static load frame and a tension Kolsky bar over a range of strain rates between 10^-3 and 700 s^-1. The results show that transgenic silkworm silk tends to have higher overall ultimate stress and failure strain at high strain rate (700 s^-1) compared to quasi-static strain rates, indicating rate sensitivity of the material. The failure strain at the high strain rate is higher than that of spider silk. However, the stress levels are significantly below that of spider silk, and far below that of high-performance fiber. Failure surfaces are examined via scanning electron microscopy and reveal that the failure modes are similar to those of spider silk.

  19. Depression vulnerable and nonvulnerable smokers after a failure experience: examining cognitive self-regulation and motivation.

    PubMed

    Scott, Walter D; Beevers, Christopher G; Mermelstein, Robin J

    2008-07-01

    The present study extended previous tests of cognitive priming theories of depression by examining cognitive self-regulatory, motivational, and affective functioning of depression-vulnerable and nonvulnerable individuals after a failure experience. Participants were enrolled in a clinic-based smoking cessation program that consisted of seven group meetings. Major findings show that compared to the nonvulnerable group, depression-vulnerable individuals were less motivated to quit and experienced more negative affect, but only after a failure to quit smoking. However, after controlling for actual smoking rate, depression-vulnerable individuals did not evaluate their success any more negatively, nor did they indicate lower self-efficacy for quitting. Results are discussed in terms of cognitive self-regulatory and affect temperament models of motivation and depression.

  20. Preliminary design-lift/cruise fan research and technology airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.; Lewis, G. E.; Little, L. J.

    1976-01-01

    This report presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling qualities levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft interconnecting the three variable pitch fans and three power plants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.

  1. The preliminary design of a lift-cruise fan airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.

    1977-01-01

    This paper presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling-quality levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a modified T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft that interconnects three variable-pitch fans and three powerplants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.
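    The "variable gains scheduled with airspeed" approach amounts to a breakpoint table with interpolation. A minimal sketch (the breakpoints below are invented for illustration, not the report's gains):

```python
def scheduled_gain(airspeed, table):
    """Piecewise-linear gain schedule: interpolate a table of
    (airspeed, gain) breakpoints, clamping at both ends."""
    table = sorted(table)
    if airspeed <= table[0][0]:
        return table[0][1]
    if airspeed >= table[-1][0]:
        return table[-1][1]
    for (v0, g0), (v1, g1) in zip(table, table[1:]):
        if v0 <= airspeed <= v1:
            return g0 + (g1 - g0) * (airspeed - v0) / (v1 - v0)
```

    Clamping at the table ends keeps the loop gains bounded if the airspeed sensor reads outside the scheduled envelope, which matters for the sensor-failure cases discussed above.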

  2. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
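    The paper's model is an accelerated failure time model with seminonparametric errors; as a much simpler stand-in, the E- and M-steps for an exponential mixture-cure model with a known cure threshold can be sketched as follows (the exponential distribution, starting values, and test data are all illustrative assumptions, not the paper's specification):

```python
import math

def em_cure(times, events, tau, iters=200):
    """Toy EM for a mixture-cure model: with probability pi a subject is
    cured (never has the event); uncured survival is exponential(lam).
    Subjects censored at or beyond the known threshold tau are treated
    as observed cures, mirroring the paper's cure-threshold idea."""
    pi, lam = 0.5, len(times) / sum(times)
    for _ in range(iters):
        w = []  # E-step: posterior probability of being uncured
        for t, d in zip(times, events):
            if d:
                w.append(1.0)           # observed event -> certainly uncured
            elif t >= tau:
                w.append(0.0)           # censored past threshold -> cured
            else:
                s = math.exp(-lam * t)
                w.append((1 - pi) * s / (pi + (1 - pi) * s))
        # M-step: update the cure fraction and the exponential rate
        pi = 1 - sum(w) / len(w)
        lam = sum(events) / sum(wi * ti for wi, ti in zip(w, times))
    return pi, lam
```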

  3. Creep failure of a reactor pressure vessel lower head under severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, M.M.; Ludwigsen, J.S.; Chu, T.Y.

    A severe accident in a nuclear power plant could result in the relocation of large quantities of molten core material onto the lower head of the reactor pressure vessel (RPV). In the absence of inherent cooling mechanisms, failure of the RPV ultimately becomes possible under the combined effects of system pressure and the thermal heat-up of the lower head. Sandia National Laboratories has performed seven experiments at 1:5th scale simulating creep failure of an RPV lower head. This paper describes a modeling program that complements the experimental program. Analyses have been performed using the general-purpose finite-element code ABAQUS-5.6. In order to make ABAQUS solve the specific problem at hand, a material constitutive model that utilizes temperature-dependent properties has been developed and attached to the ABAQUS executable through its UMAT utility. Analyses of the LHF-1 experiment predict instability-type failure. Predicted strains are delayed relative to the observed strain histories. Parametric variations on either the yield stress, creep rate, or both (within the range of material property data) can bring predictions into agreement with experiment. The analysis indicates that it is necessary to conduct material property tests on the actual material used in the experimental program. The constitutive model employed in the present analyses is the subject of a separate publication.

  4. Zero Gyro Kalman Filtering in the presence of a Reaction Wheel Failure

    NASA Technical Reports Server (NTRS)

    Hur-Diaz, Sun; Wirzburger, John; Smith, Dan; Myslinski, Mike

    2007-01-01

    Typical implementation of Kalman filters for spacecraft attitude estimation involves the use of gyros for three-axis rate measurements. When there are less than three axes of information available, the accuracy of the Kalman filter depends highly on the accuracy of the dynamics model. This is particularly significant during the transient period when a reaction wheel with a high momentum fails, is taken off-line, and spins down. This paper looks at how a reaction wheel failure can affect the zero-gyro Kalman filter performance for the Hubble Space Telescope and what steps are taken to minimize its impact.
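    The dependence on the dynamics model can be illustrated with a toy two-state (angle, rate) filter that measures angle only, so the rate estimate comes entirely from the model and the measurement history (a single-axis sketch with hypothetical noise levels, far simpler than the HST zero-gyro filter):

```python
def kf_step(x, P, z, dt, q, r):
    """One predict/update cycle of a two-state (angle, rate) Kalman
    filter that measures angle only; the rate estimate is inferred
    through the constant-rate dynamics model."""
    # predict: x' = [angle + dt*rate, rate], process noise Q = diag(q, q)
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # update with angle measurement z (H = [1, 0], measurement variance r)
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[1][0] / s
    y = z - x[0]
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P
```

    A wheel spin-down injects a torque this constant-rate model does not represent, so the rate estimate lags the truth during the transient; inflating q is the usual mitigation.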

  6. Impact of ductility on hydraulic fracturing in shales

    NASA Astrophysics Data System (ADS)

    Auton, Lucy; MacMinn, Chris

    2015-11-01

    Hydraulic fracturing is a method for extracting natural gas and oil from low-permeability rocks such as shale via the injection of fluid at high pressure. This creates fractures in the rock, providing hydraulic access deeper into the reservoir and enabling gas to be collected from a larger region of the rock. Fracture is the tensile failure of a brittle material upon reaching a threshold tensile stress, but some shales have a high clay content and may yield plastically before fracturing. Plastic deformation is the shear failure of a ductile material, during which stress relaxes through irreversible rearrangements of the particles of the material. Here, we investigate the impact of the ductility of shales on hydraulic fracturing. We consider a simple, axisymmetric model for radially outward fluid injection from a wellbore into a ductile porous rock. We solve the model semi-analytically at steady state, and numerically in general. We find that plastic deformation greatly reduces the maximum tensile stress, and that this maximum stress does not always occur at the wellbore. These results imply that hydraulic fracturing may fail in ductile rocks, or that the required injection rate for fracking may be much larger than the rate predicted from purely elastic models.
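    The model's starting point, steady radially outward Darcy flow from a wellbore, gives a logarithmic pressure profile. A minimal sketch of that flow baseline (the parameter values in any example are illustrative; the paper's poroelastic-plastic stress solution is not reproduced here):

```python
import math

def injection_pressure(r, Q, mu, k, h, R, p_far):
    """Steady axisymmetric Darcy flow: pressure at radius r when fluid
    of viscosity mu is injected at volumetric rate Q into a layer of
    thickness h and permeability k, with the outer boundary at radius R
    held at p_far.  Follows from Q = (2*pi*k*h/mu) * r * dp/dr."""
    return p_far + (Q * mu / (2 * math.pi * k * h)) * math.log(R / r)
```

    In a purely elastic rock this pressure buildup sets the tensile hoop stress at the wellbore; the abstract's point is that plastic yielding relaxes that tension, so the injection rate needed to fracture can be much higher than this elastic baseline suggests.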

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Benjamin L; Bronkhorst, Curt; Beyerlein, Irene

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single-crystal constitutive models relies on thermally-activated models, which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure-sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag-dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adjustable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.
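    The low-rate/high-rate transition described above can be sketched as a harmonic blend of a thermally-activated slip law and a drag-limited Orowan term (all parameter values below are order-of-magnitude placeholders, not the fitted copper values, and the specific functional forms are a common textbook choice rather than this talk's model):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def slip_rate(tau, T, rho_m=1e12, b=2.56e-10, B=5e-5,
              g0_dot=1e7, dG0=8e-19, tau0=2e8, p=0.5, q=1.5):
    """Shear slip rate on a system at resolved shear stress tau (Pa) and
    temperature T (K): thermally-activated kinetics at low stress,
    dislocation-drag kinetics (Orowan: rho_m * b * v, v = tau*b/B) at
    high stress, combined harmonically so the slower mechanism limits."""
    if tau <= 0.0:
        return 0.0
    s = min(tau / tau0, 1.0)
    gdot_ta = g0_dot * math.exp(-dG0 / (KB * T) * (1.0 - s**p)**q)
    gdot_drag = rho_m * b * (tau * b / B)
    return 1.0 / (1.0 / gdot_ta + 1.0 / gdot_drag)
```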

  8. Implementation of Laminate Theory Into Strain Rate Dependent Micromechanics Analysis of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2000-01-01

    A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to impact loads. Previously, strain rate dependent inelastic constitutive equations developed to model the polymer matrix were implemented into a mechanics of materials based micromechanics method. In the current work, the computation of the effective inelastic strain in the micromechanics model was modified to fully incorporate the Poisson effect. The micromechanics equations were also combined with classical laminate theory to enable the analysis of symmetric multilayered laminates subject to in-plane loading. A quasi-incremental trapezoidal integration method was implemented to integrate the constitutive equations within the laminate theory. Verification studies were conducted using an AS4/PEEK composite using a variety of laminate configurations and strain rates. The predicted results compared well with experimentally obtained values.
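    The laminate-theory step can be sketched by assembling the in-plane stiffness matrix A from transformed ply stiffnesses (classical laminate theory for the elastic part only; the AS4/PEEK-like elastic constants in any example are approximate illustrative values, and the inelastic micromechanics is not reproduced here):

```python
import math

def reduced_stiffness(E1, E2, G12, nu12):
    """Plane-stress reduced stiffnesses Q11, Q22, Q12, Q66 of a ply."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return E1 / d, E2 / d, nu12 * E2 / d, G12

def qbar(Q, theta_deg):
    """Transform ply stiffness to laminate axes; returns a 3x3 matrix."""
    Q11, Q22, Q12, Q66 = Q
    m, n = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    Qb11 = Q11*m**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*n**4
    Qb22 = Q11*n**4 + 2*(Q12 + 2*Q66)*m*m*n*n + Q22*m**4
    Qb12 = (Q11 + Q22 - 4*Q66)*m*m*n*n + Q12*(m**4 + n**4)
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*m*m*n*n + Q66*(m**4 + n**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*m**3*n + (Q12 - Q22 + 2*Q66)*m*n**3
    Qb26 = (Q11 - Q12 - 2*Q66)*m*n**3 + (Q12 - Q22 + 2*Q66)*m**3*n
    return [[Qb11, Qb12, Qb16], [Qb12, Qb22, Qb26], [Qb16, Qb26, Qb66]]

def a_matrix(Q, layup_deg, t_ply):
    """In-plane laminate stiffness A: sum of Qbar times ply thickness."""
    A = [[0.0] * 3 for _ in range(3)]
    for th in layup_deg:
        Qb = qbar(Q, th)
        for i in range(3):
            for j in range(3):
                A[i][j] += Qb[i][j] * t_ply
    return A
```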

  9. Rapid behavioral maturation accelerates failure of stressed honey bee colonies

    PubMed Central

    Perry, Clint J.; Myerscough, Mary R.; Barron, Andrew B.

    2015-01-01

    Many complex factors have been linked to the recent marked increase in honey bee colony failure, including pests and pathogens, agrochemicals, and nutritional stressors. It remains unclear, however, why colonies frequently react to stressors by losing almost their entire adult bee population in a short time, resulting in a colony population collapse. Here we examine the social dynamics underlying such dramatic colony failure. Bees respond to many stressors by foraging earlier in life. We manipulated the demography of experimental colonies to induce precocious foraging in bees and used radio tag tracking to examine the consequences of precocious foraging for their performance. Precocious foragers completed far fewer foraging trips in their life, and had a higher risk of death in their first flights. We constructed a demographic model to explore how this individual reaction of bees to stress might impact colony performance. In the model, when forager death rates were chronically elevated, an increasingly younger forager force caused a positive feedback that dramatically accelerated terminal population decline in the colony. This resulted in a breakdown in division of labor and loss of the adult population, leaving only brood, food, and few adults in the hive. This study explains the social processes that drive rapid depopulation of a colony, and we explore possible strategies to prevent colony failure. Understanding the process of colony failure helps identify the most effective strategies to improve colony resilience. PMID:25675508
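    The positive-feedback mechanism in the demographic model can be caricatured with a two-compartment (hive bees, foragers) difference equation in which bees recruited to foraging "too young" die faster (the rates, functional forms, and starting populations are invented for illustration, not the paper's fitted model):

```python
def simulate(days, forager_death, eclosion=100.0):
    """Toy hive-bee/forager dynamics: brood ecloses into hive bees H,
    hive bees are recruited to foraging (inhibited by existing foragers
    F), and depletion of H pushes recruitment younger, raising forager
    mortality -- the feedback described in the abstract."""
    H, F = 2000.0, 500.0
    for _ in range(days):
        recruit = max(0.0, 0.25 * H - 0.1 * F)          # social inhibition
        youth_factor = 1.0 + max(0.0, 1.0 - H / 1000.0)  # precocity penalty
        H += eclosion - recruit
        F += recruit - forager_death * youth_factor * F
    return H, F
```

    Running the sketch with a chronically elevated forager death rate leaves a much smaller adult population than the low-stress case, the qualitative signature of the modeled collapse.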

  10. Rapid behavioral maturation accelerates failure of stressed honey bee colonies.

    PubMed

    Perry, Clint J; Søvik, Eirik; Myerscough, Mary R; Barron, Andrew B

    2015-03-17

    Many complex factors have been linked to the recent marked increase in honey bee colony failure, including pests and pathogens, agrochemicals, and nutritional stressors. It remains unclear, however, why colonies frequently react to stressors by losing almost their entire adult bee population in a short time, resulting in a colony population collapse. Here we examine the social dynamics underlying such dramatic colony failure. Bees respond to many stressors by foraging earlier in life. We manipulated the demography of experimental colonies to induce precocious foraging in bees and used radio tag tracking to examine the consequences of precocious foraging for their performance. Precocious foragers completed far fewer foraging trips in their life, and had a higher risk of death in their first flights. We constructed a demographic model to explore how this individual reaction of bees to stress might impact colony performance. In the model, when forager death rates were chronically elevated, an increasingly younger forager force caused a positive feedback that dramatically accelerated terminal population decline in the colony. This resulted in a breakdown in division of labor and loss of the adult population, leaving only brood, food, and few adults in the hive. This study explains the social processes that drive rapid depopulation of a colony, and we explore possible strategies to prevent colony failure. Understanding the process of colony failure helps identify the most effective strategies to improve colony resilience.

  11. Influence of Prior Heart Failure Hospitalization on Cardiovascular Events in Patients with Reduced and Preserved Ejection Fraction

    PubMed Central

    Bello, Natalie A.; Claggett, Brian; Desai, Akshay S.; McMurray, John J.V.; Granger, Christopher B.; Yusuf, Salim; Swedberg, Karl; Pfeffer, Marc A.; Solomon, Scott D.

    2014-01-01

    Background Hospitalization for acute heart failure (HF) is associated with high rates of subsequent mortality and readmission. We assessed the influence of the time interval between prior HF hospitalization and randomization in the CHARM trials on clinical outcomes in patients with both reduced and preserved ejection fraction. Methods and Results CHARM enrolled 7,599 patients with NYHA class II-IV heart failure, of whom 5,426 had a history of prior HF hospitalization. Cox proportional hazards regression models were utilized to assess the association between time from prior HF hospitalization and randomization and the primary outcome of cardiovascular death or unplanned admission to hospital for the management of worsening HF over a median of 36.6 months. For patients with HF and reduced (HFrEF) or preserved (HFpEF) ejection fraction, rates of CV mortality and HF hospitalization were higher among patients with prior HF hospitalization than those without. The risk for mortality and hospitalization varied inversely with the time interval between hospitalization and randomization. Rates were higher for HFrEF patients within each category. Event rates for those with HFpEF and a HF hospitalization in the 6 months prior to randomization were comparable to the rate in HFrEF patients with no prior HF hospitalization. Conclusions Rates of CV death or HF hospitalization are greatest in those who have been previously hospitalized for HF. Independent of EF, rates of death and readmission decline as time from HF hospitalization to trial enrollment increased. Recent HF hospitalization identifies a high risk population for future clinical trials in HFrEF and HFpEF. Clinical Trial Registration URL: http://www.ClinicalTrials.gov. Unique identifier: NCT00634400. PMID:24874200

  12. C-Arm Computed Tomography-Assisted Adrenal Venous Sampling Improved Right Adrenal Vein Cannulation and Sampling Quality in Primary Aldosteronism.

    PubMed

    Park, Chung Hyun; Hong, Namki; Han, Kichang; Kang, Sang Wook; Lee, Cho Rok; Park, Sungha; Rhee, Yumie

    2018-05-04

    Adrenal venous sampling (AVS) is the gold standard for subtype classification of primary aldosteronism (PA). However, this procedure has a high failure rate because of the anatomical difficulties in accessing the right adrenal vein. We investigated whether C-arm computed tomography-assisted AVS (C-AVS) could improve the success rate of adrenal sampling. A total of 156 patients, diagnosed with PA, who underwent AVS from May 2004 through April 2017 were included. Based on the medical records, we retrospectively compared the overall, left, and right catheterization success rates of adrenal veins during the periods without C-AVS (2004 to 2010, n=32) and with C-AVS (2011 to 2016, n=134). The primary outcome was adequate bilateral sampling, defined as a selectivity index (SI) >5. With C-AVS, the rates of adequate bilateral AVS increased from 40.6% to 88.7% (P<0.001), with substantial decreases in failure rates (43.7% to 0.8%, P<0.001). There were significant increases in adequate sampling rates from the right (43.7% to 91.9%, P<0.001) and left adrenal veins (53.1% to 95.9%, P<0.001), as well as decreases in catheterization failure from the right adrenal vein (9.3% to 0.0%, P<0.001). Net improvement of the SI on the right side remained significant after adjustment for the left side (adjusted SI, 1.1 to 9.0; P=0.038). C-AVS was an independent predictor of adequate bilateral sampling in the multivariate model (odds ratio, 9.01; P<0.001). C-AVS improved the overall success rate of AVS, possibly as a result of better catheterization of the right adrenal vein. Copyright © 2018 Korean Endocrine Society.
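    The adequacy rule used in the study reduces to a simple ratio check, a literal transcription of the SI > 5 criterion (the cortisol values in any example are made up; units cancel in the ratio):

```python
def selectivity_index(adrenal_cortisol, peripheral_cortisol):
    """SI: adrenal-vein cortisol divided by peripheral-vein cortisol,
    a measure of how selectively the adrenal vein was catheterized."""
    return adrenal_cortisol / peripheral_cortisol

def bilateral_adequate(si_left, si_right, cutoff=5.0):
    """Adequate bilateral sampling as defined in the study:
    SI above the cutoff on both sides."""
    return si_left > cutoff and si_right > cutoff
```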

  13. Rate of occurrence of failures based on a nonhomogeneous Poisson process: an ozone analyzer case study.

    PubMed

    de Moura Xavier, José Carlos; de Andrade Azevedo, Irany; de Sousa Junior, Wilson Cabral; Nishikawa, Augusto

    2013-02-01

    Atmospheric pollutant monitoring constitutes a primordial activity in public policies concerning air quality. In São Paulo State, Brazil, the São Paulo State Environment Company (CETESB) maintains an automatic network which continuously monitors CO, SO(2), NO(x), O(3), and particulate matter concentrations in the air. The accuracy of the monitoring process is a fundamental condition for the actions to be taken by CETESB. As one of the support systems, a preventive maintenance program for the different analyzers used is part of the data quality strategy. Knowledge of the behavior of analyzer failure times could help optimize the program. To achieve this goal, the failure times of an ozone analyzer, considered a repairable system, were modeled by means of a nonhomogeneous Poisson process. The rate of occurrence of failures (ROCOF) was estimated for the intervals 0-70,800 h and 0-88,320 h, in which six and seven failures were observed, respectively. The results showed that the ROCOF estimate is influenced by the choice of the observation period, t(0) = 70,800 h and t(7) = 88,320 h in the cases analyzed. Identification of preventive maintenance actions, mainly when parts replacement occurs in the last interval of observation, is highlighted, justifying the alteration in the behavior of the inter-arrival times. A follow-up on each analyzer is recommended in order to record the impact of the performed preventive maintenance program on the enhancement of its useful life.
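    A standard way to estimate a monotone ROCOF from such repairable-system data is the power-law (Crow-AMSAA) NHPP, whose time-truncated maximum-likelihood estimates have closed forms (a general sketch; the failure times in any example are illustrative, not the analyzer's recorded times):

```python
import math

def power_law_nhpp_mle(failure_times, T):
    """Time-truncated MLEs for a power-law NHPP with intensity
    u(t) = lam * beta * t**(beta - 1), observed on (0, T]:
    beta_hat = n / sum(ln(T / t_i)), lam_hat = n / T**beta_hat.
    beta < 1 means the ROCOF is decreasing (reliability growth)."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T**beta
    return lam, beta

def rocof(t, lam, beta):
    """Rate of occurrence of failures at time t."""
    return lam * beta * t**(beta - 1)
```

    By construction the fitted cumulative intensity lam * T**beta reproduces the observed failure count over the observation window, which is a convenient sanity check on an implementation.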

  14. Cardiac myofibrillar contractile properties during the progression from hypertension to decompensated heart failure.

    PubMed

    Hanft, Laurin M; Emter, Craig A; McDonald, Kerry S

    2017-07-01

    Heart failure arises, in part, from a constellation of changes in cardiac myocytes including remodeling, energetics, Ca²⁺ handling, and myofibrillar function. However, little is known about the changes in myofibrillar contractile properties during the progression from hypertension to decompensated heart failure. The aim of the present study was to provide a comprehensive assessment of myofibrillar functional properties from health to heart disease. A rodent model of uncontrolled hypertension was used to test the hypothesis that myocytes in compensated hearts exhibit increased force, higher rates of force development, faster loaded shortening, and greater power output; however, with progression to overt heart failure, we predicted marked depression in these contractile properties. We assessed contractile properties in skinned cardiac myocyte preparations from left ventricles of Wistar-Kyoto control rats and spontaneous hypertensive heart failure (SHHF) rats at ~3, ~12, and >20 mo of age to evaluate the time course of myofilament properties associated with normal aging processes compared with myofilaments from rats with a predisposition to heart failure. In control rats, the myofilament contractile properties were virtually unchanged throughout the aging process. Conversely, in SHHF rats, the rate of force development, loaded shortening velocity, and power all increased at ~12 mo and then significantly fell at the >20-mo time point, which coincided with a decrease in left ventricular fractional shortening. Furthermore, these changes occurred independent of changes in β-myosin heavy chain but were associated with depressed phosphorylation of myofibrillar proteins, and the fall in loaded shortening and peak power output corresponded with the onset of clinical signs of heart failure. NEW & NOTEWORTHY This novel study systematically examined the power-generating capacity of cardiac myofilaments during the progression from hypertension to heart disease. 
Previously undiscovered changes in myofibrillar power output were found and were associated with alterations in myofilament proteins, providing potential new targets to exploit for improved ventricular pump function in heart failure. Copyright © 2017 the American Physiological Society.

  15. Heart rate at admission is a predictor of in-hospital mortality in patients with acute coronary syndromes: Results from 58 European hospitals: The European Hospital Benchmarking by Outcomes in acute coronary syndrome Processes study.

    PubMed

    Jensen, Magnus T; Pereira, Marta; Araujo, Carla; Malmivaara, Anti; Ferrieres, Jean; Degano, Irene R; Kirchberger, Inge; Farmakis, Dimitrios; Garel, Pascal; Torre, Marina; Marrugat, Jaume; Azevedo, Ana

    2018-03-01

    The purpose of this study was to investigate the relationship between heart rate at admission and in-hospital mortality in patients with ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation acute coronary syndrome (NSTE-ACS). Consecutive ACS patients admitted in 2008-2010 across 58 hospitals in six participant countries of the European Hospital Benchmarking by Outcomes in ACS Processes (EURHOBOP) project (Finland, France, Germany, Greece, Portugal and Spain) were studied; cardiogenic shock patients were excluded. Associations between heart rate at admission in categories of 10 beats per min (bpm) and in-hospital mortality were estimated by logistic regression in crude models and adjusting for age, sex, obesity, smoking, hypertension, diabetes, known heart failure, renal failure, previous stroke and ischaemic heart disease. In total, 10,374 patients were included. In both STEMI and NSTE-ACS patients, a U-shaped relationship between admission heart rate and in-hospital mortality was found. The lowest risk was observed for heart rates of 70-79 bpm in STEMI and 60-69 bpm in NSTE-ACS; the risk of mortality progressively increased with lower or higher heart rates. In multivariable models, the relationship persisted but was significant only for heart rates >80 bpm. A similar relationship was present in patients both with and without diabetes, above or below 75 years of age, and irrespective of the presence of atrial fibrillation or use of beta-blockers. Heart rate at admission is significantly associated with in-hospital mortality in patients with both STEMI and NSTE-ACS. ACS patients with an admission heart rate above 80 bpm are at the highest risk of in-hospital mortality.
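    The 10-bpm categorization underlying the models can be sketched as follows. This is an illustration only: the toy records and crude per-bin odds below are invented for demonstration, whereas the EURHOBOP analysis fitted multivariable logistic regression on patient-level data:

```python
from collections import defaultdict

def bin_hr(hr):
    """10-bpm admission heart-rate category, e.g. 72 -> '70-79'."""
    lo = (hr // 10) * 10
    return f"{lo}-{lo + 9}"

def crude_odds_by_bin(records):
    """records: (heart_rate, died) pairs -> crude odds of death per bin.
    Bins with no survivors are skipped (odds undefined)."""
    counts = defaultdict(lambda: [0, 0])  # bin -> [deaths, survivors]
    for hr, died in records:
        counts[bin_hr(hr)][0 if died else 1] += 1
    return {b: d / s for b, (d, s) in counts.items() if s > 0}

# Hypothetical toy data shaped to show a U-shaped risk (not EURHOBOP data)
data = [(55, 1), (55, 0), (62, 0), (65, 0), (74, 0), (75, 0),
        (78, 0), (83, 0), (85, 1), (92, 1), (95, 0), (110, 1)]
odds = crude_odds_by_bin(data)
```

    In the toy data the middle bins carry the lowest crude odds, mirroring the U-shape the study reports; a real analysis would model the bins as indicator variables with covariate adjustment.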

  16. Modelling of the Impact Response of Fibre-Reinforced Composites

    DTIC Science & Technology

    1990-09-30

    observed under tensile loading alone, the damage accumulation process following initial tensile fracture of a fibre tow somewhere within the test specimen...results to be obtained which are not inconsistent with those observed experimentally. Similarly the delamination process is modelled assuming an...publication either in journals or in conference proceedings. 1. J. Harding and K. Saka, "The effect of strain rate on the tensile failure of woven reinforced

  17. Numerical model of glulam beam delamination in dependence on cohesive strength

    NASA Astrophysics Data System (ADS)

    Kawecki, Bartosz; Podgórski, Jerzy

    2018-01-01

    This paper presents an attempt at using the finite element method to predict delamination of a glue-laminated timber beam through a cohesive layer. Cohesive finite elements were used, together with a quadratic stress damage initiation criterion and a mixed-mode energy release rate failure model. Complete damage of a finite element corresponded to complete degradation of its stiffness. The timber was modelled as an orthotropic material with plastic behaviour after reaching the bending limit.
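    A common form of the quadratic stress initiation criterion mentioned above (the exact variant used in the paper is not given in the abstract, so this particular form is an assumption) is

```latex
\left(\frac{\langle \sigma_n \rangle}{N}\right)^2
+ \left(\frac{\tau_s}{S}\right)^2
+ \left(\frac{\tau_t}{T}\right)^2 = 1
```

    where σₙ is the normal cohesive traction, τₛ and τₜ the two shear tractions, N, S, T the corresponding cohesive strengths, and the Macaulay brackets ⟨·⟩ express that pure compression does not initiate damage.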

  18. Risk stratification for death and all-cause hospitalization in heart failure clinic outpatients.

    PubMed

    Hummel, Scott L; Ghalib, Hussam H; Ratz, David; Koelling, Todd M

    2013-11-01

    Most heart failure (HF) risk stratification models were developed for inpatient use, and available outpatient models use a complex set of variables. We hypothesized that routinely collected clinical data could predict the 6-month risk of death and all-cause medical hospitalization in HF clinic outpatients. Using a quality improvement database and multivariable Cox modeling, we derived the Heart Failure Patient Severity Index (HFPSI) in the University of Michigan HF clinic (UM cohort, n = 1,536; 314 reached primary outcome). We externally validated the HFPSI in the Ann Arbor Veterans' Affairs HF clinic (VA cohort, n = 445; 106 outcomes) and explored "real-time" HFPSI use (VA-RT cohort, n = 486; 141 outcomes) by tracking VA patients for 6 months from their most recently calculated HFPSI, rather than using an arbitrary start date for the cohort. The HFPSI model included blood urea nitrogen, B-type natriuretic peptide, New York Heart Association class, diabetes status, history of atrial fibrillation/flutter, and all-cause hospitalization within the prior 1 and 2 to 6 months. The concordance c statistics in the UM/VA/VA-RT cohorts were 0.71/0.68/0.74. Kaplan-Meier curves and log-rank testing demonstrated excellent risk stratification, particularly between a large, low-risk group (40% of patients, 6-month event rates in the UM/VA/VA-RT cohorts 8%/12%/12%) and a small, high-risk group (10% of patients, 6-month event rates in the UM/VA/VA-RT cohorts 57%/58%/79%). The HFPSI uses readily available data to predict the 6-month risk of death and/or all-cause medical hospitalization in HF clinic outpatients and could potentially help allocate specialized HF resources within health systems. © 2013.
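    Conceptually, a Cox-derived index like the HFPSI reduces to a weighted sum of covariates (the linear predictor) that is then cut into risk groups. The sketch below is purely illustrative: the coefficient values, transformations, and cut points are hypothetical and are not the published HFPSI weights.

```python
import math

# Hypothetical coefficients for illustration only; not the published HFPSI.
COEFS = {"log_bun": 0.9, "log_bnp": 0.5, "nyha": 0.4,
         "diabetes": 0.3, "af_flutter": 0.25,
         "hosp_prior_1mo": 0.7, "hosp_prior_2_6mo": 0.4}

def risk_index(patient):
    """Cox-style linear predictor: sum of coefficient * covariate value."""
    return sum(COEFS[k] * v for k, v in patient.items())

def risk_group(score, low_cut=2.0, high_cut=4.0):
    """Stratify the continuous index into the kind of low/intermediate/high
    groups the study describes (cut points hypothetical)."""
    if score < low_cut:
        return "low"
    return "high" if score > high_cut else "intermediate"

# One hypothetical patient record
patient = {"log_bun": math.log(22), "log_bnp": math.log(150),
           "nyha": 2, "diabetes": 0, "af_flutter": 1,
           "hosp_prior_1mo": 0, "hosp_prior_2_6mo": 0}
score = risk_index(patient)
```

    In practice the coefficients would come from the fitted Cox model, and the cut points would be chosen to isolate the large low-risk and small high-risk groups reported.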

  19. Factors associated with failure to return for HIV test results in a free and anonymous screening centre.

    PubMed

    Laanani, Moussa; Dozol, Adrien; Meyer, Laurence; David, Stéphane; Camara, Sékou; Segouin, Christophe; Troude, Pénélope

    2015-07-01

    Free and anonymous screening centres (CDAG: Centres de Dépistage Anonyme et Gratuit) are public facilities set up for HIV infection diagnosis in France. Some people visiting a CDAG fail to return for test results and are never informed of their serology. This study aimed to assess factors associated with failure to return for HIV test results. Patients visiting the Fernand-Widal CDAG (Paris) for an HIV test in January-February 2011 were eligible to take part in the study. Data were collected with an anonymous self-administered questionnaire. Factors associated with failure to return were assessed using logistic regression models. Of the 710 participants (participation rate 88%), 46 failed to return. Not specifying birthplace and not living in the region of Paris were associated with failure to return. Perceiving no risk of HIV infection and feeling more at risk than other people were both statistically associated with failure to return. Self-perceived risk appeared to be of chief concern for failure to return for HIV test results and should be addressed during pre-test counselling. © The Author(s) 2014.

  20. A homogenized localizing gradient damage model with micro inertia effect

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Poh, Leong Hien

    2018-07-01

    The conventional gradient enhancement regularizes structural responses during material failure. However, it induces a spurious damage growth phenomenon, which is shown here to persist in dynamics. Similar issues were reported with the integral averaging approach. Consequently, the conventional nonlocal enhancement cannot adequately describe the dynamic fracture of quasi-brittle materials, particularly in the high strain rate regime, where a diffused damage profile precludes the development of closely spaced macrocracks. To this end, a homogenization theory is proposed to translate the micro processes onto the macro scale. Starting with simple elementary models at the micro scale to describe the fracture mechanisms, an additional kinematic field is introduced to capture the variations in deformation and velocity within a unit cell. An energetic equivalence between micro and macro is next imposed to ensure consistency at the two scales. The ensuing homogenized microforce balance resembles closely the conventional gradient expression, albeit with an interaction domain that decreases with damage, complemented by a micro inertia effect. Considering a direct single pressure bar example, the homogenized model is shown to resolve the non-physical responses obtained with conventional nonlocal enhancement. The predictive capability of the homogenized model is furthermore demonstrated by considering the spall tests of concrete, with good predictions on failure characteristics such as fragmentation profiles and dynamic tensile strengths, at three different loading rates.
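    For context, the conventional implicit gradient enhancement referred to above is commonly written (in Peerlings-type models; the homogenized theory in the paper modifies the interaction term, so this is background rather than the paper's final expression) as

```latex
\bar{\varepsilon} - c\,\nabla^2 \bar{\varepsilon} = \varepsilon_{\mathrm{eq}}
```

    where ε̄ is the nonlocal equivalent strain, ε_eq the local equivalent strain, and c a gradient parameter fixing the interaction domain. In localizing variants, c is replaced by a term that decreases with damage, which is the behaviour the homogenized microforce balance recovers, here complemented by a micro inertia contribution.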
