Sample records for threshold failure time

  1. Stress/strain changes and triggered seismicity at The Geysers, California

    USGS Publications Warehouse

    Gomberg, J.; Davis, S.

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.

  2. Stress/strain changes and triggered seismicity at The Geysers, California

    NASA Astrophysics Data System (ADS)

    Gomberg, Joan; Davis, Scott

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.

  3. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
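
    The record describes a Markov evaluation model whose states track sensor health and whose transitions encode failure rates and detection/isolation probabilities. A minimal sketch of that style of computation follows, using a hypothetical three-state chain (fully operational; one failure detected and isolated; system failed) rather than the paper's 27-state model; the failure rate, isolation probability, and mission time are illustrative assumptions.

    ```python
    import numpy as np

    # Hypothetical 3-state discrete-time Markov reliability model (illustration only):
    # state 0 = fully operational, state 1 = one sensor failed but detected/isolated,
    # state 2 = system failure (absorbing).
    lam = 1e-4        # assumed sensor failure rate per hour
    p_iso = 0.98      # assumed probability a failure is detected and isolated
    dt = 1.0          # time step, hours
    hours = 5000      # mission time, hours

    P = np.array([
        [1 - lam * dt, lam * dt * p_iso, lam * dt * (1 - p_iso)],
        [0.0,          1 - lam * dt,     lam * dt],
        [0.0,          0.0,              1.0],
    ])

    state = np.array([1.0, 0.0, 0.0])       # start fully operational
    for _ in range(int(hours / dt)):
        state = state @ P                   # propagate the state distribution one step

    print(f"mission reliability after {hours} h: {1.0 - state[2]:.4f}")
    ```

    Sweeping lam, p_iso, or the mission time in such a model reproduces the kind of parametric reliability studies the abstract describes.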

  4. Predicting the Lifetime of Dynamic Networks Experiencing Persistent Random Attacks.

    PubMed

    Podobnik, Boris; Lipic, Tomislav; Horvatic, Davor; Majdandzic, Antonio; Bishop, Steven R; Eugene Stanley, H

    2015-09-21

    Estimating the critical points at which complex systems abruptly flip from one state to another is one of the remaining challenges in network science. Due to a lack of knowledge about the underlying stochastic processes controlling critical transitions, it is widely considered difficult to determine the location of critical points for real-world networks, and it is even more difficult to predict the time at which these potentially catastrophic failures occur. We analyse a class of decaying dynamic networks experiencing persistent failures in which the magnitude of the overall failure is quantified by the probability that a potentially permanent internal failure will occur. When the fraction of active neighbours is reduced to a critical threshold, cascading failures can trigger a total network failure. For this class of network we find that the time to network failure, which is equivalent to network lifetime, is inversely dependent upon the magnitude of the failure and logarithmically dependent on the threshold. We analyse how permanent failures affect network robustness, using network lifetime as a measure. These findings provide new methodological insight into system dynamics and, in particular, into the dynamic processes of networks. We illustrate the network model with selected examples from biology and social science.
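
    The stated result is that lifetime for this class of decaying networks scales inversely with the failure magnitude and logarithmically with the neighbour-fraction threshold. The rough simulation below only illustrates the class of model being described; the graph construction, failure probability, and threshold are arbitrary assumptions, and the update rule is one plausible reading of the abstract, not the authors' code.

    ```python
    import random

    def network_lifetime(n=500, k=6, p_fail=0.01, threshold=0.5, seed=1, max_steps=10_000):
        """Crude decaying-network simulation (illustrative assumptions only).
        Each step, every active node suffers a permanent internal failure with
        probability p_fail; a node also deactivates when the fraction of its
        neighbours still active drops below `threshold`. Returns the first step
        at which no active nodes remain."""
        rng = random.Random(seed)
        neigh = {i: set() for i in range(n)}
        for i in range(n):                    # simple random graph, ~k links per node
            while len(neigh[i]) < k:
                j = rng.randrange(n)
                if j != i:
                    neigh[i].add(j)
                    neigh[j].add(i)
        active = set(range(n))
        for step in range(1, max_steps + 1):
            active -= {i for i in list(active) if rng.random() < p_fail}
            changed = True
            while changed:                    # cascade of threshold deactivations
                drop = {i for i in active
                        if neigh[i] and len(neigh[i] & active) / len(neigh[i]) < threshold}
                changed = bool(drop)
                active -= drop
            if not active:
                return step
        return max_steps

    print("simulated lifetime:", network_lifetime())
    ```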

  5. On the estimation of risk associated with an attenuation prediction

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1992-01-01

    Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure - attenuation exceeding a specified threshold for a specified time interval or intervals; risk - the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem - modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
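
    The viewgraphs define risk as the probability of one or more link failures over the link lifetime or an accounting interval. Under the simplest assumption of independent intervals with a fixed per-interval failure probability p, that definition reduces to the complement formula below; the numbers are purely illustrative.

    ```latex
    % Risk of at least one failure over N independent accounting intervals,
    % each with failure probability p (independence is an assumption):
    R = 1 - (1 - p)^{N}
    % Example: p = 0.001 per month over a 60-month link lifetime gives
    % R = 1 - 0.999^{60} \approx 0.058.
    ```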

  6. Load type influences motor unit recruitment in biceps brachii during a sustained contraction.

    PubMed

    Baudry, Stéphane; Rudroff, Thorsten; Pierpoint, Lauren A; Enoka, Roger M

    2009-09-01

    Twenty subjects participated in four experiments designed to compare time to task failure and motor-unit recruitment threshold during contractions sustained at 15% of maximum as the elbow flexor muscles either supported an inertial load (position task) or exerted an equivalent constant torque against a rigid restraint (force task). Subcutaneous branched bipolar electrodes were used to record single motor unit activity from the biceps brachii muscle during ramp contractions performed before and at 50 and 90% of the time to failure for the position task during both fatiguing contractions. The time to task failure was briefer for the position task than for the force task (P=0.0002). Thirty and 29 motor units were isolated during the force and position tasks, respectively. The recruitment threshold declined by 48 and 30% (P=0.0001) during the position task for motor units with an initial recruitment threshold below and above the target force, respectively, whereas no significant change in recruitment threshold was observed during the force task. Changes in recruitment threshold were associated with a decrease in the mean discharge rate (-16%), an increase in discharge rate variability (+40%), and a prolongation of the first two interspike intervals (+29 and +13%). These data indicate that there were faster changes in motor unit recruitment and rate coding during the position task than the force task despite a similar net muscle torque during both tasks. Moreover, the results suggest that the differential synaptic input observed during the position task influences most of the motor unit pool.

  7. An Abrupt Transition to an Intergranular Failure Mode in the Near-Threshold Fatigue Crack Growth Regime in Ni-Based Superalloys

    NASA Astrophysics Data System (ADS)

    Telesman, J.; Smith, T. M.; Gabb, T. P.; Ring, A. J.

    2018-06-01

    Cyclic near-threshold fatigue crack growth (FCG) behavior of two disk superalloys was evaluated and was shown to exhibit an unexpected sudden failure mode transition from a mostly transgranular failure mode at higher stress intensity factor ranges to an almost completely intergranular failure mode in the threshold regime. The change in failure modes was associated with a crossover of FCG resistance curves in which the conditions that produced higher FCG rates in the Paris regime resulted in lower FCG rates and increased ΔKth values in the threshold region. High-resolution scanning and transmission electron microscopy were used to carefully characterize the crack tips at these near-threshold conditions. Formation of stable Al-oxide followed by Cr-oxide and Ti-oxides was found to occur at the crack tip prior to formation of unstable oxides. To contrast with the threshold failure mode regime, a quantitative assessment of the role that the intergranular failure mode has on cyclic FCG behavior in the Paris regime was also performed. It was demonstrated that even a very limited intergranular failure content dominates the FCG response under mixed mode failure conditions.

  8. Performance and Reliability Analysis of Water Distribution Systems under Cascading Failures and the Identification of Crucial Pipes

    PubMed Central

    Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo

    2014-01-01

    As a means of supplying water, the water distribution system (WDS) is one of the most important complex infrastructures, and its stability and reliability are critical for urban activities. A WDS can be characterized as a network of multiple nodes (e.g., reservoirs and junctions) interconnected by physical links (e.g., pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of the WDS is evaluated here by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory, and the crucial pipes are identified. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak reliability and on how long cascading failures persist as they propagate through the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than this threshold, the time period with the highest system reliability does not correspond to the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated with the properties of the WDS and the characteristics of cascading failures, so as to improve the system's ability to resist disasters. PMID:24551102

  9. Simulated effects of the 2003 permitted withdrawals and water-management alternatives on reservoir storage and firm yields of three surface-water supplies, Ipswich River Basin, Massachusetts

    USGS Publications Warehouse

    Zarriello, Phillip J.

    2004-01-01

    The Hydrologic Simulation Program-FORTRAN (HSPF) model of the Ipswich River Basin previously developed by the U.S. Geological Survey was modified to evaluate the effects of the 2003 withdrawal permits and water-management alternatives on reservoir storage and yields of the Lynn, Peabody, and Salem-Beverly water-supply systems. These systems obtain all or part of their water from the Ipswich River Basin. The HSPF model simulated the complex water budgets of the three supply systems, including effects of regulations that restrict withdrawals by the time of year, minimum streamflow thresholds, and the capacity of each system to pump water from the river. The 2003 permits restrict withdrawals from the Ipswich River between November 1 and May 31 to streamflows above a 1.0 cubic foot per second per square mile (ft3/s/mi2) threshold, to high flows between June 1 and October 31, and to a maximum annual volume. Yields and changes in reservoir storage over the 35-year simulation period (1961-95) were also evaluated for each system with a hypothetical low-capacity pump, alternative seasonal streamflow thresholds, and withdrawals that result in successive failures (depleted storage). The firm yields, the maximum yields that can be met during a severe drought, calculated for each water-supply system under the 2003 permitted withdrawals were 7.31 million gallons per day (Mgal/d) for the Lynn, 3.01 Mgal/d for the Peabody, and 7.98 Mgal/d for the Salem-Beverly systems; these yields are 31, 49, and 21 percent less than their average 1998-2000 demands, respectively. The simulations with the same permit restrictions and a hypothetical low-capacity pump for each system resulted in slightly increased yields for the Lynn and Salem-Beverly systems, but a slightly decreased yield for the Peabody system. Simulations to evaluate the effects of alternative streamflow thresholds on water supply indicated that firm yields were generally about twice as sensitive to decreases in the November-February or March-May thresholds as to increases in these thresholds. Firm yields were also generally slightly less sensitive to changes in the November-February thresholds than to changes in the March-May thresholds in the Peabody and Salem-Beverly water-supply systems. Decreases in the June-October streamflow threshold did not affect any of the systems' firm yields. Simulations of withdrawal rates that resulted in successive near failures during the 1961-95 period indicated the tradeoff between increased yield and risk. The Lynn and Peabody systems were allowed to near failure up to six times. At the sixth near failure, yields of these systems increased to 10.18 and 4.43 Mgal/d, respectively; these rates increased the amount of water obtained from the Ipswich River Basin (relative to the firm-yield rate), as a percentage of average 1998-2000 demands, from 68 to 96 percent and from 51 to 75 percent, respectively. The Salem-Beverly system was able to meet demands after the third near failure. Reservoir storage was depleted about 6 percent of the time at the withdrawal rate that caused the sixth near failure in the Lynn and Peabody systems and about 3 percent of the time at the withdrawal rate that caused the third near failure in the Salem-Beverly system. Supply systems are at greatest risk of failure from persistent droughts (lasting more than 1 year), but short-term droughts also present risks during the fall and winter when the supply systems are most vulnerable.
Uncertainties in model performance, simplification of reservoir systems and their management, and the possibility of droughts of severity greater than simulated in this investigation underscore the fact that the firm yield calculated for each system cannot be considered a withdrawal rate that is absolutely fail-safe. Thus, the consequences of failure are an important consideration in the planning and management of these systems.

  10. Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.

    PubMed

    Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S

    2016-05-01

    The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 milliseconds for the multifidus and 67 ± 116 milliseconds for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses of spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
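
    The best-performing detector in this study combined an amplitude threshold of 8 SD above baseline with a 10-millisecond minimum duration. A minimal sketch of such a double-threshold onset detector is given below; the sampling rate, baseline window, and synthetic signal are assumptions for illustration, not the authors' processing code.

    ```python
    import numpy as np

    def onset_index(emg, fs, baseline_s=0.5, n_sd=8, min_dur_ms=10):
        """Return the first sample where rectified EMG stays above
        (baseline mean + n_sd * baseline SD) for at least min_dur_ms;
        None if no onset is found. Illustrative double-threshold detector."""
        rect = np.abs(emg)
        base = rect[: int(baseline_s * fs)]
        thresh = base.mean() + n_sd * base.std()
        min_len = int(min_dur_ms * 1e-3 * fs)
        run = 0
        for i, above in enumerate(rect > thresh):
            run = run + 1 if above else 0
            if run >= min_len:
                return i - min_len + 1
        return None

    # Toy usage: 1 s of quiet baseline followed by 1 s of activity.
    fs = 2000
    rng = np.random.default_rng(0)
    sig = np.concatenate([rng.normal(0, 1e-3, fs), rng.normal(0, 0.2, fs)])
    print("detected onset sample:", onset_index(sig, fs))
    ```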

  11. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  12. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  13. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  14. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  15. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  16. Methods, apparatus and system for notification of predictable memory failure

    DOEpatents

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
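
    The claimed method is essentially a compare-and-signal loop: estimate a failure probability from observed memory conditions, compute a threshold, and raise a notification when the estimate exceeds it. A schematic rendering follows; the condition-to-probability mapping and all numbers are invented placeholders, since the patent abstract does not specify them.

    ```python
    def memory_failure_probability(corrected_ecc_errors, temperature_c):
        """Placeholder mapping from monitored conditions to a failure probability;
        the weights here are invented for illustration only."""
        p = 1e-4 * corrected_ecc_errors + 5e-4 * max(0.0, temperature_c - 70.0)
        return min(p, 1.0)

    def check_memory(corrected_ecc_errors, temperature_c, threshold=0.05):
        """Generate a warning signal when the estimated probability exceeds the threshold."""
        p = memory_failure_probability(corrected_ecc_errors, temperature_c)
        if p > threshold:
            return f"WARNING: predicted memory failure (p = {p:.3f} > {threshold})"
        return f"OK (p = {p:.3f})"

    print(check_memory(corrected_ecc_errors=600, temperature_c=85))
    ```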

  17. New constraints on mechanisms of remotely triggered seismicity at Long Valley Caldera

    USGS Publications Warehouse

    Brodsky, E.E.; Prejean, S.G.

    2005-01-01

    Regional-scale triggering of local earthquakes in the crust by seismic waves from distant main shocks has now been robustly documented for over a decade. Some of the most thoroughly recorded examples of repeated triggering of a single site from multiple, large earthquakes are measured in geothermal fields of the western United States like Long Valley Caldera. As one of the few natural cases where the causality of an earthquake sequence is apparent, triggering provides fundamental constraints on the failure processes in earthquakes. We show here that the observed triggering by seismic waves is inconsistent with any mechanism that depends on cumulative shaking as measured by integrated energy density. We also present evidence for a frequency-dependent triggering threshold. On the basis of the seismic records of 12 regional and teleseismic events recorded at Long Valley Caldera, long-period waves (>30 s) are more effective at generating local seismicity than short-period waves of comparable amplitude. If the properties of the system are stationary over time, the failure threshold for long-period waves is ~0.05 cm/s vertical shaking. Assuming a phase velocity of 3.5 km/s and an elastic modulus of 3.5 x 10^10 Pa, the threshold in terms of stress is 5 kPa. The frequency dependence is due in part to the attenuation of the surface waves with depth. Fluid flow through a porous medium can produce the rest of the observed frequency dependence of the threshold. If the threshold is not stationary with time, pore pressures that are >99.5% of lithostatic and vary over time by a factor of 4 could explain the observations with no frequency dependence of the triggering threshold. Copyright 2005 by the American Geophysical Union.
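
    The 5 kPa figure quoted above follows from the plane-wave relation between dynamic stress, particle velocity, shear modulus, and phase velocity; the arithmetic below simply reproduces the abstract's numbers.

    ```latex
    % Dynamic stress from peak particle velocity \dot{u}, modulus \mu, and
    % phase velocity c (plane-wave approximation):
    \sigma \approx \frac{\mu\,\dot{u}}{c}
           = \frac{(3.5\times10^{10}\,\mathrm{Pa})(5\times10^{-4}\,\mathrm{m/s})}
                  {3.5\times10^{3}\,\mathrm{m/s}}
           = 5\times10^{3}\,\mathrm{Pa} = 5\,\mathrm{kPa}.
    ```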

  18. Entrepreneurial Failure as a Threshold Concept: The Effects of Student Experiences

    ERIC Educational Resources Information Center

    Bolinger, Alexander R.; Brown, Kory D.

    2015-01-01

    Some curricular elements are threshold concepts that involve "troublesome knowledge," not because they are difficult for students to comprehend per se, but because they are challenging for students to fully appreciate. In this article, we suggest that entrepreneurial failure is a threshold concept in entrepreneurship courses because…

  19. Methods for forewarning of critical condition changes in monitoring civil structures

    DOEpatents

    Abercrombie, Robert K.; Hively, Lee M.

    2013-04-02

    Sensor modules (12) including accelerometers (20) are placed on a physical structure (10), and tri-axial accelerometer data is converted to mechanical power (P) data (41), which is then processed to provide a forewarning (57) of a critical event concerning the physical structure (10). The forewarning is based on the number of occurrences of a composite measure of dissimilarity (C_i) exceeding a forewarning threshold over a defined sampling time, and a forewarning signal (58) is provided to a human observer through a visual, audible or tangible signal. A forewarning of a structural failure can also be provided based on the number of occurrences of (C_i) above a failure value threshold.

  20. The prehospital intravenous access assessment: a prospective study on intravenous access failure and access delay in prehospital emergency medicine.

    PubMed

    Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim

    2016-12-01

    Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and resulting delay in treatment and transport may have negative effects on the patient. Therefore, our study aims to perform a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables on peripheral venous cannulations in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under the curve (AUC>0.6) of their respective receiver operating characteristic curve. A total of 762 intravenous cannulations were enrolled. In all, 22% of punctures failed on the first attempt and 13% of punctures exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent model of four factors (failure model factors plus vein visibility with tourniquet) for the exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. Of the numerous factors subjectively perceived as possibly exerting influences on cannulation, only the universal - not exclusive to emergency care - factors of lighting, vein visibility and palpability proved to be valid predictors of cannulation failure and exceedance of a 2 min threshold.
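
    The study's models are ordinary multivariable logistic regressions over a few binary predictors, judged by the area under the ROC curve. The toy sketch below reproduces only that workflow on synthetic data; the predictors are assumed, and the reported AUCs of 0.76 and 0.80 are not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 762  # same sample size as the study; the data here are entirely synthetic

    # Three assumed binary predictors: vein not visible, vein not palpable, poor lighting.
    X = rng.integers(0, 2, size=(n, 3))
    # Synthetic outcome: cannulation failure more likely when predictors are present.
    logit = -2.0 + 0.9 * X[:, 0] + 1.1 * X[:, 1] + 0.7 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"apparent AUC on synthetic data: {auc:.2f}")
    ```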

  1. A Queueing Approach to Optimal Resource Replication in Wireless Sensor Networks

    DTIC Science & Technology

    2009-04-29

    ... replication strategies in wireless sensor networks. The model can be used to minimize either the total transmission rate of the network (an energy-centric approach) or to ensure the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly ...

  2. Relationship Between Unusual High-Temperature Fatigue Crack Growth Threshold Behavior in Superalloys and Sudden Failure Mode Transitions

    NASA Technical Reports Server (NTRS)

    Telesman, J.; Smith, T. M.; Gabb, T. P.; Ring, A. J.

    2017-01-01

    An investigation of the high-temperature cyclic fatigue crack growth (FCG) threshold behavior of two advanced nickel disk alloys was conducted. The focus of the study was the unusual crossover effect in the near-threshold region of this type of alloy, where conditions which produce higher crack growth rates in the Paris regime produce higher resistance to crack growth in the near-threshold regime. It was shown that this crossover effect is associated with a sudden change in the fatigue failure mode from a predominantly transgranular mode in the Paris regime to a fully intergranular mode in the threshold fatigue crack growth region. This type of sudden change in the fracture mechanisms has not been previously reported and is surprising considering that intergranular failure is typically associated with faster crack growth rates and not the slow FCG rates of the near-threshold regime. By characterizing this behavior as a function of test temperature, environment, and cyclic frequency, it was determined that both the crossover effect and the onset of intergranular failure are caused by environmentally driven mechanisms which have not yet been fully identified. A plausible explanation for the observed behavior is proposed.

  3. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point-process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
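
    For reference, the Brownian passage time density in the (μ, α) parameterization used by this model family is the inverse Gaussian form written below; this is standard background, not a new result of the paper.

    ```latex
    % Brownian passage time (inverse Gaussian) density with mean \mu and
    % aperiodicity (coefficient of variation) \alpha, for t > 0:
    f(t;\mu,\alpha) = \sqrt{\frac{\mu}{2\pi\alpha^{2}t^{3}}}
                      \exp\!\left[-\frac{(t-\mu)^{2}}{2\mu\alpha^{2}t}\right]
    ```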

  4. Cadmium (Tank) Electroplating Alternative

    DTIC Science & Technology

    2011-08-01

    ASTM F519 HE: 75% NFS, 200 hrs; HRE: 45% NFS, 150 hrs; threshold limit greater than or equal to LHE Cd (AMS 2417G). Test specimens; reporting: sustained/threshold load (%NFS), time to failure. HRE testing: Cd, Zn-Ni, IVD Al, LHE Cd. Re-embrittlement test fluids: DI... Hydrogen embrittlement/HRE, ASTM F519 A5, Type 1.a.1, brush plating. ASETS Defense Focused Workshop (2011), Luzmarie G. Santiago, Materials Engineer, Naval Air...

  5. Landslide early warning based on failure forecast models: the example of the Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-07-01

    We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time on different temporal windows and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here, we present the result obtained for the La Saxe rockslide, a large unstable slope located in Aosta Valley, northern Italy. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of the landslide emergency.
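
    The inverse velocity method extrapolates 1/velocity to zero to estimate the time of failure; repeating the fit over different temporal windows then yields the confidence intervals mentioned above. A compact sketch with entirely synthetic displacement data is given below.

    ```python
    import numpy as np

    def inverse_velocity_forecast(t, disp):
        """Fit a line to 1/velocity versus time and return its zero crossing,
        i.e. the forecast time of failure (Fukuzono-style inverse velocity method)."""
        v = np.gradient(disp, t)
        slope, intercept = np.polyfit(t, 1.0 / v, 1)
        return -intercept / slope

    # Synthetic accelerating displacement with true failure time t_f = 100 (arbitrary units):
    # disp = -ln(t_f - t)  =>  velocity = 1/(t_f - t)  =>  1/v = t_f - t (exactly linear).
    t_f = 100.0
    t = np.linspace(0.0, 90.0, 200)
    disp = -np.log(t_f - t)
    print(f"forecast failure time: {inverse_velocity_forecast(t, disp):.1f}")
    ```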

  6. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

    We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.

  7. Landslide early warning based on failure forecast models: the example of Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-02-01

    We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements on different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method, and show an example of application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on the herein presented case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of the landslide emergency.

  8. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
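
    For orientation, the basic mixture cure structure that models of this kind build on is written below; the paper's flexible model differs in its details (seminonparametric errors, dependent censoring, and classification as cured beyond a known threshold), so this is background only.

    ```latex
    % Basic mixture cure population survival function (background illustration):
    % \pi is the cure fraction and S_u is the survival function of the uncured.
    S_{\mathrm{pop}}(t) = \pi + (1-\pi)\,S_{u}(t),
    \qquad S_{u}(t) = P(T > t \mid \text{uncured})
    ```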

  9. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    USGS Publications Warehouse

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  10. A comparative analysis of the effectiveness of active versus passive atrial lead fixation in Chinese patients with cardiac implantable electrical devices: a long term, retrospective, observational, single-center study.

    PubMed

    Hao, Yinglu; Li, Yanping; Liao, Derong; Yang, Ling; Liu, Fangyan

    2017-03-01

    Data comparing active atrial lead fixation with passive atrial lead fixation in Chinese patients with cardiovascular implantable electronic devices (CIEDs) for atrial pacing is limited. Our study evaluated the effectiveness of active fixation versus passive fixation of atrial leads by observing the lead performance parameters. This retrospective, long-term, single-center study included a cohort of Chinese patients who underwent CIED implantation at the Department of Cardiology of People's Hospital of Yuxi City, China, from 1 March 2010 to 1 March 2015. Efficacy was determined by comparing implantation time, threshold values, incidence of lead dislocation/failure, and lead-related complications between the two groups. Of the 1217 patients, active and passive atrial lead fixation were performed in 530 (mean age, 69.37 ± 11.44 years) and 497 (mean age, 68.33 ± 10.96 years) patients, respectively. The active fixation group reported significantly lower mean atrial implantation times (P = .0001) and threshold values (P = .044) compared with the passive atrial lead fixation group. In addition, threshold values in the active atrial lead fixation group were stable throughout the observation period. No instances of myocardial perforation, cardiac tamponade, implantation failure, or electrode dislocation/re-fixation were reported in the active atrial lead fixation group. A favorable decrease in patient comfort parameters such as bed rest time (P = .027) and duration of hospital stay (P = .038) were also observed in the active lead fixation group. Active atrial lead fixation demonstrated greater stability, steady long-term thresholds and minimal lead-related complications compared to passive lead fixation in Chinese patients with CIEDs.

  11. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    NASA Astrophysics Data System (ADS)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

    This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system experiences system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit also undergoes a two-level maintenance: one action is initiated when a unit exceeds its preventive maintenance (PM) threshold, and the other is performed simultaneously whenever any other unit goes in for maintenance. The units experience both periodic inspections and aperiodic inspections prompted by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arise because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.

  12. Development, characterization, and modeling of ballistic impact on composite laminates under compressive pre-stress

    NASA Astrophysics Data System (ADS)

    Kerr-Anderson, Eric

    Structural composite laminates were ballistically impacted while under in-plane compressive pre-stress. Residual properties, damage characterization, and energy absorption were compared to determine synergistic effects of in-plane compressive pre-stress and impact velocity. A fixture was developed to apply in-plane compressive loads up to 30 tons to structural composites during an impact event using a single-stage light-gas gun. Observed failure modes included typical conical delamination, the development of an impact initiated shear crack (IISC), and the shear failure of a pre-stressed composite due to impact. It was observed that the compressive failure threshold quadratically decreased in relation to the impact velocity up to velocities that caused partial penetration. For all laminates impacted at velocities causing partial or full penetration up to 350 m/s, the failure threshold was consistent and used as an experimental normalization. Samples impacted below 65% of the failure threshold exhibited no significant change in damage morphology or residual properties when compared to typical conical delamination. Samples impacted above 65% of the failure threshold exhibited additional damage in the form of a shear crack extending perpendicular to the applied load from the point of impact. The presence of an IISC reduced the residual properties and even caused failure upon impact at extreme combinations. Four failure envelopes have been established as: transient failure, steady state failure, impact initiated shear crack, and conical damage. Boundaries and empirically based equations for residual compressive strength have been developed for each envelope with relation to two E-glass/vinyl ester laminate systems. Many aspects of pre-stressed impact have been individually examined, but there have been no comprehensive examinations of pre-stressed impact. This research has resulted in the exploration and characterization of compressively pre-stressed damage for impact velocities resulting in reflection, partial penetration, and penetration at pre-stress levels resulting in conical damage, shear cracking, and failure.

  13. Low-Energy Defibrillation Failure Correction is Possible Through Nonlinear Analysis of Spatiotemporal Arrhythmia Data

    NASA Astrophysics Data System (ADS)

    Simonotto, Jennifer; Furman, Michael; Beaver, Thomas; Spano, Mark; Kavanagh, Katherine; Iden, Jason; Hu, Gang; Ditto, William

    2004-03-01

    Explanted porcine hearts were Langendorff-perfused, administered a voltage-sensitive fluorescent dye (Di-4-ANEPPS), and illuminated with an Nd:YAG laser (532 nm); the change in fluorescence resulting from electrical activity on the heart surface was recorded with an 80 x 80 pixel CCD camera at 1000 frames per second. The heart was put into fibrillation with rapid ventricular pacing and shocks were administered close to the defibrillation threshold. Defibrillation failure data were analyzed using synchronization, space-time volume plots and recurrence quantification. Preliminary spatiotemporal synchronization results reveal a short window of time (~1 second) after defibrillation failure in which the disordered electrical activity becomes ordered; this ordered period occurs 4-5 seconds after the defibrillation shock. Recurrence analysis of a single time series confirmed these results, thus opening the avenue for dynamic defibrillators that can detect an optimal window for cardioversion.

  14. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  15. Source accuracy data reveal the thresholded nature of human episodic memory.

    PubMed

    Harlow, Iain M; Donaldson, David I

    2013-04-01

    Episodic recollection supports conscious retrieval of past events. It is unknown why recollected memories are often vivid, but at other times we struggle to remember. Such experiences might reflect a recollection threshold: Either the threshold is exceeded and information is retrieved, or recollection fails completely. Alternatively, retrieval failure could reflect weak memory: Recollection could behave as a continuous signal, always yielding some variable degree of information. Here we reconcile these views, using a novel source memory task that measures retrieval accuracy directly. We show that recollection is thresholded, such that retrieval sometimes simply fails. Our technique clarifies a fundamental property of memory and allows responses to be accurately measured, without recourse to subjective introspection. These findings raise new questions about how successful retrieval is determined and why it declines with age and disease.

  16. Insights from internet-based remote intrathoracic impedance monitoring as part of a heart failure disease management program.

    PubMed

    Mullens, Wilfried; Oliveira, Leonardo P J; Verga, Tanya; Wilkoff, Bruce L; Tang, Wai Hong Wilson

    2010-01-01

    Changes in intrathoracic impedance (Z) leading to crossing of a derived fluid index (FI) threshold have been associated with heart failure (HF) hospitalization. The authors developed a remote monitoring program as part of HF disease management and prospectively examined the feasibility and resource utilization of monitoring individuals with an implanted device capable of measuring Z. An HF nurse analyzed all transmitted data daily, as they were routinely uploaded as part of quarterly remote device monitoring, and called the patient if the FI crossed the threshold (arbitrarily defined at 60 Ω) to identify clinically relevant events (CREs) that occurred during this period (eg, worsening dyspnea or increase in edema or weight). A total of 400 uploads were completed during the 4-month study period. During this period, 34 patients (18%) had an FI threshold crossing, averaging 0.52 FI threshold crossings per patient-year. Thirty-two of 34 patients contacted by telephone (94%) with FI threshold crossing had evidence of CREs during this period. However, only 6 (18%) had HF hospitalizations, 19 (56%) had reported changes in HF therapy, and 13 (38%) reported drug and/or dietary plan nonadherence. The average data analysis time required was 30 min daily when focusing on those with FI threshold crossing, averaging 8 uploads for review per working day and 5 telephone follow-ups per week. Our pilot observations suggested that Internet-based remote monitoring of Z trends from existing device interrogation uploads is feasible as part of a daily routine of HF disease management. 2010 Wiley Periodicals, Inc.

  17. Asymmetries and Visual Field Summaries as Predictors of Glaucoma in the Ocular Hypertension Treatment Study

    PubMed Central

    Levine, Richard A.; Demirel, Shaban; Fan, Juanjuan; Keltner, John L.; Johnson, Chris A.; Kass, Michael A.

    2007-01-01

    Purpose To evaluate whether baseline visual field data and asymmetries between eyes predict the onset of primary open-angle glaucoma (POAG) in Ocular Hypertension Treatment Study (OHTS) participants. Methods A new index, mean prognosis (MP), was designed for optimal combination of visual field thresholds, to discriminate between eyes that developed POAG from eyes that did not. Baseline intraocular pressure (IOP) in fellow eyes was used to construct measures of IOP asymmetry. Age-adjusted baseline thresholds were used to develop indicators of visual field asymmetry and summary measures of visual field defects. Marginal multivariate failure time models were constructed that relate the new index MP, IOP asymmetry, and visual field asymmetry to POAG onset for OHTS participants. Results The marginal multivariate failure time analysis showed that the MP index is significantly related to POAG onset (P < 0.0001) and appears to be a more highly significant predictor of POAG onset than either mean deviation (MD; P = 0.17) or pattern standard deviation (PSD; P = 0.046). A 1-mm Hg increase in IOP asymmetry between fellow eyes is associated with a 17% increase in risk for development of POAG. When threshold asymmetry between eyes existed, the eye with lower thresholds was at a 37% greater risk of development of POAG, and this feature was more predictive of POAG onset than the visual field index MD, though not as strong a predictor as PSD. Conclusions The MP index, IOP asymmetry, and binocular test point asymmetry can assist in clinical evaluation of eyes at risk of development of POAG. PMID:16936102

  18. Statistical study of the reliability of oxide-defined stripe cw lasers of (AlGa)As

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ettenberg, M.

    1979-03-01

    In this report, we describe a statistical study of the reliability of oxide-defined stripe-contact cw injection lasers of (AlGa)As. These devices have one facet coated with Al2O3 and one facet coated with an Al2O3/Si dichroic reflector; the lasers are optimized for cw low-threshold currents at room temperature, with values typically about 50 mA. Lifetests were carried out at 70 °C ambient, in the cw mode of operation with about 5 mW output. Previous lifetests showed that the degradation rate followed a 0.95-eV activation energy, so the 70 °C environment provides a degradation acceleration factor of 190 over that at room temperature. We have found that the device failures follow a log-normal distribution, characterized by a mean time before failure of 4200 h and a standard deviation of 1.3. This corresponds to a mean time to failure (MTTF) of 10^6 h at room temperature. Failure is defined here as the inability of the device to emit 1 mW of stimulated cw output at 70 °C, and assumes that optical feedback will be employed to adjust the laser current during operation. If a constant-current drive is envisioned, the failures for a 3-dB drop in light output also follow a log-normal distribution with a similar slope (standard deviation = 1.1) and a MTTF of 2000 h at 70 °C (500,000 h at room temperature). The failures were found to be mainly due to bulk gradual degradation and not facet or contact failure. Careful study of lasers before and after lifetest showed a significant increase in contact thermal resistance. However, this increase accounts for only a small portion of the nearly 70% increase in room-temperature cw threshold after failure at 70 °C. After failure at 70 °C, we also noted a degradation in the near-field and associated far-field pattern of the laser.

  19. GaN HEMTs with p-GaN gate: field- and time-dependent degradation

    NASA Astrophysics Data System (ADS)

    Meneghesso, G.; Meneghini, M.; Rossetto, I.; Canato, E.; Bartholomeus, J.; De Santi, C.; Trivellin, N.; Zanoni, E.

    2017-02-01

    GaN-HEMTs with p-GaN gate have recently been demonstrated to be excellent normally-off devices for application in power conversion systems, thanks to the high and robust threshold voltage (VTH > 1 V), the high breakdown voltage, and the low dynamic Ron increase. For this reason, studying the stability and reliability of these devices under high stress conditions is of high importance. This paper reports on our most recent results on the field- and time-dependent degradation of GaN-HEMTs with p-GaN gate submitted to stress with positive gate bias. Based on combined step-stress experiments, constant voltage stress and electroluminescence testing, we demonstrated that: (i) when submitted to high/positive gate stress, the transistors may show a negative threshold voltage shift, which is ascribed to the injection of holes from the gate metal towards the p-GaN/AlGaN interface; (ii) in a step-stress experiment, the analyzed commercial devices fail at gate voltages higher than 9-10 V, due to the extremely high electric field over the p-GaN/AlGaN stack; (iii) constant voltage stress tests indicate that the failure is also time-dependent and Weibull distributed. The several processes that can explain the time-dependent failure are discussed in the following.

  20. Applying the Seattle Heart Failure Model in the Office Setting in the Era of Electronic Medical Records.

    PubMed

    Williams, Brent A; Agarwal, Shikhar

    2018-02-23

    Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 within 3 months and C<0.65 beyond 12 months from baseline. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.

  1. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, because pipelines transfer corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and they can account for defects newly generated since the last inspection. Results of this part of the study show that both depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering prevailing uncertainties, where three failure modes are considered: small leak, large leak, and rupture. Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is treated as a series system of the detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine to which growth-model parameters the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be included in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when the failure probability from any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study investigates the impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but lower failure cost, and that the repair cost is less significant compared with the inspection and failure costs.
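As a rough illustration of how a power-law growth model with parameter uncertainty feeds into a time-dependent failure probability, the sketch below draws growth parameters from assumed distributions (all values hypothetical, not the posterior from the paper's Bayesian/MCMC updating) and estimates the probability that a defect depth exceeds a fraction of the wall thickness.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                      # Monte Carlo samples
WALL = 10.0                      # wall thickness, mm (hypothetical)
LEAK_DEPTH = 0.8 * WALL          # small-leak criterion: 80% wall loss

# Power-law growth of maximum defect depth: d(t) = k * (t - t0)**alpha
t0 = 5.0                                                  # defect initiation year (assumed)
k = rng.lognormal(mean=np.log(0.4), sigma=0.3, size=N)    # mm/yr^alpha (assumed)
alpha = rng.normal(loc=0.9, scale=0.1, size=N)            # growth exponent (assumed)

def failure_probability(t):
    """P(depth > 80% of wall) at elapsed time t, by simple Monte Carlo."""
    depth = k * np.maximum(t - t0, 0.0) ** alpha
    return np.mean(depth > LEAK_DEPTH)

for t in (10, 20, 30, 40):
    print(f"t = {t:2d} yr  P(small leak) ~ {failure_probability(t):.4f}")
```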

  2. Risk management.

    PubMed

    Chambers, David W

    2010-01-01

    Every plan contains risk. To proceed without planning some means of managing that risk is to court failure. The basic logic of risk is explained. It consists in identifying a threshold where some corrective action is necessary, the probability of exceeding that threshold, and the attendant cost should the undesired outcome occur. This is the probable cost of failure. Various risk categories in dentistry are identified, including lack of liquidity; poor quality; equipment or procedure failures; employee slips; competitive environments; new regulations; unreliable suppliers, partners, and patients; and threats to one's reputation. It is prudent to make investments in risk management to the extent that the cost of managing the risk is less than the probable loss due to risk failure and when risk management strategies can be matched to type of risk. Four risk management strategies are discussed: insurance, reducing the probability of failure, reducing the costs of failure, and learning. A risk management accounting of the financial meltdown of October 2008 is provided.
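The "probable cost of failure" logic in the abstract reduces to expected-value arithmetic: invest in risk management only when the cost of managing the risk is below the expected loss it removes. A minimal sketch with made-up numbers:

```python
# Probable cost of failure = probability of exceeding the threshold x cost if it occurs.
# All figures below are hypothetical, for illustration only.
p_failure = 0.10          # probability the undesired outcome occurs in the period
cost_of_failure = 50_000  # loss if it does occur

expected_loss = p_failure * cost_of_failure              # = 5,000

# A mitigation (e.g., insurance or a process change) that halves the probability
mitigation_cost = 2_000
p_after = 0.05
expected_loss_after = p_after * cost_of_failure          # = 2,500

worthwhile = mitigation_cost < (expected_loss - expected_loss_after)
print(f"expected loss before: {expected_loss}, after: {expected_loss_after}")
print(f"mitigation worthwhile? {worthwhile}")            # True: 2,000 < 2,500
```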

  3. Complexity in congestive heart failure: A time-frequency approach

    NASA Astrophysics Data System (ADS)

    Banerjee, Santo; Palit, Sanjay K.; Mukherjee, Sayan; Ariffin, MRK; Rondoni, Lamberto

    2016-03-01

    Reconstruction of phase space is an effective method to quantify the dynamics of a signal or a time series. Various phase space reconstruction techniques have been investigated. However, there are some issues on the optimal reconstructions and the best possible choice of the reconstruction parameters. This research introduces the idea of gradient cross recurrence (GCR) and mean gradient cross recurrence density which shows that reconstructions in time frequency domain preserve more information about the dynamics than the optimal reconstructions in time domain. This analysis is further extended to ECG signals of normal and congestive heart failure patients. By using another newly introduced measure—gradient cross recurrence period density entropy, two classes of aforesaid ECG signals can be classified with a proper threshold. This analysis can be applied to quantifying and distinguishing biomedical and other nonlinear signals.

  4. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into the final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid-search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, both crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717

  5. Redundancy management of inertial systems.

    NASA Technical Reports Server (NTRS)

    Mckern, R. A.; Musoff, H.

    1973-01-01

    The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.

  6. Increasing time to postoperative stereotactic radiation therapy for patients with resected brain metastases: investigating clinical outcomes and identifying predictors associated with time to initiation.

    PubMed

    Yusuf, Mehran B; Amsbaugh, Mark J; Burton, Eric; Nelson, Megan; Williams, Brian; Koutourousiou, Maria; Nauta, Haring; Woo, Shiao

    2018-02-01

    We sought to determine the impact of time to initiation (TTI) of post-operative radiosurgery on clinical outcomes for patients with resected brain metastases and to identify predictors associated with TTI. All patients with resected brain metastases treated with postoperative SRS or fractionated stereotactic radiation therapy (fSRT) from 2012 to 2016 at a single institution were reviewed. TTI was defined as the interval from resection to first day of radiosurgery. Receiver operating characteristic (ROC) curves were used to identify an optimal threshold for TTI with respect to local failure (LF). Survival outcomes were estimated using the Kaplan-Meier method and analyzed using the log-rank test and Cox proportional hazards models. Logistic regression models were used to identify factors associated with ROC-determined TTI covariates. A total of 79 resected lesions from 73 patients were evaluated. An ROC curve of LF and TTI identified an optimal threshold for TTI of 30.5 days, with an area under the curve of 0.637. TTI > 30 days was associated with an increased hazard of LF (HR 4.525, CI 1.239-16.527) but was not significantly associated with survival (HR 1.002, CI 0.547-1.823) or distant brain failure (DBF, HR 1.943, CI 0.989-3.816). Fifteen patients (20.5%) required post-operative inpatient rehabilitation. Post-operative rehabilitation was associated with TTI > 30 days (OR 1.48, CI 1.142-1.922). In our study of resected brain metastases, longer time to initiation of post-operative radiosurgery was associated with increased local failure. Ideally, post-op SRS should be initiated within 30 days of resection if feasible.
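A ROC-based cutpoint like the 30.5-day TTI threshold above can be reproduced in outline with standard tools; the sketch below uses synthetic data and the Youden index as the selection rule (the abstract does not state which criterion was used, so treat this as one reasonable choice rather than the authors' method).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic example: time-to-initiation (days) and local-failure indicator.
# Patients with longer TTI are given a somewhat higher failure probability.
tti = rng.uniform(5, 60, size=300)
p_lf = 0.1 + 0.4 * (tti > 30)
local_failure = rng.binomial(1, p_lf)

fpr, tpr, thresholds = roc_curve(local_failure, tti)
auc = roc_auc_score(local_failure, tti)

# Youden index J = sensitivity + specificity - 1, maximized over thresholds
youden = tpr - fpr
best = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.3f}, optimal TTI cutoff ~ {best:.1f} days")
```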

  7. The diagnostic accuracy of the natriuretic peptides in heart failure: systematic review and diagnostic meta-analysis in the acute care setting.

    PubMed

    Roberts, Emmert; Ludman, Andrew J; Dworzynski, Katharina; Al-Mohammad, Abdallah; Cowie, Martin R; McMurray, John J V; Mant, Jonathan

    2015-03-04

    To determine and compare the diagnostic accuracy of serum natriuretic peptide levels (B type natriuretic peptide, N terminal probrain natriuretic peptide (NTproBNP), and mid-regional proatrial natriuretic peptide (MRproANP)) in people presenting with acute heart failure to acute care settings using thresholds recommended in the 2012 European Society of Cardiology guidelines for heart failure. Systematic review and diagnostic meta-analysis. Medline, Embase, Cochrane central register of controlled trials, Cochrane database of systematic reviews, database of abstracts of reviews of effects, NHS economic evaluation database, and Health Technology Assessment up to 28 January 2014, using combinations of subject headings and terms relating to heart failure and natriuretic peptides. Eligible studies evaluated one or more natriuretic peptides (B type natriuretic peptide, NTproBNP, or MRproANP) in the diagnosis of acute heart failure against an acceptable reference standard in consecutive or randomly selected adults in an acute care setting. Studies were excluded if they did not present sufficient data to extract or calculate true positives, false positives, false negatives, and true negatives, or report age independent natriuretic peptide thresholds. Studies not available in English were also excluded. 37 unique study cohorts described in 42 study reports were included, with a total of 48 test evaluations reporting 15 263 test results. At the lower recommended thresholds of 100 ng/L for B type natriuretic peptide and 300 ng/L for NTproBNP, the natriuretic peptides have sensitivities of 0.95 (95% confidence interval 0.93 to 0.96) and 0.99 (0.97 to 1.00) and negative predictive values of 0.94 (0.90 to 0.96) and 0.98 (0.89 to 1.0), respectively, for a diagnosis of acute heart failure. At the lower recommended threshold of 120 pmol/L, MRproANP has a sensitivity ranging from 0.95 (range 0.90-0.98) to 0.97 (0.95-0.98) and a negative predictive value ranging from 0.90 (0.80-0.96) to 0.97 (0.96-0.98). At higher thresholds the sensitivity declined progressively and specificity remained variable across the range of values. There was no statistically significant difference in diagnostic accuracy between plasma B type natriuretic peptide and NTproBNP. At the rule-out thresholds recommended in the 2012 European Society of Cardiology guidelines for heart failure, plasma B type natriuretic peptide, NTproBNP, and MRproANP have excellent ability to exclude acute heart failure. Specificity is variable, and so imaging to confirm a diagnosis of heart failure is required. There is no statistical difference between the diagnostic accuracy of plasma B type natriuretic peptide and NTproBNP. Introduction of natriuretic peptide measurement in the investigation of patients with suspected acute heart failure has the potential to allow rapid and accurate exclusion of the diagnosis. © Roberts et al 2015.
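The reported accuracy measures follow directly from the 2x2 counts at a given rule-out threshold. The helper below shows the arithmetic on an invented contingency table; the numbers are not the meta-analysis data.

```python
def rule_out_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and negative predictive value from 2x2 counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Hypothetical counts for a BNP threshold of 100 ng/L (illustration only)
sens, spec, npv = rule_out_metrics(tp=475, fp=230, fn=25, tn=270)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  NPV={npv:.2f}")
```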

  8. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt ( www.divat.fr ). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.

  9. Stress/strain changes and triggered seismicity following the MW7.3 Landers, California, earthquake

    USGS Publications Warehouse

    Gomberg, J.

    1996-01-01

Calculations of dynamic stresses and strains, constrained by broadband seismograms, are used to investigate their role in generating the remotely triggered seismicity that followed the June 28, 1992, MW7.3 Landers, California earthquake. I compare straingrams and dynamic Coulomb failure functions calculated for the Landers earthquake at sites that did experience triggered seismicity with those at sites that did not. Bounds on triggering thresholds are obtained from analysis of dynamic strain spectra calculated for the Landers and MW 6.1 Joshua Tree, California, earthquakes at various sites, combined with results of static strain investigations by others. I interpret three principal results of this study with those of a companion study by Gomberg and Davis [this issue]. First, the dynamic elastic stress changes themselves cannot explain the spatial distribution of triggered seismicity, particularly the lack of triggered activity along the San Andreas fault system. In addition to the requirement to exceed a Coulomb failure stress level, this result implies the need to invoke and satisfy the requirements of appropriate slip instability theory. Second, results of this study are consistent with the existence of frequency- or rate-dependent stress/strain triggering thresholds, inferred from the companion study and interpreted in terms of earthquake initiation involving a competition of processes, one promoting failure and the other inhibiting it. Such competition is also part of relevant instability theories. Third, the triggering threshold must vary from site to site, suggesting that the potential for triggering strongly depends on site characteristics and response. The lack of triggering along the San Andreas fault system may be correlated with the advanced maturity of its fault gouge zone; the strains from the Landers earthquake were either insufficient to exceed its larger critical slip distance or some other critical failure parameter; or the faults failed stably as aseismic creep events. Variations in the triggering threshold at sites of triggered seismicity may be attributed to variations in gouge zone development and properties. Finally, these interpretations provide ready explanations for the time delays between the Landers earthquake and the triggered events.

  10. PBX 9502 air-gap tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Novak, Alan M.; Foley, Timothy J.

    A small number of simple air-gap tests were performed on 1-inch diameter PBX 9502 cylinders to determine an approximate threshold for detonation failure. The primary diagnostics were streak imaging and dent measurements in a steel witness plate. Relight was found to occur, with negligible excess transit time, for air gaps up to 1 mm. Relight did not occur with a 3-mm air gap.

  11. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolution of AE parameters. One major drawback of most parameters, such as count and rise time, is that they are strongly dependent on the threshold and other settings employed in the AE data acquisition system. This may hinder the correct reflection of the original waveform generated from AE sources and consequently make it difficult to accurately identify critical damage and early failure. In this investigation, a new qualitative AE parameter based on Shannon's entropy, i.e. AE entropy, is proposed for damage monitoring. Since it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on CrMoV steel and a three-point bending test on a ductile material are conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to AE amplitude, is more effective in discriminating the different damage stages and identifying the critical damage.

  12. Drought will not leave your glass empty: Low risk of hydraulic failure revealed by long-term drought observations in world's top wine regions.

    PubMed

    Charrier, Guillaume; Delzon, Sylvain; Domec, Jean-Christophe; Zhang, Li; Delmas, Chloe E L; Merlin, Isabelle; Corso, Deborah; King, Andrew; Ojeda, Hernan; Ollat, Nathalie; Prieto, Jorge A; Scholach, Thibaut; Skinner, Paul; van Leeuwen, Cornelis; Gambetta, Gregory A

    2018-01-01

    Grapevines are crops of global economic importance that will face increasing drought stress because many varieties are described as highly sensitive to hydraulic failure as frequency and intensity of summer drought increase. We developed and used novel approaches to define water stress thresholds for preventing hydraulic failure, which were compared to the drought stress experienced over a decade in two of the world's top wine regions, Napa and Bordeaux. We identified the physiological thresholds for drought-induced mortality in stems and leaves and found small intervarietal differences. Long-term observations in Napa and Bordeaux revealed that grapevines never reach their lethal water-potential thresholds under seasonal droughts, owing to a vulnerability segmentation promoting petiole embolism and leaf mortality. Our findings will aid farmers in reducing water use without risking grapevine hydraulic integrity.

  13. Drought will not leave your glass empty: Low risk of hydraulic failure revealed by long-term drought observations in world’s top wine regions

    PubMed Central

    Charrier, Guillaume; Delzon, Sylvain; Domec, Jean-Christophe; Zhang, Li; Delmas, Chloe E. L.; Merlin, Isabelle; Corso, Deborah; King, Andrew; Ojeda, Hernan; Ollat, Nathalie; Prieto, Jorge A.; Scholach, Thibaut; Skinner, Paul; van Leeuwen, Cornelis; Gambetta, Gregory A.

    2018-01-01

    Grapevines are crops of global economic importance that will face increasing drought stress because many varieties are described as highly sensitive to hydraulic failure as frequency and intensity of summer drought increase. We developed and used novel approaches to define water stress thresholds for preventing hydraulic failure, which were compared to the drought stress experienced over a decade in two of the world’s top wine regions, Napa and Bordeaux. We identified the physiological thresholds for drought-induced mortality in stems and leaves and found small intervarietal differences. Long-term observations in Napa and Bordeaux revealed that grapevines never reach their lethal water-potential thresholds under seasonal droughts, owing to a vulnerability segmentation promoting petiole embolism and leaf mortality. Our findings will aid farmers in reducing water use without risking grapevine hydraulic integrity. PMID:29404405

  14. A threshold-free summary index of prediction accuracy for censored time to event data.

    PubMed

    Yuan, Yan; Zhou, Qian M; Li, Bingying; Cai, Hengrui; Chow, Eric J; Armstrong, Gregory T

    2018-05-10

Prediction performance of a risk scoring system needs to be carefully assessed before its adoption in clinical practice. Clinical preventive care often uses risk scores to screen asymptomatic populations. The primary clinical interest is to predict the risk of having an event by a prespecified future time t0. Accuracy measures such as positive predictive values have been recommended for evaluating predictive performance. However, for commonly used continuous or ordinal risk score systems, these measures require a subjective cutoff threshold value that dichotomizes the risk scores. The need for a cutoff value has created barriers for practitioners and researchers. In this paper, we propose a threshold-free summary index of positive predictive values that accommodates time-dependent event status and competing risks. We develop a nonparametric estimator and provide an inference procedure for comparing this summary measure between 2 risk scores for censored time to event data. We conduct a simulation study to examine the finite-sample performance of the proposed estimation and inference procedures. Lastly, we illustrate the use of this measure on a real data example, comparing 2 risk score systems for predicting heart failure in childhood cancer survivors. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Drug Concentration Thresholds Predictive of Therapy Failure and Death in Children With Tuberculosis: Bread Crumb Trails in Random Forests

    PubMed Central

    Swaminathan, Soumya; Pasipanodya, Jotam G.; Ramachandran, Geetha; Hemanth Kumar, A. K.; Srivastava, Shashikant; Deshpande, Devyani; Nuermberger, Eric; Gumbo, Tawanda

    2016-01-01

    Background. The role of drug concentrations in clinical outcomes in children with tuberculosis is unclear. Target concentrations for dose optimization are unknown. Methods. Plasma drug concentrations measured in Indian children with tuberculosis were modeled using compartmental pharmacokinetic analyses. The children were followed until end of therapy to ascertain therapy failure or death. An ensemble of artificial intelligence algorithms, including random forests, was used to identify predictors of clinical outcome from among 30 clinical, laboratory, and pharmacokinetic variables. Results. Among the 143 children with known outcomes, there was high between-child variability of isoniazid, rifampin, and pyrazinamide concentrations: 110 (77%) completed therapy, 24 (17%) failed therapy, and 9 (6%) died. The main predictors of therapy failure or death were a pyrazinamide peak concentration <38.10 mg/L and rifampin peak concentration <3.01 mg/L. The relative risk of these poor outcomes below these peak concentration thresholds was 3.64 (95% confidence interval [CI], 2.28–5.83). Isoniazid had concentration-dependent antagonism with rifampin and pyrazinamide, with an adjusted odds ratio for therapy failure of 3.00 (95% CI, 2.08–4.33) in antagonism concentration range. In regard to death alone as an outcome, the same drug concentrations, plus z scores (indicators of malnutrition), and age <3 years, were highly ranked predictors. In children <3 years old, isoniazid 0- to 24-hour area under the concentration-time curve <11.95 mg/L × hour and/or rifampin peak <3.10 mg/L were the best predictors of therapy failure, with relative risk of 3.43 (95% CI, .99–11.82). Conclusions. We have identified new antibiotic target concentrations, which are potential biomarkers associated with treatment failure and death in children with tuberculosis. PMID:27742636

  16. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.

  17. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

An analysis based on the L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fit of the rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using the percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most of the analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of the goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
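The "failure-to-reject" idea can be sketched as: for each candidate threshold, fit a generalized Pareto distribution to the excesses and test the fit, keeping the lowest threshold that is not rejected. The sketch below uses a Kolmogorov-Smirnov statistic with a Monte Carlo p-value instead of the Anderson-Darling percentage points of Choulakian and Stephens, so it illustrates the workflow rather than reproducing the paper's procedure; rounded-off data would distort the simulated null distribution exactly as the abstract describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ks_pvalue_mc(excesses, n_sim=500):
    """Fit a GPD to threshold excesses and return a Monte Carlo KS p-value."""
    c, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
    d_obs = stats.kstest(excesses, "genpareto", args=(c, 0.0, scale)).statistic
    d_sim = []
    for _ in range(n_sim):
        sample = stats.genpareto.rvs(c, loc=0.0, scale=scale,
                                     size=len(excesses), random_state=rng)
        c_s, _, sc_s = stats.genpareto.fit(sample, floc=0.0)
        d_sim.append(stats.kstest(sample, "genpareto",
                                  args=(c_s, 0.0, sc_s)).statistic)
    return np.mean(np.array(d_sim) >= d_obs)

# Synthetic daily rainfall-like data (GPD tail, parameters invented)
rain = stats.genpareto.rvs(0.1, scale=8.0, size=3000, random_state=rng)

for u in (1.0, 5.0, 10.0, 20.0):          # candidate left-censoring thresholds
    exc = rain[rain > u] - u
    print(f"u = {u:5.1f}  n = {len(exc):4d}  p ~ {ks_pvalue_mc(exc):.2f}")
```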

  18. Robustness and Vulnerability of Networks with Dynamical Dependency Groups.

    PubMed

    Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi

    2016-11-28

    The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.

  19. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
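A compartment model of this kind can be captured in a few lines. The sketch below integrates a simplified hive-bee/forager system in the spirit of the abstract (the eclosion and recruitment terms and all parameter values are plausible stand-ins, not necessarily the published equations) and shows stable regulation below a forager death-rate threshold and collapse above it.

```python
import numpy as np
from scipy.integrate import solve_ivp

def colony(t, y, m):
    """Simplified hive-bee (H) / forager (F) dynamics.

    L: maximum eclosion rate; w: brood-rearing saturation constant;
    alpha, sigma: recruitment terms (social inhibition by foragers);
    m: forager death rate. All parameter values are illustrative.
    """
    H, F = np.maximum(y, 0.0)
    N = H + F
    L, w, alpha, sigma = 2000.0, 27000.0, 0.25, 0.75
    eclosion = L * N / (w + N)
    recruitment = H * max(alpha - sigma * F / N, 0.0) if N > 0 else 0.0
    return [eclosion - recruitment, recruitment - m * F]

for m in (0.15, 0.40):   # low vs. high forager death rate (per day)
    sol = solve_ivp(colony, (0, 300), [15000.0, 5000.0], args=(m,), max_step=1.0)
    print(f"m = {m:.2f}: colony size after 300 days ~ {sol.y.sum(axis=0)[-1]:.0f}")
```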

  20. Measures of Situation Awareness: An Experimental Evaluation

    DTIC Science & Technology

    1991-10-01

occurrence from non-occurrence of the target event, referred to as sensitivity (Macmillan and Creelman, 1990). Because sensitivity declines if pilots are...Pollack and Norman, 1964; see also Craig, 1979; Macmillan and Creelman, 1990). Finally, avoidance failures were measured simply as the number of times...Wesley. Macmillan, N. A., & Creelman, C. D. (1990). Response bias: Characteristics of detection theory, threshold theory, and "non-parametric" indexes

  1. A review of the physics and response models for burnout of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Orvis, W. J.; Khanaka, G. H.; Yee, J. H.

    1984-12-01

    Physical mechanisms that cause semiconductor devices to fail from electrical overstress--particularly, EMP-induced electrical stress--are described in light of the current literature and the authors' own research. A major concern is the cause and effects of second breakdown phenomena in p-n junction devices. Models of failure thresholds are evaluated for their inherent errors and for their ability to represent the relevant physics. Finally, the response models that relate electromagnetic stress parameters to appropriate failure-threshold parameters are discussed.

  2. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid-flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.

  3. Efficacy of intrathoracic impedance and remote monitoring in patients with an implantable device after the 2011 great East Japan earthquake.

    PubMed

    Suzuki, Hitoshi; Yamada, Shinya; Kamiyama, Yoshiyuki; Takeishi, Yasuchika

    2014-01-01

Several studies have revealed that stress after catastrophic disasters can trigger cardiovascular events; however, little is known about its association with the occurrence of heart failure in past earthquakes. The objective of the present study was to determine whether the Great East Japan Earthquake on March 11, 2011, increased the incidence of worsening heart failure in chronic heart failure (CHF) patients with implantable devices. Furthermore, we examined whether intrathoracic impedance monitoring via remote monitoring was effective for the management of CHF. We enrolled 44 CHF patients (32 males, mean age 63 ± 12 years) with implantable devices that can check intrathoracic impedance using remote monitoring. We defined worsening heart failure as accumulated impedance under the reference impedance exceeding 60 ohm-days (fluid index threshold), and compared the incidence of worsening heart failure and arrhythmic events 30 days before and after March 11. Within the 30 days after March 11, 10 patients exceeded the threshold compared with only 2 patients in the preceding 30 days (P < 0.05). Although the 9 patients using remote monitoring among the 10 patients with threshold crossings were not hospitalized, one patient without the system was hospitalized due to acute decompensated heart failure. In contrast, arrhythmic events did not change between before and after March 11. Our results suggest that earthquake-induced stress causes an increased risk of worsening heart failure without changes in arrhythmia. Furthermore, intrathoracic impedance with remote monitoring may be a useful tool for the management of CHF in catastrophic disasters.
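The fluid-index criterion used in the study (accumulated impedance deficit exceeding 60 ohm-days) can be stated as a few lines of arithmetic; the sketch below uses invented daily impedance values and a simplified accumulation rule, not the device manufacturer's proprietary algorithm.

```python
import numpy as np

OHM_DAYS_THRESHOLD = 60.0   # fluid index threshold used in the study

def fluid_index_alert(daily_impedance, reference_impedance):
    """Accumulate (reference - daily) impedance deficits and flag threshold crossing.

    Deficits accumulate only while daily impedance stays below the reference;
    this is a simplified stand-in for the device's actual algorithm.
    """
    deficit = np.maximum(reference_impedance - daily_impedance, 0.0)
    fluid_index = np.cumsum(deficit)                     # ohm-days
    crossed = np.flatnonzero(fluid_index > OHM_DAYS_THRESHOLD)
    return fluid_index, (int(crossed[0]) if crossed.size else None)

# Hypothetical 30-day trace: impedance drifts down as thoracic fluid builds up
days = np.arange(30)
impedance = 75.0 - 0.4 * days + np.random.default_rng(3).normal(0, 1, 30)
index, day = fluid_index_alert(impedance, reference_impedance=72.0)
print(f"threshold crossed on day {day}, fluid index = {index[-1]:.0f} ohm-days")
```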

  4. Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric

    2014-05-01

Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage during time periods on the order of a few LHC turns; considering the significant stored energy in the HL-LHC beam, CC failures therefore represent a serious threat to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes on a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe operation threshold for Gaussian tails, while for non-Gaussian tails the losses are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by the CC failures.

  5. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...

  6. An algorithm for simulating fracture of cohesive-frictional materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nukala, Phani K; Sampath, Rahul S; Barai, Pallab

Fracture of disordered frictional granular materials is dominated by interfacial failure response that is characterized by de-cohesion followed by frictional sliding response. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until a failure threshold, which is then followed by a constant response at a threshold lower than the initial failure threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.
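The interfacial response used in the CFRFM is easy to state explicitly for monotonic loading: linear up to the cohesive failure threshold, then a constant frictional plateau at a lower level. A short sketch with hypothetical stiffness and threshold values:

```python
def cfrfm_response(strain, stiffness=1.0, cohesive_strength=1.0, friction_stress=0.6):
    """Stress carried by one fuse/interface element under monotonic loading.

    Linear elastic up to the cohesive failure threshold; after de-cohesion the
    element carries a constant frictional sliding stress lower than that
    threshold. Parameter values are illustrative, not from the paper.
    """
    stress = stiffness * strain
    if stress <= cohesive_strength:
        return stress              # intact, cohesive branch
    return friction_stress         # de-cohered, frictional sliding branch

for eps in (0.2, 0.8, 1.0, 1.5, 3.0):
    print(f"strain {eps:.1f} -> stress {cfrfm_response(eps):.2f}")
```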

  7. Site- and bond-percolation thresholds in K_{n,n}-based lattices: Vulnerability of quantum annealers to random qubit and coupler failures on chimera topologies.

    PubMed

    Melchert, O; Katzgraber, Helmut G; Novotny, M A

    2016-04-01

    We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
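The cluster-labelling workflow used to estimate percolation thresholds can be illustrated on an ordinary square lattice (not the chimera K_{n,n}-based graphs of the paper) with a standard labelling routine; the crossing point of the spanning probability gives a rough threshold estimate.

```python
import numpy as np
from scipy.ndimage import label

def spans(grid):
    """True if an occupied cluster connects the top and bottom rows."""
    labels, _ = label(grid)                         # 4-connected cluster labelling
    return bool(np.intersect1d(labels[0][labels[0] > 0],
                               labels[-1][labels[-1] > 0]).size)

def spanning_probability(p, L=64, trials=200, rng=np.random.default_rng(7)):
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    return hits / trials

# Site percolation on the square lattice: the known threshold is near p_c ~ 0.5927
for p in (0.55, 0.59, 0.60, 0.63):
    print(f"p = {p:.2f}  P(spanning) ~ {spanning_probability(p):.2f}")
```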

  8. LOGIC OF CONTROLLED THRESHOLD DEVICES.

    DTIC Science & Technology

    The synthesis of threshold logic circuits from several points of view is presented. The first approach is applicable to resistor-transistor networks...in which the outputs are tied to a common collector resistor. In general, fewer threshold logic gates than NOR gates connected to a common collector...network to realize a specified function such that the failure of any but the output gate can be compensated for by a change in the threshold level (and

  9. A preliminary evaluation of a failure detection filter for detecting and identifying control element failures in a transport aircraft

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1985-01-01

    The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 Aircraft. Simulation results show that with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.

  10. Observations of Single Event Failure in Power MOSFETS

    NASA Technical Reports Server (NTRS)

    Nichols, D.; McCarty, K.; Coss, J.

    1994-01-01

    The first compendium of single event test data for power MOSFETs provides failure thresholds from burnout or gate rupture for over 100 devices of eight manufacturers. Ordering the data has also provided some useful insights.

  11. Performance-based maintenance of gas turbines for reliable control of degraded power systems

    NASA Astrophysics Data System (ADS)

    Mo, Huadong; Sansavini, Giovanni; Xie, Min

    2018-03-01

Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model of control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start times and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of the preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs compared with CBM and pre-scheduled maintenance.
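The cost analysis described above can be mimicked in miniature: simulate a Wiener degradation path for the performance loss, inspect it periodically, and charge inspection, preventive-maintenance, and failure costs. The sketch below is a generic condition-based-maintenance Monte Carlo with made-up parameters, not the gas-turbine control model of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def expected_cost(tau, horizon=1000.0, mu=0.01, sigma=0.05,
                  pm_level=8.0, fail_level=10.0,
                  c_insp=1.0, c_pm=10.0, c_fail=100.0, runs=2000):
    """Mean cost over a horizon for inspection interval tau.

    Degradation follows a Wiener process X(t) = mu*t + sigma*W(t); at each
    inspection the unit is renewed at cost c_pm if X exceeds pm_level, and a
    failure cost c_fail is charged (with renewal) if X exceeds fail_level.
    """
    costs = np.zeros(runs)
    for r in range(runs):
        x, t = 0.0, 0.0
        while t < horizon:
            t += tau
            x += mu * tau + sigma * np.sqrt(tau) * rng.standard_normal()
            costs[r] += c_insp
            if x >= fail_level:
                costs[r] += c_fail
                x = 0.0
            elif x >= pm_level:
                costs[r] += c_pm
                x = 0.0
    return costs.mean()

for tau in (25, 50, 100, 200):
    print(f"inspection interval {tau:4d}: expected cost ~ {expected_cost(tau):.0f}")
```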

  12. Observational longitudinal cohort study to determine progression to heart failure in a screened community population: the Echocardiographic Heart of England Screening Extension (ECHOES-X) study

    PubMed Central

    Taylor, Clare J; Roalfe, Andrea K; Tait, Lynda; Davis, Russell C; Iles, Rachel; Derit, Marites; Hobbs, F D Richard

    2014-01-01

    Objectives Rescreen a large community cohort to examine the progression to heart failure over time and the role of natriuretic peptide testing in screening. Design Observational longitudinal cohort study. Setting 16 socioeconomically diverse practices in central England. Participants Participants from the original Echocardiographic Heart of England Screening (ECHOES) study were invited to attend for rescreening. Outcome measures Prevalence of heart failure at rescreening overall and for each original ECHOES subgroup. Test performance of N Terminal pro-B-type Natriuretic Peptide (NT-proBNP) levels at different thresholds for screening. Results 1618 of 3408 participants underwent screening which represented 47% of survivors and 26% of the original ECHOES cohort. A total of 176 (11%, 95% CI 9.4% to 12.5%) participants were classified as having heart failure at rescreening; 103 had heart failure with reduced ejection fraction (HFREF) and 73 had heart failure with preserved ejection fraction (HFPEF). Sixty-eight out of 1232 (5.5%, 95% CI 4.3% to 6.9%) participants who were recruited from the general population over the age of 45 and did not have heart failure in the original study, had heart failure on rescreening. An NT-proBNP cut-off of 400 pg/mL had sensitivity for a diagnosis of heart failure of 79.5% (95% CI 72.4% to 85.5%) and specificity of 87% (95% CI 85.1% to 88.8%). Conclusions Rescreening identified new cases of HFREF and HFPEF. Progression to heart failure poses a significant threat over time. The natriuretic peptide cut-off level for ruling out heart failure must be low enough to ensure cases are not missed at screening. PMID:25015472

  13. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and the process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.

  14. Probabilistic inspection strategies for minimizing service failures

    NASA Technical Reports Server (NTRS)

    Brot, Abraham

    1994-01-01

    The INSIM computer program is described which simulates the 'limited fatigue life' environment in which aircraft structures generally operate. The use of INSIM to develop inspection strategies which aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds and customized inspections are simulated using the probability of failure as the driving parameter.

  15. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminary estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, has the threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.

  16. Update on Simulating Ice-Cliff Failure

    NASA Astrophysics Data System (ADS)

    Parizek, B. R.; Christianson, K. A.; Alley, R. B.; Voytenko, D.; Vankova, I.; Dixon, T. H.; Walker, R. T.; Holland, D.

    2017-12-01

    Using a 2D full-Stokes diagnostic ice-flow model and engineering and glaciological failure criteria, we simulate the limiting physical conditions for rapid structural failure of subaerial ice cliffs. Previously, using a higher-order flowline model, we reported that the threshold height, in crevassed ice and/or under favorable conditions for hydrofracture or crack lubrication, may be only slightly above the 100-m maximum observed today and that under well-drained or low-melt conditions, mechanically-competent ice supports cliff heights up to 220 m (with a likely range of 180-275 m) before ultimately succumbing to tensional and compressive failure along a listric surface. However, proximal to calving fronts, bridging effects lead to variations in vertical normal stress from the background glaciostatic stress state that give rise to the along-flow gradients in vertical shear stress that are included within a full-Stokes momentum balance. When including all flowline stresses within the physics core, diagnostic solutions continue to support our earlier findings that slumping failure ultimately limits the upper bound for cliff heights. Shear failure still requires low cohesive strength, tensile failure leads to deeper dry-crevasse propagation (albeit, less than halfway through the cliff), and compressive failure drops the threshold height for triggering rapid ice-front retreat via slumping to 200 m (145-280 m).

  17. Two-threshold model for scaling laws of noninteracting snow avalanches

    USGS Publications Warehouse

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.

    2004-01-01

A two-threshold model was proposed for the scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding failure of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock, or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.

  18. Drug Concentration Thresholds Predictive of Therapy Failure and Death in Children With Tuberculosis: Bread Crumb Trails in Random Forests.

    PubMed

    Swaminathan, Soumya; Pasipanodya, Jotam G; Ramachandran, Geetha; Hemanth Kumar, A K; Srivastava, Shashikant; Deshpande, Devyani; Nuermberger, Eric; Gumbo, Tawanda

    2016-11-01

     The role of drug concentrations in clinical outcomes in children with tuberculosis is unclear. Target concentrations for dose optimization are unknown.  Plasma drug concentrations measured in Indian children with tuberculosis were modeled using compartmental pharmacokinetic analyses. The children were followed until end of therapy to ascertain therapy failure or death. An ensemble of artificial intelligence algorithms, including random forests, was used to identify predictors of clinical outcome from among 30 clinical, laboratory, and pharmacokinetic variables.  Among the 143 children with known outcomes, there was high between-child variability of isoniazid, rifampin, and pyrazinamide concentrations: 110 (77%) completed therapy, 24 (17%) failed therapy, and 9 (6%) died. The main predictors of therapy failure or death were a pyrazinamide peak concentration <38.10 mg/L and rifampin peak concentration <3.01 mg/L. The relative risk of these poor outcomes below these peak concentration thresholds was 3.64 (95% confidence interval [CI], 2.28-5.83). Isoniazid had concentration-dependent antagonism with rifampin and pyrazinamide, with an adjusted odds ratio for therapy failure of 3.00 (95% CI, 2.08-4.33) in antagonism concentration range. In regard to death alone as an outcome, the same drug concentrations, plus z scores (indicators of malnutrition), and age <3 years, were highly ranked predictors. In children <3 years old, isoniazid 0- to 24-hour area under the concentration-time curve <11.95 mg/L × hour and/or rifampin peak <3.10 mg/L were the best predictors of therapy failure, with relative risk of 3.43 (95% CI, .99-11.82).  We have identified new antibiotic target concentrations, which are potential biomarkers associated with treatment failure and death in children with tuberculosis. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America.

  19. Simulations of the modified gap experiment

    NASA Astrophysics Data System (ADS)

    Sutherland, Gerrit T.; Benjamin, Richard; Kooker, Douglas

    2017-01-01

    Modified gap experiment (test) hydrocode simulations predict the trends seen in experimental excess free surface velocity versus input pressure curves for explosives with both large and modest failure diameters. Simulations were conducted for explosive "A", an explosive with a large failure diameter, and for cast TNT, which has a modest failure diameter. Using the best available reactive rate models, the simulations predicted sustained ignition thresholds similar to experiment. This is a threshold where detonation is likely given a long enough run distance. For input pressures greater than the sustained ignition threshold pressure, the simulations predicted too little velocity for explosive "A" and too much velocity for TNT. It was found that a better comparison of experiment and simulation requires additional experimental data for both explosives. It was observed that the choice of reactive rate model for cast TNT can lead to large differences in the predicted modified gap experiment result. The cause of the difference is that the same data was not used to parameterize both models; one set of data was more shock reactive than the other.

  20. Automated Bone Screw Tightening to Adaptive Levels of Stripping Torque.

    PubMed

    Reynolds, Karen J; Mohtar, Aaron A; Cleek, Tammy M; Ryan, Melissa K; Hearn, Trevor C

    2017-06-01

    To use relationships between tightening parameters, related to bone quality, to develop an automated system that determines and controls the level of screw tightening. An algorithm relating current at head contact (IHC) to current at construct failure (Imax) was developed. The algorithm was used to trigger cessation of screw insertion at a predefined tightening level, in real time, between head contact and maximum current. The ability of the device to stop at the predefined level was assessed. The mean (±SD) current at which screw insertion ceased was calculated to be [51.47 ± 9.75% × (Imax - IHC)] + IHC, with no premature bone failures. A smart screwdriver was developed that uses the current from the motor driving the screw to predict the current at which the screw will strip the bone threads. The device was implemented and was able to achieve motor shut-off and cease tightening at a predefined threshold, with no premature bone failures.
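
    A minimal sketch of the stopping logic described above; the actual IHC-to-Imax calibration and motor-control details are not given in the abstract, so the linear predictor and parameter values below are illustrative assumptions:

    ```python
    # Sketch of the adaptive stop criterion: predict the stripping current Imax
    # from the current at head contact (IHC), then stop at a predefined
    # fraction of the interval between them. The calibration is a placeholder.
    def predicted_stripping_current(i_head_contact, slope=1.8, intercept=0.0):
        """Hypothetical calibration mapping IHC to the expected failure current Imax."""
        return slope * i_head_contact + intercept

    def should_stop(i_now, i_head_contact, tightening_fraction=0.5):
        """Stop once motor current exceeds IHC + f * (Imax_pred - IHC)."""
        i_max_pred = predicted_stripping_current(i_head_contact)
        return i_now >= i_head_contact + tightening_fraction * (i_max_pred - i_head_contact)
    ```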

  1. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

    We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.

  2. Deviation from Power Law Behavior in Landslide Phenomenon

    NASA Astrophysics Data System (ADS)

    Li, L.; Lan, H.; Wu, Y.

    2013-12-01

    Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslides are unique in that their size distribution is characterized by a power law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power law behavior for small landslides, remains a mystery. In this contribution, we grouped the forces applied on landslide bodies into two categories: 1) forces proportional to the volume of the failure mass (gravity and friction), and 2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the failure volume to failure surface area ratio must exceed a corresponding threshold for failure to occur. If all landslides shared a uniform shape, so that the volume to surface area ratio increased regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end could be defined. However, in realistic landslide phenomena, where heterogeneities of landslide shape and mechanical configuration exist, a simple cutoff of the landslide volume distribution does not occur. The stochasticity of landslide shape introduces a probability distribution of the volume to surface area ratio with respect to landslide volume, from which the probability that the ratio exceeds the threshold can be estimated as a function of landslide volume. An experiment based on empirical data showed that this probability can cause the power law distribution of landslide volume to roll over at the small-size end. We therefore propose that the constraints on the failure volume to failure surface area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power law behavior in the landslide phenomenon. The figure shows that a rollover of the landslide size distribution at the small-size end is produced when the probability that V/S (the failure volume to failure surface area ratio) exceeds the mechanical threshold is applied to the power law distribution of landslide volume.
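
    The argument can be illustrated with a short simulation (not the authors' experiment; the power-law exponent, the V/S scaling, and the noise model are assumptions): applying a volume-dependent exceedance probability for V/S to a pure power-law sample of volumes produces the rollover at the small-size end.

    ```python
    # Minimal rollover sketch: sample volumes from a power law, give each a
    # noisy volume-to-surface ratio, and keep only failures whose ratio
    # exceeds the mechanical threshold. Exponents and noise are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    volumes = (1 - rng.random(200_000)) ** (-1 / 1.4)      # Pareto-like power law, V >= 1
    ratio = volumes ** (1 / 3) * rng.lognormal(mean=0.0, sigma=0.5, size=volumes.size)
    threshold = 1.5                                          # mechanical V/S threshold
    observed = volumes[ratio > threshold]                    # failures that can actually occur

    hist, edges = np.histogram(np.log10(observed), bins=60)
    # hist decays as a power law at large volumes but rolls over at small ones.
    ```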

  3. Methods, apparatus and system for selective duplication of subtasks

    DOEpatents

    Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2016-03-29

    A method for selective duplication of subtasks in a high-performance computing system includes: monitoring a health status of one or more nodes in a high-performance computing system, where one or more subtasks of a parallel task execute on the one or more nodes; identifying one or more nodes as having a likelihood of failure which exceeds a first prescribed threshold; selectively duplicating the one or more subtasks that execute on the one or more nodes having a likelihood of failure which exceeds the first prescribed threshold; and notifying a messaging library that one or more subtasks were duplicated.
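
    A schematic sketch of the selection step described in the abstract; node names, health scores, and the messaging-library hook are hypothetical placeholders:

    ```python
    # Sketch: duplicate only the subtasks that run on nodes whose estimated
    # likelihood of failure exceeds the prescribed threshold, then notify the
    # messaging layer. All identifiers below are illustrative.
    def select_subtasks_to_duplicate(node_health, subtasks_by_node, failure_threshold):
        """Return subtasks on nodes whose failure likelihood exceeds the threshold."""
        to_duplicate = []
        for node, p_fail in node_health.items():
            if p_fail > failure_threshold:
                to_duplicate.extend(subtasks_by_node.get(node, []))
        return to_duplicate

    def notify_messaging_library(duplicated):
        """Placeholder for informing the MPI-like messaging layer of duplicates."""
        print(f"duplicated subtasks: {duplicated}")

    duplicated = select_subtasks_to_duplicate(
        node_health={"node01": 0.02, "node07": 0.31},
        subtasks_by_node={"node01": ["rank-0"], "node07": ["rank-12", "rank-13"]},
        failure_threshold=0.25,
    )
    notify_messaging_library(duplicated)
    ```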

  4. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
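
    A minimal simulation sketch of the Brownian relaxation oscillator (loading rate, noise amplitude, threshold, and ground level are illustrative choices); the recurrence intervals it produces follow the Brownian passage-time (inverse Gaussian) distribution discussed above:

    ```python
    # Sketch: constant tectonic loading plus Brownian noise; rupture when the
    # load state reaches the failure threshold, then relaxation to the ground
    # level. Parameters are illustrative, not calibrated to any fault.
    import numpy as np

    def recurrence_intervals(n_events=200, rate=1.0, sigma=0.5,
                             threshold=1.0, ground=0.0, dt=1e-3):
        rng = np.random.default_rng(1)
        intervals, state, t_last, t = [], ground, 0.0, 0.0
        while len(intervals) < n_events:
            state += rate * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if state >= threshold:               # rupture: relax to ground level
                intervals.append(t - t_last)
                t_last, state = t, ground
        return np.array(intervals)

    intervals = recurrence_intervals()
    cov = intervals.std() / intervals.mean()     # coefficient of variation, cf. the 1/sqrt(2) remark
    ```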

  5. Characterization of the behavior of three definitions of prostate-specific antigen-based biochemical failure in relation to detection and follow-up biases: comparison with the American Society for Therapeutic Radiology and Oncology consensus definition.

    PubMed

    Williams, Scott G

    2006-03-01

    To examine the impact of detection biases on three prostate cancer biochemical failure (bF) definitions in comparison with the existing American Society for Therapeutic Radiology and Oncology Consensus Definition (ACD). Three alternative bF definitions were tested against the ACD: three rises in prostate-specific antigen (PSA) level without backdating, nadir plus 2 ng/mL, and a threshold PSA level of >3 ng/mL, according to data from 1050 men. The mean time between PSA tests (MTBT), regularity of collection, and calendar year of analysis were examined in each bF definition. The MTBT produced a statistically significant difference in the derived hazard ratio for identification of bF in all definitions. The influence of test regularity was statistically significant beyond the median level of regularity in all definitions. The year of analysis impacted greatly on the ACD, whereas the three alternative definitions exhibited minor follow-up duration variations by comparison. The alternative definitions had reliable follow-up when the crude median time to censoring was at least 1.6 times greater than that of failure. Detection biases will always be a significant issue in defining bF. A number of alternative failure definitions have more predictable interactions with these biases than the existing ACD.

  6. Hospital Volume and 30-Day Mortality for Three Common Medical Conditions

    PubMed Central

    Ross, Joseph S.; Normand, Sharon-Lise T.; Wang, Yun; Ko, Dennis T.; Chen, Jersey; Drye, Elizabeth E.; Keenan, Patricia S.; Lichtman, Judith H.; Bueno, Héctor; Schreiner, Geoffrey C.; Krumholz, Harlan M.

    2010-01-01

    Background The association between hospital volume and the death rate for patients who are hospitalized for acute myocardial infarction, heart failure, or pneumonia remains unclear. It is also not known whether a volume threshold for such an association exists. Methods We conducted cross-sectional analyses of data from Medicare administrative claims for all fee-for-service beneficiaries who were hospitalized between 2004 and 2006 in acute care hospitals in the United States for acute myocardial infarction, heart failure, or pneumonia. Using hierarchical logistic-regression models for each condition, we estimated the change in the odds of death within 30 days associated with an increase of 100 patients in the annual hospital volume. Analyses were adjusted for patients’ risk factors and hospital characteristics. Bootstrapping procedures were used to estimate 95% confidence intervals to identify the condition-specific volume thresholds above which an increased volume was not associated with reduced mortality. Results There were 734,972 hospitalizations for acute myocardial infarction in 4128 hospitals, 1,324,287 for heart failure in 4679 hospitals, and 1,418,252 for pneumonia in 4673 hospitals. An increased hospital volume was associated with reduced 30-day mortality for all conditions (P<0.001 for all comparisons). For each condition, the association between volume and outcome was attenuated as the hospital's volume increased. For acute myocardial infarction, once the annual volume reached 610 patients (95% confidence interval [CI], 539 to 679), an increase in the hospital volume by 100 patients was no longer significantly associated with reduced odds of death. The volume threshold was 500 patients (95% CI, 433 to 566) for heart failure and 210 patients (95% CI, 142 to 284) for pneumonia. Conclusions Admission to higher-volume hospitals was associated with a reduction in mortality for acute myocardial infarction, heart failure, and pneumonia, although there was a volume threshold above which an increased condition-specific hospital volume was no longer significantly associated with reduced mortality. PMID:20335587

  7. Clinical Pharmacodynamics: Principles of Drug Response and Alterations in Kidney Disease.

    PubMed

    Keller, Frieder; Hann, Alexander

    2018-05-16

    Pharmacokinetics and pharmacodynamics follow the logic of cause and consequence. Receptor-mediated and reversible effects can be distinguished from direct and irreversible effects. Reversible effects are capacity-limited and saturable whereas irreversible effects are limited only by the number of viable targets. In the case of receptor-mediated and reversible effects a threshold and a ceiling concentration can be defined. Antimicrobial drugs with concentration-dependent action are distinguished from drugs with time-dependent action. Concentration-dependent effects are associated with a high ceiling concentration and the target is the high peak. Time-dependent effects are associated with a high threshold concentration and the target is the high trough. During kidney dysfunction, alterations of drug response are usually attributed to pharmacokinetic but rarely to pharmacodynamic changes. Dose adjustment calculations, therefore, tacitly presume that pharmacodynamic parameters remain unchanged while only pharmacokinetic parameters are altered in kidney failure. Kidney dysfunction influences the pharmacokinetic parameters of at least 50% of all essential drugs. Clinicians usually consider pharmacokinetics when kidney disease is found, but pharmacodynamics is as important. Alterations of pharmacodynamic parameters are conceivable but only rarely reported in kidney failure. Sometimes surprising dosing adjustments are needed when pharmacodynamic concepts are brought into the decision process of which dose to choose. Pharmacokinetics and pharmacodynamics should both be considered when any dosing regimen is determined. Copyright © 2018 by the American Society of Nephrology.

  8. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
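
    For readers unfamiliar with the SPRT, a generic textbook sketch (not the shuttle FDI implementation; the hypotheses, noise model, and error rates are illustrative) of accumulating a log-likelihood ratio until a decision threshold is crossed:

    ```python
    # Generic sequential probability ratio test for a Gaussian mean shift:
    # accumulate the log-likelihood ratio of "failed" vs. "nominal" hypotheses
    # for an error signal and decide once a Wald boundary is crossed.
    import math

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        upper = math.log((1 - beta) / alpha)     # accept "failure" above this
        lower = math.log(beta / (1 - alpha))     # accept "nominal" below this
        llr = 0.0
        for k, x in enumerate(samples, start=1):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "failure detected", k
            if llr <= lower:
                return "no failure", k
        return "undecided", len(samples)
    ```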

  9. Failure Mode Classification for Life Prediction Modeling of Solid-State Lighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakalaukus, Peter Joseph

    2015-08-01

    Since the passing of the Energy Independence and Security Act of 2007, the U.S. government has mandated greater energy independence which has acted as a catalyst for accelerating and facilitating research efforts toward the development and deployment of market-driven solutions for energy-saving homes, buildings and manufacturing, as well as sustainable transportation and renewable electricity generation. As part of this effort, an emphasis toward advancing solid-state lighting technology through research, development, demonstration, and commercial applications is assisting in the phase out of the common incandescent light bulb, as well as developing a more economical lighting source that is less toxic than compact fluorescent lighting. This has led lighting manufacturers to pursue SSL technologies for a wide range of consumer lighting applications. An SSL luminaire's lifetime can be characterized in terms of lumen maintenance life. Lumen maintenance or lumen depreciation is the percentage decrease in the relative luminous flux from that of the original, pristine luminous flux value. Lumen maintenance life is the estimated operating time, in hours, when the desired failure threshold is projected to be reached at normal operating conditions. One accepted failure threshold of SSL luminaires is lumen maintenance of 70% -- a 30% reduction in the light output of the luminaire. Currently, the only approved lighting standard that puts forth a recommendation for long-term luminous flux maintenance projections towards a specified failure threshold of an SSL luminaire is the IES TM-28-14 (TM28) standard. TM28 was derived as a means to compare luminaires that have been tested at different facilities, research labs or companies. TM28 recommends the use of the Arrhenius equation to determine SSL device specific reaction rates from thermally driven failure mechanisms used to characterize a single failure mode – the relative change in the luminous flux output or "light power" of the SSL luminaire. The use of the Arrhenius equation necessitates two different temperature conditions, 25°C and 45°C are suggested by TM28, to determine the SSL lamp specific activation energy. One principal issue with TM28 is the lack of additional stresses or parameters needed to characterize non-temperature dependent failure mechanisms. Another principal issue with TM28 is the assumption that lumen maintenance or lumen depreciation gives an adequate comparison between SSL luminaires. Additionally, TM28 has no process for the determination of acceleration factors or lifetime estimations. Currently, a literature gap exists for established accelerated test methods for SSL devices to assess quality, reliability and durability before being introduced into the marketplace. Furthermore, there is a need for Physics-of-Failure based approaches to understand the processes and mechanisms that induce failure for the assessment of SSL reliability in order to develop generalized acceleration factors that better represent SSL product lifetime. This and the deficiencies in TM28 validate the need behind the development of acceleration techniques to quantify SSL reliability under a variety of environmental conditions. The ability to assess damage accrual and investigate reliability of SSL components and systems is essential to understanding the lifetime of the SSL device itself. The methodologies developed in this work increase the understanding of SSL devices through the investigation of component and device reliability under a variety of accelerated test conditions. The approaches for suitable lifetime predictions through the development of novel generalized acceleration factors, as well as a prognostics and health management framework, will greatly reduce the time and effort needed to produce SSL acceleration factors for the development of lifetime predictions.
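
    A hedged sketch of the two-temperature Arrhenius step referred to above, using the exponential lumen-decay form common to TM-21/TM-28 practice; the decay constants are placeholders, not measured values:

    ```python
    # Sketch: derive the activation energy from decay-rate constants measured
    # at the two suggested case temperatures (25 C and 45 C) and project the
    # lumen-maintenance life to the 70% failure threshold.
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def activation_energy(alpha_t1, alpha_t2, t1_kelvin, t2_kelvin):
        """Ea from decay-rate constants alpha = A * exp(-Ea / (kB * T))."""
        return K_B * math.log(alpha_t2 / alpha_t1) / (1.0 / t1_kelvin - 1.0 / t2_kelvin)

    def l70_hours(b, alpha):
        """Hours until flux falls to 70%, assuming phi(t) = B * exp(-alpha * t)."""
        return math.log(b / 0.7) / alpha

    ea = activation_energy(alpha_t1=2.0e-6, alpha_t2=6.0e-6,
                           t1_kelvin=298.15, t2_kelvin=318.15)
    ```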

  10. The Shock Behaviour of a SiO2-Li2O Transparent Glass-Ceramic Armour Material

    NASA Astrophysics Data System (ADS)

    Pickup, I. M.; Millett, J. C. F.; Bourne, N. K.

    2004-07-01

    The dynamic behaviour of a transparent glass-ceramic material, Transarm, developed by Alstom UK for the UK MoD, has been studied. Plate impact experiments have been used to measure the material's Hugoniot characteristics and failure behaviour. Longitudinal stresses have been measured using embedded and back surface mounted Manganin gauges. Above a threshold stress of ca. 4 GPa, the longitudinal stress histories exhibit a significant secondary rise prior to attaining their Hugoniot stress. Lateral stresses were also measured by embedding Manganin gauges in longitudinal cuts. Significant secondary rises in stress were observed when the applied longitudinal stress exceeded the 4 GPa threshold, indicating the presence of a failure front. The dynamic shear strength of the glass has been measured using the longitudinal and lateral data. Even though significant strength drops have been measured before and behind the failure front, the material has a high post-failure strength compared to non-crystalline glasses.

  11. Evaluating stress corrosion cracking behaviour of high strength AA7075-T651 aluminium alloy

    NASA Astrophysics Data System (ADS)

    Prabhuraj, P.; Rajakumar, S.; Lakshminarayanan, A. K.; Balasubramanian, V.

    2017-12-01

    The objective of the present study is to determine the threshold stress level of stress corrosion cracking (SCC) in AA7075-T651 aluminium alloy by suitable experimentation. The test was carried out using a circumferential notch specimen in a horizontal-type constant load SCC setup in a 3.5 wt.% NaCl solution. The time to failure by SCC was determined at various loading conditions. The threshold stress of AA7075-T651 alloy was found to be 242 MPa in a 3.5 wt.% NaCl solution. The various regions of the fractured surface specimen such as machined notch, SCC region and final overload fracture area were examined using scanning electron microscopy (SEM) in order to identify the SCC mechanism.

  12. Do Optimal Prognostic Thresholds in Continuous Physiological Variables Really Exist? Analysis of Origin of Apparent Thresholds, with Systematic Review for Peak Oxygen Consumption, Ejection Fraction and BNP

    PubMed Central

    Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.

    2014-01-01

    Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
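
    The simulation argument can be reproduced in a few lines (numpy only; a linear-in-x event risk with no step change, with sample size and effect size chosen for illustration): scanning candidate cut-points still yields a "most significant" threshold, and it lands near the sample mean.

    ```python
    # Sketch: simulate 1500 patients whose event risk falls linearly with a
    # continuous variable (no step), scan candidate thresholds, and pick the
    # one with the largest chi-square statistic.
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(loc=20.0, scale=5.0, size=1500)          # e.g. a pVO2-like variable
    p_event = np.clip(0.6 - 0.02 * (x - x.mean()), 0.05, 0.95)
    event = rng.random(x.size) < p_event                    # risk falls linearly with x

    def chi2_at_threshold(threshold):
        low = x < threshold
        table = np.array([[ (low &  event).sum(), (low & ~event).sum()],
                          [(~low &  event).sum(), (~low & ~event).sum()]], dtype=float)
        expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
        return ((table - expected) ** 2 / expected).sum()

    candidates = np.quantile(x, np.linspace(0.1, 0.9, 81))
    best = candidates[np.argmax([chi2_at_threshold(c) for c in candidates])]
    # best is typically close to x.mean(), despite the absence of any step in risk.
    ```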

  13. Ultra-sensitive PSA Following Prostatectomy Reliably Identifies Patients Requiring Post-Op Radiotherapy

    PubMed Central

    Kang, Jung Julie; Reiter, Robert; Steinberg, Michael; King, Christopher R.

    2015-01-01

    PURPOSE Integrating ultra-sensitive PSA (uPSA) into surveillance of high-risk patients following radical prostatectomy (RP) potentially optimizes management by correctly identifying actual recurrences, promoting an early salvage strategy and minimizing overtreatment. The power of uPSA following surgery to identify eventual biochemical failures is tested. PATIENTS AND METHODS From 1991–2013, 247 high-risk patients with a median follow-up of 44 months after RP were identified (extraprostatic extension and/or positive margin). Surgical technique, initial PSA (iPSA), pathology and post-op PSA were analyzed. The uPSA assay threshold was 0.01 ng/mL. Conventional biochemical relapse (cBCR) was defined as PSA ≥0.2 ng/mL. Kaplan-Meier and Cox multivariate analyses (MVA) compared uPSA recurrence vs. cBCR rates. RESULTS Sensitivity analysis identified uPSA ≥0.03 as the optimal threshold identifying recurrence. First post-op uPSA ≥0.03, Gleason grade, T-stage, iPSA, and margin status predicted cBCR. On MVA, only first post-op uPSA ≥0.03, Gleason grade, and T-stage independently predicted cBCR. First post-op uPSA ≥0.03 conferred the highest risk (HR 8.5, p<0.0001) and discerned cBCR with greater sensitivity than undetectable first conventional PSA (70% vs. 46%). Any post-op PSA ≥0.03 captured all failures missed by first post-op value (100% sensitivity) with accuracy (96% specificity). Defining failure at uPSA ≥0.03 yielded a median lead-time advantage of 18 months (mean 24 months) over the conventional PSA ≥0.2 definition. CONCLUSION uPSA ≥0.03 is an independent factor, identifies BCR more accurately than any traditional risk factors, and confers a significant lead-time advantage. uPSA enables critical decisions regarding timing and indication for post-op RT among high-risk patients following RP. PMID:25463990

  14. Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Giordan, Daniele

    2014-05-01

    Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides mainly depend on the spatial scale analyzed (regional vs. local), the temporal range of the forecast (long- vs. short-term), as well as the triggering factor and the landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In the last decades, different procedures have been proposed to estimate ToF by applying simplified empirical and/or graphical methods to time series of deformation data. Fukuzono, 1985 proposed a failure forecast method based on large-scale laboratory experiments aimed at observing the kinematic evolution of a landslide induced by rain. This approach, also known as the inverse-velocity method, considers the evolution over time of the inverse of the surface velocity (v) as an indicator of the ToF, by assuming that failure approaches as 1/v tends to zero. Here we present an innovative method aimed at forecasting the failure of landslide phenomena from near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows, and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergency conditions, also when the predefined displacement and/or velocity thresholds are exceeded. In addition, our statistical approach for the definition of confidence intervals and forecast reliability can be applied to other failure forecast methods. We applied the approach presented here for the first time in near real time during the emergency caused by the reactivation of the La Saxe rockslide, a large mass movement threatening the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient way to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
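
    A minimal inverse-velocity (Fukuzono-style) sketch of the idea described above: fit 1/v against time over several trailing windows and read the time of failure where the fit crosses zero, using the spread across windows as a crude confidence measure. The window lengths, statistics, and synthetic creep signal are illustrative assumptions, not the authors' implementation.

    ```python
    # Sketch: extrapolate 1/v to zero over several trailing windows to obtain
    # a time-of-failure estimate and a simple spread across windows.
    import numpy as np

    def forecast_tof(times, velocities, windows=(10, 15, 20)):
        inv_v = 1.0 / np.asarray(velocities, dtype=float)
        times = np.asarray(times, dtype=float)
        estimates = []
        for w in windows:
            slope, intercept = np.polyfit(times[-w:], inv_v[-w:], deg=1)
            if slope < 0:                       # 1/v decreasing toward zero
                estimates.append(-intercept / slope)
        return np.mean(estimates), np.std(estimates)

    # Usage sketch: accelerating creep with v ~ 1/(t_f - t) and t_f = 100 (days).
    t = np.arange(60, 95, 1.0)
    v = 1.0 / (100.0 - t)
    tof_mean, tof_spread = forecast_tof(t, v)   # ~100 with a small spread
    ```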

  15. Primary care REFerral for EchocaRdiogram (REFER) in heart failure: a diagnostic accuracy study

    PubMed Central

    Taylor, Clare J; Roalfe, Andrea K; Iles, Rachel; Hobbs, FD Richard; Barton, P; Deeks, J; McCahon, D; Cowie, MR; Sutton, G; Davis, RC; Mant, J; McDonagh, T; Tait, L

    2017-01-01

    Background Symptoms of breathlessness, fatigue, and ankle swelling are common in general practice but deciding which patients are likely to have heart failure is challenging. Aim To evaluate the performance of a clinical decision rule (CDR), with or without N-Terminal pro-B type natriuretic peptide (NT-proBNP) assay, for identifying heart failure. Design and setting Prospective, observational, diagnostic validation study of patients aged >55 years, presenting with shortness of breath, lethargy, or ankle oedema, from 28 general practices in England. Method The outcome was test performance of the CDR and natriuretic peptide test in determining a diagnosis of heart failure. The reference standard was an expert consensus panel of three cardiologists. Results Three hundred and four participants were recruited, with 104 (34.2%; 95% confidence interval [CI] = 28.9 to 39.8) having a confirmed diagnosis of heart failure. The CDR+NT-proBNP had a sensitivity of 90.4% (95% CI = 83.0 to 95.3) and specificity 45.5% (95% CI = 38.5 to 52.7). NT-proBNP level alone with a cut-off <400 pg/ml had sensitivity 76.9% (95% CI = 67.6 to 84.6) and specificity 91.5% (95% CI = 86.7 to 95.0). At the lower cut-off of NT-proBNP <125 pg/ml, sensitivity was 94.2% (95% CI = 87.9 to 97.9) and specificity 49.0% (95% CI = 41.9 to 56.1). Conclusion At the low threshold of NT-proBNP <125 pg/ml, natriuretic peptide testing alone was better than a validated CDR+NT-proBNP in determining which patients presenting with symptoms went on to have a diagnosis of heart failure. The higher NT-proBNP threshold of 400 pg/ml may mean more than one in five patients with heart failure are not appropriately referred. Guideline natriuretic peptide thresholds may need to be revised. PMID:27919937

  16. Optimizing risk stratification in heart failure and the selection of candidates for heart transplantation.

    PubMed

    Pereira-da-Silva, Tiago; M Soares, Rui; Papoila, Ana Luísa; Pinto, Iola; Feliciano, Joana; Almeida-Morais, Luís; Abreu, Ana; Cruz Ferreira, Rui

    2018-02-01

    Selecting patients for heart transplantation is challenging. We aimed to identify the most important risk predictors in heart failure and an approach to optimize the selection of candidates for heart transplantation. Ambulatory patients followed in our center with symptomatic heart failure and left ventricular ejection fraction ≤40% prospectively underwent a comprehensive baseline assessment including clinical, laboratory, electrocardiographic, echocardiographic, and cardiopulmonary exercise testing parameters. All patients were followed for 60 months. The combined endpoint was cardiac death, urgent heart transplantation or need for mechanical circulatory support, up to 36 months. In the 263 enrolled patients (75% male, age 54±12 years), 54 events occurred. The independent predictors of adverse outcome were ventilatory efficiency (VE/VCO2) slope (HR 1.14, 95% CI 1.11-1.18), creatinine level (HR 2.23, 95% CI 1.14-4.36), and left ventricular ejection fraction (HR 0.96, 95% CI 0.93-0.99). VE/VCO2 slope was the most accurate risk predictor at any follow-up time analyzed (up to 60 months). The threshold of 39.0 yielded high specificity (97%), discriminated a worse or better prognosis than that reported for post-heart transplantation, and outperformed peak oxygen consumption thresholds of 10.0 or 12.0 ml/kg/min. For low-risk patients (VE/VCO2 slope <39.0), sodium and creatinine levels and variations in end-tidal carbon dioxide partial pressure on exercise identified those with excellent prognosis. VE/VCO2 slope was the most accurate parameter for risk stratification in patients with heart failure and reduced ejection fraction. Those with VE/VCO2 slope ≥39.0 may benefit from heart transplantation. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.

  17. An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Orient, George Edgar

    The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.
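
    For reference, the commonly published Johnson-Cook failure-strain form that the calibration described above targets (the D_i are material constants fit to test data; σ* is the stress triaxiality, ε̇* the normalized strain rate, and T* the homologous temperature; the report's exact parameterization may differ):

    ```latex
    \varepsilon_f = \left[D_1 + D_2 \exp\!\left(D_3\,\sigma^{*}\right)\right]
                    \left[1 + D_4 \ln \dot{\varepsilon}^{*}\right]
                    \left[1 + D_5\,T^{*}\right]
    ```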

  18. A standardized model for predicting flap failure using indocyanine green dye

    NASA Astrophysics Data System (ADS)

    Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.

    2016-03-01

    Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) underwent pre-flap treatments two weeks prior to undergoing random pattern dorsal fasciocutaneous flaps with a length to width ratio of 2:1 (3 x 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap-failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion for the negative control group at the time of the flap procedure (144.3+/-17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the average flap perfusion for the negative control group, specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.

  19. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
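
    A Monte Carlo sketch of the effect described above (sample size, nominal level, and the log-normal loss model are illustrative): setting the control threshold at a plug-in quantile estimated from a small sample yields a realized failure frequency above the nominal level.

    ```python
    # Sketch: estimate the 99th-percentile threshold from 30 log-normal losses,
    # then check how often the next loss exceeds it. Parameter uncertainty
    # pushes the realized frequency above the nominal 1%.
    import numpy as np

    rng = np.random.default_rng(7)
    nominal_p, n_data, n_trials = 0.01, 30, 20_000
    failures = 0
    for _ in range(n_trials):
        sample = rng.lognormal(mean=0.0, sigma=1.0, size=n_data)
        mu_hat, sig_hat = np.log(sample).mean(), np.log(sample).std(ddof=1)
        threshold = np.exp(mu_hat + sig_hat * 2.326)      # z_{0.99} ~ 2.326
        next_loss = rng.lognormal(mean=0.0, sigma=1.0)
        failures += next_loss > threshold
    print(failures / n_trials)   # typically noticeably above 0.01
    ```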

  20. Utility of the Instability Severity Index Score in Predicting Failure After Arthroscopic Anterior Stabilization of the Shoulder.

    PubMed

    Phadnis, Joideep; Arnold, Christine; Elmorsy, Ahmed; Flannery, Mark

    2015-08-01

    The redislocation rate after arthroscopic stabilization for anterior glenohumeral instability is up to 30%. The Instability Severity Index Score (ISIS) was developed to preoperatively rationalize the risk of failure, but it has not yet been validated by an independent group. To assess the utility of the ISIS in predicting failure of arthroscopic anterior shoulder stabilization and to identify other preoperative factors for failure. Case-control study; Level of evidence, 3. A case-control study was performed on 141 consecutive patients, comparing those who suffered failure of arthroscopic stabilization with those who had successful arthroscopic stabilization. The mean follow-up time was 47 months (range, 24-132 months). The ISIS was applied retrospectively, and an analysis was performed to establish independent risk factors for failure. A receiver operator coefficient curve was constructed to set a threshold ISIS for considering alternative surgery. Of 141 patients, 19 (13.5%) suffered recurrent instability. The mean ISIS of the failed stabilization group was higher than that of the successful stabilization group (5.1 vs 1.7; P < .001). Independent risk factors for failure were Hill-Sachs lesion (P < .001), glenoid bone loss (P < .001), age <21 years at the time of surgery (P < .001), age at first dislocation (P = .01), competitive-level participation in sports (P < .001), and participation in contact or overhead sports (P = .03). The presence of glenoid bone loss carried the highest risk of failure (70%). There was a 70% risk of failure if the ISIS was ≥4, as opposed to a 4% risk of failure if the ISIS was <4. This is the first completely independent study to confirm that the ISIS is a useful preoperative tool. It is recommended that surgeons consider alternative forms of stabilization if the ISIS is ≥4. © 2015 The Author(s).

  1. Method and apparatus for detection of catalyst failure on-board a motor vehicle using a dual oxygen sensor and an algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemmens, W.B.; Koupal, J.W.; Sabourin, M.A.

    1993-07-20

    Apparatus is described for detecting motor vehicle exhaust gas catalytic converter deterioration comprising a first exhaust gas oxygen sensor adapted for communication with an exhaust stream before passage of the exhaust stream through a catalytic converter and a second exhaust gas oxygen sensor adapted for communication with the exhaust stream after passage of the exhaust stream through the catalytic converter, an on-board vehicle computational means, said computational means adapted to accept oxygen content signals from the before and after catalytic converter oxygen sensors and adapted to generate signal threshold values, said computational means adapted to compare over repeated time intervals the oxygen content signals to the signal threshold values and to store the output of the compared oxygen content signals, and in response after a specified number of time intervals for a specified mode of motor vehicle operation to determine and indicate a level of catalyst deterioration.
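
    A sketch of one way the comparison step could be realized (a switching-count ratio; the sampling scheme and threshold handling in the patent may differ): a healthy catalyst damps the downstream oxygen-sensor oscillation, so a post/pre switching ratio near 1 accumulated over many intervals suggests deterioration.

    ```python
    # Sketch: count threshold crossings of the pre- and post-catalyst oxygen
    # sensor signals over a monitoring interval and compare them.
    def catalyst_deterioration_fraction(pre_o2, post_o2, threshold):
        """Ratio of downstream to upstream threshold crossings (near 1 = degraded)."""
        pre_switches = sum(1 for a, b in zip(pre_o2, pre_o2[1:])
                           if (a - threshold) * (b - threshold) < 0)
        post_switches = sum(1 for a, b in zip(post_o2, post_o2[1:])
                            if (a - threshold) * (b - threshold) < 0)
        return post_switches / max(pre_switches, 1)
    ```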

  2. Hydro-mechanical mechanism and thresholds of rainfall-induced unsaturated landslides

    NASA Astrophysics Data System (ADS)

    Yang, Zongji; Lei, Xiaoqin; Huang, Dong; Qiao, Jianping

    2017-04-01

    The devastating Ms 8 Wenchuan earthquake in 2008 created the greatest number of co-seismic mountain hazards ever recorded in China. However, the dynamics of rainfall-induced remobilization and transport of these deposits after a giant earthquake are not fully understood. Moreover, rainfall intensity-duration (I-D) methods are the predominant early warning indicators of rainfall-induced landslides in post-earthquake regions; they are a convenient and straightforward way to predict the hazards. However, such rainfall-based criteria and thresholds are generally empirical and based on statistical analysis; consequently, they ignore the failure mechanisms of the landslides. This study examines the mechanism, hydro-mechanical behavior, and thresholds of these unsaturated deposits under the influence of rainfall. To accomplish this, in situ experiments were performed on an instrumented landslide deposit. The field experimental tests were conducted on a natural co-seismic fractured slope to 1) simulate rainfall-induced shallow failures in the depression channels of a debris flow catchment in an earthquake-affected region, 2) explore the mechanisms and transient processes associated with hydro-mechanical parameter variations in response to the infiltration of rainfall, and 3) identify the hydrologic parameter thresholds and critical criteria of gravitational erosion in areas prone to mass remobilization as a source of debris flows. These experiments provided instrumental evidence and directly proved that post-earthquake rainfall-induced mass remobilization occurred under unsaturated conditions in response to transient rainfall infiltration, and revealed the presence of transient processes and the dominance of preferential flow paths during rainfall infiltration. A hydro-mechanical method was adopted for transient hydrologic process modelling and unsaturated slope stability analysis, and the slope failures observed during the experimental test were reproduced by the model, indicating that the decrease in matric suction and increase in moisture content in response to rainfall infiltration contributed greatly to post-earthquake shallow mass movement. Thus, a threshold model for the initiation of mass remobilization is proposed based on correlations between slope stability, volumetric water content, and matric suction. As a complement to rainfall-based early warning strategies, the proposed water content and suction threshold models, which are based on the infiltration-induced slope failure mechanism, are expected to improve the accuracy of prediction and early warning of post-earthquake mountain hazards.

  3. Application of failure mode and effects analysis to treatment planning in scanned proton beam radiotherapy

    PubMed Central

    2013-01-01

    Background A multidisciplinary and multi-institutional working group applied the Failure Mode and Effects Analysis (FMEA) approach to the actively scanned proton beam radiotherapy process implemented at CNAO (Centro Nazionale di Adroterapia Oncologica), aiming at preventing accidental exposures to the patient. Methods FMEA was applied to the treatment planning stage and consisted of three steps: i) identification of the involved sub-processes; ii) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; iii) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results Thirty-four sub-processes were identified, twenty-two of them judged to be potentially prone to one or more failure modes. A total of forty-four failure modes were recognized, 52% of them characterized by an RPN score of 80 or higher. The threshold of 125 for RPN was exceeded in only five cases. The most critical sub-process appeared to be the delineation and correction of artefacts in planning CT data. Failures associated with that sub-process were inaccurate delineation of the artefacts and incorrect proton stopping power assignment to body regions. Other significant failure modes consisted of an outdated representation of the patient anatomy, an improper selection of beam direction, and an improper choice of the physical beam model or dose calculation grid. The main effect of these failures was a wrong dose distribution (i.e. deviating from the planned one) delivered to the patient. Additional strategies for risk mitigation, easily and immediately applicable, consisted of systematically collecting information about any known implanted prosthesis directly from each patient and enforcing a short time interval between the CT scan and the start of treatment. Moreover, (i) the investigation of dedicated CT image reconstruction algorithms, (ii) further evaluation of treatment plan robustness and (iii) implementation of independent methods for dose calculation (such as Monte Carlo simulations) may represent novel solutions to increase patient safety. Conclusions FMEA is a useful tool for prospective evaluation of patient safety in proton beam radiotherapy. The application of this method to the treatment planning stage led to the identification of strategies for risk mitigation in addition to the safety measures already adopted in clinical practice. PMID:23705626
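
    A minimal RPN bookkeeping sketch; the example sub-processes echo failure modes named above, but the severity/occurrence/detectability scores and the 1-10 scales are illustrative assumptions:

    ```python
    # Sketch: score each failure mode, compute RPN as the product of the three
    # scores, and flag those above the agreed threshold of 125.
    FAILURE_MODES = [
        # (sub-process, failure mode, severity, occurrence, detectability)
        ("CT artefact correction", "inaccurate artefact delineation", 8, 4, 5),
        ("Beam model selection",   "improper physical beam model",    7, 3, 4),
    ]

    def rpn(severity, occurrence, detectability):
        return severity * occurrence * detectability

    flagged = [(proc, mode, rpn(s, o, d))
               for proc, mode, s, o, d in FAILURE_MODES
               if rpn(s, o, d) > 125]
    ```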

  4. EPRI-NASA Cooperative Project on Stress Corrosion Cracking of Zircaloys. [nuclear fuel failures

    NASA Technical Reports Server (NTRS)

    Cubicciotti, D.; Jones, R. L.

    1978-01-01

    Examinations of the inside surface of irradiated fuel cladding from two reactors show the Zircaloy cladding is exposed to a number of aggressive substances, among them iodine, cadmium, and iron-contaminated cesium. Iodine-induced stress corrosion cracking (SCC) of well characterized samples of Zircaloy sheet and tubing was studied. Results indicate that a threshold stress must be exceeded for iodine SCC to occur. The existence of a threshold stress indicates that crack formation probably is the key step in iodine SCC. Investigation of the crack formation process showed that the cracks responsible for SCC failure nucleated at locations in the metal surface that contained higher than average concentrations of alloying elements and impurities. A four-stage model of iodine SCC is proposed based on the experimental results and the relevance of the observations to pellet cladding interaction failures is discussed.

  5. Quasi-Static 3-Point Reinforced Carbon-Carbon Bend Test and Analysis for Shuttle Orbiter Wing Leading Edge Impact Damage Thresholds

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Sotiris, Kellas

    2006-01-01

    Static 3-point bend tests of Reinforced Carbon-Carbon (RCC) were conducted to failure to provide data for additional validation of an LS-DYNA RCC model suitable for predicting the threshold of impact damage to shuttle orbiter wing leading edges. LS-DYNA predictions correlated well with the average RCC failure load and matched the load vs. deflection response reasonably well. However, correlating the damage detectable with NDE methods with the cumulative damage parameter in LS-DYNA material model 58 was not readily achievable. The difficulty of finding internal RCC damage with NDE and the high sensitivity of the mat58 damage parameter to the load near failure made the task very challenging. In addition, damage mechanisms for RCC due to dynamic impact of debris such as foam and ice and damage mechanisms due to static loading were, as expected, not equivalent.

  6. Methods for improved forewarning of condition changes in monitoring physical processes

    DOEpatents

    Hively, Lee M.

    2013-04-09

    This invention teaches further improvements in methods for forewarning of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves objective determination of a forewarning threshold (U.sub.FW), together with a failure-onset threshold (U.sub.FAIL) corresponding to a normalized value of a composite measure (C) of dissimilarity, and providing a visual or audible indication of failure forewarning and/or failure onset to a human observer. Another improvement relates to symbolization of the data according to the binary numbers representing the slope between adjacent data points. Another improvement relates to adding measures of dissimilarity based on state-to-state dynamical changes of the system. And still another improvement relates to using Shannon entropy as the measure of condition change in lieu of a connected or unconnected phase space.
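
    A sketch of the slope symbolization and entropy measure described above; the window length, word length, and forewarning/failure-onset thresholds are illustrative, and the patent's composite dissimilarity measure C is not reproduced here:

    ```python
    # Sketch: symbolize a signal by the sign of the slope between adjacent
    # samples, build overlapping binary words, compute their Shannon entropy,
    # and compare a normalized condition-change measure against thresholds.
    import math
    from collections import Counter

    def slope_symbols(x):
        """1 if the signal rose between adjacent samples, else 0."""
        return [1 if b > a else 0 for a, b in zip(x, x[1:])]

    def shannon_entropy(symbols, word_len=4):
        """Entropy (bits) of overlapping binary words built from slope symbols."""
        words = [tuple(symbols[i:i + word_len]) for i in range(len(symbols) - word_len + 1)]
        counts = Counter(words)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def condition_flags(measure, u_fw=0.8, u_fail=0.95):
        """Compare a normalized condition-change measure against the
        forewarning and failure-onset thresholds (U_FW, U_FAIL)."""
        return {"forewarning": measure >= u_fw, "failure_onset": measure >= u_fail}
    ```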

  7. Improving the Estimates of International Space Station (ISS) Induced K-Factor Failure Rates for On-Orbit Replacement Unit (ORU) Supportability Analyses

    NASA Technical Reports Server (NTRS)

    Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.

    2009-01-01

    This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic that leverages operational experience data by aggregating ORU, associated-function (vehicle sub-system), and vehicle "effective" k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements from the heuristic methods and potential benefits to ISS supportability engineering analysis.

  8. Earthquake triggering by transient and static deformations

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.

    1998-01-01

    Observational evidence for both static and transient near-field and far-field triggered seismicity are explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study a triggered earthquake is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the earthquake cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history. Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress at the initiation of failure, whereas static loads that are applied sufficiently late raise it. Rate-and-state friction predictions differ markedly from those based on Coulomb failure stress changes (ΔCFS), in which Δt equals the amplitude of the static stress change divided by the background stressing rate. The ΔCFS model assumes a stress failure threshold, while the rate-and-state equations require a slip failure threshold. The complete rate-and-state equations predict larger Δt than the ΔCFS model does for static stress steps at small t0, and smaller Δt than the ΔCFS model for stress steps at large t0. The ΔCFS model predicts nonzero Δt only for transient loads that raise the stress to failure stress levels during the transient. In contrast, the rate-and-state model predicts nonzero Δt for smaller loads, and triggered failure may occur well after the transient is finished. We consider heuristically the effects of triggering on a population of faults, as these effects might be evident in seismicity data. Triggering is manifest as an initial increase in seismicity rate that may be followed by a quiescence or by a return to the background rate. Available seismicity data are insufficient to discriminate whether triggered earthquakes are "new" or clock advanced.
However, if triggering indeed results from advancing the failure time of inevitable earthquakes, then our modeling suggests that a quiescence always follows transient triggering and that the duration of increased seismicity also cannot exceed the duration of a triggering transient load. Quiescence follows static triggering only if the population of available faults is finite.
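
    For reference, the clock-advance relation of the ΔCFS model as stated above, written out in symbols (τ̇ denotes the constant background stressing rate):

    ```latex
    \Delta t \;=\; \frac{\Delta \mathrm{CFS}}{\dot{\tau}}
    ```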

  9. Modeling the roles of damage accumulation and mechanical healing on rainfall-induced landslides

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2014-05-01

    The abrupt release of rainfall-induced shallow landslides is preceded by local failures that may abruptly coalesce and form a continuous failure plane within a hillslope. The mechanical status of a hillslope reflects a competition between the severity of local damage accumulated during prior rainfall events and the rate of mechanical healing (i.e., regaining of strength) by closure of micro-cracks, regrowth of roots, etc. The interplay of these processes affects the initial conditions for landslide modeling and shapes potential failure patterns during future rainfall events. We incorporated these competing mechanical processes in a hydro-mechanical landslide triggering model subjected to a sequence of rainfall scenarios. The model employs the Fiber Bundle Model (FBM), in which bundles of fibers with prescribed strength thresholds link adjacent soil columns and link the soil to the bedrock. Prior damage is represented by a fraction of fibers broken during previous rainfall events, and the healing of broken fibers is described by strength-regaining models for soil and roots with different characteristic time scales. Results show that prior damage and healing introduce a highly nonlinear response to landslide triggering. For small prior damage, mechanical bonds at the soil-bedrock interface may fail early in the next rainfall event but impose only small perturbations on lateral bonds without triggering a landslide. For more severe damage that weakens lateral bonds, excess load due to failure at the soil-bedrock interface accumulates at downslope soil columns, resulting in early soil failure with patterns strongly correlated with the prior damage distribution. Increasing prior damage over the hillslope decreases the volume of the first landslide and prolongs the time needed to trigger the second landslide due to mechanical relaxation of the system. The mechanical healing of fibers diminishes the effects of prior damage on the time of failure and shortens the waiting time between the first and second landslides. These findings highlight the need for improved definition of initial conditions and the shortcomings of assuming pristine hillslopes.
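
    A generic equal-load-sharing fiber bundle sketch (not the authors' hillslope model; the strength distribution, prior-damage fraction, and load are illustrative): fibers with random strength thresholds share a load, broken fibers shed their share onto the survivors, and prior damage is imposed by breaking a fraction of fibers before loading.

    ```python
    # Sketch: iterate equal load sharing until either no new fibers break
    # (stable, partially damaged bundle) or all fibers fail (abrupt release).
    import numpy as np

    def surviving_fraction(total_load, n_fibers=10_000, prior_damage=0.1, seed=3):
        rng = np.random.default_rng(seed)
        strengths = rng.uniform(0.0, 1.0, n_fibers)
        intact = rng.random(n_fibers) >= prior_damage        # damage from earlier events
        while True:
            n_intact = intact.sum()
            if n_intact == 0:
                return 0.0                                    # complete failure (release)
            load_per_fiber = total_load / n_intact
            newly_broken = intact & (strengths < load_per_fiber)
            if not newly_broken.any():
                return n_intact / n_fibers                    # stable configuration reached
            intact &= ~newly_broken

    frac = surviving_fraction(total_load=2000.0)              # stable partial damage here
    ```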

  10. Spatially implicit approaches to understand the manipulation of mating success for insect invasion management

    Treesearch

    Takehiko Yamanaka; Andrew M. Liebhold

    2009-01-01

    Recent work indicates that Allee effects (the positive relationship between population size and per capita growth rate) are critical in determining the successful establishment of invading species. Allee effects may create population thresholds, and failure to establish is likely if invading populations fall below these thresholds. There are many mechanisms that may...

  11. Laboratory Tests for Diagnosis of Chronic Periprosthetic Joint Infection Can Help Predict Outcomes of Two-Stage Exchange.

    PubMed

    Dwyer, Maureen K; Damsgaard, Christopher; Wadibia, Jason; Wong, Gordon; Lazar, Damien; Smith, Eric; Talmo, Carl; Bedair, Hany

    2018-06-20

    Although 2-stage exchange arthroplasty is the most effective treatment among available strategies for managing chronic periprosthetic joint infection (PJI), rates of its success vary greatly. The purpose of our study was to examine whether objective measurements collected at the time of the diagnosis of PJI could be used to identify patients at risk of failure of 2-stage exchange. We identified 205 patients across 4 institutions who underwent 2-stage exchange arthroplasty for the treatment of PJI following total hip or total knee arthroplasty. Demographic, surgical, and laboratory data were obtained for each patient from their medical chart. Laboratory values included serum erythrocyte sedimentation rate (ESR), serum C-reactive protein (CRP) level, synovial fluid white blood-cell (WBC) count and neutrophil percentage, synovial fluid and/or tissue culture, and Gram stain. Patients who underwent revision surgery for recurrent infection were considered to have failed the 2-stage procedure. Demographic, surgical, and laboratory variables were compared between the 2 groups. Receiver operating characteristic (ROC) curves were constructed to determine threshold cutoffs for significant laboratory values. Risk ratios and 95% confidence intervals were calculated. Overall, 2-stage exchange was unsuccessful for 27.3% of the patients. Preoperative serum ESR (p = 0.035) and synovial fluid WBC count (p = 0.008) and neutrophil percentage (p = 0.041) were greater in patients with recurrent infection. ROC curve analysis revealed a threshold of >60,000 cells/μL for synovial fluid WBC count, >92% for synovial fluid WBC neutrophil percentage, and >99 mm/hr for serum ESR. Failure of 2-stage exchange was 2.5 times more likely for patients with an elevated preoperative synovial fluid WBC count, 2.0 times more likely for those with an elevated preoperative synovial fluid WBC neutrophil percentage, and 1.8 times more likely for those with an elevated preoperative serum ESR. Our results demonstrated that a greater number of patients in whom 2-stage exchange arthroplasty ultimately failed had a preoperative synovial fluid WBC count of >60,000 cells/μL, a synovial fluid WBC neutrophil percentage of >92%, or a serum ESR of >99 mm/hr. Patients with elevated laboratory values had 1.8 to 2.5 times the risk of treatment failure. These data can serve as a clinical guideline to identify patients most at risk for failure of 2-stage exchange. Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.
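
The threshold and risk-ratio calculations described in the abstract follow standard recipes; a minimal sketch on synthetic data (not the study's data) is shown below. The ROC-based cutoff uses a Youden-style optimum and the 95% CI uses the usual log-transform method; all numbers are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic stand-ins for the study's measurements (values are illustrative only).
rng = np.random.default_rng(1)
failed = rng.integers(0, 2, 200)                          # 1 = two-stage exchange failed
wbc = np.where(failed == 1,
               rng.normal(70_000, 20_000, 200),           # synovial WBC counts, cells/µL
               rng.normal(40_000, 15_000, 200))

# Threshold selection: ROC curve with a Youden-style optimum (max TPR - FPR).
fpr, tpr, thresholds = roc_curve(failed, wbc)
cutoff = thresholds[np.argmax(tpr - fpr)]

# Risk ratio of failure above vs below the cutoff, with a 95% CI (log method).
above, below = wbc > cutoff, wbc <= cutoff
r1, r0 = failed[above].mean(), failed[below].mean()
rr = r1 / r0
se = np.sqrt((1 - r1) / failed[above].sum() + (1 - r0) / failed[below].sum())
ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se)
print(f"cutoff ≈ {cutoff:.0f} cells/µL, RR = {rr:.2f}, 95% CI {ci.round(2)}")
```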

  12. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ, or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations, such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps the assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  13. Effect of Tritium on Cracking Threshold in 7075 Aluminum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, A.; Morgan, M.

    The effect of long-term exposure to tritium gas on the cracking threshold (KTH) of 7075 Aluminum Alloy was investigated. The alloy is the material of construction for a cell used to contain tritium in an accelerator at Jefferson Laboratory designed for inelastic scattering experiments on nucleons. The primary safety concern for the Jefferson Laboratory tritium cell is a tritium leak due to mechanical failure of windows from hydrogen isotope embrittlement, radiation damage, or loss of target integrity from accidental excessive beam heating due to failure of the raster or a grossly mis-steered beam. Experiments were conducted to investigate the potential for embrittlement of the 7075 Aluminum alloy from tritium gas.

  14. Retrospective analysis of reasons for failure of DDD pacemaker implantation in patients operated on between 1993 and 2005.

    PubMed

    Lelakowski, Jacek; Majewski, Jacek; Małecka, Barbara; Bednarek, Jacek; Stypuła, Paweł; Szeglowski, Marcin

    2007-01-01

    During implantation of a DDD pacemaker the following difficulties may be encountered: venous anomalies (the absence of vessels of adequate calibre or difficulty in subclavian vein puncture), arrhythmias during implantation (episodes of atrial flutter/fibrillation while the atrial leads are being positioned), lack of mechanical stability of the electrode in the heart chamber and inability to achieve an acceptable pacing and sensing threshold during implantation. The purpose of the study was to analyse retrospectively the reasons for DDD pacemaker failure in patients operated on between 1993 and 2005. We reviewed retrospectively all implantation data from 1988 to 2005 to identify patients with primary failure of DDD pacemaker implantation. Further analysis included patients who had received a DDD pacemaker between 1993 and 2005, when this type of pacemaker made up between 9 and 40% of all pacemaker implantations. We implanted 7469 pacemakers, including 1958 (26.2%) dual-chamber pacemakers, in 783 patients with atrioventricular block (AVB), 392 with sick sinus syndrome (SSS), 450 with AVB +/- SSS and 333 with tachy-brady syndrome (TBS). The mean age of the patients was 65.5 +/- 17.3 years. DDD pacing was unsuccessful in 108 (1.4%) patients, including 32 with AVB, 22 with SSS, 16 with SSS +/- AVB and 38 with TBS. The mean age of these patients was 78.5 +/- 19.4 years. The reasons for failed implantation were venous anomalies in 12%, an arrhythmia episode in 27.8%, a high pacing threshold in the atrium in 17.6%, low atrial potential amplitude in 25.9% and lack of mechanical stability of the electrode in 16.7% of patients. The difficulties were encountered in elderly patients (p < 0.01), most frequently in patients with SSS and TBS (71). Between 2004 and 2005 venous anomalies and a high pacing threshold were the main causes of failure. Currently the main difficulties encountered during pacemaker implantation are venous anomalies and a high pacing threshold. Arrhythmia episodes, low atrial potential amplitude and lack of mechanical stability are of minor importance. Elderly patients with sick sinus syndrome and tachy-brady syndrome have the highest failure rate. (Cardiol J 2007; 14: 155-159).

  15. SES, Heart Failure, and N-terminal Pro-b-type Natriuretic Peptide: The Atherosclerosis Risk in Communities Study.

    PubMed

    Vart, Priya; Matsushita, Kunihiro; Rawlings, Andreea M; Selvin, Elizabeth; Crews, Deidra C; Ndumele, Chiadi E; Ballantyne, Christie M; Heiss, Gerardo; Kucharska-Newton, Anna; Szklo, Moyses; Coresh, Josef

    2018-02-01

    Compared with coronary heart disease and stroke, the association between SES and the risk of heart failure is less well understood. In 12,646 participants of the Atherosclerosis Risk in Communities Study cohort free of heart failure history at baseline (1987-1989), the association of income, educational attainment, and area deprivation index with subsequent heart failure-related hospitalization or death was examined while accounting for cardiovascular disease risk factors and healthcare access. Because SES may affect threshold of identifying heart failure and admitting for heart failure management, secondarily the association between SES and N-terminal pro-b-type natriuretic peptide (NT-proBNP) levels, a marker reflecting cardiac overload, was investigated. Analysis was conducted in 2016. During a median follow-up of 24.3 years, a total of 2,249 participants developed heart failure. In a demographically adjusted model, the lowest-SES group had 2.2- to 2.5-fold higher risk of heart failure compared with the highest SES group for income, education, and area deprivation. With further adjustment for time-varying cardiovascular disease risk factors and healthcare access, these associations were attenuated but remained statistically significant (e.g., hazard ratio=1.92, 95% CI=1.69, 2.19 for the lowest versus highest income), with no racial interaction (p>0.05 for all SES measures). Similarly, compared with high SES, low SES was associated with both higher baseline level of NT-proBNP in a multivariable adjusted model (15% higher, p<0.001) and increase over time (~1% greater per year, p=0.023). SES was associated with clinical heart failure as well as NT-proBNP levels inversely and independently of traditional cardiovascular disease factors and healthcare access. Copyright © 2018 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  16. Continuous Particulate Filter State of Health Monitoring Using Radio Frequency Sensing

    DOE PAGES

    Sappok, Alexander; Ragaller, Paul; Herman, Andrew; ...

    2018-04-03

    Reliable means for on-board detection of particulate filter failures or malfunctions are needed to meet on-board diagnostics (OBD) requirements. Detecting these failures, which result in tailpipe particulate matter (PM) emissions exceeding the OBD limit, over all operating conditions is challenging. Current approaches employ differential pressure sensors and downstream PM sensors, in combination with particulate filter and engine-out soot models. These conventional monitors typically operate over narrowly-defined time windows and do not provide a direct measure of the filter’s state of health. In contrast, radio frequency (RF) sensors, which transmit a wireless signal through the filter substrate, provide a direct means for interrogating the condition of the filter itself. This study investigated the use of RF sensors for the continuous measurement of filter trapping efficiency, which was compared to downstream measurements with an AVL Microsoot Sensor and a PM sampling probe simulating the geometry and installation configuration of a conventional PM sensor. The study included several particulate filter failure modes, both above and below the OBD threshold. The results confirmed the use of RF sensors to provide a direct and continuous measure of the particulate filter’s state of health over a range of typical in-use operating conditions, thereby significantly increasing the time window over which filter failures may be detected.

  17. Continuous Particulate Filter State of Health Monitoring Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alexander; Ragaller, Paul; Herman, Andrew

    Reliable means for on-board detection of particulate filter failures or malfunctions are needed to meet on-board diagnostics (OBD) requirements. Detecting these failures, which result in tailpipe particulate matter (PM) emissions exceeding the OBD limit, over all operating conditions is challenging. Current approaches employ differential pressure sensors and downstream PM sensors, in combination with particulate filter and engine-out soot models. These conventional monitors typically operate over narrowly-defined time windows and do not provide a direct measure of the filter’s state of health. In contrast, radio frequency (RF) sensors, which transmit a wireless signal through the filter substrate, provide a direct means for interrogating the condition of the filter itself. This study investigated the use of RF sensors for the continuous measurement of filter trapping efficiency, which was compared to downstream measurements with an AVL Microsoot Sensor and a PM sampling probe simulating the geometry and installation configuration of a conventional PM sensor. The study included several particulate filter failure modes, both above and below the OBD threshold. The results confirmed the use of RF sensors to provide a direct and continuous measure of the particulate filter’s state of health over a range of typical in-use operating conditions, thereby significantly increasing the time window over which filter failures may be detected.

  18. Corona-vacuum failure mechanism test facilities

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.

    1975-01-01

    A nondestructive corona-vacuum test facility for testing high-voltage power system components has been developed using commercially available hardware. The facility simulates operating temperature and vacuum while monitoring coronal discharges with residual gases. Corona threshold voltages obtained from statorette tests with various gas-solid dielectric systems and comparison with calculated data support the following conclusions: (1) air gives the highest corona threshold voltage and helium the lowest, with argon and helium-xenon mixtures intermediate; (2) corona threshold voltage increases with gas pressure; (3) corona threshold voltage for an armature winding can be accurately calculated by using Paschen curves for a uniform field; and (4) Paschen curves for argon can be used to calculate the corona threshold voltage in He-Xe mixtures, for which Paschen curves are unavailable.
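
Conclusion (3) above relies on Paschen curves; as a point of reference, the textbook Paschen relation can be evaluated directly. The constants below (A and B for air in Torr·cm units, and the secondary-emission coefficient γ) are commonly quoted illustrative values, not values taken from the report.

```python
import numpy as np

def paschen_voltage(pd_torr_cm, A=15.0, B=365.0, gamma=0.01):
    """Textbook Paschen relation V = B*pd / (ln(A*pd) - ln(ln(1 + 1/gamma))).
    A, B (air, Torr*cm units) and gamma are illustrative textbook values."""
    pd = np.asarray(pd_torr_cm, dtype=float)
    return B * pd / (np.log(A * pd) - np.log(np.log(1.0 + 1.0 / gamma)))

for pd in (0.5, 1.0, 5.0, 10.0):          # pressure-gap product in Torr*cm
    print(f"pd = {pd:5.1f} Torr*cm -> breakdown ≈ {paschen_voltage(pd):7.0f} V")
```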

  19. Formation of the 2015 Shenzhen landslide as observed by SAR shape-from-shading.

    PubMed

    Wang, Chisheng; Li, Qingquan; Zhu, Jiasong; Gao, Wei; Shan, Xinjian; Song, Jun; Ding, Xiaoli

    2017-03-03

    The time-series topography change of a landfill site before its failure has rarely been surveyed in detail. However, this information is important for both landfill management and early warning of landslides. Here, we take the 2015 Shenzhen landslide as an example, and we use the radar shape-from-shading (SFS) technique to retrieve time-series digital elevation models of the landfill. The results suggest that the total filling volume reached 4,074,300 m3 in the one and a half years before the landslide, while 2,817,400 m3 slid down in the accident. Meanwhile, the landfill rate in most areas exceeded 2 m/month, which is the empirical upper threshold in landfill engineering. Using topography captured on December 12, 2015, the slope safety analysis gives a factor of safety of 0.932, suggesting that this slope was already hazardous before the landslide. We conclude that the synthetic aperture radar (SAR) SFS technique has the potential to contribute to landfill failure monitoring.

  20. Modeling and analyzing cascading dynamics of the Internet based on local congestion information

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang

    2018-06-01

    Cascading failure has become one of the vital issues in network science. By considering realistic network operational settings, we propose a congestion function to represent the congested extent of a node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value that makes the network achieve a maximum level of robustness. The robustness of the network is positively correlated with the tolerance parameter, but negatively correlated with the packet generation rate. In addition, there exists a threshold of the attacking proportion of nodes that makes the network achieve its lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we find that an increase in the time delay rapidly decreases the robustness of the network. The findings of the paper will be useful for enhancing the robustness of the Internet in the future.

  1. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  2. Formation of the 2015 Shenzhen landslide as observed by SAR shape-from-shading

    PubMed Central

    Wang, Chisheng; Li, Qingquan; Zhu, Jiasong; Gao, Wei; Shan, Xinjian; Song, Jun; Ding, Xiaoli

    2017-01-01

    The time-series topography change of a landfill site before its failure has rarely been surveyed in detail. However, this information is important for both landfill management and early warning of landslides. Here, we take the 2015 Shenzhen landslide as an example, and we use the radar shape-from-shading (SFS) technique to retrieve time-series digital elevation models of the landfill. The results suggest that the total filling volume reached 4,074,300 m3 in the one and a half years before the landslide, while 2,817,400 m3 slid down in the accident. Meanwhile, the landfill rate in most areas exceeded 2 m/month, which is the empirical upper threshold in landfill engineering. Using topography captured on December 12, 2015, the slope safety analysis gives a factor of safety of 0.932, suggesting that this slope was already hazardous before the landslide. We conclude that the synthetic aperture radar (SAR) SFS technique has the potential to contribute to landfill failure monitoring. PMID:28256522

  3. The impact of vaccine failure rate on epidemic dynamics in responsive networks.

    PubMed

    Liang, Yu-Hao; Juang, Jonq

    2015-04-01

    An SIS model based on the microscopic Markov-chain approximation is considered in this paper. It is assumed that individual vaccination behavior depends on contact awareness and on local and global information about the epidemic. To better simulate the real situation, the vaccine failure rate is also taken into consideration. Our main conclusions are as follows. First, we show that if the vaccine failure rate α is zero, then the epidemic eventually dies out regardless of what the network structure is or how large the effective spreading rate and the immunization response rates of the epidemic are. Second, we show that for any positive α, there exists a positive epidemic threshold depending on an adjusted network structure, which is determined only by the structure of the original network, the positive vaccine failure rate and the immunization response rate for contact awareness. Moreover, the epidemic threshold increases with the strength of the immunization response rate for contact awareness. Finally, if the vaccine failure rate and the immunization response rate for contact awareness are positive, then there exists a critical vaccine failure rate αc > 0 such that the disease-free equilibrium (DFE) is stable (resp., unstable) if α < αc (resp., α > αc). Numerical simulations illustrating the effectiveness of our theoretical results are also provided.
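
For comparison with the adjusted-network threshold described above, the baseline spectral epidemic threshold for a plain SIS model on a network (1/λ_max of the adjacency matrix) can be computed directly. The sketch below shows only that baseline quantity; the paper's threshold, which additionally folds in the vaccine failure rate and awareness responses, is not reproduced here, and the graph and rates are illustrative.

```python
import numpy as np
import networkx as nx

# Baseline spectral SIS threshold: an epidemic persists (roughly) when beta/delta > 1/lambda_max.
G = nx.barabasi_albert_graph(500, 3, seed=0)     # illustrative network
A = nx.to_numpy_array(G)
lam_max = np.max(np.linalg.eigvalsh(A))
tau_c = 1.0 / lam_max

beta, delta = 0.05, 0.5                          # infection and recovery rates (illustrative)
print(f"spectral threshold 1/lambda_max ≈ {tau_c:.4f}; effective ratio beta/delta = {beta/delta:.3f}")
print("epidemic persists" if beta / delta > tau_c else "epidemic dies out")
```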

  4. Network problem threshold

    NASA Technical Reports Server (NTRS)

    Gejji, Raghvendra R.

    1992-01-01

    Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
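
The report does not spell out its threshold derivation here, but the underlying idea can be sketched: model collision counts per interval as a random variable (Poisson is assumed below purely for illustration), estimate the normal rate from history, and flag a probable network problem when an observed count exceeds a high quantile.

```python
from scipy.stats import poisson

# Minimal sketch of a statistical "network problem" threshold; the Poisson model,
# the history, and the 0.999 quantile are assumptions for illustration only.
history = [3, 5, 2, 4, 6, 3, 5, 4, 2, 5]          # collisions per minute, illustrative
rate = sum(history) / len(history)
threshold = poisson.ppf(0.999, rate)              # exceeded by chance only ~0.1% of the time

observed = 14
if observed > threshold:
    print(f"{observed} collisions > threshold {threshold:.0f}: probable network problem")
```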

  5. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a non-Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
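
A concrete illustration of the Omori-like decay and return to the background rate discussed above is the Dieterich-type seismicity-rate response to a static stress step, R(t)/r0 = 1 / [(exp(-Δτ/aσ) - 1) exp(-t/ta) + 1] with ta = aσ/τ̇ [Dieterich, 1994]. The sketch below simply evaluates that expression; the parameter values are arbitrary and are not taken from the paper.

```python
import numpy as np

# Dieterich-style seismicity-rate response to a static Coulomb stress step.
# a_sigma, the stressing rate, and the stress step are illustrative assumptions.
a_sigma = 0.01          # MPa, constitutive parameter a times normal stress
tau_dot = 0.001         # MPa/yr, background (tectonic) stressing rate
d_tau = 0.05            # MPa, coseismic static stress step
t_a = a_sigma / tau_dot # aftershock duration scale (years)

t = np.linspace(0.0, 5 * t_a, 6)
rate_ratio = 1.0 / ((np.exp(-d_tau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
for ti, ri in zip(t, rate_ratio):
    print(f"t = {ti:5.1f} yr  r(t)/r0 = {ri:8.2f}")   # decays from a large jump back to 1
```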

  6. New early warning system for gravity-driven ruptures based on codetection of acoustic signal

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.

    2016-12-01

    Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging due to the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the characteristics of acoustic emission attenuation is proposed. This new method capitalizes on the codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Event codetection is considered a surrogate for large event size, with more frequent codetected events (i.e., detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model based on a Fiber Bundle Model, considering signal attenuation and hypothetical arrays of sensors, confirms the early warning potential of the codetection principle. Results suggest that although the statistical properties of attenuated signal amplitudes could lead to misleading results, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at the lab scale. The applicability of such a simple and cheap early warning system is now being investigated at a larger scale (hillslope). First results of such a pilot field experiment are presented and analysed.

  7. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences with other techniques used in systems health management and with philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of the constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning, will be explained. A discussion of performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.
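
The "failure threshold" element mentioned above is the simplest piece to illustrate: fit a trend to a monitored degradation signal and extrapolate the time at which it crosses the threshold; the difference from the current time is the remaining useful life. The sketch below uses a linear trend and made-up data purely for illustration; real prognostic algorithms are considerably richer.

```python
import numpy as np

# Minimal threshold-based prognostics sketch: data, model, and threshold are illustrative.
time = np.array([0, 10, 20, 30, 40, 50])                   # operating hours
health = np.array([1.00, 0.93, 0.88, 0.80, 0.74, 0.66])    # normalised health index
failure_threshold = 0.30

slope, intercept = np.polyfit(time, health, 1)              # simple linear degradation trend
t_fail = (failure_threshold - intercept) / slope             # predicted threshold crossing
rul = t_fail - time[-1]                                      # remaining useful life from now
print(f"predicted failure at ~{t_fail:.0f} h, RUL ≈ {rul:.0f} h")
```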

  8. Robustness of assembly supply chain networks by considering risk propagation and cascading failure

    NASA Astrophysics Data System (ADS)

    Tang, Liang; Jing, Ke; He, Jie; Stanley, H. Eugene

    2016-10-01

    An assembly supply chain network (ASCN) is composed of manufacturers located in different geographical regions. To analyze the robustness of this ASCN when it suffers from catastrophe disruption events, we construct a cascading failure model of risk propagation. In our model, different disruption scenarios s are considered and the probability equation of all disruption scenarios is developed. Using production capability loss as the robustness index (RI) of an ASCN, we conduct a numerical simulation to assess its robustness. Through simulation, we compare the network robustness at different values of linking intensity and node threshold and find that weak linking intensity or high node threshold increases the robustness of the ASCN. We also compare network robustness levels under different disruption scenarios.

  9. Estimating Time to ESRD Using Kidney Failure Risk Equations: Results From the African American Study of Kidney Disease and Hypertension (AASK)

    PubMed Central

    Grams, Morgan E.; Li, Liang; Greene, Tom H.; Tin, Adrienne; Sang, Yingying; Kao, W.H. Linda; Lipkowitz, Michael S.; Wright, Jackson T.; Chang, Alex R.; Astor, Brad C.; Appel, Lawrence J.

    2014-01-01

    Background Planning for renal replacement therapy, such as referral for arteriovenous fistula placement and transplantation, is often guided by level of estimated glomerular filtration rate (eGFR). The use of risk equations might enable more accurate estimation of time to end-stage renal disease (ESRD), thus improving patient care. Study Design Prospective observational study. Setting & Participants 1,094 participants in the African-American Study of Kidney Disease and Hypertension (AASK) cohort. Predictor Age, sex, urine protein-creatinine ratio ≥1 g/g, APOL1 high-risk status, and 3-year antecedent eGFR decline. Outcome Cumulative incidence of ESRD from five different starting points: eGFR values of 30 and 15 ml/min/1.73 m2, and a 5%, 10%, and 20% 1-year ESRD risk, estimated by a published, 4-variable kidney failure risk equation. Results There were 566 participants who developed an eGFR of 30 ml/min/1.73 m2, 244 who developed eGFR of 15 ml/min/1.73 m2, and 437, 336, and 259 who developed a 5%, 10%, and 20% 1-year ESRD risk, respectively. The 1-year cumulative incidence of ESRD was 4.3% from eGFR 30 ml/min/1.73 m2, 49.0% from eGFR 15 ml/min/1.73 m2, 6.7% from 5% ESRD risk, 15.0% from 10% ESRD risk, and 29% from 20% ESRD risk. From eGFR 30 ml/min/1.73 m2, there were several risk factors that predicted ESRD risk. From eGFR 15 ml/min/1.73 m2, only level of proteinuria did; median time to ESRD was 9 and 19 months in those with higher and lower proteinuria, respectively. Median times were less variable from corresponding ESRD risk thresholds. For example, median time to ESRD from 20% ESRD risk was 22 and 25 months among those with higher and lower proteinuria, respectively. Limitations Relatively homogeneous population of African Americans with hypertensive kidney disease. Conclusions The results of the present study suggest the potential benefit of incorporating kidney failure risk equations into clinical care, with selection of a specific threshold guided by its intended use. PMID:25441435

  10. Porting Initiation and Failure to Linked Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Souers, P C

    2007-07-18

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size, but does not work on other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At a high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate-sticks, corner-turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration.

  11. Cascading failures with local load redistribution in interdependent Watts-Strogatz networks

    NASA Astrophysics Data System (ADS)

    Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol

    2016-05-01

    Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks firstly decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases with the increment of the coupling strength until it reaches a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.

  12. Extreme Threshold Failures Within a Heterogeneous Elastic Thin Sheet and the Spatial-Temporal Development of Induced Seismicity Within the Groningen Gas Field

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.

    2017-12-01

    Measurements of the strains and earthquakes induced by fluid extraction from a subsurface reservoir reveal a transient, exponential-like increase in seismicity relative to the volume of fluids extracted. If the frictional strength of these reactivating faults is heterogeneously and randomly distributed, then progressive failures of the weakest fault patches account in a general manner for this initial exponential-like trend. Allowing for the observable elastic and geometric heterogeneity of the reservoir, the spatiotemporal evolution of induced seismicity over 5 years is predictable without significant bias using a statistical physics model of poroelastic reservoir deformations inducing extreme threshold frictional failures of previously inactive faults. This model is used to forecast the temporal and spatial probability density of earthquakes within the Groningen natural gas reservoir, conditional on future gas production plans. Probabilistic seismic hazard and risk assessments based on these forecasts inform the current gas production policy and building strengthening plans.

  13. A single CD4 test with 250 cells/mm3 threshold predicts viral suppression in HIV-infected adults failing first-line therapy by clinical criteria.

    PubMed

    Gilks, Charles F; Walker, A Sarah; Munderi, Paula; Kityo, Cissy; Reid, Andrew; Katabira, Elly; Goodall, Ruth L; Grosskurth, Heiner; Mugyenyi, Peter; Hakim, James; Gibb, Diana M

    2013-01-01

    In low-income countries, viral load (VL) monitoring of antiretroviral therapy (ART) is rarely available in the public sector for HIV-infected adults or children. Using clinical failure alone to identify first-line ART failure and trigger regimen switch may result in unnecessary use of costly second-line therapy. Our objective was to identify CD4 threshold values to confirm clinically-determined ART failure when VL is unavailable. 3316 HIV-infected Ugandan/Zimbabwean adults were randomised to first-line ART with Clinically-Driven (CDM, CD4s measured but blinded) or routine Laboratory and Clinical Monitoring (LCM, 12-weekly CD4s) in the DART trial. CD4 at switch and ART failure criteria (new/recurrent WHO 4, single/multiple WHO 3 event; LCM: CD4<100 cells/mm(3)) were reviewed in 361 LCM, 314 CDM participants who switched over median 5 years follow-up. Retrospective VLs were available in 368 (55%) participants. Overall, 265/361 (73%) LCM participants failed with CD4<100 cells/mm(3); only 7 (2%) switched with CD4≥250 cells/mm(3), four switches triggered by WHO events. Without CD4 monitoring, 207/314 (66%) CDM participants failed with WHO 4 events, and 77(25%)/30(10%) with single/multiple WHO 3 events. Failure/switching with single WHO 3 events was more likely with CD4≥250 cells/mm(3) (28/77; 36%) (p = 0.0002). CD4 monitoring reduced switching with viral suppression: 23/187 (12%) LCM versus 49/181 (27%) CDM had VL<400 copies/ml at failure/switch (p<0.0001). Amongst CDM participants with CD4<250 cells/mm(3) only 11/133 (8%) had VL<400 copies/ml, compared with 38/48 (79%) with CD4≥250 cells/mm(3) (p<0.0001). Multiple, but not single, WHO 3 events predicted first-line ART failure. A CD4 threshold 'tiebreaker' of ≥250 cells/mm(3) for clinically-monitored patients failing first-line could identify ∼80% with VL<400 copies/ml, who are unlikely to benefit from second-line. Targeting CD4s to single WHO stage 3 'clinical failures' would particularly avoid premature, costly switch to second-line ART.

  14. Influence of Climate Change on Flood Hazard using Climate Informed Bayesian Hierarchical Model in Johnson Creek River

    NASA Astrophysics Data System (ADS)

    Zarekarizi, M.; Moradkhani, H.

    2015-12-01

    Extreme events have been shown to be affected by climate change, which influences hydrologic simulations for which stationarity is usually a main assumption. Studies have argued that this assumption leads to large bias in model estimates and, consequently, to higher flood hazard. Motivated by the importance of non-stationarity, we determined how exceedance probabilities have changed over time in Johnson Creek River, Oregon. This can help estimate the probability of failure of a structure that was designed, following common practice, to resist less likely floods. We therefore built a climate-informed Bayesian hierarchical model in which non-stationarity was considered in the modeling framework. Principal component analysis shows that the North Atlantic Oscillation (NAO), Western Pacific Index (WPI) and Eastern Asia (EA) indices most strongly affect streamflow in this river. We modeled flood extremes using the peaks-over-threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because it allows the model to be based on more information. We used available threshold selection methods to select a suitable threshold for the study area. Accounting for non-stationarity, model parameters vary through time with the climate indices. We developed several model scenarios and chose the one that best explained the variation in the data based on performance measures. We also estimated return periods under non-stationary conditions. Results show that ignoring non-stationarity could underestimate the flood hazard by up to a factor of four, which could increase the probability of an in-stream structure being overtopped.
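
For readers unfamiliar with the POT approach mentioned above, a minimal stationary sketch is shown below: fit a generalized Pareto distribution to exceedances over a chosen threshold and convert it to a return level. The study's climate-informed, non-stationary Bayesian hierarchical model is far richer; this only shows the basic machinery it builds on, and the synthetic data, threshold choice, and return period are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

# Stationary peaks-over-threshold sketch on synthetic daily flows (illustrative only).
rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=50.0, size=30 * 365)     # ~30 years of daily flows
threshold = np.quantile(flows, 0.98)                        # threshold choice is an assumption
exceed = flows[flows > threshold] - threshold

c, loc, scale = genpareto.fit(exceed, floc=0.0)             # GPD shape and scale
rate = len(exceed) / 30.0                                   # exceedances per year
T = 100.0                                                   # return period in years
return_level = threshold + genpareto.ppf(1.0 - 1.0 / (rate * T), c, loc=0.0, scale=scale)
print(f"threshold {threshold:.0f}, 100-yr return level ≈ {return_level:.0f}")
```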

  15. Donor hypernatremia before procurement and early outcomes following pediatric liver transplantation.

    PubMed

    Kaseje, Neema; McLin, Valerie; Toso, Christian; Poncet, Antoine; Wildhaber, Barbara E

    2015-08-01

    The demand for transplantable organs far outweighs the supply. Recently, efforts have been made to increase the donor pool by adopting extended criteria for livers, including those from hypernatremic donors. Currently, there is no clear evidence that the use of organs from hypernatremic donors has detrimental effects on pediatric liver transplantation (LT) recipients. Our aim was to use the Scientific Registry of Transplant Recipients database to evaluate the effects of donor hypernatremia on 30-day outcomes in pediatric LT recipients. We performed an analysis of 2325 children who underwent whole or partial LT between 2005 and 2010. First, we sought to determine a donor sodium threshold for increased mortality following pediatric LT. Second, we examined rates of mortality and graft failure at 30 days after LT in patients receiving grafts from hypernatremic donors compared to patients receiving grafts from normonatremic donors. Hypernatremia was defined as a donor sodium level of ≥160 µmol/L. The primary outcome measure was mortality at 30 days after transplant. The secondary outcome measure was graft failure at 30 days after transplant. There was no threshold sodium level for increased 30-day mortality following pediatric LT. Mean recipient ages/weights, Pediatric End-Stage Liver Disease/Model for End-Stage Liver Disease scores, and mean cold and warm ischemia times were similar between the 2 study groups. There were no significant differences in mortality rates (3.9% versus 4.5%; P = 0.87) and graft failure rates (2.2% versus 1.9%; P = 1.00) in patients receiving grafts from hypernatremic donors compared to patients receiving grafts from normonatremic donors at 30 days after LT. In conclusion, donor hypernatremia just before procurement does not appear to have negative effects on mortality and graft failure rates at 30 days following pediatric LT. © 2015 American Association for the Study of Liver Diseases.

  16. Long term real-time monitoring of large alpine rockslides by GB-InSAR: mechanisms, triggers, scenario assessment and Early Warning

    NASA Astrophysics Data System (ADS)

    Crosta, G. B.; Agliardi, F.; Sosio, R.; Rivolta, C.; Leva, D.; Dei Cas, L.

    2012-04-01

    Large rockslides in alpine valleys can undergo catastrophic evolution, posing extraordinary risks to settlements, lives and critical infrastructure. These phenomena are controlled by a complex interplay of lithological, structural, hydrological and meteo-climatic factors, which result in complex triggering mechanisms and kinematics, highly variable activity, and regressive to progressive trends with superimposed acceleration and deceleration periods related to rainfall and snowmelt. Managing large rockslide risk remains challenging due to the high uncertainty in their geological model and dynamics. In this context, the most promising approach to constrain rockslide kinematics, establish correlations with triggering factors, and predict future displacements, velocities and accelerations, and eventually possible final collapse, is the analysis and modelling of long-term series of monitoring data. Beyond traditional monitoring activities, remote sensing is an important tool for describing local rockslide displacements and kinematics, distinguishing rates of activity, and providing real-time data suitable for early warning. We analyze a long-term monitoring dataset collected for a deep-seated rockslide (Ruinon, Lombardy, Italy), actively monitored since 1997 through an in situ monitoring network (topographic and GPS measurements, wire extensometers and distometer baselines) and since 2006 by ground-based radar (GB-InSAR). Monitoring allowed us to set up and update the geological model, identify the rockslide extent and geometry, and analyze its sensitivity to seasonal changes and their impact on the reliability and early-warning potential of the monitoring data. GB-InSAR data allowed us to identify sub-areas with different behaviors associated with outcropping bedrock and thick debris cover, and to set up a "virtual monitoring network" by a posteriori selection of critical locations. The resulting displacement time series provide a large amount of information even in debris-covered areas, where traditional monitoring fails. Such spatially distributed, improved information, validated by selected ground-based measurements, allowed us to establish new velocity thresholds for early-warning purposes. Relationships between rainfall and displacement rates allowed us to identify different possible failure mechanisms and to constrain the applicability of rainfall early-warning thresholds. Comparison with temperature and snowmelt time series allowed us to clarify the sensitivity of the rockslide movement to these controlling factors. Finally, the recognition of the sensitivity to all these factors allowed us to accomplish a more complete hazard assessment by defining different failure scenarios and the associated triggering thresholds.

  17. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
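
The residual-threshold logic described above ("if the differences between the real and expected outputs exceed threshold values, a fault is detected") reduces to a few lines. The sketch below is a generic illustration, not the paper's model-based identification scheme; the model outputs, sensor readings, and 3-sigma threshold are all made up.

```python
import numpy as np

def detect_fault(measured, expected, residual_std, n_sigma=3.0):
    """Flag samples whose residual (measured - expected) exceeds n_sigma * residual_std."""
    residual = measured - expected
    return np.abs(residual) > n_sigma * residual_std

expected = np.array([100.0, 101.0, 102.0, 103.0])   # model-predicted output (illustrative)
measured = np.array([100.4, 100.7, 109.5, 103.2])   # sensor readings, one faulty sample
flags = detect_fault(measured, expected, residual_std=0.5)
print("fault flags:", flags)                         # only the third sample trips the threshold
```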

  18. Relieving dyspnoea by non-invasive ventilation decreases pain thresholds in amyotrophic lateral sclerosis.

    PubMed

    Dangers, Laurence; Laviolette, Louis; Georges, Marjolaine; Gonzalez-Bermejo, Jésus; Rivals, Isabelle; Similowski, Thomas; Morelot-Panzini, Capucine

    2017-03-01

    Dyspnoea is a threatening sensation of respiratory discomfort that presents many similarities with pain. Experimental dyspnoea in healthy subjects induces analgesia. This 'dyspnoea-pain counter-irritation' could, in reverse, imply that relieving dyspnoea in patients with chronic respiratory diseases would lower their pain thresholds. We first determined pressure pain thresholds in 25 healthy volunteers (22-31 years; 13 men; handheld algometer), during unloaded breathing (BASELINE) and during inspiratory threshold loading (ITL). Two levels of loading were used, adjusted to induce dyspnoea self-rated at 60% or 80% of a 10 cm visual analogue scale (ITL6 and ITL8). 18 patients with chronic respiratory failure due to amyotrophic lateral sclerosis (ALS) were then studied during unassisted breathing and after 30 and 60 min of non-invasive ventilation-NIV30 and NIV60-(same dyspnoea evaluation). In healthy volunteers, pressure pain thresholds increased significantly in the deltoid during ITL6 (p<0.05) and ITL8 (p<0.05) and in the trapezius during ITL8 (p<0.05), validating the use of pressure pain thresholds to study dyspnoea-pain counter-irritation. In patients with ALS, the pressure pain thresholds measured in the deltoid during unassisted breathing decreased by a median of 24.5%-33.0% of baseline during NIV30 and NIV60 (p<0.05). Relieving dyspnoea by NIV in patients with ALS having respiratory failure is associated with decreased pressure pain thresholds. Clinical implications have yet to be determined, but this observation suggests that patients with ALS could become more susceptible to pain after the institution of NIV, hence the need for reinforced attention towards potentially painful diagnostic and therapeutic interventions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  19. Forecast-based interventions can reduce the health and economic burden of wildfires.

    PubMed

    Rappold, Ana G; Fann, Neal L; Crooks, James; Huang, Jin; Cascio, Wayne E; Devlin, Robert B; Diaz-Sanchez, David

    2014-09-16

    We simulated public health forecast-based interventions during a wildfire smoke episode in rural North Carolina to show the potential for use of modeled smoke forecasts toward reducing the health burden and showed a significant economic benefit of reducing exposures. Daily and county wide intervention advisories were designed to occur when fine particulate matter (PM2.5) from smoke, forecasted 24 or 48 h in advance, was expected to exceed a predetermined threshold. Three different thresholds were considered in simulations, each with three different levels of adherence to the advisories. Interventions were simulated in the adult population susceptible to health exacerbations related to the chronic conditions of asthma and congestive heart failure. Associations between Emergency Department (ED) visits for these conditions and daily PM2.5 concentrations under each intervention were evaluated. Triggering interventions at lower PM2.5 thresholds (≤ 20 μg/m(3)) with good compliance yielded the greatest risk reduction. At the highest threshold levels (50 μg/m(3)) interventions were ineffective in reducing health risks at any level of compliance. The economic benefit of effective interventions exceeded $1 M in excess ED visits for asthma and heart failure, $2 M in loss of productivity, $100 K in respiratory conditions in children, and $42 million due to excess mortality.

  20. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of penalties or negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models, so any uncertainty in these models can have a significant effect on the decision-making process. This study outlines a risk-based decision-making approach to address water quality failures caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance, with a certain frequency, of a concentration-duration threshold based on Austrian emission standards for ammonia (De Toffol, 2006). For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability rather than the failure probability is tested by comparing the solutions obtained from mean-pbf and mean-pf optimizations.
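
The distinction between the failure probability and the buffered failure probability can be made concrete with a small sample-based sketch: pf is the fraction of samples exceeding the threshold, while pbf is the tail fraction whose conditional mean equals the threshold, so it is never smaller than pf and also reflects how heavy the upper tail is. The data below are synthetic stand-ins for simulated annual counts of threshold-exceeding CSO spill events; this is an illustration of the Rockafellar & Royset (2010) concept, not the study's optimization.

```python
import numpy as np

def buffered_failure_probability(samples, threshold):
    """Tail fraction whose conditional mean equals the threshold (sample-based bPOF)."""
    x = np.sort(np.asarray(samples, dtype=float))[::-1]       # descending
    tail_means = np.cumsum(x) / np.arange(1, len(x) + 1)       # mean of worst k samples
    if threshold >= tail_means[0]:
        return 0.0                                             # threshold above worst case
    if threshold <= tail_means[-1]:
        return 1.0                                             # threshold at/below overall mean
    i = np.searchsorted(-tail_means, -threshold)               # first k-1 with tail mean <= z
    return (i + 1) / len(x)

rng = np.random.default_rng(7)
annual_events = rng.poisson(2.0, 10_000)                       # synthetic annual event counts
z = 5                                                          # hypothetical failure threshold
p_f = np.mean(annual_events > z)
p_bf = buffered_failure_probability(annual_events, z)
print(f"p_f = {p_f:.3f}, buffered p_bf = {p_bf:.3f}  (p_bf is never smaller than p_f)")
```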

  1. Analysis of particulates on tape lift samples

    NASA Astrophysics Data System (ADS)

    Moision, Robert M.; Chaney, John A.; Panetta, Chris J.; Liu, De-Ling

    2014-09-01

    Particle counts on tape lift samples taken from a hardware surface exceeded threshold requirements in six successive tests despite repeated cleaning of the surface. Subsequent analysis of the particle size distributions of the failed tests revealed that the handling and processing of the tape lift samples may have played a role in the test failures. In order to explore plausible causes for the observed size distribution anomalies, scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and time-of-flight secondary ion mass spectrometry (ToF-SIMS) were employed to perform chemical analysis on collected particulates. SEM/EDX identified Na and S containing particles on the hardware samples in a size range identified as being responsible for the test failures. ToF-SIMS was employed to further examine the Na and S containing particulates and identified the molecular signature of sodium alkylbenzene sulfonates, a common surfactant used in industrial detergent. The root cause investigation suggests that the tape lift test failures originated from detergent residue left behind on the glass slides used to mount and transport the tape following sampling and not from the hardware surface.

  2. Monitoring and Early Warning of the 2012 Preonzo Catastrophic Rockslope Failure

    NASA Astrophysics Data System (ADS)

    Loew, Simon; Gschwind, Sophie; Keller-Signer, Alexandra; Valenti, Giorgio

    2015-04-01

    In this contribution we describe the accelerated creep stage and early warning system of a 210'000 m3 rock slope failure that occurred in May 2012 above the village of Preonzo (Swiss Alps). The very rapid failure occurred from a larger and retrogressive instability in high-grade metamorphic ortho-gneisses and amphibolites with a total volume of about 350'000 m3 located at an alpine meadow called Alpe di Roscioro. This instability showed clearly visible signs of movements since 1989 and accelerated creep with significant hydro-mechanical forcing since about 1999. Because the instability at Preonzo threatened a large industrial facility and important transport routes, a cost-effective early warning system was installed in 2010. The alarm thresholds for pre-alarm, general public alarm and evacuation were derived from 10 years of continuous displacement monitoring with crack extensometers and an automated total station. These thresholds were successfully applied to evacuate the industrial facility and close important roads a few days before the catastrophic slope failure of May 15th, 2012. The rock slope failure occurred in two events, exposing a planar rupture plane dipping 42° and generating deposits in the mid-slope portion with a travel angle of 38°. Two hours after the second rockslide, the fresh colluvial deposits became reactivated in a devastating debris avalanche reaching the foot of the slope.

  3. Techniques for Improving Pilot Recovery from System Failures

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy R.

    2001-01-01

    This project examined the application of intelligent cockpit systems to aid air transport pilots at the tasks of reacting to in-flight system failures and of planning and then following a safe four-dimensional trajectory to the runway threshold during emergencies. Two studies were conducted. The first examined pilot performance with a prototype awareness/alerting system in reacting to on-board system failures. In a full-motion, high-fidelity simulator, Army helicopter pilots were asked to fly a mission during which, without warning or briefing, 14 different failures were triggered at random times. Results suggest that the amount of information pilots require from such diagnostic systems is strongly dependent on their training; for failures they are commonly trained to react to with a procedural response, they needed only an indication of which failure to follow, while for 'un-trained' failures, they benefited from more intelligent and informative systems. Pilots were also found to over-rely on the system in conditions where it provided false or misleading information. In the second study, a proof-of-concept system was designed to help pilots replan their flights in emergency situations for quick, safe trajectory generation. This system is described in this report, including: the use of embedded fast-time simulation to predict the trajectory defined by a series of discrete actions; the models of aircraft and pilot dynamics required by the system; and the pilot interface. Then, results of a flight simulator evaluation with airline pilots are detailed. In 6 of 72 simulator runs, pilots were not able to establish a stable flight path on localizer and glideslope, suggesting a need for cockpit aids. However, results also suggest that, to be operationally feasible, such an aid must be capable of suggesting safe trajectories to the pilot; an aid that only verified plans entered by the pilot was found to have significantly detrimental effects on performance and pilot workload. Results also highlight that the trajectories suggested by the aid must capture the context of the emergency; for example, in some emergencies pilots were willing to violate flight envelope limits to reduce time in flight, while in other emergencies the opposite was found.

  4. Solid motor diagnostic instrumentation. [design of self-contained instrumentation

    NASA Technical Reports Server (NTRS)

    Nakamura, Y.; Arens, W. E.; Wuest, W. S.

    1973-01-01

    A review of typical surveillance and monitoring practices followed during the flight phases of representative solid-propellant upper stages and apogee motors was conducted to evaluate the need for improved flight diagnostic instrumentation on future spacecraft. The capabilities of the flight instrumentation package were limited to the detection of whether or not the solid motor was the cause of failure and to the identification of probable primary failure modes. Conceptual designs of self-contained flight instrumentation packages capable of meeting these requirements were generated and their performance, typical cost, and unit characteristics determined. Comparisons of a continuous real-time design and a thresholded hybrid design were made on the basis of performance, mass, power, cost, and expected life. The results of this analysis substantiated the feasibility of a self-contained independent flight instrumentation module as well as the existence of performance margins by which to exploit growth option applications.

  5. Processes of coastal bluff erosion in weakly lithified sands, Pacifica, California, USA

    USGS Publications Warehouse

    Collins, B.D.; Sitar, N.

    2008-01-01

    Coastal bluff erosion and landsliding are currently the major geomorphic processes sculpting much of the marine-terrace-dominated coastline of northern California. In this study, we identify the spatial and temporal processes responsible for erosion and landsliding in an area of weakly lithified sand coastal bluffs located south of San Francisco, California. Using the results of a five-year observational study consisting of site visits, terrestrial lidar scanning, and development of empirical failure indices, we identify the lithologic and process controls that determine the failure mechanism and mode for coastal bluff retreat in this region and present concise descriptions of each process. Bluffs composed of weakly cemented sands (unconfined compressive strength, UCS, between 5 and 30 kPa) fail principally due to oversteepening by wave action, with maximum slope inclinations on the order of 65° at incipient failure. Periods of significant wave action were identified on the basis of an empirical wave run-up equation, predicting failure when wave run-up exceeds the seasonal average value and the bluff toe elevation. The empirical relationship was verified through recorded observations of failures. Bluffs composed of moderately cemented sands (UCS up to 400 kPa) fail due to precipitation-induced groundwater seepage, which leads to tensile strength reduction and fracture. An empirical rainfall threshold was also developed to predict failure on the basis of a 48-hour cumulative precipitation index but was found to be dependent on a time delay in groundwater seepage in some cases.
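
    A minimal sketch of the two empirical triggers described above, assuming the run-up value and the rainfall threshold are supplied externally; the 48-hour threshold value and function names are illustrative assumptions, not the study's calibrated quantities.

      # Weakly cemented bluffs: failure predicted when wave run-up exceeds both the
      # seasonal average run-up and the bluff toe elevation.
      def wave_runup_trigger(runup_m, seasonal_avg_runup_m, bluff_toe_elev_m):
          return runup_m > seasonal_avg_runup_m and runup_m > bluff_toe_elev_m

      # Moderately cemented bluffs: failure flagged when the 48-hour cumulative
      # precipitation index exceeds a threshold (value here is hypothetical).
      def rainfall_trigger(hourly_rain_mm, threshold_mm_48h=60.0):
          return sum(hourly_rain_mm[-48:]) >= threshold_mm_48h

      print(wave_runup_trigger(3.2, 2.5, 3.0))        # True: both conditions met
      print(rainfall_trigger([2.0] * 48))             # True: 96 mm in 48 h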

  6. Material failure modelling in metals at high strain rates

    NASA Astrophysics Data System (ADS)

    Panov, Vili

    2005-07-01

    Plate impact tests have been conducted on OFHC Cu using a single-stage gas gun. Using stress gauges supported with PMMA blocks on the back of the target plates, stress-time histories have been recorded. After testing, microstructural observations of the softly recovered spalled OFHC Cu specimens were carried out and the evolution of damage was examined. To account for the physical mechanisms of failure, the concept of thermally activated material separation during fracture was adopted as the basic mechanism for the development of this material failure model. With this basic assumption, the proposed model is compatible with the Mechanical Threshold Stress (MTS) model and was therefore incorporated into the MTS material model in DYNA3D. In order to analyse the proposed criterion, a series of FE simulations was performed for OFHC Cu. The numerical results clearly demonstrate the ability of the model to predict the spall process and the experimentally observed tensile damage and failure. It is possible to simulate high-strain-rate deformation processes and dynamic failure in tension over a wide range of temperatures. The proposed cumulative criterion, introduced in the DYNA3D code, is able to reproduce the "pull-back" stresses at the free surface caused by the creation of internal spalling, and enables one to analyse numerically the spalling over a wide range of impact velocities.

  7. Development of an adaptive failure detection and identification system for detecting aircraft control element failures

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1990-01-01

    A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.
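
    A minimal sketch of the adaptive-threshold idea described above: the detection threshold applied to each residual is scaled up with the estimated turbulence level, trading some sensitivity for a lower false-alarm rate. The baseline threshold, scaling gain, and residual model are assumptions for illustration, not the B-737 FDI system's actual parameters.

      def adaptive_threshold(baseline, turbulence_rms, gain=3.0):
          """Raise the FDI decision threshold as estimated turbulence increases."""
          return baseline * (1.0 + gain * turbulence_rms)

      def detect_failure(residual, baseline=0.05, turbulence_rms=0.0):
          return abs(residual) > adaptive_threshold(baseline, turbulence_rms)

      # In calm air a residual of 0.12 trips the detector; in heavy turbulence it does not.
      print(detect_failure(0.12, turbulence_rms=0.0))   # True
      print(detect_failure(0.12, turbulence_rms=0.8))   # False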

  8. Revision surgery of metal-on-metal hip arthroplasties for adverse reactions to metal debris.

    PubMed

    Matharu, Gulraj S; Eskelinen, Antti; Judge, Andrew; Pandit, Hemant G; Murray, David W

    2018-06-01

    Background and purpose - The initial outcomes following metal-on-metal hip arthroplasty (MoMHA) revision surgery performed for adverse reactions to metal debris (ARMD) were poor. Furthermore, robust thresholds for performing ARMD revision are lacking. This article is the second of two. The first article considered the various investigative modalities used during MoMHA patient surveillance (Matharu et al. 2018a). The present article aims to provide a clinical update regarding ARMD revision surgery in MoMHA patients (hip resurfacing and large-diameter MoM total hip arthroplasty), with specific focus on the threshold for performing ARMD revision, the surgical strategy, and the outcomes following revision. Results and interpretation - The outcomes following ARMD revision surgery appear to have improved with time for several reasons, among them the introduction of regular patient surveillance and lowering of the threshold for performing revision. Furthermore, registry data suggest that outcomes following ARMD revision are influenced by modifiable factors (type of revision procedure and bearing surface implanted), meaning surgeons could potentially reduce failure rates. However, additional large multi-center studies are needed to develop robust thresholds for performing ARMD revision surgery, which will guide surgeons' treatment of MoMHA patients. The long-term systemic effects of metal ion exposure in patients with these implants must also be investigated, which will help establish whether there are any systemic reasons to recommend revision of MoMHAs.

  9. A Single CD4 Test with 250 Cells/Mm3 Threshold Predicts Viral Suppression in HIV-Infected Adults Failing First-Line Therapy by Clinical Criteria

    PubMed Central

    Munderi, Paula; Kityo, Cissy; Reid, Andrew; Katabira, Elly; Goodall, Ruth L.; Grosskurth, Heiner; Mugyenyi, Peter; Hakim, James; Gibb, Diana M.

    2013-01-01

    Background In low-income countries, viral load (VL) monitoring of antiretroviral therapy (ART) is rarely available in the public sector for HIV-infected adults or children. Using clinical failure alone to identify first-line ART failure and trigger regimen switch may result in unnecessary use of costly second-line therapy. Our objective was to identify CD4 threshold values to confirm clinically-determined ART failure when VL is unavailable. Methods 3316 HIV-infected Ugandan/Zimbabwean adults were randomised to first-line ART with Clinically-Driven (CDM, CD4s measured but blinded) or routine Laboratory and Clinical Monitoring (LCM, 12-weekly CD4s) in the DART trial. CD4 at switch and ART failure criteria (new/recurrent WHO 4, single/multiple WHO 3 event; LCM: CD4<100 cells/mm3) were reviewed in 361 LCM, 314 CDM participants who switched over a median of 5 years of follow-up. Retrospective VLs were available in 368 (55%) participants. Results Overall, 265/361 (73%) LCM participants failed with CD4<100 cells/mm3; only 7 (2%) switched with CD4≥250 cells/mm3, four switches triggered by WHO events. Without CD4 monitoring, 207/314 (66%) CDM participants failed with WHO 4 events, and 77 (25%) / 30 (10%) with single/multiple WHO 3 events. Failure/switching with single WHO 3 events was more likely with CD4≥250 cells/mm3 (28/77; 36%) (p = 0.0002). CD4 monitoring reduced switching with viral suppression: 23/187 (12%) LCM versus 49/181 (27%) CDM had VL<400 copies/ml at failure/switch (p<0.0001). Amongst CDM participants with CD4<250 cells/mm3, only 11/133 (8%) had VL<400 copies/ml, compared with 38/48 (79%) with CD4≥250 cells/mm3 (p<0.0001). Conclusion Multiple, but not single, WHO 3 events predicted first-line ART failure. A CD4 threshold ‘tiebreaker’ of ≥250 cells/mm3 for clinically-monitored patients failing first-line could identify ∼80% with VL<400 copies/ml, who are unlikely to benefit from second-line. Targeting CD4s to single WHO stage 3 ‘clinical failures’ would particularly avoid premature, costly switch to second-line ART. PMID:23437399
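
    A minimal, illustrative sketch of the "tiebreaker" rule suggested above (switch on clinical failure only if CD4 is below 250 cells/mm3); this is a simplification for illustration, not a clinical algorithm endorsed by the study.

      def switch_to_second_line(who4_event, multiple_who3_events, cd4_cells_per_mm3):
          """Clinical failure (WHO 4 or multiple WHO 3 events) plus CD4 < 250 cells/mm3
          supports switching; CD4 >= 250 often reflects a suppressed viral load."""
          clinical_failure = who4_event or multiple_who3_events
          return clinical_failure and cd4_cells_per_mm3 < 250

      print(switch_to_second_line(False, True, 180))   # True: multiple WHO 3 events, low CD4
      print(switch_to_second_line(False, True, 320))   # False: CD4 tiebreaker not met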

  10. SESOTHO trial ("Switch Either near Suppression Or THOusand") - switch to second-line versus WHO-guided standard of care for unsuppressed patients on first-line ART with viremia below 1000 copies/mL: protocol of a multicenter, parallel-group, open-label, randomized clinical trial in Lesotho, Southern Africa.

    PubMed

    Amstutz, Alain; Nsakala, Bienvenu Lengo; Vanobberghen, Fiona; Muhairwe, Josephine; Glass, Tracy Renée; Achieng, Beatrice; Sepeka, Mamorena; Tlali, Katleho; Sao, Lebohang; Thin, Kyaw; Klimkait, Thomas; Battegay, Manuel; Labhardt, Niklaus Daniel

    2018-02-12

    The World Health Organization (WHO) recommends viral load (VL) measurement as the preferred monitoring strategy for HIV-infected individuals on antiretroviral therapy (ART) in resource-limited settings. The 2016 WHO guidelines continue to define virologic failure as two consecutive VL ≥1000 copies/mL (at least 3 months apart) despite good adherence, triggering switch to second-line therapy. However, the threshold of 1000 copies/mL for defining virologic failure is based on low-quality evidence. Observational studies have shown that individuals with low-level viremia (measurable but below 1000 copies/mL) are at increased risk for accumulation of resistance mutations and subsequent virologic failure. The SESOTHO trial assesses a lower threshold for switch to second-line ART in patients with sustained unsuppressed VL. In this multicenter, parallel-group, open-label, randomized controlled trial conducted in Lesotho, patients on first-line ART with two consecutive unsuppressed VL measurements ≥100 copies/mL, where the second VL is between 100 and 999 copies/mL, will either be switched to second-line ART immediately (intervention group) or not be switched (standard of care, according to WHO guidelines). The primary endpoint is viral resuppression (VL < 50 copies/mL) 9 months after randomization. We will enrol 80 patients, giving us 90% power to detect a difference of 35% in viral resuppression between the groups (assuming two-sided 5% alpha error). For our primary analysis, we will use a modified intention-to-treat set, with participants who are lost to care, die, or cross over considered failures to resuppress, and using logistic regression models adjusted for the prespecified stratification variables. The SESOTHO trial challenges the current WHO guidelines, assessing an alternative, lower VL threshold for patients with unsuppressed VL on first-line ART. This trial will provide data to inform future WHO guidelines on VL thresholds to recommend switch to second-line ART. ClinicalTrials.gov (NCT03088241), registered May 05, 2017.

  11. Damage Propagation Modeling for Aircraft Engine Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil

    2008-01-01

    This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermodynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant, and the failure criterion is reached when the health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
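
    A minimal sketch of the propagation scheme described above: each module's loss grows exponentially from a random initial deterioration point, the health index is the minimum of the remaining operational margins, and the run ends when that index reaches zero. All constants and the wear-to-margin mapping are illustrative assumptions, not the competition data generator.

      import random, math

      def simulate_engine(n_margins=4, max_cycles=500, seed=0):
          rng = random.Random(seed)
          t0 = rng.uniform(0, 100)                                      # random initial wear point
          rates = [rng.uniform(0.01, 0.03) for _ in range(n_margins)]   # bounded random fault rates
          history = []
          for cycle in range(max_cycles):
              wear = [math.exp(r * max(0.0, cycle - t0)) - 1.0 for r in rates]
              margins = [1.0 - w for w in wear]                         # each margin starts at 1.0
              health_index = min(margins)
              history.append(health_index)
              if health_index <= 0.0:                                   # failure criterion
                  return cycle, history
          return max_cycles, history

      cycles_to_failure, _ = simulate_engine()
      print(cycles_to_failure)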

  12. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
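
    A minimal sketch of the MTTF decomposition used above for a load-sharing system with exponential components: the system MTTF is written as the sum of the mean times between successive component failures, with surviving components' hazard rates increasing as they pick up the shed load. The equal-redistribution, power-law load-acceleration rule is an assumption for illustration, not the paper's specific test statistics.

      def load_sharing_mttf(n_components, base_rate, load_exponent=1.0):
          """MTTF of an n-component load-sharing system that fails when all
          components have failed; each inter-failure stage is exponential."""
          mttf = 0.0
          for failed in range(n_components):
              survivors = n_components - failed
              per_component_load = n_components / survivors           # equal redistribution
              rate = base_rate * per_component_load ** load_exponent  # accelerated hazard
              mttf += 1.0 / (survivors * rate)                        # mean time to next failure
          return mttf

      print(load_sharing_mttf(n_components=3, base_rate=0.01))        # 100.0 hours in this toy case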

  13. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  14. Monitoring and modeling conditions for regional shallow landslide initiation in the San Francisco Bay area, California

    NASA Astrophysics Data System (ADS)

    Collins, B. D.; Stock, J. D.; Godt, J. W.

    2012-12-01

    Intense winter storms in the San Francisco Bay area (SFBA) of California often trigger widespread landsliding, including debris flows that originate as shallow (<3 m) landslides. The strongest storms result in the loss of lives and millions of dollars in damage. Whereas precipitation-based rainfall intensity-duration landslide initiation thresholds are available for the SFBA, antecedent soil moisture conditions also play a major role in determining the likelihood for landslide generation from a given storm. Previous research has demonstrated that antecedent triggering conditions can be obtained using pre-storm precipitation thresholds (e.g., 250-400 mm of seasonal pre-storm rainfall). However, these types of thresholds do not account for the often cyclic pattern of wetting and drying that can occur early in the winter storm season (i.e., October through December), and which may skew the applicability of precipitation-only thresholds. To account for these cyclic and constantly evolving soil moisture conditions, we have pursued methods to measure soil moisture directly and integrate these measurements into predictive analyses. During the past three years, the USGS installed a series of four subsurface hydrology monitoring stations in shallow landslide-prone locations of the SFBA to establish a soil-moisture-based antecedent threshold. In addition to soil moisture sensors, the monitoring stations are each equipped with piezometers to record positive pore water pressure that is likely required for shallow landslide initiation and a rain gauge to compare storm intensities with existing precipitation-based thresholds. Each monitoring station is located on a natural, grassy hillslope typically composed of silty sands, underlain by sandstone, sloping at approximately 30°, and with a depth to bedrock of approximately 1 meter - conditions typical of debris flow generation in the SFBA. Our observations reveal that various locations respond differently to seasonal precipitation, with some areas (e.g., Marin County) remaining at higher levels of saturation for longer periods of time during the winter compared to other areas (e.g., the East Bay Hills). In general, this coincides directly with relative precipitation totals in each region (i.e., Marin County typically receives more rainfall over a longer period of time than the East Bay). In those areas that are saturated for longer periods, the shallow landslide hazard is prolonged because these conditions are first needed for storm-related precipitation to subsequently generate positive pore pressure on the failure plane. Both piezometric field measurements and limit equilibrium slope stability analyses indicate that positive pore pressure is required for most shallow landslide failures to occur in the study regions. Based on measurements from two of the sites, our analyses further indicate that at least 2 kPa of pressure is required to trigger shallow landsliding. We measured this pressure at one of our sites in 2011, where more than 30 landslides, including several that mobilized into debris flows, occurred. Additional monitoring at these sites will be used to further constrain and refine antecedent moisture-based thresholds for shallow landslide initiation.

  15. Evaluation of critical nuclear power plant electrical cable response to severe thermal fire conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Gabriel James

    The failure of electrical cables exposed to severe thermal fire conditions is a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection, which has resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of the data. Circuit characteristics affecting fire-induced electrical cable failure modes have been evaluated. In addition, thermal failure temperatures corresponding to cable functional failures have been evaluated to develop realistic single-point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data were used to evaluate the prediction capabilities of a one-dimensional conductive heat transfer model used to predict cable failure.

  16. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy.

    PubMed

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Di Muzio, Nadia; Longobardi, Barbara; Mangili, Paola; Veronese, Ivan

    2013-09-06

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety.
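
    A minimal sketch of the RPN scoring used in FMEA as applied above: each failure mode is scored for severity, occurrence, and detectability, and modes whose product exceeds the chosen threshold (125 in this study) are flagged for additional safety measures. The example scores are illustrative, not the values assigned in the paper.

      def risk_priority_number(severity, occurrence, detectability):
          return severity * occurrence * detectability

      # Hypothetical scores (1-10 each) for three of the failure modes named above.
      failure_modes = {
          "wrong overlap contouring": (8, 4, 5),
          "wrong CT calibration curve": (9, 2, 7),
          "wrong number of fractions": (7, 2, 4),
      }
      THRESHOLD = 125
      for name, scores in failure_modes.items():
          rpn = risk_priority_number(*scores)
          flag = "REVIEW" if rpn > THRESHOLD else "ok"
          print(f"{name}: RPN={rpn} ({flag})")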

  17. Short pulse duration shock initiation experiments plus ignition and growth modeling on Composition B

    NASA Astrophysics Data System (ADS)

    May, Chadd M.; Tarver, Craig M.

    2014-05-01

    Composition B (63% RDX, 36% TNT, 1% wax) is still a widely used energetic material whose shock initiation characteristics are necessary to understand. It is now possible to shock-initiate Composition B and other secondary explosives at diameters well below their characteristic failure diameters for unconfined self-sustaining detonation. This is done using very high-velocity, very thin, small-diameter flyer plates accelerated by electric or laser power sources. Recently, experimental threshold flyer velocity curves (detonation versus failure to detonate) for Composition B were measured using several Kapton™ flyer thicknesses and diameters. Flyer plates with diameters of 2 mm successfully detonated Composition B, which has a nominal failure diameter of 4.3 mm. The shock pressures required for these initiations are greater than the Chapman-Jouguet (C-J) pressure in self-sustaining Composition B detonation waves. The initiation process is two-dimensional, because both rear and side rarefactions can affect the shocked Composition B reaction rates. The Ignition and Growth reactive flow model for Composition B is extended to yield accurate simulations of this new threshold velocity data for various flyer thicknesses.

  18. Effect of Microscopic Damage Events on Static and Ballistic Impact Strength of Triaxial Braid Composites

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.

    2008-01-01

    In previous work, the ballistic impact resistance of triaxial braided carbon/epoxy composites made with large flat tows (12k and 24k) was examined by impacting 2 × 2 × 0.125 in. composite panels with gelatin projectiles. Several high-strength, intermediate-modulus carbon fibers were used in combination with both untoughened and toughened matrix materials. A wide range of penetration thresholds was measured for the various fiber/matrix combinations. However, there was no clear relationship between the penetration threshold and the properties of the constituents. During some of these experiments high-speed cameras were used to view the failure process, and full-field strain measurements were made to determine the strain at the onset of failure. However, these experiments provided only limited insight into the microscopic failure processes responsible for the wide range of impact resistance observed. In order to investigate potential microscopic failure processes in more detail, quasi-static tests were performed in tension, compression, and shear. Full-field strain measurement techniques were used to identify local regions of high strain resulting from microscopic failures. Microscopic failure events near the specimen surface, such as splitting of fiber bundles in surface plies, were easily identified. Subsurface damage, such as fiber fracture or fiber bundle splitting, could be identified by its effect on in-plane surface strains. Subsurface delamination could be detected as an out-of-plane deflection at the surface. Using this data, failure criteria could be established at the fiber tow level for use in analysis. An analytical formulation was developed to allow the microscopic failure criteria to be used in place of macroscopic properties as input to simulations performed using the commercial explicit finite element code, LS-DYNA. The test methods developed to investigate microscopic failure will be presented along with methods for determining local failure criteria that can be used in analysis. Results of simulations performed using LS-DYNA will be presented to illustrate the capabilities and limitations for simulating failure during quasi-static deformation and during ballistic impact of large unit cell size triaxial braid composites.

  19. The Borg scale as an important tool of self-monitoring and self-regulation of exercise prescription in heart failure patients during hydrotherapy. A randomized blinded controlled trial.

    PubMed

    Carvalho, Vitor Oliveira; Bocchi, Edimar Alcides; Guimarães, Guilherme Veiga

    2009-10-01

    The Borg Scale may be a useful tool for heart failure patients to self-monitor and self-regulate exercise on land or in water (hydrotherapy) by maintaining the heart rate (HR) between the anaerobic threshold and respiratory compensation point. Patients performed a cardiopulmonary exercise test to determine their anaerobic threshold/respiratory compensation points. The percentage of the mean HR during the exercise session in relation to the anaerobic threshold HR (%EHR-AT), in relation to the respiratory compensation point (%EHR-RCP), in relation to the peak HR by the exercise test (%EHR-Peak) and in relation to the maximum predicted HR (%EHR-Predicted) was calculated. Next, patients were randomized into the land or water exercise group. One blinded investigator instructed the patients in each group to exercise at a level between "relatively easy and slightly tiring". The mean HR throughout the 30-min exercise session was recorded. The %EHR-AT and %EHR-predicted did not differ between the land and water exercise groups, but they differed in the %EHR-RCP (95 ± 7 to 86 ± 7, P<0.001) and in the %EHR-Peak (85 ± 8 to 78 ± 9, P=0.007). Exercise guided by the Borg scale maintains the patient's HR between the anaerobic threshold and respiratory compensation point (i.e., in the exercise training zone).

  20. Destructive Single-Event Failures in Diodes

    NASA Technical Reports Server (NTRS)

    Casey, Megan C.; Gigliuto, Robert A.; Lauenstein, Jean-Marie; Wilcox, Edward P.; Kim, Hak; Chen, Dakai; Phan, Anthony M.; LaBel, Kenneth A.

    2013-01-01

    In this summary, we have shown that diodes are susceptible to destructive single-event effects, and that these failures occur along the guard ring. By determining the last passing voltages, a safe operating area can be derived. By derating from those values, rather than from the rated voltage as is currently done with power MOSFETs, we can work to ensure the safety of future missions. However, there are still open questions about these failures. Are they limited to a single manufacturer, a small number, or all of them? Is there a threshold rated voltage that must be exceeded to see these failures? With future work, we hope to answer these questions. In the full paper, laser results will also be presented to verify that failures only occur along the guard ring.

  1. Biomechanical Comparison of Standard and Linked Single-Row Rotator Cuff Repairs in a Human Cadaver Model.

    PubMed

    Meisel, Adam F; Henninger, Heath B; Barber, F Alan; Getelman, Mark H

    2017-05-01

    The purpose of this study was to evaluate the time zero cyclic and failure loading properties of a linked single-row rotator cuff repair compared with a standard simple suture single-row repair using triple-loaded suture anchors. Eighteen human cadaveric shoulders from 9 matched pairs were dissected, and full-thickness supraspinatus tears were created. The tendon cross-sectional area was recorded. In each pair, one side was repaired with a linked single-row construct and the other with a simple suture single-row construct, both using 2 triple-loaded suture anchors. After preloading, specimens were cycled to 1 MPa of effective stress at 1 Hz for 500 cycles, and gap formation was recorded with a digital video system. Samples were then loaded to failure, and modes of failure were recorded. There was no statistical difference in peak gap formation between the control and linked constructs (3.6 ± 0.9 mm and 3.6 ± 1.2 mm, respectively; P = .697). Both constructs averaged below a 5-mm cyclic failure threshold. There was no statistical difference in ultimate load to failure between the control and linked repair (511.1 ± 139.0 N and 561.2 ± 131.8 N, respectively; P = .164), and both groups reached failure at loads similar to previous studies. Constructs failed predominantly via tissue tearing parallel to the medial suture line. The linked repair performed similarly to the simple single-row repair. Both constructs demonstrated high ultimate load to failure and good resistance to gap formation with cyclic loading, validating the time zero strength of both constructs in a human cadaveric model. The linked repair provided equivalent resistance to gap formation and failure loads compared with simple suture single-row repairs with triple-loaded suture anchors. This suggests that the linked repair is a simplified rip-stop configuration using the existing suture that may perform similarly to current rotator cuff repair techniques. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  2. Stress corrosion crack initiation of Zircaloy-4 cladding tubes in an iodine vapor environment during creep, relaxation, and constant strain rate tests

    NASA Astrophysics Data System (ADS)

    Jezequel, T.; Auzoux, Q.; Le Boulch, D.; Bono, M.; Andrieu, E.; Blanc, C.; Chabretou, V.; Mozzani, N.; Rautenberg, M.

    2018-02-01

    During accidental power transient conditions with Pellet Cladding Interaction (PCI), the synergistic effect of the stress and strain imposed on the cladding by thermal expansion of the fuel, and corrosion by iodine released as a fission product, may lead to cladding failure by Stress Corrosion Cracking (SCC). In this study, internal pressure tests were conducted on unirradiated cold-worked stress-relieved Zircaloy-4 cladding tubes in an iodine vapor environment. The goal was to investigate the influence of loading type (constant pressure tests, constant circumferential strain rate tests, or constant circumferential strain tests) and test temperature (320, 350, or 380 °C) on iodine-induced stress corrosion cracking (I-SCC). The experimental results obtained with different loading types were consistent with each other. The apparent threshold hoop stress for I-SCC was found to be independent of the test temperature. SEM micrographs of the tested samples showed many pits distributed over the inner surface, which tended to coalesce into large pits in which a microcrack could initiate. A model for the time-to-failure of a cladding tube was developed using finite element simulations of the viscoplastic mechanical behavior of the material and a modified Kachanov's damage growth model. The times-to-failure predicted by this model are consistent with the experimental data.

  3. Joint optimization of maintenance, buffers and machines in manufacturing lines

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
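
    A minimal sketch of the threshold-accepting idea used above: like simulated annealing, a candidate solution is accepted whenever its cost increase stays below a shrinking threshold, here with a nonlinear (quadratic) decay schedule. The cost function, neighborhood move, and schedule are placeholders, not the article's manufacturing-line model.

      import random

      def threshold_accepting(initial, neighbor, cost, steps=2000, t0=10.0, seed=0):
          rng = random.Random(seed)
          current, best = initial, initial
          for k in range(steps):
              threshold = t0 * (1.0 - k / steps) ** 2          # nonlinear decay of the acceptance threshold
              candidate = neighbor(current, rng)
              if cost(candidate) - cost(current) < threshold:  # accept any move that is not much worse
                  current = candidate
                  if cost(current) < cost(best):
                      best = current
          return best

      # Toy usage: pick an integer buffer size minimizing a made-up cost with minimum at 7.
      cost = lambda x: (x - 7) ** 2 + 3
      neighbor = lambda x, rng: max(1, x + rng.choice([-1, 1]))
      print(threshold_accepting(20, neighbor, cost))           # converges near 7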

  4. Identifying failure in a tree network of a parallel computer

    DOEpatents

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
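
    A minimal sketch of the per-processing-set check described above: measure the I/O node and a set of test compute nodes, combine the measurements into a test value, and compare it against the tree performance threshold to decide whether to sample another set or to single out potential problem nodes. The combining rule and the suspect cutoff are assumed placeholders, not the patented formulas.

      def check_processing_set(io_perf, test_node_perfs, expected_io_perf, tree_threshold):
          """Return (keep_scanning, suspects) for one processing set."""
          avg_node_perf = sum(test_node_perfs) / len(test_node_perfs)
          # Assumed combining rule: weight measured node performance by how far the
          # I/O node is from its predetermined expected performance.
          current_test_value = avg_node_perf * (io_perf / expected_io_perf)
          if current_test_value < tree_threshold:
              return True, []          # below threshold: select another set of test nodes
          # Not below threshold: flag nodes deviating most from the set average as
          # potential problem nodes, to be tested individually (links included).
          suspects = [i for i, p in enumerate(test_node_perfs)
                      if abs(p - avg_node_perf) > 0.2 * avg_node_perf]   # assumed cutoff
          return False, suspects

      print(check_processing_set(0.9, [1.0, 1.05, 0.6, 0.98], 1.0, 0.7))   # (False, [2])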

  5. Biomarkers in Acute Heart Failure – Cardiac And Kidney

    PubMed Central

    2015-01-01

    Natriuretic peptides (NP) are well-validated aids in the diagnosis of acute decompensated heart failure (ADHF). In acute presentations, both brain natriuretic peptide (BNP) and N-terminal of the prohormone brain natriuretic peptide (NT-proBNP) offer high sensitivity (>90 %) and negative predictive values (>95 %) for ruling out ADHF at thresholds of 100 and 300 pg/ml, respectively. Plasma NP rise with age. For added rule-in performance age-adjusted thresholds (450 pg/ml for under 50 years, 900 pg/ml for 50–75 years and 1,800 pg/ml for those >75 years) can be applied to NT-proBNP results. Test performance (specificity and accuracy but not sensitivity) is clearly reduced by renal dysfunction and atrial fibrillation. Obesity offsets the threshold downwards (to ~50 pg/ml for BNP), but overall discrimination is preserved. Reliable markers for impending acute kidney injury in ADHF constitute an unmet need, with candidates, such as kidney injury molecule-1 and neutrophil gelatinase-associated lipocalin, failing to perform sufficiently well, and new possibilities, including the cell cycle markers insulin growth factor binding protein 7 and tissue inhibitor of metalloproteinases type 2, remain the subject of research. PMID:28785442
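
    A minimal sketch of the NT-proBNP decision thresholds quoted above (300 pg/ml rule-out; age-adjusted rule-in cut-offs of 450/900/1,800 pg/ml). Illustrative only; it ignores the renal-function, atrial-fibrillation, and obesity caveats discussed in the abstract.

      def nt_probnp_interpretation(value_pg_ml, age_years):
          if value_pg_ml < 300:
              return "ADHF unlikely (high negative predictive value)"
          rule_in = 450 if age_years < 50 else 900 if age_years <= 75 else 1800
          if value_pg_ml >= rule_in:
              return "ADHF likely (age-adjusted rule-in threshold exceeded)"
          return "indeterminate: further evaluation needed"

      print(nt_probnp_interpretation(1200, 82))   # indeterminate (rule-in is 1,800 pg/ml above age 75)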

  6. Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.

    PubMed

    Whitmore, G A; Schenkelberg, F

    1997-01-01

    Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
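
    A minimal sketch of the model class described above: degradation follows a Wiener process on a transformed time scale, and the lifetime is the first time the path crosses the failure threshold (in transformed time this first-passage time is inverse Gaussian). The power-law time transformation and all parameter values are illustrative assumptions, not the heating-cable case study's estimates, and the Arrhenius stress extrapolation is omitted.

      import math, random

      def simulate_lifetime(mu, sigma, threshold, gamma=1.0, dt=1.0, t_max=10_000, seed=1):
          rng = random.Random(seed)
          w = 0.0
          for step in range(1, int(t_max / dt) + 1):
              tau_prev, tau_now = ((step - 1) * dt) ** gamma, (step * dt) ** gamma
              d_tau = tau_now - tau_prev                      # transformed-time increment
              w += mu * d_tau + sigma * math.sqrt(d_tau) * rng.gauss(0.0, 1.0)
              if w >= threshold:
                  return step * dt                            # first passage = lifetime
          return math.inf

      print(simulate_lifetime(mu=0.05, sigma=0.2, threshold=10.0, gamma=1.2))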

  7. Discriminating between Graduates and Failure in the USAF Medical Laboratory Specialist School: An Explorative Approach.

    DTIC Science & Technology

    1981-12-01

    occurred on the Introversion Scale of the NMPI. 20 A review of the use of psychological tests on MT’s was accomplished by Driver and Feeley [1974...programs, Gondek [1981] has recommended that the best pro- cedure for variable inclusion when using a stepwise procedure is to use the threshold default...values supplied by the package, since no simple rules exist for determining entry or removal thresholds for partial F’s, tolerance statistics, or any of

  8. Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy

    PubMed Central

    Broggi, Sara; Cantone, Marie Claire; Chiara, Anna; Muzio, Nadia Di; Longobardi, Barbara; Mangili, Paola

    2013-01-01

    The aim of this paper was the application of the failure mode and effects analysis (FMEA) approach to assess the risks for patients undergoing radiotherapy treatments performed by means of a helical tomotherapy unit. FMEA was applied to the preplanning imaging, volume determination, and treatment planning stages of the tomotherapy process and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system; and 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. A total of 74 failure modes were identified: 38 in the stage of preplanning imaging and volume determination, and 36 in the stage of planning. The threshold of 125 for RPN was exceeded in four cases: one case only in the phase of preplanning imaging and volume determination, and three cases in the stage of planning. The most critical failures appeared related to (i) the wrong or missing definition and contouring of the overlapping regions, (ii) the wrong assignment of the overlap priority to each anatomical structure, (iii) the wrong choice of the computed tomography calibration curve for dose calculation, and (iv) the wrong (or not performed) choice of the number of fractions in the planning station. On the basis of these findings, in addition to the safety strategies already adopted in the clinical practice, novel solutions have been proposed for mitigating the risk of these failures and to increase patient safety. PACS number: 87.55.Qr PMID:24036868

  9. Effects of partial interlaminar bonding on impact resistance and loaded-hole behavior of graphite/epoxy quasi-isotropic laminates

    NASA Technical Reports Server (NTRS)

    Illg, W.

    1986-01-01

    A partial-bonding interlaminar toughening concept was evaluated for resistance to impact and for behavior of a loaded hole. Perforated Mylar sheets were interleaved between all 24 plies of a graphite/epoxy quasi-isotropic lay-up. Specimens were impacted by aluminum spheres while under tensile or compressive loads. Impact-failure thresholds and residual strengths were obtained. Loaded-hole specimens were tested in three configurations that were critical in bearing, shear, or tension. Partial bonding reduced the tensile and compressive strengths of undamaged specimens by about one-third. For impact, partial bonding did not change the threshold for impact failure under tensile preload. However, under compressive preload, partial bonding caused serious degradation of impact resistance. Partial bonding reduced the maximum load-carrying capacity of all three types of loaded-hole specimens. Overall, partial bonding degraded both impact resistance and bearing strength of holes.

  10. Survival time of the susceptible-infected-susceptible infection process on a graph.

    PubMed

    van de Bovenkamp, Ruud; Van Mieghem, Piet

    2015-09-01

    The survival time T is the longest time that a virus, a meme, or a failure can propagate in a network. Using the hitting time of the absorbing state in a uniformized embedded Markov chain of the continuous-time susceptible-infected-susceptible (SIS) Markov process, we derive an exact expression for the average survival time E[T] of a virus in the complete graph K_{N} and the star graph K_{1,N-1}. By using the survival time, instead of the average fraction of infected nodes, we propose a new method to approximate the SIS epidemic threshold τ_{c} that, at least for K_{N} and K_{1,N-1}, correctly scales with the number of nodes N and that is superior to the epidemic threshold τ_{c}^{(1)}=1/λ_{1} of the N-intertwined mean-field approximation, where λ_{1} is the spectral radius of the adjacency matrix of the graph G. Although this new approximation of the epidemic threshold offers a more intuitive understanding of the SIS process, it remains difficult to compare outbreaks in different graph types. For example, the survival in an arbitrary graph seems upper bounded by the complete graph and lower bounded by the star graph as a function of the normalized effective infection rate τ/τ_{c}^{(1)}. However, when the average fraction of infected nodes is used as a basis for comparison, the virus will survive in the star graph longer than in any other graph, making the star graph the worst-case graph instead of the complete graph. Finally, in non-Markovian SIS, the distribution of the spreading attempts over the infectious period of a node influences the survival time, even if the expected number of spreading attempts during an infectious period (the non-Markovian equivalent of the effective infection rate) is kept constant. Both early and late infection attempts lead to shorter survival times. Interestingly, just as in Markovian SIS, the survival times appear to be exponentially distributed, regardless of the infection and curing time distributions.
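
    A minimal sketch of the mean-field epidemic threshold referenced above, τ_c^(1) = 1/λ_1, where λ_1 is the spectral radius of the graph's adjacency matrix, evaluated for the complete graph K_N and the star K_{1,N-1}; it is only the baseline against which the survival-time-based estimate is compared, not the new method itself.

      import numpy as np

      def nimfa_threshold(adjacency):
          lambda_1 = max(abs(np.linalg.eigvals(adjacency)))
          return 1.0 / lambda_1

      N = 6
      complete = np.ones((N, N)) - np.eye(N)       # K_N: spectral radius N - 1
      star = np.zeros((N, N))
      star[0, 1:] = 1
      star[1:, 0] = 1                              # K_{1,N-1}: spectral radius sqrt(N - 1)
      print(nimfa_threshold(complete))             # 1/(N-1) = 0.2
      print(nimfa_threshold(star))                 # 1/sqrt(N-1) ≈ 0.447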

  11. Explosive Model Tarantula V1/JWL++ Calibration of LX-17: #2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Vitello, P

    2009-05-01

    Tarantula V1 is a kinetic package for reactive flow codes that seeks to describe initiation, failure, dead zones and detonation simultaneously. The most important parameter is P1, the pressure between the initiation and failure regions. Both dead zone formation and failure can be largely controlled with this knob. However, V1 produces failure at low settings and dead zones at higher settings, so that it cannot fulfill both purposes in its current format. To this end, V2 is under test. The derivation of the initiation threshold P0 is discussed. The derivation of the initiation pressure-tau curve as an output of Tarantula shows that the initiation package is sound. A desensitization package is also considered.

  12. The research of single intersection sensor signal control based on section data

    NASA Astrophysics Data System (ADS)

    Liu, Yunxiang; Huang, Yue; Wang, Hao

    2016-12-01

    We propose a sensor-actuated signal control design for a single intersection based on electronic license plates: readers installed at the intersection interact with the active electronic tags carried by vehicles to obtain vehicle information for the approaching road section. With this vehicle detection, the controller uses vehicle density on the section as the evaluation criterion: while the density remains above a threshold the green phase is extended, and otherwise the controller switches phases. Comparative analysis by simulation shows that the improved actuated control strategy achieves effective real-time traffic signal control at highly saturated intersections, overcoming the failure of traditional sensor-actuated control at high saturation and improving the utilization of the urban intersection.

  13. Radiation evaluation study of LSI RAM technologies

    NASA Astrophysics Data System (ADS)

    Dinger, G. L.; Knoll, M. G.

    1980-01-01

    Five commercial LSI static random access memory technologies having a 1 kilobit capacity were radiation characterized. Arrays from the transistor-transistor-logic (TTL), Schottky TTL, n-channel metal oxide semiconductor, complementary metal oxide semiconductor (CMOS), and CMOS/silicon on sapphire families were evaluated. Radiation failure thresholds for gamma dose-rate logic upset, total gamma dose survivability, and neutron fluence survivability were determined. A brief analysis of the radiation failure mechanism for each of the logic families tested is included.

  14. Time-Series Modeling and Simulation for Comparative Cost-Effective Analysis in Cancer Chemotherapy: An Application to Platinum-Based Regimens for Advanced Non-small Cell Lung Cancer.

    PubMed

    Chisaki, Yugo; Nakamura, Nobuhiko; Yano, Yoshitaka

    2017-01-01

    The purpose of this study was to propose a time-series modeling and simulation (M&S) strategy for probabilistic cost-effectiveness analysis in cancer chemotherapy using a Monte-Carlo method based on data available from the literature. The simulation included the cost of chemotherapy, the cost of pharmaceutical care for adverse events (AEs), and other medical costs. As an application example, we describe the analysis for the comparison of four regimens, cisplatin plus irinotecan, carboplatin plus paclitaxel, cisplatin plus gemcitabine (GP), and cisplatin plus vinorelbine, for advanced non-small cell lung cancer. The model included drug efficacy expressed by overall survival or time to treatment failure, the frequency and severity of AEs, the utility values of AEs used to determine QOL, and the drug and other medical costs in Japan. The simulation was performed and quality-adjusted life years (QALY) and incremental cost-effectiveness ratios (ICER) were calculated. An index, the percentage of superiority (%SUP), defined as the proportion of incremental-cost versus QALY-gained plots falling within the region of positive QALY gain and below a given ICER threshold, was calculated as a function of the ICER threshold. An M&S process was developed; in the simulation example, the GP regimen was the most cost-effective, and for an ICER threshold of $70,000/year the %SUP for GP exceeded 50%. We developed an M&S process for probabilistic cost-effectiveness analysis; this method would be useful for decision-making when choosing a cancer chemotherapy regimen in pharmacoeconomic terms.
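
    A minimal sketch of the %SUP calculation described above: Monte-Carlo draws of incremental cost and incremental QALY for a regimen versus a comparator, with %SUP computed as the share of draws that gain QALYs at a cost per QALY below the chosen ICER threshold. The sampling distributions and parameter values are illustrative assumptions, not the study's fitted time-series models.

      import random

      def percent_superiority(n=10_000, icer_threshold=70_000, seed=0):
          rng = random.Random(seed)
          count = 0
          for _ in range(n):
              d_qaly = rng.gauss(0.10, 0.08)        # incremental QALY gained (assumed)
              d_cost = rng.gauss(4000, 2500)        # incremental cost in dollars (assumed)
              if d_qaly > 0 and d_cost <= icer_threshold * d_qaly:
                  count += 1
          return 100.0 * count / n

      print(percent_superiority())   # %SUP at an ICER threshold of $70,000 per QALY gained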

  15. Porting Initiation and Failure into Linked CHEETAH

    NASA Astrophysics Data System (ADS)

    Souers, Clark; Vitello, Peter

    2007-06-01

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size but did not capture other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At a high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate-sticks, corner-turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  16. Age-Related Reduction of Recovery Sleep and Arousal Threshold in Drosophila

    PubMed Central

    Vienne, Julie; Spann, Ryanne; Guo, Fang; Rosbash, Michael

    2016-01-01

    Study Objectives: Physiological studies show that aging affects both sleep quality and quantity in humans, and sleep complaints increase with age. Along with knowledge about the negative effects of poor sleep on health, understanding the enigmatic relationship between sleep and aging is important. Because human sleep is similar to Drosophila (fruit fly) sleep in many ways, we addressed the effects of aging on sleep in this model organism. Methods: Baseline sleep was recorded in five different Drosophila genotypes raised at either 21°C or 25°C. The amount of sleep recovered was then investigated after a nighttime of sleep deprivation (12 h) and after chronic sleep deprivation (3 h every night for multiple nights). Finally, the effects of aging on arousal, namely, sensitivity to neuronal and mechanical stimuli, were studied. Results: We show that fly sleep is affected by age in a manner similar to that of humans and other mammals. Not only do older flies of several genotypes have more fragmented sleep and reduced total sleep time compared to young flies, but older flies also fail to recover as much sleep after sleep deprivation. This suggests either lower sleep homeostasis and/or a failure to properly recover sleep. Older flies also show a decreased arousal threshold, i.e., an increased response to neuronal and mechanical wake-promoting stimuli. The reduced threshold may either reflect or cause the reduced recovery sleep of older flies compared to young flies after sleep deprivation. Conclusions: Further studies are certainly needed, but we suggest that the lower homeostatic sleep drive of older flies causes their decreased arousal threshold. Citation: Vienne J, Spann R, Guo F, Rosbash M. Age-related reduction of recovery sleep and arousal threshold in Drosophila. SLEEP 2016;39(8):1613–1624. PMID:27306274

  17. Influence of dissolved hydrogen on the fatigue crack growth behaviour of AISI 4140 steel

    NASA Astrophysics Data System (ADS)

    Ramasagara Nagarajan, Varun

    Many metallic structural components come into contact with hydrogen during manufacturing processes or forming operations, such as hot stamping of auto body frames, and while in service. This interaction of metallic parts with hydrogen can occur for various reasons, such as water molecule dissociation during plating operations, interaction with atmospheric hydrogen due to the moisture present in air during stamping operations, or prevailing conditions in service (e.g., acidic or marine environments). Hydrogen, being much smaller than other metallic elements such as iron in steels, can enter the material and become dissolved in the matrix. It can lodge itself in interstitial sites between the metal atoms, at vacancies or dislocations in the metallic matrix, or at grain boundaries or inclusions (impurities) in the alloy. This dissolved hydrogen can affect the functional life of these structural components, leading to catastrophic failures in mission-critical applications that result in the loss of lives and of the structural component. Therefore, it is very important to understand the influence of the dissolved hydrogen on the failure of these structural materials due to cyclic loading (fatigue). For the next generation of hydrogen-based fuel cell vehicles and energy systems, it is crucial to develop structural materials for hydrogen storage and containment that are highly resistant to hydrogen embrittlement. These materials should also be able to provide good long-term life under cyclic loading, without undergoing degradation, even when exposed to hydrogen-rich environments for extended periods of time. The primary focus of this investigation was to examine the influence of dissolved hydrogen on the fatigue crack growth behaviour of a commercially available high-strength, medium-carbon, low-alloy (AISI 4140) steel. The secondary objective was to examine the influence of microstructure on the fatigue crack growth behaviour of this material and to determine the hydrogen-induced failure mechanism in this material during cyclic loading. A further objective was to determine the role of inclusions and their influence on the fatigue crack growth rate of this material. Compact tension and tensile specimens were prepared as per ASTM E-647, E-399 and E-8 standards. The specimens were tested in three different heat-treated conditions, i.e., annealed (as-received) and two austempered conditions. These specimens were precharged with hydrogen (ex situ) using the cathodic charging method at a constant current density for three different time periods ranging from 150 to 250 hours before the fatigue crack growth tests. Mode I fatigue tests were then performed in ambient atmosphere at constant amplitude using a load ratio R of 0.1. The near-threshold fatigue crack growth rate, the fatigue threshold, and the fatigue crack growth rate in the linear region were determined. The fatigue crack growth behaviour of specimens without any dissolved hydrogen was then compared with that of specimens with different concentrations of dissolved hydrogen. The test results show that the dissolved hydrogen concentration increases with charging time in all three heat-treated conditions, and the hydrogen uptake shows a strong dependence on the microstructure of the alloy. It was also observed that the microstructure has a significant influence on the fatigue crack growth and SCC behaviour of the alloy with dissolved hydrogen.
As the dissolved hydrogen concentration increases, the fatigue threshold was found to decrease and the near-threshold crack growth rate to increase in all three heat-treated conditions, showing the deleterious effect of hydrogen, although to a different extent in each condition. The test results also indicate that the fatigue crack growth rates in the linear region increase as the dissolved hydrogen content increases in all three heat-treated conditions. It is also observed that increasing the austempering temperature decreases the resistance to hydrogen embrittlement. An interesting phenomenon was also observed in the annealed specimen charged with hydrogen for 250 h, which had an unusually high fatigue threshold (ΔKth).

  18. Ductile Tearing of Thin Aluminum Plates Under Blast Loading. Predictions with Fully Coupled Models and Biaxial Material Response Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Gullerud, Arne S.; Haulenbeek, Kimberly K.

    2015-06-01

    The work presented in this report concerns the response and failure of thin 2024-T3 aluminum alloy circular plates subjected to a blast load produced by the detonation of a nearby spherical charge. The plates were fully clamped around the circumference, and the explosive charge was located centrally with respect to the plate. The principal objective was to conduct a numerical model validation study by comparing predictions to experimental measurements of plate deformation and failure for charges with masses in the vicinity of the threshold between no tearing and tearing of the plates. Stereo digital image correlation data were acquired for all tests to measure the deflection and strains in the plates. The size of the virtual strain gage in the measurements, however, was relatively large, so the strain measurements have to be interpreted as lower bounds on the actual strains in the plate and on the severity of the strain gradients. A fully coupled interaction model between the blast and the deflection of the structure was considered. The validation exercise indicated that the model predicted the deflection of the plates, as well as the distribution of strain on the plate, reasonably accurately. The estimation of the threshold charge based on a critical value of equivalent plastic strain measured in a bulge test, however, was not accurate, despite efforts to determine the failure strain of the aluminum sheet under biaxial stress conditions. Further work is needed to predict plate tearing with some degree of confidence. Given the current technology, at least one test under the actual blast conditions in which the plate tears is needed to calibrate the value of equivalent plastic strain at failure in the numerical model. Once that has been determined, the explosive mass at the tearing threshold could be addressed with more confidence.
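
    As a rough illustration of the failure criterion discussed above (tearing predicted when the equivalent plastic strain exceeds a critical value from a biaxial test), the sketch below compares a von Mises equivalent strain against such a threshold. The tensor components and the critical strain are invented for illustration and are not taken from the report.

      import numpy as np

      def equivalent_plastic_strain(eps):
          """Von Mises equivalent strain from a 3x3 plastic strain tensor."""
          dev = eps - np.trace(eps) / 3.0 * np.eye(3)
          return np.sqrt(2.0 / 3.0 * np.sum(dev * dev))

      eps_critical = 0.18  # hypothetical failure strain from a biaxial (bulge) test

      # Hypothetical plastic strain tensor at the most strained element near the plate center.
      eps = np.array([[0.12, 0.0, 0.0],
                      [0.0, 0.10, 0.0],
                      [0.0, 0.0, -0.22]])

      eps_eq = equivalent_plastic_strain(eps)
      print(f"equivalent plastic strain = {eps_eq:.3f}",
            "-> tearing predicted" if eps_eq >= eps_critical else "-> no tearing predicted")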

  19. Ontology-Driven Monitoring of Patient's Vital Signs Enabling Personalized Medical Detection and Alert

    PubMed Central

    Hristoskova, Anna; Sakkalis, Vangelis; Zacharioudakis, Giorgos; Tsiknakis, Manolis; De Turck, Filip

    2014-01-01

    A major challenge in caring for patients with chronic conditions is the early detection of exacerbations of the disease. Medical personnel should be contacted immediately in order to intervene in time, before an acute state is reached, ensuring patient safety. This paper proposes an ambient intelligence (AmI) framework supporting real-time remote monitoring of patients diagnosed with congestive heart failure (CHF). Its novelty is the integration of: (i) personalized monitoring of the patient's health status and risk stage; (ii) intelligent alerting of the dedicated physician through the construction of medical workflows on-the-fly; and (iii) dynamic adaptation of the vital signs' monitoring environment on any available device or smartphone located in close proximity to the physician, depending on new medical measurements, additional disease specifications or the failure of the infrastructure. The intelligence lies in the adoption of semantics providing for personalized and automated emergency alerting that smoothly interacts with the physician, regardless of location, ensuring timely intervention during an emergency. The framework is evaluated on a medical emergency scenario in which, when patient thresholds are exceeded, medical personnel are localized and contacted, and ad hoc information on the patient's condition is presented on the most suitable device within the physician's reach. PMID:24445411
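
    The alerting step can be pictured as a simple threshold check on personalized vital-sign bounds before the workflow escalates to the nearest physician device. The field names and limits below are hypothetical stand-ins for the ontology-driven rules described in the paper.

      # Hypothetical patient-specific thresholds; field names and bounds are illustrative only.
      thresholds = {"heart_rate": (40, 120), "spo2": (92, 100), "weight_gain_kg_24h": (0, 2.0)}

      def check_vitals(vitals):
          """Return the list of vital signs that fall outside their personalized bounds."""
          exceeded = []
          for name, value in vitals.items():
              low, high = thresholds.get(name, (float("-inf"), float("inf")))
              if not (low <= value <= high):
                  exceeded.append((name, value))
          return exceeded

      alerts = check_vitals({"heart_rate": 128, "spo2": 95, "weight_gain_kg_24h": 2.4})
      if alerts:
          print("ALERT physician on nearest device:", alerts)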

  20. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    NASA Technical Reports Server (NTRS)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system that will detect failures and reconfigure the avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed that prints out tables of values for the cumulative probability of being in the domain of failure, the system reliability, and the false alarm probability given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
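
    A minimal sketch of the kind of calculation described, assuming the healthy sensor output is Gaussian with known mean and standard deviation and the failure domain is one-sided. The statistics and threshold are placeholders, not the actual flight-test sensor parameters.

      from math import erf, sqrt

      def normal_cdf(x, mu, sigma):
          """Cumulative distribution function of a normal distribution."""
          return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

      # Hypothetical sensor statistics and failure-domain boundary (illustrative only).
      mu, sigma = 0.0, 1.0          # healthy sensor output mean / standard deviation
      threshold = 3.0               # boundary of the failure domain

      # Probability that a healthy sensor output falls in the (one-sided) failure domain,
      # i.e. the per-sample false alarm probability for this threshold.
      p_false_alarm = 1.0 - normal_cdf(threshold, mu, sigma)
      print(f"false alarm probability per sample: {p_false_alarm:.2e}")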

  1. A simple model for enamel fracture from margin cracks.

    PubMed

    Chai, Herzl; Lee, James J-W; Kwon, Jae-Young; Lucas, Peter W; Lawn, Brian R

    2009-06-01

    We present results of in situ fracture tests on extracted human molar teeth showing failure by margin cracking. The teeth are mounted into an epoxy base and loaded with a rod indenter capped with a Teflon insert, as representative of food modulus. In situ observations of cracks extending longitudinally upward from the cervical margins are recorded in real time with a video camera. The cracks appear above some threshold and grow steadily within the enamel coat toward the occlusal surface in a configuration reminiscent of channel-like cracks in brittle films. Substantially higher loading is required to delaminate the enamel from the dentin, attesting to the resilience of the tooth structure. A simplistic fracture mechanics analysis is applied to determine the critical load relation for traversal of the margin crack along the full length of the side wall. The capacity of any given tooth to resist failure by margin cracking is predicted to increase with greater enamel thickness and cuspal radius. Implications in relation to dentistry and evolutionary biology are briefly considered.

  2. A comparison of statistical criteria for setting optimally discriminating MCAT and GPA thresholds in medical school admissions.

    PubMed

    Albanese, Mark A; Farrell, Philip; Dottl, Susan L

    2005-01-01

    Using Medical College Admission Test and grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. The aim was to assess various statistical indexes for determining optimally discriminating thresholds for MCAT-GPA scores. Entering classes from 1992 through 1998 (N = 752) were used to develop guidelines for cut scores that optimize discrimination between students who pass and those who do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory and noncompensatory procedures both accounted for 54% of Step 1 failures, but demanded different performance requirements (noncompensatory: MCAT biological sciences = 8, physical sciences = 7, verbal reasoning = 7, for a sum of 22; compensatory: MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.
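
    The compensatory versus noncompensatory distinction can be expressed directly as two decision rules. The cut scores below are those quoted in the abstract; the applicant profile is hypothetical.

      def noncompensatory_pass(bio, phys, verbal, cuts=(8, 7, 7)):
          """Every subscore must meet its own cut score."""
          return bio >= cuts[0] and phys >= cuts[1] and verbal >= cuts[2]

      def compensatory_pass(bio, phys, verbal, total_cut=24):
          """Only the MCAT total must meet the cut; strengths can offset weaknesses."""
          return (bio + phys + verbal) >= total_cut

      applicant = {"bio": 9, "phys": 6, "verbal": 9}   # hypothetical applicant
      print("noncompensatory:", noncompensatory_pass(**applicant))   # fails (phys < 7)
      print("compensatory:   ", compensatory_pass(**applicant))      # passes (total = 24)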

  3. Transmission intensity and drug resistance in malaria population dynamics: implications for climate change.

    PubMed

    Artzy-Randrup, Yael; Alonso, David; Pascual, Mercedes

    2010-10-26

    Although the spread of drug resistance and the influence of climate change on malaria are most often considered separately, these factors have the potential to interact through altered levels of transmission intensity. The influence of transmission intensity on the evolution of drug resistance has been addressed in theoretical studies from a population genetics perspective; less is known, however, about how epidemiological dynamics at the population level modulate this influence. We ask, from a theoretical perspective, whether population dynamics can explain non-trivial, non-monotonic patterns of treatment failure with transmission intensity, and, if so, under what conditions. We then address the implications of warmer temperatures in an East African highland where, as in other similar regions at the altitudinal edge of malaria's distribution, there has been a pronounced increase of cases from the 1970s to the 1990s. Our theoretical analyses, with a transmission model that includes different levels of immunity, demonstrate that an increase in transmission beyond a threshold can lead to a decrease in drug resistance, as previously shown, but that a second threshold may occur and lead to the re-establishment of drug resistance. Estimates of the increase in transmission intensity from the 1970s to the 1990s for the Kenyan time series, obtained by fitting the two-stage version of the model with an explicit representation of vector dynamics, suggest that warmer temperatures are likely to have moved the system towards the first threshold and, in so doing, to have promoted the faster spread of drug resistance. Climate change and drug resistance can therefore interact and need not be considered as alternative explanations for trends in disease incidence in this region. Non-monotonic patterns of treatment failure with transmission intensity, similar to those described as the 'valley phenomenon' for Uganda, can result from epidemiological dynamics, though under poorly understood assumptions.

  4. Transient variations of transthoracic impedance as a predictor of heart failure and death in patients with implanted defibrillators.

    PubMed

    Mitrani, Raul D; Sager, Solomon J; Moscucci, Mauro; Cogan, John; Myerburg, Robert J

    2014-08-20

    Transient variations in physiological parameters may forewarn of life-threatening cardiac events but are difficult to identify clinically. Implantable cardioverter defibrillators (ICDs) designed to measure transthoracic impedance provide a surrogate marker for pulmonary congestion. The aim of this study was to determine whether the frequency of changes in transthoracic impedance (TTI) is associated with congestive heart failure (CHF) exacerbation and predicts mortality. We followed 109 consecutive patients (pts) with ICDs (n=58) or CRT-ICDs (n=51) for a mean of 21.3 (±10.2) months. Using 80 ohm-days as a reference, we correlated the frequency of TTI changes above this index with CHF hospitalizations or death. There was at least one TTI threshold crossing in 79 (72%) pts over 23.3 months of follow-up, with a mean of 1.8 ± 3.4 per year. The 18 pts with CHF hospitalizations had a mean of 4.3 TTI threshold crossings/year (S.D.=±7.3; median=2.8), compared to 1.3 (S.D.=±1.5; median=0.8) among pts without CHF hospitalizations (p=0.0006). Among the 20 patients who died during follow-up, there were 4.2 (S.D.=±7.0; median=2.9) TTI threshold crossings/year, compared with 1.3 (S.D.=±1.3; median=0.9) threshold crossings/year among survivors (p=0.0004). Using Cox proportional hazards modeling, after adjusting for age, baseline EF, and number of shocks, TTI threshold crossing was an independent predictor of death (HR 1.72, 95% CI 1.26-2.36, p=0.001). An increased frequency of TTI threshold crossings may be a useful predictor of transient risk for identifying a subgroup of ICD recipients at greater individual risk for death or CHF hospitalizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
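
    The exposure measure used in the analysis, threshold crossings per year, can be sketched as a simple upward-crossing count on a fluid-index series. Only the 80 ohm-day reference is taken from the abstract; the series, sampling, and follow-up length below are illustrative assumptions.

      def count_threshold_crossings(fluid_index, threshold=80.0):
          """Count upward crossings of the fluid (ohm-day) index above the threshold."""
          return sum(
              1 for prev, cur in zip(fluid_index, fluid_index[1:])
              if prev < threshold <= cur
          )

      # Hypothetical daily fluid-index values for one patient over one month of follow-up.
      series = [10, 30, 55, 82, 90, 60, 20, 15, 70, 95, 40, 10]
      crossings = count_threshold_crossings(series)
      followup_years = 30 / 365.0
      print(f"{crossings} crossings, {crossings / followup_years:.1f} per year of follow-up")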

  5. Exploring total cardiac variability in healthy and pathophysiological subjects using improved refined multiscale entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2017-02-01

    Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are widely used to evaluate the complexity of a time series across multiple time scales 't'. Both techniques, at certain time scales (sometimes over all time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the following limitations: (1) the threshold value 'r' is updated as a function of the long-term standard deviation and hence cannot capture the short-term variability, or the substantial variability inherent in beat-to-beat fluctuations, of a long-term HRV time series; (2) in RMSE, the entropy values assigned to the different filtered, scaled time series result from changes in variance but do not fully reflect the structural organization inherent in the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique that introduces a new procedure for setting the threshold value by taking into account the period-to-period variability inherent in a signal, and we evaluate it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, over the entire range of time scales. The results strongly support a reduction in the complexity of HRV time series in the female group, in old-aged subjects, in patients suffering from severe cardiovascular and non-cardiovascular diseases, and in their corresponding surrogate time series.
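
    For orientation, a standard multiscale (sample) entropy sketch is shown below with the conventional tolerance r = 0.2 x SD. It illustrates coarse-graining and entropy estimation only and does not implement the authors' improved threshold-setting procedure based on period-to-period variability; the surrogate series is synthetic.

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """Sample entropy of series x with embedding dimension m and tolerance r
          (r given as a fraction of the standard deviation of x)."""
          x = np.asarray(x, dtype=float)
          tol = r * np.std(x)
          def count_matches(dim):
              templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
              count = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)
                  count += np.sum(dist <= tol) - 1          # exclude the self-match
              return count
          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def coarse_grain(x, scale):
          """Non-overlapping averages of length `scale` (standard MSE coarse-graining)."""
          n = len(x) // scale
          return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

      rng = np.random.default_rng(0)
      rr = np.cumsum(rng.normal(0, 1, 2000)) * 0.01 + 0.8   # synthetic stand-in for an RR-interval series
      for scale in (1, 2, 4):
          print(scale, round(sample_entropy(coarse_grain(rr, scale)), 3))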

  6. Durability of implanted electrodes and leads in an upper-limb neuroprosthesis.

    PubMed

    Kilgore, Kevin L; Peckham, P Hunter; Keith, Michael W; Montague, Fred W; Hart, Ronald L; Gazdik, Martha M; Bryden, Anne M; Snyder, Scott A; Stage, Thomas G

    2003-01-01

    Implanted neuroprosthetic systems have been successfully used to provide upper-limb function for over 16 years. A critical aspect of these implanted systems is the safety, stability, and reliability of the stimulating electrodes and leads. These components are (1) the stimulating electrode itself, (2) the electrode lead, and (3) the lead-to-device connector. A failure in any of these components causes the direct loss of the capability to activate a muscle consistently, usually resulting in a decrement in the function provided by the neuroprosthesis. Our results indicate that the electrode, lead, and connector system is extremely durable. We analyzed 238 electrodes that had been implanted as part of an upper-limb neuroprosthesis. Each electrode had been implanted for at least 3 years, with a maximum implantation time of over 16 years. Only three electrode-lead failures and one electrode infection occurred, for a survival rate of almost 99 percent. Electrode threshold measurements indicate that the electrode response is stable over time, with no evidence of electrode migration or continual encapsulation in any of the electrodes studied. These results have an impact on the design of implantable neuroprosthetic systems: the electrode-lead component of these systems should no longer be considered a weak technological link.

  7. Verification of a Remaining Flying Time Prediction System for Small Electric Aircraft

    NASA Technical Reports Server (NTRS)

    Hogge, Edward F.; Bole, Brian M.; Vazquez, Sixto L.; Celaya, Jose R.; Strom, Thomas H.; Hill, Boyd L.; Smalling, Kyle M.; Quach, Cuong C.

    2015-01-01

    This paper addresses the problem of building trust in online predictions of a battery powered aircraft's remaining available flying time. A set of ground tests is described that make use of a small unmanned aerial vehicle to verify the performance of remaining flying time predictions. The algorithm verification procedure described here uses a fully functional vehicle that is restrained to a platform for repeated run-to-functional-failure experiments. The vehicle under test is commanded to follow a predefined propeller RPM profile in order to create battery demand profiles similar to those expected in flight. The fully integrated aircraft is repeatedly operated until the charge stored in powertrain batteries falls below a specified lower-limit. The time at which the lower-limit on battery charge is crossed is then used to measure the accuracy of remaining flying time predictions. Accuracy requirements are considered in this paper for an alarm that warns operators when remaining flying time is estimated to fall below a specified threshold.
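
    The verification metric hinges on the time at which battery charge crosses the lower limit. A toy prediction of that crossing time under a linear-trend assumption is sketched below; the actual work uses model-based prognostics, and all numbers here are invented.

      def remaining_flying_time(charge_history, t_history, lower_limit):
          """Extrapolate the battery charge trend linearly and estimate the time
          remaining until it crosses the lower limit."""
          # Simple least-squares slope over the recent window (illustrative only).
          n = len(t_history)
          t_mean = sum(t_history) / n
          q_mean = sum(charge_history) / n
          slope = (sum((t - t_mean) * (q - q_mean) for t, q in zip(t_history, charge_history))
                   / sum((t - t_mean) ** 2 for t in t_history))
          if slope >= 0:
              return float("inf")
          return (lower_limit - charge_history[-1]) / slope  # seconds until crossing

      # Hypothetical charge (Ah) sampled every 30 s during a constant-demand profile.
      t = [0, 30, 60, 90, 120]
      q = [2.20, 2.12, 2.05, 1.97, 1.90]
      print(f"estimated remaining flying time: {remaining_flying_time(q, t, 1.2):.0f} s")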

  8. Accelerated life testing effects on CMOS microcircuit characteristics, phase 1

    NASA Technical Reports Server (NTRS)

    Maximow, B.

    1976-01-01

    An accelerated life test of sufficient duration to generate a minimum of 50% cumulative failures in lots of CMOS devices was conducted to provide a basis for determining the consistency of activation energy at 250 C. An investigation was made to determine whether any thresholds were exceeded during the high temperature testing, which could trigger failure mechanisms unique to that temperature. The usefulness of the 250 C temperature test as a predictor of long term reliability was evaluated.
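
    Accelerated life tests of this kind are usually interpreted through an Arrhenius acceleration factor; a minimal sketch is given below with a hypothetical activation energy and use temperature, since the report itself addresses whether the 250 C stress stays below thresholds that would change the failure mechanism.

      from math import exp

      K_B = 8.617e-5  # Boltzmann constant, eV/K

      def acceleration_factor(ea_ev, t_use_c, t_stress_c):
          """Arrhenius acceleration factor between use and stress temperatures."""
          t_use = t_use_c + 273.15
          t_stress = t_stress_c + 273.15
          return exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

      # Hypothetical activation energy; 250 C stress versus an assumed 55 C use condition.
      af = acceleration_factor(ea_ev=1.0, t_use_c=55.0, t_stress_c=250.0)
      print(f"acceleration factor ≈ {af:.1e}")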

  9. Analysis of lasers as a solution to efficiency droop in solid-state lighting

    DOE PAGES

    Chow, Weng W.; Crawford, Mary H.

    2015-10-06

    This letter analyzes the proposal to mitigate the efficiency droop in solid-state light emitters by replacing InGaN light-emitting diodes (LEDs) with lasers. The argument in favor of this approach is that carrier-population clamping after the onset of lasing limits carrier loss to that at threshold, while stimulated emission continues to grow with injection current. A fully quantized (carriers and light) theory that is applicable to LEDs and lasers (above and below threshold) is used to obtain a quantitative evaluation. The results confirm the potential advantage of higher laser output power and efficiency above the lasing threshold, while also indicating disadvantages, including low efficiency prior to lasing onset, sensitivity of the lasing threshold to temperature, and the effects of catastrophic laser failure. A solution to some of these concerns is suggested that takes advantage of recent developments in nanolasers.

  10. Simplified spacecraft vulnerability assessments at component level in early design phase at the European Space Agency's Concurrent Design Facility

    NASA Astrophysics Data System (ADS)

    Kempf, Scott; Schäfer, Frank K.; Cardone, Tiziana; Ferreira, Ivo; Gerené, Sam; Destefanis, Roberto; Grassi, Lilith

    2016-12-01

    During recent years, the state-of-the-art risk assessment of the threat posed to spacecraft by micrometeoroids and space debris has been expanded to the analysis of failure modes of internal spacecraft components. This method can now be used to perform risk analyses for satellites to assess various failure levels - from failure of specific sub-systems to catastrophic break-up. This new assessment methodology is based on triple-wall ballistic limit equations (BLEs), specifically the Schäfer-Ryan-Lambert (SRL) BLE, which is applicable for describing failure threshold levels for satellite components following a hypervelocity impact. The methodology is implemented in the form of the software tool Particle Impact Risk and vulnerability Analysis Tool (PIRAT). During a recent European Space Agency (ESA) funded study, the PIRAT functionality was expanded in order to provide an interface to ESA's Concurrent Design Facility (CDF). The additions include a geometry importer and an OCDT (Open Concurrent Design Tool) interface. The new interface provides both the expanded geometrical flexibility, which is provided by external computer aided design (CAD) modelling, and an ease of import of existing data without the need for extensive preparation of the model. The reduced effort required to perform vulnerability analyses makes it feasible for application during early design phase, at which point modifications to satellite design can be undertaken with relatively little extra effort. The integration of PIRAT in the CDF represents the first time that vulnerability analyses can be performed in-session in ESA's CDF and the first time that comprehensive vulnerability studies can be applied cost-effectively in early design phase in general.

  11. Can complexity decrease in congestive heart failure?

    NASA Astrophysics Data System (ADS)

    Mukherjee, Sayan; Palit, Sanjay Kumar; Banerjee, Santo; Ariffin, M. R. K.; Rondoni, Lamberto; Bhattacharya, D. K.

    2015-12-01

    The complexity of a signal can be measured by the recurrence period density entropy (RPDE) from the reconstructed phase space. We have chosen a window-based RPDE method for the classification of signals, as RPDE is an average entropic measure of the whole phase space. We have observed the changes in complexity in cardiac signals of normal healthy persons (NHP) and congestive heart failure patients (CHFP). The results show that the cardiac dynamics of a healthy subject is more complex and random compared to that of a heart failure patient, whose dynamics is more deterministic. We have constructed a general threshold to distinguish the borderline between healthy and congestive heart failure dynamics. The results may be useful for a wide range of physiological and biomedical analyses.

  12. Supersonic Localized Excitations Mediate Microscopic Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Ghaffari, H. O.; Griffith, W. A.; Pec, M.

    2017-12-01

    A moving rupture front activates a fault patch by increasing stress above a threshold strength level. Subsequent failure yields fast slip that releases stored energy in the rock. A fraction of the released energy is radiated as seismic waves carrying information about the earthquake source. While this simplified model is widely accepted, the detailed evolution from the onset of dynamic failure to eventual re-equilibration is still poorly understood. To study dynamic failure of brittle solids, we indented thin sheets of single mineral crystals and recorded the emitted ultrasound signals (high-frequency analogues to seismic waves) using an array of 8 to 16 ultrasound probes. The simple geometry of the experiments allows us to unravel details of the dynamic stress history of the laboratory earthquake sources. A universal pattern of failure is observed. First, stress increases over a short time period (1-2 µs), followed by rapid weakening (≈ 15 µs). Rapid weakening is followed by two distinct relaxation phases: a temporary quasi-steady state phase (10 µs) followed by a long-term relaxation phase (> 50 µs). We demonstrate that the dynamic stress history during failure is governed by formation and interaction of local non-dispersive excitations, or solitons. The formation and annihilation of solitons mediates the microscopic fast weakening phase, during which extreme acceleration and collision of solitons lead to non-Newtonian behavior and Lorentz contraction, i.e. shortening of the solitons' characteristic length. Interestingly, a soliton can propagate as fast as 37 km/s, much faster than the p-wave velocity, implying that a fraction of the energy is transmitted through soliton excitations. The quasi-steady state phase delays the long-term ageing of the damaged crystal, implying a potentially weaker material. Our results open new horizons for understanding the complexity of earthquake sources, and, more generally, non-equilibrium relaxation of many-body systems.

  13. Detection and severity classification of extracardiac interference in ⁸²Rb PET myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, Elizabeth J., E-mail: eorton@physics.carleton.ca; Kemp, Robert A. de; Glenn Wells, R.

    2014-10-15

    Purpose: Myocardial perfusion imaging (MPI) is used for diagnosis and prognosis of coronary artery disease. When MPI studies are performed with positron emission tomography (PET) and the radioactive tracer rubidium-82 chloride (⁸²Rb), a small but non-negligible fraction of studies (∼10%) suffer from extracardiac interference: high levels of tracer uptake in structures adjacent to the heart which mask the true cardiac tracer uptake. At present, there are no clinically available options for automated detection or correction of this problem. This work presents an algorithm that detects and classifies the severity of extracardiac interference in ⁸²Rb PET MPI images and reports the accuracy and failure rate of the method. Methods: A set of 200 ⁸²Rb PET MPI images were reviewed by a trained nuclear cardiologist and interference severity reported on a four-class scale, from absent to severe. An automated algorithm was developed that compares uptake at the external border of the myocardium to three thresholds, separating the four interference severity classes. A minimum area of interference was required, and the search region was limited to that facing the stomach wall and spleen. Maximizing concordance (Cohen's Kappa) and minimizing failure rate for the set of 200 clinician-read images were used to find the optimal population-based constants defining search limit and minimum area parameters and the thresholds for the algorithm. Tenfold stratified cross-validation was used to find optimal thresholds and report accuracy measures (sensitivity, specificity, and Kappa). Results: The algorithm was capable of detecting interference with a mean [95% confidence interval] sensitivity/specificity/Kappa of 0.97 [0.94, 1.00]/0.82 [0.66, 0.98]/0.79 [0.65, 0.92], and a failure rate of 1.0% ± 0.2%. The four-class overall Kappa was 0.72 [0.64, 0.81]. Separation of mild versus moderate-or-greater interference was performed with good accuracy (sensitivity/specificity/Kappa = 0.92 [0.86, 0.99]/0.86 [0.71, 1.00]/0.78 [0.64, 0.92]), while separation of moderate versus severe interference severity classes showed reduced sensitivity/Kappa but little change in specificity (sensitivity/specificity/Kappa = 0.83 [0.77, 0.88]/0.82 [0.77, 0.88]/0.65 [0.60, 0.70]). Specificity was greater than sensitivity for all interference classes. Algorithm execution time was <1 min. Conclusions: The algorithm produced here has a low failure rate and high accuracy for detection of extracardiac interference in ⁸²Rb PET MPI scans. It provides a fast, reliable, automated method for assessing severity of extracardiac interference.
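
    Structurally, the severity classifier amounts to comparing border uptake against three ordered thresholds plus a minimum-area gate. The sketch below mirrors that structure only; the threshold and area constants are placeholders, not the population-optimized values reported from the cross-validation.

      def classify_interference(border_uptake, area_fraction,
                                thresholds=(0.3, 0.5, 0.7), min_area=0.05):
          """Map uptake at the myocardial border to one of four severity classes.
          Threshold and minimum-area values here are illustrative placeholders."""
          if area_fraction < min_area:
              return "absent"
          t_mild, t_moderate, t_severe = thresholds
          if border_uptake < t_mild:
              return "absent"
          if border_uptake < t_moderate:
              return "mild"
          if border_uptake < t_severe:
              return "moderate"
          return "severe"

      print(classify_interference(border_uptake=0.62, area_fraction=0.08))  # -> moderate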

  14. Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis.

    PubMed

    Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan

    2015-01-01

    The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map, identification of potential failure modes, description of the cause and effect, temporal occurrence, and team member involvement in each failure mode, and examination of existing safety controls. A risk probability number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard failure modes of significant RPN values. After workflow alterations, RPN numbers were again recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only 1 failure mode maintained RPN significance; patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  15. Application of Failure Mode and Effects Analysis to Intraoperative Radiation Therapy Using Mobile Electron Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan

    2012-02-01

    Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system, based on the product of three parameters (severity, frequency of occurrence and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong Monitor Unit calculation and incorrect data entry at the treatment console. Potential causes of failure included shield displacement, human errors such as underestimation of CTV extension (mainly because of lack of adequate training and time pressures), failure in the communication between operators, and machine malfunctioning. The main effects of failure were CTV underdose, wrong dose distribution and/or delivery, and unintended normal tissue irradiation. As additional safety measures, the use of dedicated staff for IORT, double-checking of MU calculation and data entry, and implementation of in vivo dosimetry were suggested. Conclusions: FMEA appears to be a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT led to the identification of three safety measures for risk mitigation.
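
    The RPN arithmetic used in both FMEA studies above is simply the product of the three 1-10 ratings, compared against the 125 threshold for little concern of risk; the ratings in the example are hypothetical.

      def rpn(severity, occurrence, detectability):
          """Risk probability number: the product of the three 1-10 ratings."""
          for score in (severity, occurrence, detectability):
              assert 1 <= score <= 10, "each rating must be on the 1-10 scale"
          return severity * occurrence * detectability

      RPN_THRESHOLD = 125  # upper threshold for "little concern of risk" quoted in the abstract

      # Hypothetical ratings for a single failure mode (illustrative only).
      score = rpn(severity=7, occurrence=4, detectability=6)
      print(score, "needs mitigation" if score > RPN_THRESHOLD else "acceptable")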

  16. New perspectives on the transition between discrete fracture, fragmentation, and pulverization during brittle failure of rocks

    NASA Astrophysics Data System (ADS)

    Griffith, W. A.; Ghaffari, H.; Barber, T. J.; Borjas, C.

    2015-12-01

    The motions of Earth's tectonic plates are typically measured in millimeters to tens of centimeters per year, seemingly confirming the generally-held view that tectonic processes are slow, and have been throughout Earth's history. In line with this perspective, the vast majority of laboratory rock mechanics research focused on failure in the brittle regime has been limited to experiments utilizing slow loading rates. On the other hand, many natural processes that pose significant risk for humans (e.g., earthquakes and extraterrestrial impacts), as well as risks associated with human activities (blow-outs, explosions, mining and mine failures, projectile penetration), occur at rates that are hundreds to thousands of times faster than those typically simulated in the laboratory. Little experimental data exists to confirm or calibrate theoretical models explaining the connection between these dramatic events and the pulverized rocks found in fault zones, impacts, or explosions; however the experimental data that does exist is thought-provoking: At the earth's surface, the process of brittle fracture passes through a critical transition in rocks at high strain rates (101-103s-1) between regimes of discrete fracture and distributed fragmentation, accompanied by a dramatic increase in strength. Previous experimental works on this topic have focused on key thresholds (e.g., peak stress, peak strain, average strain rate) that define this transition, but more recent work suggests that this transition is more fundamentally dependent on characteristics (e.g., shape) of the loading pulse and related microcrack dynamics, perhaps explaining why for different lithologies different thresholds more effectively define the pulverization transition. In this presentation we summarize some of our work focused on this transition, including the evolution of individual defects at the microscopic, microsecond scale and the energy budget associated with the brittle fragmentation process as a function of lithology and loading pulse characteristics.

  17. EM calibration based on Post OPC layout analysis

    NASA Astrophysics Data System (ADS)

    Sreedhar, Aswin; Kundu, Sandip

    2010-03-01

    Design for Manufacturability (DFM) involves changes to the design and CAD tools to help increase pattern printability and improve process control. Design for Reliability (DFR) does the same to improve the reliability of devices against failures such as electromigration (EM), gate-oxide breakdown, hot carrier injection (HCI), Negative Bias Temperature Instability (NBTI) and mechanical stress effects. Electromigration occurs due to migration or displacement of atoms as a result of the movement of electrons through a conducting medium. The rate of migration determines the Mean Time to Failure (MTTF), which is modeled as a function of temperature and current density. The model itself is calibrated through failure analysis (FA) of parts that are deemed to have failed due to EM against design parameters such as linewidth. Reliability Verification (RV) of a design involves verifying that every conducting line in a design meets a certain MTTF threshold. In order to perform RV, the current density for each wire must be computed. Current itself is a function of the parasitics that are determined through RC extraction. The standard practice is to perform the RC extraction and current density calculation on drawn, pre-OPC layouts. If a wire fails to meet the threshold for MTTF, it may be resized. Subsequently, mask preparation steps such as OPC and PSM introduce extra features such as SRAFs, jogs, hammerheads and serifs that change the wires' resistance, capacitance and current density values. Hence, calibrating the EM model based on pre-OPC layouts will lead to different results compared to post-OPC layouts. In this work, we compare EM model calibration and reliability checks based on the drawn layout versus the predicted layout, where the drawn layout is the pre-OPC layout and the predicted layout is based on litho simulation of the post-OPC layout. Results show significant divergence between these two approaches, making a case for a methodology based on the predicted layout.
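
    The MTTF model referred to (a function of temperature and current density) is commonly written in the form of Black's equation; a sketch is given below to show how a post-OPC change in current density propagates into the predicted lifetime. The prefactor, activation energy, exponent, and current densities are illustrative assumptions, not calibrated values.

      from math import exp

      K_B = 8.617e-5  # Boltzmann constant, eV/K

      def black_mttf(a_const, j, ea_ev, temp_k, n=2.0):
          """Black's equation: MTTF = A * J^-n * exp(Ea / (k_B * T)).
          All constants here are placeholders for illustration."""
          return a_const * j ** (-n) * exp(ea_ev / (K_B * temp_k))

      mttf_drawn = black_mttf(a_const=1.0e3, j=1.0e6, ea_ev=0.9, temp_k=378.0)
      # Post-OPC features change the effective linewidth, hence the current density.
      mttf_post_opc = black_mttf(a_const=1.0e3, j=1.3e6, ea_ev=0.9, temp_k=378.0)
      print(f"MTTF ratio (post-OPC / drawn): {mttf_post_opc / mttf_drawn:.2f}")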

  18. From GCM Output to Local Hydrologic and Ecological Impacts: Integrating Climate Change Projections into Conservation Lands

    NASA Astrophysics Data System (ADS)

    Weiss, S. B.; Micheli, L.; Flint, L. E.; Flint, A. L.; Thorne, J. H.

    2014-12-01

    Assessment of climate change resilience, vulnerability, and adaptation options requires downscaling of GCM outputs to local scales, and conversion of temperature and precipitation forcings into hydrologic and ecological responses. Recent work in the San Francisco Bay Area and California demonstrates a practical approach to this process. First, climate futures (GCM x Emissions Scenario) are screened using cluster analysis for seasonal precipitation and temperature, to select a tractable subset of projections that still represents the range of climate projections. Second, monthly climate projections are downscaled to 270 m and the Basin Characterization Model (BCM) is applied to generate fine-scale recharge, runoff, actual evapotranspiration (AET), and climatic water deficit (CWD), accounting for soils, bedrock geology, topography, and local climate. Third, annual time series are used to derive 30-year climatologies and recurrence intervals of extreme events (including multi-year droughts) at the scale of small watersheds and conservation parcels/networks. We take a "scenario-neutral" approach in which thresholds are defined for system "failure," such as water supply shortfalls or drought mortality/vegetation transitions, and the time window for hitting those thresholds is evaluated across all selected climate projections. San Francisco Bay Area examples include drought thresholds (CWD) for specific vegetation types that identify leading/trailing edges and local refugia, evaluation of hydrologic resources (recharge and runoff) provided by conservation lands, and productivity of rangelands (AET). BCM outputs for multiple futures are becoming available to resource managers through on-line data extraction tools. This approach has wide applicability to numerous resource management issues.

  19. Effects of hydromechanical loading history and antecedent soil mechanical damage on shallow landslide triggering

    NASA Astrophysics Data System (ADS)

    Fan, Linfeng; Lehmann, Peter; Or, Dani

    2015-10-01

    Evidence suggests that the sudden triggering of rainfall-induced shallow landslides is preceded by accumulation of local internal failures in the soil mantle before their abrupt coalescence into a landslide failure plane. The mechanical status of a hillslope at any given time reflects competition between local damage accumulated during antecedent rainfall events and rates of mechanical healing (e.g., rebonding of microcracks and root regrowth). This dynamic interplay between damage accumulation and healing rates determines the initial mechanical state for landslide modeling. We evaluated the roles of these dynamic processes on landslide characteristics and patterns using a hydromechanical landslide-triggering model for a sequence of rainfall scenarios. The progressive nature of soil failure was represented by the fiber bundle model formalism that considers threshold strength of mechanical bonds linking adjacent soil columns and bedrock. The antecedent damage induced by prior rainfall events was expressed by the fraction of broken fibers that gradually regain strength or mechanically heal at rates specific to soil and roots. Results indicate that antecedent damage accelerates landslide initiation relative to pristine (undamaged) hillslopes. The volumes of first triggered landslides increase with increasing antecedent damage; however, for heavily damaged hillslopes, landslide volumes tend to decrease. Elapsed time between rainfall events allows mechanical healing that reduces the effects of antecedent damage. This study proposed a quantitative framework for systematically incorporating hydromechanical loading history and information on precursor events (e.g., such as recorded by acoustic emissions) into shallow landslide hazard assessment.
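
    A minimal equal-load-sharing fiber bundle sketch, with antecedent damage represented as a fraction of fibers broken before loading starts, illustrates the qualitative result that damaged hillslopes fail at lower loads. The strength distribution, load schedule, and damage fraction are arbitrary choices, and mechanical healing is not modeled here.

      import numpy as np

      def fiber_bundle_failure(strengths, load_steps, initial_broken_fraction=0.0):
          """Equal-load-sharing fiber bundle: increase the total load stepwise and
          break every fiber whose per-fiber load exceeds its threshold strength.
          Antecedent damage enters as a fraction of fibers already broken."""
          rng = np.random.default_rng(1)
          n = len(strengths)
          broken = np.zeros(n, dtype=bool)
          broken[rng.choice(n, int(initial_broken_fraction * n), replace=False)] = True
          for total_load in load_steps:
              while True:
                  intact = ~broken
                  if intact.sum() == 0:
                      return total_load            # bundle (hillslope) fails at this load
                  per_fiber = total_load / intact.sum()
                  newly = intact & (strengths < per_fiber)
                  if not newly.any():
                      break
                  broken |= newly
          return None                               # survives all load steps

      strengths = np.random.default_rng(0).uniform(0.5, 1.5, 1000)
      loads = np.linspace(100, 800, 50)
      print("pristine fails at load:", fiber_bundle_failure(strengths, loads))
      print("damaged fails at load: ", fiber_bundle_failure(strengths, loads, 0.2))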

  20. Retrieval dynamics in self-terminated memory search.

    PubMed

    Hussey, Erika K; Dougherty, Michael R; Harbison, J Isaiah; Davelaar, Eddy J

    2014-02-01

    Most free-recall experiments employ a paradigm in which participants are given a preset amount of time to retrieve items from a list. While much has been learned using this paradigm, it ignores an important component of many real-world retrieval tasks: the decision to terminate memory search. The present study examines the temporal characteristics underlying memory search by comparing within subjects a standard retrieval paradigm with a finite, preset amount of time (closed interval) to a design that allows participants to terminate memory search on their own (open interval). Calling on the results of several presented simulations, we anticipated that the threshold for number of retrieval failures varied as a function of the nature of the recall paradigm, such that open intervals should result in lower thresholds than closed intervals. Moreover, this effect was expected to manifest in interretrieval times (IRTs). Although retrieval-interval type did not significantly impact the number of items recalled or error rates, IRTs were sensitive to the manipulation. Specifically, the final IRTs in the closed-interval paradigm were longer than those of the open-interval paradigm. This pattern suggests that providing participants with a preset retrieval interval not only masks an important component of the retrieval process (the memory search termination decision), but also alters temporal retrieval dynamics. Task demands may compel people to strategically control aspects of their retrieval by implementing different stopping rules.

  1. Mechanisms That Modulate Peripheral Oxygen Delivery during Exercise in Heart Failure.

    PubMed

    Kisaka, Tomohiko; Stringer, William W; Koike, Akira; Agostoni, Piergiuseppe; Wasserman, Karlman

    2017-07-01

    Oxygen uptake (V˙O2) measured at the mouth, which is equal to the cardiac output (CO) times the arterial-venous oxygen content difference [C(a-v)O2], increases more than 10- to 20-fold in normal subjects during exercise. To achieve this substantial increase in oxygen uptake [V˙O2 = CO × C(a-v)O2], both CO and the arterial-venous difference must simultaneously increase. Although this occurs in normal subjects, patients with heart failure cannot achieve significant increases in cardiac output and must rely primarily on changes in the arterial-venous difference to increase V˙O2 during exercise. Inadequate oxygen delivery to the tissue during exercise in heart failure results in tissue anaerobiosis, lactic acid accumulation, and reduction in exercise tolerance. H+ is an important regulatory and feedback mechanism to facilitate additional oxygen delivery to the tissue (Bohr effect) and further aerobic production of ATP when tissue anaerobic metabolism increases the production of lactate (anaerobic threshold). This H+ production in the muscle capillary promotes the continued unloading of oxygen (oxyhemoglobin desaturation) while maintaining the muscle capillary PO2 (Fick principle) at a sufficient level to facilitate aerobic metabolism and overcome the diffusion barriers from capillary to mitochondria ("critical capillary PO2," 15-20 mm Hg). This mechanism is especially important during exercise in heart failure, where the cardiac output increase is severely constrained. Several compensatory mechanisms facilitate peripheral oxygen delivery during exercise in both normal persons and patients with heart failure.
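
    The Fick relation quoted above (V˙O2 = CO × C(a-v)O2) can be evaluated directly; the rest, healthy-peak, and heart-failure-peak numbers below are textbook-style illustrations, not data from the article.

      def vo2(cardiac_output_l_min, ca_o2_ml_dl, cv_o2_ml_dl):
          """Fick principle: VO2 (mL/min) = CO (L/min) x arterial-venous O2 content
          difference (mL O2 per dL blood) x 10 (dL per L)."""
          return cardiac_output_l_min * (ca_o2_ml_dl - cv_o2_ml_dl) * 10.0

      # Illustrative numbers only: rest versus peak exercise in a healthy subject,
      # and peak exercise in heart failure, where CO barely rises and the widened
      # arterial-venous difference does most of the work.
      print("rest:              ", vo2(5.0, 20.0, 15.0), "mL/min")
      print("healthy peak:      ", vo2(20.0, 20.0, 4.0), "mL/min")
      print("heart failure peak:", vo2(8.0, 20.0, 4.0), "mL/min")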

  2. Phase-resolved analysis of the susceptibility of pinned spiral waves to far-field pacing in a two-dimensional model of excitable media

    PubMed Central

    Bittihn, Philip; Squires, Amgad; Luther, Gisa; Bodenschatz, Eberhard; Krinsky, Valentin; Parlitz, Ulrich; Luther, Stefan

    2010-01-01

    Life-threatening cardiac arrhythmias are associated with the existence of stable and unstable spiral waves. Termination of such complex spatio-temporal patterns by local control is substantially limited by anchoring of spiral waves at natural heterogeneities. Far-field pacing (FFP) is a new local control strategy that has been shown to be capable of unpinning waves from obstacles. In this article, we investigate in detail the FFP unpinning mechanism for a single rotating wave pinned to a heterogeneity. We identify qualitatively different phase regimes of the rotating wave showing that the concept of vulnerability is important but not sufficient to explain the failure of unpinning in all cases. Specifically, we find that a reduced excitation threshold can lead to the failure of unpinning, even inside the vulnerable window. The critical value of the excitation threshold (below which no unpinning is possible) decreases for higher electric field strengths and larger obstacles. In contrast, for a high excitation threshold, the success of unpinning is determined solely by vulnerability, allowing for a convenient estimation of the unpinning success rate. In some cases, we also observe phase resetting in discontinuous phase intervals of the spiral wave. This effect is important for the application of multiple stimuli in experiments. PMID:20368243

  3. Restrictive or Liberal Red-Cell Transfusion for Cardiac Surgery.

    PubMed

    Mazer, C David; Whitlock, Richard P; Fergusson, Dean A; Hall, Judith; Belley-Cote, Emilie; Connolly, Katherine; Khanykin, Boris; Gregory, Alexander J; de Médicis, Étienne; McGuinness, Shay; Royse, Alistair; Carrier, François M; Young, Paul J; Villar, Juan C; Grocott, Hilary P; Seeberger, Manfred D; Fremes, Stephen; Lellouche, François; Syed, Summer; Byrne, Kelly; Bagshaw, Sean M; Hwang, Nian C; Mehta, Chirag; Painter, Thomas W; Royse, Colin; Verma, Subodh; Hare, Gregory M T; Cohen, Ashley; Thorpe, Kevin E; Jüni, Peter; Shehata, Nadine

    2017-11-30

    The effect of a restrictive versus liberal red-cell transfusion strategy on clinical outcomes in patients undergoing cardiac surgery remains unclear. In this multicenter, open-label, noninferiority trial, we randomly assigned 5243 adults undergoing cardiac surgery who had a European System for Cardiac Operative Risk Evaluation (EuroSCORE) I of 6 or more (on a scale from 0 to 47, with higher scores indicating a higher risk of death after cardiac surgery) to a restrictive red-cell transfusion threshold (transfuse if hemoglobin level was <7.5 g per deciliter, starting from induction of anesthesia) or a liberal red-cell transfusion threshold (transfuse if hemoglobin level was <9.5 g per deciliter in the operating room or intensive care unit [ICU] or was <8.5 g per deciliter in the non-ICU ward). The primary composite outcome was death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis by hospital discharge or by day 28, whichever came first. Secondary outcomes included red-cell transfusion and other clinical outcomes. The primary outcome occurred in 11.4% of the patients in the restrictive-threshold group, as compared with 12.5% of those in the liberal-threshold group (absolute risk difference, -1.11 percentage points; 95% confidence interval [CI], -2.93 to 0.72; odds ratio, 0.90; 95% CI, 0.76 to 1.07; P<0.001 for noninferiority). Mortality was 3.0% in the restrictive-threshold group and 3.6% in the liberal-threshold group (odds ratio, 0.85; 95% CI, 0.62 to 1.16). Red-cell transfusion occurred in 52.3% of the patients in the restrictive-threshold group, as compared with 72.6% of those in the liberal-threshold group (odds ratio, 0.41; 95% CI, 0.37 to 0.47). There were no significant between-group differences with regard to the other secondary outcomes. In patients undergoing cardiac surgery who were at moderate-to-high risk for death, a restrictive strategy regarding red-cell transfusion was noninferior to a liberal strategy with respect to the composite outcome of death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis, with less blood transfused. (Funded by the Canadian Institutes of Health Research and others; TRICS III ClinicalTrials.gov number, NCT02042898 .).

  4. Time threshold for second positive phototropism is decreased by a preirradiation with red light.

    PubMed Central

    Janoudi A-K; Konjevic, R; Apel, P; Poff, K L

    1992-01-01

    A second positive phototropic response is exhibited by a plant after the time of irradiation has exceeded a time threshold. The time threshold of dark-grown seedlings is about 15 minutes for Arabidopsis thaliana. This threshold is decreased to about 4 minutes by a 669-nanometer preirradiation. Tobacco (Nicotiana tabacum) seedlings show a similar response. The time threshold of dark-grown seedlings is about 60 minutes for tobacco, and is decreased to about 15 minutes after a preirradiation with either 450- or 669- nanometer light. The existence of a time threshold for second positive phototropism and the dependence of this threshold on the irradiation history of the seedling contribute to the complexity of the fluence response relationship for phototropism. PMID:11537887

  5. Time threshold for second positive phototropism is decreased by a preirradiation with red light

    NASA Technical Reports Server (NTRS)

    Konjevic, R.; Apel, P.; Poff, K. L.

    1992-01-01

    A second positive phototropic response is exhibited by a plant after the time of irradiation has exceeded a time threshold. The time threshold of dark-grown seedlings is about 15 minutes for Arabidopsis thaliana. This threshold is decreased to about 4 minutes by a 669-nanometer preirradiation. Tobacco (Nicotiana tabacum) seedlings show a similar response. The time threshold of dark-grown seedlings is about 60 minutes for tobacco, and is decreased to about 15 minutes after a preirradiation with either 450- or 669- nanometer light. The existence of a time threshold for second positive phototropism and the dependence of this threshold on the irradiation history of the seedling contribute to the complexity of the fluence response relationship for phototropism.

  6. Age-Related Reduction of Recovery Sleep and Arousal Threshold in Drosophila.

    PubMed

    Vienne, Julie; Spann, Ryanne; Guo, Fang; Rosbash, Michael

    2016-08-01

    Physiological studies show that aging affects both sleep quality and quantity in humans, and sleep complaints increase with age. Along with knowledge about the negative effects of poor sleep on health, understanding the enigmatic relationship between sleep and aging is important. Because human sleep is similar to Drosophila (fruit fly) sleep in many ways, we addressed the effects of aging on sleep in this model organism. Baseline sleep was recorded in five different Drosophila genotypes raised at either 21°C or 25°C. The amount of sleep recovered was then investigated after a nighttime of sleep deprivation (12 h) and after chronic sleep deprivation (3 h every night for multiple nights). Finally, the effects of aging on arousal, namely, sensitivity to neuronal and mechanical stimuli, were studied. We show that fly sleep is affected by age in a manner similar to that of humans and other mammals. Not only do older flies of several genotypes have more fragmented sleep and reduced total sleep time compared to young flies, but older flies also fail to recover as much sleep after sleep deprivation. This suggests either lower sleep homeostasis and/or a failure to properly recover sleep. Older flies also show a decreased arousal threshold, i.e., an increased response to neuronal and mechanical wake-promoting stimuli. The reduced threshold may either reflect or cause the reduced recovery sleep of older flies compared to young flies after sleep deprivation. Further studies are certainly needed, but we suggest that the lower homeostatic sleep drive of older flies causes their decreased arousal threshold. © 2016 Associated Professional Sleep Societies, LLC.

  7. Ultrasonic actuation for MEMS dormancy-related stiction reduction

    NASA Astrophysics Data System (ADS)

    Kaajakari, Ville; Kan, Shyi-Herng; Lin, Li-Jen; Lal, Amit; Rodgers, M. Steven

    2000-08-01

    The use of ultrasonic pulses incident on surface micromachines has been shown to reduce dormancy-related failure. We applied ultrasonic pulses from the backside of a silicon substrate carrying SUMMiT processed surface micromachined rotors, used earlier as ultrasonic motors. The amplitude of the pulses was less than what is required to actuate the rotor (sub-threshold actuation). By controlling the ultrasonic pulse exposure time it was found that pulsed samples had smaller actuation voltages as compared to non-pulsed samples after twelve-hour dormancy. This result indicates that the micromachine stiction to surfaces during dormant period can be effectively eliminated, resulting in long-term stability of surface micromachines in critical applications.

  8. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    PubMed

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of the wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by the Weibull distribution, and its cumulative distribution function is used to determine the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to formulate the statistical fault detection algorithm in the time domain, and a method for extracting the HMM input sequence is introduced that considers the threshold levels and the correlation between the signals. Finally, it is demonstrated that the proposed HMM algorithm achieved a greater than 95% detection success rate on the long-term signals.
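
    The threshold-setting step described (fitting a Weibull distribution to vibration levels and reading alert levels off its cumulative distribution function) can be sketched by inverting the Weibull CDF. The shape, scale, and percentile choices below are assumptions for illustration, not the fitted values from the turbine data.

      from math import log

      def weibull_threshold(shape_k, scale_lam, percentile):
          """Invert the Weibull CDF F(x) = 1 - exp(-(x/lam)^k) to get the vibration
          level below which `percentile` of healthy observations fall."""
          return scale_lam * (-log(1.0 - percentile)) ** (1.0 / shape_k)

      # Hypothetical Weibull fit to vibration amplitudes at one main-shaft speed bin.
      k, lam = 2.1, 0.35   # shape, scale (illustrative values)
      alert = weibull_threshold(k, lam, 0.99)
      alarm = weibull_threshold(k, lam, 0.999)
      print(f"alert level: {alert:.3f} g, alarm level: {alarm:.3f} g")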

  9. Comparison of three methods to identify the anaerobic threshold during maximal exercise testing in patients with chronic heart failure.

    PubMed

    Beckers, Paul J; Possemiers, Nadine M; Van Craenenbroeck, Emeline M; Van Berendoncks, An M; Wuyts, Kurt; Vrints, Christiaan J; Conraads, Viviane M

    2012-02-01

    Exercise training efficiently improves peak oxygen uptake (V˙O2peak) in patients with chronic heart failure. To optimize training-derived benefit, higher exercise intensities are being explored. The correct identification of anaerobic threshold is important to allow safe and effective exercise prescription. During 48 cardiopulmonary exercise tests obtained in patients with chronic heart failure (59.6 ± 11 yrs; left ventricular ejection fraction, 27.9% ± 9%), ventilatory gas analysis findings and lactate measurements were collected. Three technicians independently determined the respiratory compensation point (RCP), the heart rate turning point (HRTP) and the second lactate turning point (LTP2). Thereafter, exercise intensity (target heart rate and workload) was calculated and compared between the three methods applied. Patients had significantly reduced maximal exercise capacity (68% ± 21% of predicted V˙O2peak) and chronotropic incompetence (74% ± 7% of predicted peak heart rate). Heart rate, workload, and V˙O2 at HRTP and at RCP were not different, but at LTP2, these parameters were significantly (P < 0.0001) higher. Mean target heart rate and target workload calculated using the LTP2 were 5% and 12% higher compared with those calculated using HRTP and RCP, respectively. The calculation of target heart rate based on LTP2 was 5% and 10% higher in 12 of 48 (25%) and 6 of 48 (12.5%) patients, respectively, compared with the other two methods. In patients with chronic heart failure, RCP and HRTP, determined during cardiopulmonary exercise tests, precede the occurrence of LTP2. Target heart rates and workloads used to prescribe tailored exercise training in patients with chronic heart failure based on LTP2 are significantly higher than those derived from HRTP and RCP.

  10. The application of the detection filter to aircraft control surface and actuator failure detection and isolation

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.

    1985-01-01

    The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.

  11. Approaches to Identify Exceedances of Water Quality Thresholds Associated with Ocean Conditions

    EPA Science Inventory

    WED scientists have developed a method to help distinguish whether failures to meet water quality criteria are associated with natural coastal upwelling by using the statistical approach of logistic regression. Estuaries along the west coast of the United States periodically ha...

  12. Predicting Renal Failure Progression in Chronic Kidney Disease Using Integrated Intelligent Fuzzy Expert System.

    PubMed

    Norouzi, Jamshid; Yadollahpour, Ali; Mirbagheri, Seyed Ahmad; Mazdeh, Mitra Mahdavi; Hosseini, Seyed Ahmad

    2016-01-01

    Chronic kidney disease (CKD) is a covert disease. Accurate prediction of CKD progression over time is necessary for reducing its costs and mortality rates. The present study proposes an adaptive neuro-fuzzy inference system (ANFIS) for predicting the renal failure timeframe of CKD based on real clinical data. This study used 10-year clinical records of newly diagnosed CKD patients. The threshold value of 15 cc/kg/min/1.73 m² of glomerular filtration rate (GFR) was used as the marker of renal failure. A Takagi-Sugeno type ANFIS model was used to predict GFR values. Variables of age, sex, weight, underlying diseases, diastolic blood pressure, creatinine, calcium, phosphorus, uric acid, and GFR were initially selected for the prediction model. Weight, diastolic blood pressure, diabetes mellitus as underlying disease, and current GFR(t) showed significant correlation with GFRs and were selected as the inputs of the model. The comparisons of the predicted values with the real data showed that the ANFIS model could accurately estimate GFR variations in all sequential periods (normalized mean absolute error lower than 5%). Despite the high uncertainties of the human body and the dynamic nature of CKD progression, our model can accurately predict GFR variations over long future periods.
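
    As a hedged illustration of the Takagi-Sugeno rule form underlying ANFIS, the sketch below runs a single-input, first-order Sugeno inference forward in time until the GFR marker of 15 is crossed; the membership and consequent parameters are illustrative assumptions, not the study's fitted values:

        # Hedged sketch of first-order Takagi-Sugeno inference (the rule form
        # trained inside an ANFIS). All parameters below are assumed.
        import numpy as np

        def gaussmf(x, c, s):
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        def predict_next_gfr(gfr_now):
            # Rule 1: if GFR is "low"  then GFR(t+1) = 0.90*GFR(t) - 0.5
            # Rule 2: if GFR is "high" then GFR(t+1) = 0.97*GFR(t) - 0.2
            w1 = gaussmf(gfr_now, c=20.0, s=10.0)
            w2 = gaussmf(gfr_now, c=60.0, s=20.0)
            y1 = 0.90 * gfr_now - 0.5
            y2 = 0.97 * gfr_now - 0.2
            return (w1 * y1 + w2 * y2) / (w1 + w2)   # weighted-average defuzzification

        gfr = 25.0                                   # hypothetical current GFR
        for year in range(1, 11):
            gfr = predict_next_gfr(gfr)
            print(f"year {year}: predicted GFR = {gfr:.1f}")
            if gfr < 15.0:                           # renal-failure marker used in the study
                print("predicted to cross the renal-failure threshold")
                break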

  13. Identifying factors that predict the choice and success rate of radial artery catheterisation in contemporary real world cardiology practice: a sub-analysis of the PREVAIL study data.

    PubMed

    Pristipino, Christian; Roncella, Adriana; Trani, Carlo; Nazzaro, Marco S; Berni, Andrea; Di Sciascio, Germano; Sciahbasi, Alessandro; Musarò, Salvatore Donato; Mazzarotto, Pietro; Gioffrè, Gaetano; Speciale, Giulio

    2010-06-01

    To assess the reasons behind an operator's choice to perform radial artery catheterisation (RAC) rather than femoral arterial catheterisation, and to explore why RAC may fail in the real world. A pre-determined analysis of the PREVAIL study database was performed. Relevant data were collected in a prospective, observational survey of 1,052 consecutive patients undergoing invasive cardiovascular procedures at nine Italian hospitals over a one-month observation period. By multivariate analysis, the independent predictors of RAC choice were having the procedure performed (1) at a high procedural volume centre and (2) by an operator who performs a high volume of radial procedures; clinical variables played no statistically significant role. RAC failure was predicted independently by (1) a lower operator propensity to use RAC and (2) the presence of obstructive peripheral artery disease. A 10-fold lower rate of RAC failure was observed among operators who perform RAC for > 85% of their personal caseload than among those who use RAC < 25% of the time (3.8% vs. 33.0%, respectively); by receiver operating characteristic (ROC) analysis, no threshold value for operator RAC volume predicted RAC failure. A routine RAC strategy in all-comers is superior to a selective strategy in terms of feasibility and success rate.

  14. Impact-initiated damage thresholds in composites

    NASA Technical Reports Server (NTRS)

    Sharma, A. V.

    1980-01-01

    An experimental investigation was conducted to study the effect of low velocity projectile impact on sandwich-type structural components. The materials used in the fabrication of the impact surface were graphite, Kevlar, and boron fibers with appropriate epoxy matrices. The specimens were tested at moderately low and high temperatures as well as at room temperature to assess the impact-initiated strength degradation of the laminates. Eleven laminates with different stacking sequences, orientations, and thicknesses were tested. The low energy projectile impact is considered to simulate the damage caused by runway debris, dropping of hand tools during servicing, etc., on secondary aircraft structures fabricated with composite materials. The results show the preload and impact energy combinations necessary to cause catastrophic failures in the laminates tested. A set of faired curves indicating the failure thresholds is shown separately for the tension- and compression-loaded laminates. The specific strengths and moduli for the various laminates tested are also given.

  15. A multi-species synthesis of physiological mechanisms in drought-induced tree mortality

    USGS Publications Warehouse

    Adams, Henry D.; Zeppel, Melanie; Anderegg, William R.L.; Hartmann, Henrik; Landhäusser, Simon M.; Tissue, David T.; Huxman, Travis E.; Hudson, Patrick J.; Franz, Trenton E.; Allen, Craig D.; Anderegg, Leander D. L.; Barron-Gafford, Greg A.; Beerling, David; Breshears, David D.; Brodribb, Timothy J.; Bugmann, Harald; Cobb, Richard C.; Collins, Adam D.; Dickman, L. Turin; Duan, Honglang; Ewers, Brent E.; Galiano, Lucia; Galvez, David A.; Garcia-Forner, Núria; Gaylord, Monica L.; Germino, Matthew J.; Gessler, Arthur; Hacke, Uwe G.; Hakamada, Rodrigo; Hector, Andy; Jenkins, Michael W.; Kane, Jeffrey M.; Kolb, Thomas E.; Law, Darin J.; Lewis, James D.; Limousin, Jean-Marc; Love, David; Macalady, Alison K.; Martinez-Vilalta, Jordi; Mencuccini, Maurizio; Mitchell, Patrick J.; Muss, Jordan D.; O'Brien, Michael J.; O'Grady, Anthony P.; Pangle, Robert E.; Pinkard, Elizabeth A.; Piper, Frida I.; Plaut, Jennifer; Pockman, William T.; Quirk, Joe; Reinhardt, Keith; Ripullone, Francesco; Ryan, Michael G.; Sala, Anna; Sevanto, Sanna; Sperry, John S.; Vargas, Rodrigo; Vennetier, Michel; Way, Danielle A.; Wu, Chonggang; Yepez, Enrico A.; McDowell, Nate G.

    2017-01-01

    Widespread tree mortality associated with drought has been observed on all forested continents and global change is expected to exacerbate vegetation vulnerability. Forest mortality has implications for future biosphere–atmosphere interactions of carbon, water and energy balance, and is poorly represented in dynamic vegetation models. Reducing uncertainty requires improved mortality projections founded on robust physiological processes. However, the proposed mechanisms of drought-induced mortality, including hydraulic failure and carbon starvation, are unresolved. A growing number of empirical studies have investigated these mechanisms, but data have not been consistently analysed across species and biomes using a standardized physiological framework. Here, we show that xylem hydraulic failure was ubiquitous across multiple tree taxa at drought-induced mortality. All species assessed had 60% or higher loss of xylem hydraulic conductivity, consistent with proposed theoretical and modelled survival thresholds. We found diverse responses in non-structural carbohydrate reserves at mortality, indicating that evidence supporting carbon starvation was not universal. Reduced non-structural carbohydrates were more common for gymnosperms than angiosperms, associated with xylem hydraulic vulnerability, and may have a role in reducing hydraulic function. Our finding that hydraulic failure at drought-induced mortality was persistent across species indicates that substantial improvement in vegetation modelling can be achieved using thresholds in hydraulic function.

  16. A multi-species synthesis of physiological mechanisms in drought-induced tree mortality.

    PubMed

    Adams, Henry D; Zeppel, Melanie J B; Anderegg, William R L; Hartmann, Henrik; Landhäusser, Simon M; Tissue, David T; Huxman, Travis E; Hudson, Patrick J; Franz, Trenton E; Allen, Craig D; Anderegg, Leander D L; Barron-Gafford, Greg A; Beerling, David J; Breshears, David D; Brodribb, Timothy J; Bugmann, Harald; Cobb, Richard C; Collins, Adam D; Dickman, L Turin; Duan, Honglang; Ewers, Brent E; Galiano, Lucía; Galvez, David A; Garcia-Forner, Núria; Gaylord, Monica L; Germino, Matthew J; Gessler, Arthur; Hacke, Uwe G; Hakamada, Rodrigo; Hector, Andy; Jenkins, Michael W; Kane, Jeffrey M; Kolb, Thomas E; Law, Darin J; Lewis, James D; Limousin, Jean-Marc; Love, David M; Macalady, Alison K; Martínez-Vilalta, Jordi; Mencuccini, Maurizio; Mitchell, Patrick J; Muss, Jordan D; O'Brien, Michael J; O'Grady, Anthony P; Pangle, Robert E; Pinkard, Elizabeth A; Piper, Frida I; Plaut, Jennifer A; Pockman, William T; Quirk, Joe; Reinhardt, Keith; Ripullone, Francesco; Ryan, Michael G; Sala, Anna; Sevanto, Sanna; Sperry, John S; Vargas, Rodrigo; Vennetier, Michel; Way, Danielle A; Xu, Chonggang; Yepez, Enrico A; McDowell, Nate G

    2017-09-01

    Widespread tree mortality associated with drought has been observed on all forested continents and global change is expected to exacerbate vegetation vulnerability. Forest mortality has implications for future biosphere-atmosphere interactions of carbon, water and energy balance, and is poorly represented in dynamic vegetation models. Reducing uncertainty requires improved mortality projections founded on robust physiological processes. However, the proposed mechanisms of drought-induced mortality, including hydraulic failure and carbon starvation, are unresolved. A growing number of empirical studies have investigated these mechanisms, but data have not been consistently analysed across species and biomes using a standardized physiological framework. Here, we show that xylem hydraulic failure was ubiquitous across multiple tree taxa at drought-induced mortality. All species assessed had 60% or higher loss of xylem hydraulic conductivity, consistent with proposed theoretical and modelled survival thresholds. We found diverse responses in non-structural carbohydrate reserves at mortality, indicating that evidence supporting carbon starvation was not universal. Reduced non-structural carbohydrates were more common for gymnosperms than angiosperms, associated with xylem hydraulic vulnerability, and may have a role in reducing hydraulic function. Our finding that hydraulic failure at drought-induced mortality was persistent across species indicates that substantial improvement in vegetation modelling can be achieved using thresholds in hydraulic function.

  17. A multi-species synthesis of physiological mechanisms in drought-induced tree mortality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Henry D.; Zeppel, Melanie J. B.; Anderegg, William R. L.

    Widespread tree mortality associated with drought has been observed on all forested continents, and global change is expected to exacerbate vegetation vulnerability. Forest mortality has implications for future biosphere-atmosphere interactions of carbon, water, and energy balance, and is poorly represented in dynamic vegetation models. Reducing uncertainty requires improved mortality projections founded on robust physiological processes. However, the proposed mechanisms of drought-induced mortality, including hydraulic failure and carbon starvation, are unresolved. A growing number of empirical studies have investigated these mechanisms, but data have not been consistently analyzed across species and biomes using a standardized physiological framework. Here we show that xylem hydraulic failure was ubiquitous across multiple tree taxa at drought-induced mortality. All species assessed had 60% or greater loss of xylem hydraulic conductivity, consistent with proposed theoretical and modelled survival thresholds. We found diverse responses in non-structural carbohydrates at mortality, indicating that evidence supporting carbon starvation was not universal. Reduced non-structural carbohydrates were more common for gymnosperms than angiosperms, associated with xylem hydraulic vulnerability, and may have a role in hydraulic deterioration. Our finding that hydraulic failure at drought-induced mortality was persistent across species indicates that substantial improvement in vegetation modelling can be achieved using thresholds in hydraulic function.

  18. Improved aortic enhancement in CT angiography using slope-based triggering with table speed optimization: a pilot study.

    PubMed

    Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C

    2012-08-01

    To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
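
    A hedged sketch contrasting the two triggering rules discussed above on a synthetic time-attenuation curve; the Gaussian bolus shape and the 100 HU and 15 HU/s settings are illustrative assumptions, not the study's protocol:

        # Hedged sketch: fixed-attenuation triggering versus slope-based triggering
        # on a toy aortic enhancement curve.
        import numpy as np

        t = np.arange(0.0, 40.0, 0.5)                               # time [s]
        aorta_hu = 250.0 * np.exp(-0.5 * ((t - 22.0) / 6.0) ** 2)   # toy enhancement curve [HU]

        def trigger_time(values, times, rule, level):
            if rule == "threshold":
                idx = np.argmax(values >= level)          # first sample above the HU level
            else:                                         # "slope" rule
                slope = np.gradient(values, times)
                idx = np.argmax(slope >= level)           # first sample above the HU/s level
            return times[idx]

        print("threshold trigger (100 HU):  t =", trigger_time(aorta_hu, t, "threshold", 100.0))
        print("slope trigger (15 HU/s):     t =", trigger_time(aorta_hu, t, "slope", 15.0))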

  19. Controversies in the physiological basis of the 'anaerobic threshold' and their implications for clinical cardiopulmonary exercise testing.

    PubMed

    Hopker, J G; Jobson, S A; Pandit, J J

    2011-02-01

    This article reviews the notion of the 'anaerobic threshold' in the context of cardiopulmonary exercise testing. Primarily, this is a review of the proposed mechanisms underlying the ventilatory and lactate response to incremental exercise, which is important to the clinical interpretation of an exercise test. Since such tests are often conducted for risk stratification before major surgery, a failure to locate or justify the existence of an anaerobic threshold will have some implications for clinical practice. We also consider alternative endpoints within the exercise response that might be better used to indicate a patient's capacity to cope with the metabolic demands encountered both during and following major surgery. © 2011 The Authors. Anaesthesia © 2011 The Association of Anaesthetists of Great Britain and Ireland.

  20. Degradation modeling of mid-power white-light LEDs by using Wiener process.

    PubMed

    Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q

    2015-07-27

    The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. Because it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture the dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, although that is much more relevant in reliability analysis. Furthermore, TM-21-11 only considers lumen maintenance for lifetime prediction. Color shift, another important performance characteristic of LED devices, may also show significant degradation during service life, even though lumen maintenance has not reached the critical threshold. In this study, a modified Wiener process has been employed to model the degradation of LED devices. By using this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can be easily accounted for. With a mild assumption, the parameter estimation accuracy has been improved by including more information in the likelihood function while neglecting the dependency between the random variables. As a consequence, the mean time to failure (MTTF) has been obtained and shows results comparable with IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution was presented corresponding to different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices can be modeled by simply considering their lumen maintenance and color shift as two independent variables.
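
    A minimal sketch of the Wiener-process view of lumen-maintenance degradation, under assumed drift and diffusion values and a 70% lumen-maintenance (L70) failure criterion; it estimates MTTF as the mean first-passage time to the threshold and compares it with the analytical value d / mu:

        # Hedged sketch: Wiener degradation paths and MTTF as a first-passage time.
        # Drift, diffusion, and the L70 criterion are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma = 0.004, 0.05        # drift and diffusion per hour (assumed)
        threshold = 30.0               # % drop from 100% down to the L70 criterion
        dt, n_steps, n_paths = 10.0, 20000, 2000

        drops = np.zeros(n_paths)              # cumulative lumen-maintenance loss per path
        ttf = np.full(n_paths, np.nan)         # time to failure per path
        for k in range(1, n_steps + 1):
            drops += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
            newly_failed = np.isnan(ttf) & (drops >= threshold)
            ttf[newly_failed] = k * dt

        print("empirical MTTF [h]:", np.nanmean(ttf))
        print("analytical MTTF d / mu [h]:", threshold / mu)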

  1. Tests for Determining Failure Criteria of Ceramics under Ballistic Impact

    DTIC Science & Technology

    1992-06-01

    strength steel rods, joined together by connectors and pretensioned to remove the slack before pressurization. The system was designed to operate up... different mechanisms for failure and postfailure flow, and these mechanisms operate at different threshold loads. Two one-dimensional computational

  2. The regional ecology of alternative states and thresholds: Strategies for ecological site descriptions

    USDA-ARS?s Scientific Manuscript database

    Land dynamics, ecosystem resilience, and the interaction of management decisions with them vary significantly across space. One-size-fits-all applications across distinct land types have been responsible for many failures in rangeland management. Ecological Site Descriptions (ESDs) and similar lan...

  3. Fatigue Assessment for the Failed Bridge Deck Closure Pour at Mile Marker 43 on I-81.

    DOT National Transportation Integrated Search

    2014-04-01

    "Fatigue of reinforcing steel in concrete bridge decks has not been identified as a common failure mode. Generally, the : stress range occurring in reinforcing steel is below the fatigue threshold and infinite fatigue life can be expected. Closure po...

  4. Differential Motor Unit Changes after Endurance or High-Intensity Interval Training.

    PubMed

    Martinez-Valdes, Eduardo; Falla, Deborah; Negro, Francesco; Mayer, Frank; Farina, Dario

    2017-06-01

    Using a novel technique of high-density surface EMG decomposition and motor unit (MU) tracking, we compared changes in the properties of vastus medialis and vastus lateralis MU after endurance (END) and high-intensity interval training (HIIT). Sixteen men were assigned to the END or the HIIT group (n = 8 each) and performed six training sessions for 14 d. Each session consisted of 8-12 × 60-s intervals at 100% peak power output separated by 75 s of recovery (HIIT) or 90-120 min continuous cycling at ~65% V˙O2peak (END). Pre- and postintervention, participants performed 1) incremental cycling to determine V˙O2peak and peak power output and 2) maximal, submaximal (10%, 30%, 50%, and 70% maximum voluntary contraction [MVC]), and sustained (until task failure at 30% MVC) isometric knee extensions while high-density surface EMG signals were recorded from the vastus medialis and vastus lateralis. EMG signals were decomposed (submaximal contractions) into individual MU by convolutive blind source separation. Finally, MU were tracked across sessions by semiblind source separation. After training, END and HIIT improved V˙O2peak similarly (by 5.0% and 6.7%, respectively). The HIIT group showed enhanced maximal knee extension torque by ~7% (P = 0.02) and was accompanied by an increase in discharge rate for high-threshold MU (≥50% knee extension MVC) (P < 0.05). By contrast, the END group increased their time to task failure by ~17% but showed no change in MU discharge rates (P > 0.05). HIIT and END induce different adjustments in MU discharge rate despite similar improvements in cardiopulmonary fitness. Moreover, the changes induced by HIIT are specific for high-threshold MU. For the first time, we show that HIIT and END induce specific neuromuscular adaptations, possibly related to differences in exercise load intensity and training volume.

  5. The threshold strength of laminar ceramics utilizing molar volume changes and porosity

    NASA Astrophysics Data System (ADS)

    Pontin, Michael Gene

    It has been shown that uniformly spaced thin compressive layers within a ceramic body can arrest the propagation of an otherwise catastrophic crack, producing a threshold strength: a strength below which the probability of failure is zero. Previous work has shown that the threshold strength increases with both the magnitude of the compressive stress and the fracture toughness of the thin layer material, and finite element analysis predicts that the threshold strength can be further increased when the elastic modulus of the compressive layer is much smaller than the thicker layer. The current work describes several new approaches to increase the threshold strength of a laminar ceramic system. The initial method utilized a molar volume expansion within the thin layers, produced by the tetragonal-to-monoclinic phase transformation of unstabilized zirconia during cooling, in order to produce large compressive stresses within the thin layers. High threshold strengths were measured for this system, but they remained relatively constant as the zirconia content was increased. It was determined that microcracking produced during the transformation reduced the magnitude of the compressive stresses, but may also have served to reduce the modulus of the thin compressive layer, providing an additional strengthening mechanism. The second approach studied the addition of porosity to reduce the elastic modulus of the thin compressive layers. A new processing method was created and analyzed, in which thick layers of the laminate were fabricated by tape-casting, and then dip-coated into a slurry, containing rice starch, to create thin porous compressive layers upon densification. The effects of porosity on the residual compressive stress, elastic modulus, and fracture toughness of the thin layers were measured and calculated, and it was found that the elastic modulus mismatch between the thin and thick layers produced a large strengthening effect for volume fractions of porosity below a critical level. Specimens with greater volume fractions of porosity exhibited complete crack arrest, typically followed by non-catastrophic failure, as cracks initiating in adjacent thick layers coalesced by cracking or delamination along the thin porous layers.

  6. Improving the Accuracy of Cloud Detection Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90%, but performance varies seasonally, regionally, and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4-km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms that determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, and thousands of man-hours were spent assessing the success and failure of the algorithms in order to fine-tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is the ideal methodology to employ for cloud detection as it runs efficiently on large datasets, is robust to outliers and noise, and is able to deal with highly correlated predictors, such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus selected was Hawaii, and the predictors include visible and infrared imagery, topography, and multi-spectral image products. The development of the cloud detection technique is realized in three steps. First, tuning of the RF models is completed to identify the optimal values of the number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of that used during training, and the predictions are compared to truth, the CMG cloud mask. Initial results show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results are a testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology for cloud detection.
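
    A hedged sketch of the Random Forest step described above using scikit-learn; the feature names and synthetic data are illustrative stand-ins for the GOES visible/infrared channels, multi-spectral products, and topography used in the study:

        # Hedged sketch (not the CMG code): train and evaluate a Random Forest
        # clear/cloud classifier on synthetic pixel features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 20000
        X = np.column_stack([
            rng.normal(0.3, 0.1, n),   # visible reflectance
            rng.normal(280, 15, n),    # 10.7 um brightness temperature [K]
            rng.normal(2.0, 3.0, n),   # 3.9-10.7 um channel difference
            rng.uniform(0, 3000, n),   # terrain elevation [m]
        ])
        # Toy "truth": colder or brighter pixels are labelled cloudy.
        y = ((X[:, 1] < 275) | (X[:, 0] > 0.45)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, max_features=2, n_jobs=-1,
                                     random_state=0).fit(X_tr, y_tr)
        print("hold-out accuracy:", clf.score(X_te, y_te))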

  7. Experimental Exploration on Rainfall-induced Mass Re-mobilization after Giant Earthquake: A case study in Wenchuan earthquake hit region, China

    NASA Astrophysics Data System (ADS)

    Yang, Zongji; Bogaard, Thom. A.; Qiao, Jianping; Jiang, Yuanjun

    2015-04-01

    Prevention and mitigation of rainfall-induced geological hazards gained greater significance for the reconstruction of earthquake-hit regions in China after the Ms=8 Wenchuan earthquake of May 12th, 2008. The earthquake left thousands of failed slopes that were much more susceptible to subsequent heavy rainfall, and many transformed into potential debris flows. A typical example is the catastrophic disaster in Zhongxing County, Chengdu City on 10th July, 2013, in which a previously unidentified fractured slope up the mountain was triggered by a downpour and transformed into a debris flow that wiped out the community downstream; about 200 victims were reported in that tragic event. Based on extensive field investigation in the earthquake-hit region, the transformation patterns of rainfall-induced mass re-mobilization were categorized into three major types: erosion of fractured slopes, initiation on loose deposits, and outbreak of landslide (debris flow) dams. Beyond being widespread and hidden, the process is also complex: re-mobilized mass is eroded by both gravity and streams in small watersheds, a behaviour that had not been reported in many regions before the giant Wenchuan earthquake. As a result, a growing number of questions for disaster relief and mitigation have been raised, including early-warning thresholds and the measurement of volumes needed to design mitigation measures for rainfall-induced mass re-mobilization in debris flow gullies. This study aims to answer the essential questions about the threshold and amount of mass initiation triggered by subsequent rainfall in the post-earthquake period. Experimental tests were carried out to simulate the failure of rainfall-induced mass re-mobilization, respectively on a natural co-seismic fractured slope in the field and on a debris flow simulation platform in the laboratory. For the field test on the natural fractured slope, the correlations among rainfall parameters, a deformation criterion, water content, and the failure volume of gravity erosion were investigated. In addition, loose mass re-mobilized by streamflow was simulated in the model experiment, from which the correlation between rainfall thresholds and the initial volume of mass entrained by the flow was analyzed. A threshold and volume-measurement model for the initiation of mass re-mobilization was then proposed on the basis of this experimental research. Despite the simplicity of the model, derived from experimental and empirical methods, and some drawbacks associated with the uncertainty and complexity of the geological phenomena, the proposed method has contributed substantially to early warning and prevention of mass-transformed debris flows in the earthquake-hit region of China.

  8. Evaluation of the onset of failure under mechanical and thermal stresses on luting agent for metal–ceramic and metal crowns by finite element analysis

    PubMed Central

    Agnihotri, Hema; Bhatnagar, Naresh; Rao, G. Venugopal; Jain, Veena; Parkash, Hari; Kar, Aswini Kumar

    2010-01-01

    Long-term clinical failures of cemented prostheses depend, to a large extent, on the integrity of the luting agent. The causative factors that lead to microfracture and, hence, failure of the luting agents are the stresses acting inside the oral cavity. Therefore, the present study was designed to develop an understanding of the relationship between stresses in the tooth and the failure potential of the luting agent. Two-dimensional finite element stress analysis was performed on the mandibular second premolar. The behavior of zinc-phosphate and glass-ionomer cements was studied under different crowns (metal–ceramic and metal crown) and loading conditions (a mechanical force of 450 N acting vertically over the occlusal surface, and thermal loads of 60°C and 0°C). It was observed from the study that the failure threshold of the luting agent was influenced both by the elastic modulus of the luting agent and by the type of the crown. PMID:22114426

  9. Analysis of progressive damage in thin circular laminates due to static-equivalent impact loads

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Elber, W.; Illg, W.

    1983-01-01

    Clamped circular graphite/epoxy plates (25.4, 38.1, and 50.8 mm radii) with an 8-ply quasi-isotropic layup were analyzed for static-equivalent impact loads using the minimum-total-potential-energy method and the von Karman strain-displacement equations. A step-by-step incremental transverse displacement procedure was used to calculate plate load and ply stresses. The ply failure region was calculated using the Tsai-Wu criterion. The corresponding failure modes (splitting and fiber failure) were determined using the maximum stress criteria. The first-failure mode was splitting and initiated first in the bottom ply. The splitting-failure thresholds were relatively low and tended to be lower for larger plates than for small plates. The splitting-damage region in each ply was elongated in its fiber direction; the bottom ply had the largest damage region. The calculated damage region for the 25.4-mm-radius plate agreed with limited static test results from the literature.

  10. Does semantic preactivation reduce inattentional blindness?

    PubMed

    Kreitz, Carina; Schnuerch, Robert; Furley, Philip A; Gibbons, Henning; Memmert, Daniel

    2015-04-01

    We are susceptible to failures of awareness if a stimulus occurs unexpectedly and our attention is focused elsewhere. Such inattentional blindness is modulated by various parameters, including stimulus attributes, the observer's cognitive resources, and the observer's attentional set regarding the primary task. In three behavioral experiments with a total of 360 participants, we investigated whether mere semantic preactivation of the color of an unexpected object can reduce inattentional blindness. Neither explicitly mentioning the color several times before the occurrence of the unexpected stimulus nor priming the color more implicitly via color-related concepts could significantly reduce the susceptibility to inattentional blindness. Even putting the specific color concept in the main focus of the primary task did not lead to reduced inattentional blindness. Thus, we have shown that the failure to consciously perceive unexpected objects was not moderated by semantic preactivation of the objects' most prominent feature: its color. We suggest that this finding reflects the rather general principle that preactivations that are not motivationally relevant for one's current selection goals do not suffice to make an unexpected object overcome the threshold of awareness.

  11. Methodology for balancing design and process tradeoffs for deep-subwavelength technologies

    NASA Astrophysics Data System (ADS)

    Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee

    2011-04-01

    For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first time right designs implemented in leading edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn by careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, checking and validating failure thresholds.

  12. Mobile Learning and Achievement Goal Orientation Profiles

    ERIC Educational Resources Information Center

    Asplund, Minna

    2014-01-01

    Students with different achievement goal orientations have different approaches towards learning and studying. There is a widespread interest to find an easy access into learning spaces for those students who have low motivation with fear of failure and academic withdrawal. Mobile learning offers an easily accessible chance with low threshold to…

  13. Injury tolerance and moment response of the knee joint to combined valgus bending and shear loading.

    PubMed

    Bose, Dipan; Bhalla, Kavi S; Untaroiu, Costin D; Ivarsson, B Johan; Crandall, Jeff R; Hurwitz, Shepard

    2008-06-01

    Valgus bending and shearing of the knee have been identified as primary mechanisms of injuries in a lateral loading environment applicable to pedestrian-car collisions. Previous studies have reported on the structural response of the knee joint to pure valgus bending and lateral shearing, as well as the estimated injury thresholds for the knee bending angle and shear displacement based on experimental tests. However, epidemiological studies indicate that most knee injuries are due to the combined effects of bending and shear loading. Therefore, characterization of knee stiffness for combined loading and the associated injury tolerances is necessary for developing vehicle countermeasures to mitigate pedestrian injuries. Isolated knee joint specimens (n=40) from postmortem human subjects were tested in valgus bending at a loading rate representative of a pedestrian-car impact. The effect of lateral shear force combined with the bending moment on the stiffness response and the injury tolerances of the knee was concurrently evaluated. In addition to the knee moment-angle response, the bending angle and shear displacement corresponding to the first instance of primary ligament failure were determined in each test. The failure displacements were subsequently used to estimate an injury threshold function based on a simplified analytical model of the knee. The validity of the determined injury threshold function was subsequently verified using a finite element model. Post-test necropsy of the knees indicated medial collateral ligament injury consistent with the clinical injuries observed in pedestrian victims. The moment-angle response in valgus bending was determined at quasistatic and dynamic loading rates and compared to previously published test data. The peak bending moment values scaled to an average adult male showed no significant change with variation in the superimposed shear load. An injury threshold function for the knee in terms of bending angle and shear displacement was determined by performing regression analysis on the experimental data. The threshold values of the bending angle (16.2 deg) and shear displacement (25.2 mm) estimated from the injury threshold function were in agreement with previously published knee injury threshold data. The continuous knee injury function expressed in terms of bending angle and shear displacement enabled injury prediction for combined loading conditions such as those observed in pedestrian-car collisions.
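
    As a hedged illustration, the sketch below evaluates a combined bending/shear injury criterion; the linear interaction form is an assumption for illustration only, with the single-mode intercepts (16.2 deg, 25.2 mm) taken from the abstract:

        # Hedged sketch: combined valgus-bending / shear-displacement injury index.
        # A linear interaction surface is assumed; it is not the paper's fitted function.
        BENDING_LIMIT_DEG = 16.2
        SHEAR_LIMIT_MM = 25.2

        def injury_risk_index(bending_deg: float, shear_mm: float) -> float:
            """Return a combined loading index; values >= 1.0 indicate predicted ligament failure."""
            return bending_deg / BENDING_LIMIT_DEG + shear_mm / SHEAR_LIMIT_MM

        for angle, shear in [(16.2, 0.0), (8.0, 10.0), (12.0, 15.0)]:
            idx = injury_risk_index(angle, shear)
            print(f"{angle:5.1f} deg, {shear:5.1f} mm -> index {idx:4.2f}",
                  "FAIL" if idx >= 1.0 else "ok")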

  14. Risk Prediction Models of Locoregional Failure After Radical Cystectomy for Urothelial Carcinoma: External Validation in a Cohort of Korean Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, Ja Hyeon; Kim, Myong; Jeong, Chang Wook

    2014-08-01

    Purpose: To evaluate the predictive accuracy and general applicability of the locoregional failure model in a different cohort of patients treated with radical cystectomy. Methods and Materials: A total of 398 patients were included in the analysis. Death and isolated distant metastasis were considered competing events, and patients without any events were censored at the time of last follow-up. The model included the 3 variables pT classification, the number of lymph nodes identified, and margin status, as follows: low risk (≤pT2), intermediate risk (≥pT3 with ≥10 nodes removed and negative margins), and high risk (≥pT3 with <10 nodes removed or positive margins). Results: The bootstrap-corrected concordance index of the model 5 years after radical cystectomy was 66.2%. When the risk stratification was applied to the validation cohort, the 5-year locoregional failure estimates were 8.3%, 21.2%, and 46.3% for the low-risk, intermediate-risk, and high-risk groups, respectively. The risk of locoregional failure differed significantly between the low-risk and intermediate-risk groups (subhazard ratio [SHR], 2.63; 95% confidence interval [CI], 1.35-5.11; P<.001) and between the low-risk and high-risk groups (SHR, 4.28; 95% CI, 2.17-8.45; P<.001). Although decision curves were appropriately affected by the incidence of the competing risk, decisions about the value of the models are not likely to be affected because the model remains of value over a wide range of threshold probabilities. Conclusions: The model is not completely accurate, but it demonstrates a modest level of discrimination, adequate calibration, and meaningful net benefit gain for prediction of locoregional failure after radical cystectomy.
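
    A small sketch of the three-tier stratification described above, returning the validation cohort's 5-year locoregional failure estimates; passing the pT stage as a plain integer is a simplification for illustration:

        # Hedged sketch of the risk grouping reported in the abstract.
        def locoregional_failure_risk(pt_stage: int, nodes_removed: int, positive_margin: bool):
            """Return (risk group, 5-year locoregional failure estimate) after radical cystectomy."""
            if pt_stage <= 2:
                return "low", 0.083
            if nodes_removed >= 10 and not positive_margin:
                return "intermediate", 0.212
            return "high", 0.463

        print(locoregional_failure_risk(2, 15, False))   # ('low', 0.083)
        print(locoregional_failure_risk(3, 12, False))   # ('intermediate', 0.212)
        print(locoregional_failure_risk(3, 7, True))     # ('high', 0.463)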

  15. Bipolar leads for use with permanently implantable cardiac pacing systems: a review of limitations of traditional and coaxial configurations and the development and testing of new conductor, insulation, and electrode designs.

    PubMed

    Tyers, G F; Mills, P; Clark, J; Cheesman, M; Yeung-Lai-Wah, J A; Brownlee, R R

    1997-01-01

    The unacceptable rate of mechanical failures, threshold problems, and recalls experienced with many coaxial bipolar cardiac pacing lead designs are reviewed in detail. To address these problems, redundant insulation coradial atrial and ventricular tined leads (AL and VL, respectively) with iridium oxide electrodes were developed and subjected to extensive accelerated testing. There were no mechanical failures. The new lead body design proved to be much more durable than widely used trifilar MP35N configurations. The data reviewed and early and current test results are strongly supportive of tightly coupled insulation being a major factor in improving lead durability as long as the insulating material is not stressed. In addition to improving flex life, insulation adherence to the conductor may reduce the potential for ionic degradation. Pacing and sensing thresholds in animal studies of the new leads were within the reported range for leads with steroid eluting electrodes. A multicenter Canadian clinical trial was initiated with the first implant in early January 1994. By November 1995, 110 VL and 82 AL had been placed in 124 patients and followed for a mean of 11 +/- 6 months; maximum 21, total 1355. There were 60 males and 64 females with a mean age of 64 +/- 16 years, range 15-88. Primary indications for pacing were AV block in 61 patients, sick sinus syndrome in 53, vasovagal syncope in 4, and congestive heart failure in 7. Many patients had associated or primary tachyarrhythmias, including 111 with supraventricular and 12 with ventricular. Forty-two percent of patients (52/124) had prior cardiac procedures, including 18 open heart surgeries and 20 AV nodal ablations. At implant, 8 lead characteristics were rated good or excellent in 90% (746/829) of evaluations. X-ray visibility was of concern in 10% of patients (12/124). Three perioperative complications occurred, including displacement of one AL (1.2%) and one VL (0.9%). There were no subsequent mechanical (connector, conductor, or insulation) or functional (exit block, micro or macro displacement, or over- or undersensing) problems. Implant pacing thresholds (PT) at 0.45 ms were AL, 0.6 +/- 0.2 (74) and VL 0.4 +/- 0.2 V; impedance (Z) at 3.5 V output AL 373 +/- 77 (82) and VL 497 +/- 117 omega. Sensing thresholds (ST) were AL 3.1 +/- 1.6 (74) and VL 10.3 +/- 4.9 mV. Ventricular lead data were obtained for all patients (N = 110). Atrial lead data are incomplete, because some patients were in atrial fibrillation during implantation. After 12 months, AL PT at 1.5 V output was 0.18 +/- 0.10 ms (21) and at 2.5 V was 0.10 +/- 0.053 (22). Associated AL ST was 3.3 +/- 0.9 mV (21) AL Z 500 +/- 65 omega (25). After 18 months VL PT at 1.5 V was 0.15 +/- 0.10 ms (9) and at 2.5 V output was 0.09 +/- 0.04 ms (9). Associated VL ST was > 7.5 +/- 2.4 mV (9) and VL Z 497 +/- 105 omega (9). Follow-up time discrepancy is due to the VL being available 6 months earlier than the AL. There were no 30-day deaths and only one late death at 10 months in a patient with chronic atrial fibrillation. Death was unrelated to pacer or lead function. At 1 year, 68% AL (15/22) and 62% (24/39) captured at 0.5 V and < or = 1 ms pulse width output. Innovative adherent insulation coradial bipolar lead conductors of the design studied combined with coated iridium oxide electrodes provide for a negligible incidence of mechanical or functional failure with clinical follow-up now approaching 3 years. 
Excellent acute and chronic sensing and pacing thresholds have been documented. Late thresholds have continued to improve gradually. Long-term clinical pacing at < or = 1.5 V output with a large safety margin is feasible in essentially all patients. This coradial design produces very flexible < 5 French bipolar redundantly insulated lead bodies allowing both AL and VL to simultaneously pass through a single 10 French introducer sheath. (ABSTRACT TRUNCATED)

  16. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  17. A Survey of Architectural Techniques for Near-Threshold Computing

    DOE PAGES

    Mittal, Sparsh

    2015-12-28

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing many-fold improvements in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate, and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  18. A Survey of Architectural Techniques for Near-Threshold Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing many-fold improvements in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate, and performance loss. Our paper surveys several recent techniques which aim to offset these challenges and fully leverage the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.

  19. Statistics of acoustic emissions and stress drops during granular shearing using a stick-slip fiber bundle model

    NASA Astrophysics Data System (ADS)

    Cohen, D.; Michlmayr, G.; Or, D.

    2012-04-01

    Shearing of dense granular materials appears in many engineering and Earth sciences applications. Under a constant strain rate, the shearing stress at steady state oscillates, with slow rises followed by rapid drops that are linked to the build-up and failure of force chains. Experiments indicate that these drops display exponential statistics. Measurements of acoustic emissions during shearing indicate that the energy liberated by failure of these force chains has power-law statistics. Representing force chains as fibers, we use a stick-slip fiber bundle model to obtain analytical solutions for the statistical distributions of stress drops and failure energy. In the model, fibers stretch, fail, and regain strength during deformation. Fibers have Weibull-distributed threshold strengths with either quenched or annealed disorder. The shapes of the distributions of drops and energy obtained from the model are similar to those measured during shearing experiments. This simple model may be useful to identify failure events linked to force chain failures. Future generalizations of the model that include different types of fiber failure may also allow identification of different types of granular failures that have distinct statistical acoustic emission signatures.
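
    A hedged sketch of a minimal equal-load-sharing fiber bundle with Weibull-distributed strengths, loaded quasi-statically under force control. The stick-slip healing of the model described above is omitted; this only illustrates how burst (avalanche) sizes, the analogue of the acoustic-emission statistics, are extracted:

        # Hedged sketch: burst-size statistics of a simple fiber bundle model.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000
        x = np.sort(rng.weibull(2.0, n))              # fiber threshold strengths, ascending

        # External load needed to break the k-th weakest fiber when the k weaker
        # fibers already carry no load (equal load sharing among the rest).
        f = x * (n - np.arange(n))

        # A burst starts wherever f sets a new running maximum; its size is the
        # number of fibers that fail before a larger external load is required.
        record = f > np.maximum.accumulate(np.concatenate(([0.0], f[:-1])))
        starts = np.flatnonzero(record)
        sizes = np.diff(np.append(starts, n))

        print("number of bursts:", len(sizes))
        print("largest burst:", sizes.max(), " mean burst size:", sizes.mean())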

  20. First overpower tests of metallic IFR [Integral Fast Reactor] fuel in TREAT [Transient Reactor Test Facility]: Data and analysis from tests M5, M6, and M7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, T. H.; Robinson, W. R.; Holland, J. W.

    1989-12-01

    Results and analyses of margin to cladding failure and pre-failure axial expansion of metallic fuel are reported for TREAT in-pile transient overpower tests M5--M7. These are the first such tests on reference binary and ternary alloy fuel of the Integral Fast Reactor (IFR) concept with burnup ranging from 1 to 10 at. %. In all cases, test fuel was subjected to an exponential power rise on an 8 s period until either incipient or actual cladding failure was achieved. Objectives, designs and methods are described with emphasis on developments unique to metal fuel safety testing. The resulting database for cladding failure threshold and prefailure fuel expansion is presented. The nature of the observed cladding failure and resultant fuel dispersals is described. Simple models of cladding failures and pre-failure axial expansions are described and compared with experimental results. Reported results include: temperature, flow, and pressure data from test instrumentation; fuel motion diagnostic data principally from the fast neutron hodoscope; and test remains described from both destructive and non-destructive post-test examination. 24 refs., 144 figs., 17 tabs.

  1. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
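
    A hedged sketch of the two-stage idea: a cheap low-fidelity model locates the failure region and defines a biasing density; the high-fidelity model is then evaluated only on biased samples, and importance weights keep the estimator unbiased. The toy "jet width" models, the failure threshold, and the Gaussian biasing family are illustrative assumptions:

        # Hedged sketch: multi-fidelity importance sampling on a 1-D toy problem.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def width_hi(u):                       # "expensive" high-fidelity model (toy)
            return 1.0 + 0.25 * u

        def width_lo(u):                       # cheap, slightly biased low-fidelity surrogate
            return 1.03 + 0.25 * u

        threshold = 0.2                        # failure: jet width below this value

        # Stage 1: explore the standard-normal inlet uncertainty with the cheap model
        # and fit a Gaussian biasing density to the low-fidelity failure samples.
        u_explore = rng.standard_normal(200_000)
        fail_lo = u_explore[width_lo(u_explore) < threshold]
        mu_b = fail_lo.mean()
        sd_b = max(1.5 * fail_lo.std(), 0.2)

        # Stage 2: a small number of high-fidelity evaluations on biased samples;
        # importance weights keep the estimator unbiased.
        u_is = rng.normal(mu_b, sd_b, 2_000)
        w = stats.norm.pdf(u_is) / stats.norm.pdf(u_is, loc=mu_b, scale=sd_b)
        p_fail = np.mean((width_hi(u_is) < threshold) * w)
        print("multi-fidelity importance-sampling estimate:", p_fail)
        print("reference value:", stats.norm.cdf((threshold - 1.0) / 0.25))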

  2. High elastic modulus polymer electrolytes suitable for preventing thermal runaway in lithium batteries

    DOEpatents

    Mullin, Scott; Panday, Ashoutosh; Balsara, Nitash Pervez; Singh, Mohit; Eitouni, Hany Basam; Gomez, Enrique Daniel

    2014-04-22

    A polymer that combines high ionic conductivity with the structural properties required for Li electrode stability is useful as a solid phase electrolyte for high energy density, high cycle life batteries that do not suffer from failures due to side reactions and dendrite growth on the Li electrodes, and other potential applications. The polymer electrolyte includes a linear block copolymer having a conductive linear polymer block with a molecular weight of at least 5000 Daltons, a structural linear polymer block with an elastic modulus in excess of 1×10⁷ Pa and an ionic conductivity of at least 1×10⁻⁵ S cm⁻¹. The electrolyte is made under dry conditions to achieve the noted characteristics. In another aspect, the electrolyte exhibits a conductivity drop when the temperature of the electrolyte increases over a threshold temperature, thereby providing a shutoff mechanism for preventing thermal runaway in lithium battery cells.

  3. On Possible Arc Inception on Low Voltage Solar Array

    NASA Technical Reports Server (NTRS)

    Vayner, Boris

    2015-01-01

    Recent analysis of spacecraft failures during the period of 1990-2013 demonstrated clearly that electrostatic discharges caused more than 8 percent of all registered failures and anomalies, and comprised the most costly losses (25 percent) for operating companies and agencies. The electrostatic discharges on spacecraft surfaces are the results of differential charging above some critical (threshold) voltages. The mechanisms of differential charging are well known, and various methods have been developed to prevent a generation of significant electric fields in areas of triple junctions. For example, low bus voltages in Low Earth Orbit plasma environment and slightly conducting layer over coverglass (ITO) in Geosynchronous Orbit surroundings are believed to be quite reliable measures to prevent discharges on respective surfaces. In most cases, the vulnerable elements of spacecraft (solar arrays, diode boards, etc.) go through comprehensive ground tests in vacuum chambers. However, test articles contain the miniscule fragments of spacecraft components such as 10-30 solar cells of many thousands deployed on spacecraft in orbit. This is one reason why manufacturing defects may not be revealed in ground tests but expose themselves in arcing on array surface in space. The other reason for ineffectiveness of discharge preventive measures is aging of all materials in harsh orbital environments. The expected life time of modern spacecraft varies within the range of five-fifteen years, and thermal cycling, radiation damages, and mechanical stresses can result in surface erosion on conductive layers and microscopic cracks in coverglass sheets and adhesive films. These possible damages may cause significant increases in local electric field strengths and subsequent discharges. The primary discharges may or may not be detrimental to spacecraft operation, but they can produce the necessary conditions for sustained arcs initiation. Multiple measures were developed to prevent sustained discharges between adjacent strings, and many ground tests were performed to determine threshold parameters (voltage and current) for sustained arcs. And again, manufacturing defects and aging in space environments may result in considerable decrease of critical threshold parameters. This paper is devoted to the analysis of possible reasons behind arcing on spacecraft with low bus voltages.

  4. On Possible Arc Inception on Low Voltage Solar Array

    NASA Technical Reports Server (NTRS)

    Vayner, Boris

    2015-01-01

    Recent analysis of spacecraft failures during the period of 1990-2013 demonstrated clearly that electrostatic discharges caused more than 8 percent of all registered failures and anomalies, and comprised the most costly losses (25 percent) for operating companies and agencies. The electrostatic discharges on spacecraft surfaces are the results of differential charging above some critical (threshold) voltages. The mechanisms of differential charging are well known, and various methods have been developed to prevent a generation of significant electric fields in areas of triple junctions. For example, low bus voltages in Low Earth Orbit plasma environment and slightly conducting layer over cover-glass (ITO) in Geosynchronous Orbit surroundings are believed to be quite reliable measures to prevent discharges on respective surfaces. In most cases, the vulnerable elements of spacecraft (solar arrays, diode boards, etc.) go through comprehensive ground tests in vacuum chambers. However, tests articles contain the miniscule fragments of spacecraft components such as 10-30 solar cells of many thousands deployed on spacecraft in orbit. This is one reason why manufacturing defects may not be revealed in ground tests but expose themselves in arcing on array surface in space. The other reason for ineffectiveness of discharge preventive measures is aging of all materials in harsh orbital environments. The expected life time of modern spacecraft varies within the range of five-fifteen years, and thermal cycling, radiation damages, and mechanical stresses can result in surface erosion on conductive layers and microscopic cracks in cover-glass sheets and adhesive films. These possible damages may cause significant increases in local electric field strengths and subsequent discharges. The primary discharges may or may not be detrimental to spacecraft operation, but they can produce the necessary conditions for sustained arcs initiation. Multiple measures were developed to prevent sustained discharges between adjacent strings, and many ground tests were performed to determine threshold parameters (voltage and current) for sustained arcs. And again, manufacturing defects and aging in space environments may result in considerable decrease of critical threshold parameters. This paper is devoted to the analysis of possible reasons behind arcing on spacecraft with low bus voltages.

  5. Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use.

    PubMed

    Culyer, Anthony J

    2016-10-01

    There is misunderstanding about both the meaning and the role of cost-effectiveness thresholds in policy decision making. This article dissects the main issues by use of a bookshelf metaphor. Its main conclusions are as follows: it must be possible to compare interventions in terms of their impact on a common measure of health; mere effectiveness is not a persuasive case for inclusion in public insurance plans; public health advocates need to address issues of relative effectiveness; a 'first best' benchmark or threshold ratio of health gain to expenditure identifies the least effective intervention that should be included in a public insurance plan; the reciprocal of this ratio - the 'first best' cost-effectiveness threshold - will rise or fall as the health budget rises or falls (ceteris paribus); setting thresholds too high or too low costs lives; failure to set any cost-effectiveness threshold at all also involves avertable deaths and morbidity; the threshold cannot be set independently of the health budget; the threshold can be approached from either the demand side or the supply side - the two are equivalent only in a health-maximising equilibrium; the supply-side approach generates an estimate of a 'second best' cost-effectiveness threshold that is higher than the 'first best'; the second best threshold is the one generally to be preferred in decisions about adding or subtracting interventions in an established public insurance package; multiple thresholds are implied by systems having distinct and separable health budgets; disinvestment involves eliminating effective technologies from the insured bundle; differential weighting of beneficiaries' health gains may affect the threshold; anonymity and identity are factors that may affect the interpretation of the threshold; the true opportunity cost of health care in a community, where the effectiveness of interventions is determined by their impact on health, is not to be measured in money - but in health itself.
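
    A minimal Python sketch of the bookshelf logic described in this abstract may help make it concrete: rank interventions by cost per unit of health gained, fund them in that order until the budget is exhausted, and read the 'first best' threshold off the last intervention that fits. The catalogue, costs, and health gains below are invented for illustration and are not taken from the article.

        # Hypothetical catalogue of interventions: (name, total cost, health gained).
        catalogue = [("A", 1_000, 2.0), ("B", 5_000, 4.0), ("C", 8_000, 3.0), ("D", 9_000, 1.5)]

        def threshold_from_budget(interventions, budget):
            """Greedy 'bookshelf' fill: return cost per unit health of the last funded item."""
            ranked = sorted(interventions, key=lambda item: item[1] / item[2])
            spent, threshold = 0.0, None
            for name, cost, gain in ranked:
                if spent + cost > budget:
                    break                       # the shelf is full; remaining items are excluded
                spent += cost
                threshold = cost / gain         # 'first best' threshold set by the last inclusion
            return threshold

        print(threshold_from_budget(catalogue, budget=15_000))   # rises if the budget rises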

  6. Residence time control on hot moments of net nitrate production and uptake in the hyporheic zone

    USGS Publications Warehouse

    Briggs, Martin A.; Lautz, Laura K.; Hare, Danielle K.

    2014-01-01

    moments of net production and uptake, enhancing NO3- production as residence times approach the anaerobic threshold, and changing zones of net NO3- production to uptake as residence times increase past the net sink threshold. The anaerobic and net sink thresholds for beaver-influenced streambed morphology occur at much shorter residence times (1.3 h and 2.3 h, respectively) compared to other documented hyporheic systems, and the net sink threshold compares favorably to the lower boundary of the anaerobic threshold determined for this system with the new oxygen Damkohler number. The consistency of the residence time threshold values of NO3- cycling in this study, despite environmental variability and disparate morphology, indicates that NO3- hot moment dynamics are primarily driven by changes in physical hydrology and associated residence times.
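
    The two residence-time thresholds quoted above (about 1.3 h and 2.3 h) lend themselves to a simple classification. The Python sketch below is a simplification for illustration only; the three-way labels are not the authors' exact scheme.

        # Residence-time thresholds quoted in the abstract (hours).
        ANAEROBIC_THRESHOLD_H = 1.3
        NET_SINK_THRESHOLD_H = 2.3

        def nitrate_regime(residence_time_h):
            """Classify a hyporheic flow path by its residence time (simplified labels)."""
            if residence_time_h < ANAEROBIC_THRESHOLD_H:
                return "oxic: net NO3- production"
            if residence_time_h < NET_SINK_THRESHOLD_H:
                return "transitional: production declining"
            return "anoxic: net NO3- uptake (sink)"

        for hours in (0.5, 1.8, 4.0):
            print(hours, nitrate_regime(hours))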

  7. Cost-effectiveness of sacubitril/valsartan in the treatment of heart failure with reduced ejection fraction

    PubMed Central

    McMurray, John J V; Trueman, David; Hancock, Elizabeth; Cowie, Martin R; Briggs, Andrew; Taylor, Matthew; Mumby-Croft, Juliet; Woodcock, Fionn; Lacey, Michael; Haroun, Rola; Deschaseaux, Celine

    2018-01-01

    Objective Chronic heart failure with reduced ejection fraction (HF-REF) represents a major public health issue and is associated with considerable morbidity and mortality. We evaluated the cost-effectiveness of sacubitril/valsartan (formerly LCZ696) compared with an ACE inhibitor (ACEI) (enalapril) in the treatment of HF-REF from the perspective of healthcare providers in the UK, Denmark and Colombia. Methods A cost-utility analysis was performed based on data from a multinational, Phase III randomised controlled trial. A decision-analytic model was developed based on a series of regression models, which extrapolated health-related quality of life, hospitalisation rates and survival over a lifetime horizon. The primary outcome was the incremental cost-effectiveness ratio (ICER). Results In the UK, the cost per quality-adjusted life-year (QALY) gained for sacubitril/valsartan (using cardiovascular mortality) was £17 100 (€20 400) versus enalapril. In Denmark, the ICER for sacubitril/valsartan was Kr 174 000 (€22 600). In Colombia, the ICER was COP$39.5 million (€11 200) per QALY gained. Deterministic sensitivity analysis showed that results were most sensitive to the extrapolation of mortality, duration of treatment effect and time horizon, but were robust to other structural changes, with most scenarios associated with ICERs below the willingness-to-pay threshold for all three country settings. Probabilistic sensitivity analysis suggested the probability that sacubitril/valsartan was cost-effective at conventional willingness-to-pay thresholds was 68%–94% in the UK, 84% in Denmark and 95% in Colombia. Conclusions Our analysis suggests that, in all three countries, sacubitril/valsartan is likely to be cost-effective compared with an ACEI (the current standard of care) in patients with HF-REF. PMID:29269379
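
    The headline figures above are incremental cost-effectiveness ratios compared against willingness-to-pay thresholds. A minimal Python sketch of that arithmetic follows; the costs, QALYs, and threshold are invented placeholders, not values from the trial.

        def icer(cost_new, cost_old, qaly_new, qaly_old):
            """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        ratio = icer(cost_new=38_000, cost_old=26_000, qaly_new=6.2, qaly_old=5.5)
        willingness_to_pay = 20_000   # hypothetical threshold per QALY
        print(f"ICER = {ratio:,.0f} per QALY; cost-effective: {ratio <= willingness_to_pay}")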

  8. Emergency physicians' risk attitudes in acute decompensated heart failure patients.

    PubMed

    McCausland, Julie B; Machi, Mari S; Yealy, Donald M

    2010-01-01

    Despite the existence of various clinical prediction rules, no data exist defining what frequency of death or serious nonfatal outcomes comprises a realistic "low-risk" group for clinicians. This exploratory study sought to identify emergency physicians' (EPs) definition of low-risk acute decompensated heart failure (ADHF) emergency department (ED) patients. Surveys were mailed to full-time physicians (n = 88) in a multihospital EP group in southwestern Pennsylvania between December 2004 and February 2005. Participation was voluntary, and each EP was asked to define low risk (low risk of all-cause 30-day death and low risk of either hospital death or other serious medical complications) and choose a risk threshold at which they might consider outpatient management for those with ADHF. A range of choices was offered (<0.5, <1, <2, <3, <4, and <5%), and demographic data were collected. The response rate was 80%. Physicians defined low risk both for all-cause 30-day death and for hospital death or other serious complications, at <1% (38.8 and 40.3%, respectively). The decision threshold to consider outpatient therapy was <0.5% risk both for all-cause 30-day death (44.6%) and for hospital death or serious medical complications (44.4%). Emergency physicians in this exploratory study define low-risk ADHF patients as having less than a 1% risk of 30-day death or inpatient death or complications. They state a desire to have and use an ADHF clinical prediction rule that can identify low-risk ADHF patients who have less than a 0.5% risk of 30-day death or inpatient death or complications. (c) 2010 by the Society for Academic Emergency Medicine.

  9. A common fluence threshold for first positive and second positive phototropism in Arabidopsis thaliana

    NASA Technical Reports Server (NTRS)

    Janoudi, A.; Poff, K. L.

    1990-01-01

    The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10^-5 to 6.5 × 10^-3 micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system.
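
    The reciprocity argument above amounts to checking two quantities: the irradiation time against a roughly 10-minute time threshold, and the fluence (fluence rate multiplied by time) against the roughly 0.01 micromole per square meter fluence threshold. A small Python sketch, using only the threshold values quoted in the abstract, is given below; the example inputs are arbitrary.

        FLUENCE_THRESHOLD = 0.01      # micromole per square meter (from the abstract)
        TIME_THRESHOLD_S = 10 * 60    # about 10 minutes, in seconds (from the abstract)

        def exceeds_thresholds(fluence_rate, duration_s):
            """fluence_rate in micromole m^-2 s^-1; fluence = rate * time."""
            fluence = fluence_rate * duration_s
            return duration_s >= TIME_THRESHOLD_S and fluence >= FLUENCE_THRESHOLD

        print(exceeds_thresholds(2.4e-5, 15 * 60))   # low fluence rate, long exposure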

  10. Performance of the SWEEP model affected by estimates of threshold friction velocity

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  11. Comparison of measured and simulated friction velocity and threshold friction velocity using SWEEP

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) was developed by the USDA Agricultural Research Service as a tool to predict wind erosion and assess the influence of control practices on windblown soil loss. Occasional failure of the WEPS erosion submodel (SWEEP) to simulate erosion in the Columbia Platea...

  12. Minimum threshold for establishment and dispersal of Lilioceris cheni (Coleoptera: Chrysomelide): a biological control agent of Dioscorea bulbifera

    USDA-ARS?s Scientific Manuscript database

    The successful establishment or failure of a new population is often attributed to propagule pressure, the combination of the number of independent introduction events, and the number of individuals released at each event. Design of optimal release strategies for biological control agents benefits f...

  13. Properties of oxide dispersion strengthened alloys

    NASA Technical Reports Server (NTRS)

    Whittenberger, J. Daniel

    1989-01-01

    A contrast is drawn between the behavior of ODS alloys' matrix compositions and that of more conventional alloys. Mechanical property enhancements associated with ODS alloys are accounted for by both the presence of the dispersoids and, in some cases, the smaller size and number of secondary phases. Data obtained for ODS materials to date indicate the presence of three different threshold stresses, due to dislocation-particle interactions, diffusional creep/grain boundary sliding, and, in the case of the MA 956 ODS alloy, crack nucleation and growth processes. It is critical to establish which threshold stress is in effect, since the latter two stresses can result in failure by cracking.

  14. Failure of disordered materials as a depinning transition

    NASA Astrophysics Data System (ADS)

    Ponson, Laurent

    2010-03-01

    Crack propagation is the fundamental process leading to material failure. However, its dynamics is far from being fully understood. In this work, we investigate both experimentally and theoretically the far-from-equilibrium propagation of a crack within a disordered brittle material. At first, we focus on the average dynamics of a crack, and study the variations of its growth velocity v with respect to the external driving force G [1]. Carefully measured on a brittle rock, these variations are shown to display two regimes: above a given threshold Gc, the velocity evolves as a power law v ~ (G - Gc)^0.8, while at low driving force, its variations are well described by a sub-critical creep law, characteristic of a thermally activated crack propagation. Extending the continuum theory of Fracture Mechanics to inhomogeneous media, we show that this behavior is reminiscent of a dynamical critical transition: critical failure occurs when the driving force is sufficiently large to depin the crack front from the material heterogeneities. Another way to reveal such a transition is to investigate the fluctuations of crack velocity [2]. Considering a crack at the heterogeneous interface between two elastic solids, we predict that its propagation occurs through sudden jumps, with power law distributed sizes and durations. These predictions compare quantitatively well with recent direct observations of interfacial crack propagation [3]. Such an interpretation of material failure opens new perspectives in the field of Engineering and Applied Science that will be finally discussed. [1] L. Ponson, Depinning transition in failure of inhomogeneous brittle materials, Phys. Rev. Lett. 103, 055501 (2009). [2] D. Bonamy, S. Santucci and L. Ponson, Crackling dynamics in material failure as a signature of a self-organized dynamic phase transition, Phys. Rev. Lett. 101, 045501 (2008). [3] K.J. Måløy, S. Santucci, J. Schmittbuhl and R. Toussaint, Local waiting time fluctuations along a randomly pinned crack front, Phys. Rev. Lett. 96, 045501 (2006).
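
    A rough Python sketch of the two-regime velocity law described above follows: a power law v ~ (G - Gc)^0.8 above the depinning threshold and a thermally activated creep law below it. The prefactors and the exact creep form are assumptions for illustration, not values from the cited work.

        import math

        def crack_velocity(G, Gc=100.0, A=1.0e-6, theta=0.8, v0=1.0e-9, G0=20.0):
            """Two-regime growth law: depinning power law above Gc, creep below (assumed form)."""
            if G > Gc:
                return A * (G - Gc) ** theta           # depinned regime, v ~ (G - Gc)^0.8
            return v0 * math.exp((G - Gc) / G0)        # sub-critical, thermally activated creep

        for G in (80.0, 100.5, 150.0):
            print(G, crack_velocity(G))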

  15. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
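
    The core idea, reconstructing the sensor vector through a low-dimensional bottleneck and flagging measurements whose reconstruction error exceeds a threshold, can be sketched in a few lines of Python. The sketch below uses a PCA projection as a stand-in for the trained auto-associative network, and all data and the threshold are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        healthy = rng.normal(size=(500, 6))                          # stand-in "training" data
        healthy[:, 3] = healthy[:, 0] + 0.1 * rng.normal(size=500)   # built-in analytical redundancy

        mean = healthy.mean(axis=0)
        _, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
        basis = vt[:5]                       # 5-dimensional bottleneck for 6 sensors

        def validate(measurement, threshold=1.0):
            """Return (fault detected?, estimated sensor values) from the bottleneck model."""
            estimate = mean + (measurement - mean) @ basis.T @ basis
            residual = np.abs(measurement - estimate)
            return residual.max() > threshold, estimate

        sample = healthy[0].copy()
        sample[3] += 5.0                     # inject a hard failure on one sensor
        print(validate(sample)[0])           # True: reconstruction error exceeds the threshold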

  16. A cascading failure model for analyzing railway accident causation

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Tao; Li, Ke-Ping

    2018-01-01

    In this paper, a new cascading failure model is proposed for quantitatively analyzing the railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of the railway accident more accurately than the previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can assist us to reveal the latent rules of accident causation to reduce the occurrence of railway accidents.
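
    A generic load-redistribution cascade of the kind the abstract describes can be sketched as follows in Python; the redistribution rule, the numbers, and the small example network are illustrative and are not the authors' exact formulation.

        def cascade(loads, capacity, weights, seed):
            """weights[i][j] is the strength of the causal link i -> j (absent links omitted)."""
            failed = {seed}
            frontier = [seed]
            while frontier:
                nxt = []
                for i in frontier:
                    targets = {j: w for j, w in weights[i].items() if j not in failed}
                    total = sum(targets.values())
                    for j, w in targets.items():
                        loads[j] += loads[i] * w / total      # share load by causal strength
                        if loads[j] > capacity[j] and j not in failed:
                            failed.add(j)                     # threshold exceeded: node fails
                            nxt.append(j)
                frontier = nxt
            return failed

        loads = {0: 1.0, 1: 0.6, 2: 0.4, 3: 0.2}
        capacity = {0: 1.0, 1: 1.0, 2: 1.2, 3: 0.5}
        weights = {0: {1: 0.7, 2: 0.3}, 1: {3: 1.0}, 2: {3: 1.0}, 3: {}}
        print(cascade(loads, capacity, weights, seed=0))       # nodes swept up in the cascade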

  17. Frequency-dependent failure mechanisms of nanocrystalline gold interconnect lines under general alternating current

    NASA Astrophysics Data System (ADS)

    Luo, X. M.; Zhang, B.; Zhang, G. P.

    2014-09-01

    Thermal fatigue failure of metallization interconnect lines subjected to alternating currents (AC) is becoming a severe threat to the long-term reliability of micro/nanodevices with increasing electrical current density/power. Here, thermal fatigue failure behaviors and damage mechanisms of nanocrystalline Au interconnect lines on a silicon glass substrate have been investigated by applying general alternating currents (a pure alternating current coupled with a direct current (DC) component) with frequencies ranging from 0.05 Hz to 5 kHz. We observed both thermal fatigue damage caused by Joule heating-induced cyclic strain/stress and electromigration (EM) damage caused by the DC component. In addition, damage formation showed a strong electrically-thermally-mechanically coupled effect and frequency dependence. At lower frequencies, thermal fatigue damage was dominant and the main damage forms were grain coarsening with grain boundary (GB) cracking/voiding and grain thinning. At higher frequencies, EM damage took over and the main damage forms were GB cracking/voiding of smaller grains and hillocks. Furthermore, the healing effect of the reversing current was considered to elucidate the damage mechanisms of the nanocrystalline Au lines under the general AC. Lastly, a modified model, a competing drift-velocity-based approach built on the threshold time required for reverse diffusion/healing to occur, was proposed to predict the lifetime of nanocrystalline metal interconnect lines.

  18. EXPERIMENTAL STUDIES IN ACUTE RENAL FAILURE

    PubMed Central

    Menefee, Max G.; Mueller, C. Barber; Miller, Tracy B.; Myers, Joseph K.; Bell, Allen L.

    1964-01-01

    When purified human globin is injected intravenously into rats it produces acute renal failure characterized by tubular casts and oliguria. The globin is identifiable within vesicles and channels in the cytoplasm of the proximal tubules, through which it passes from lumen to basal side with no apparent serious effect on the cells. When a very minimal amount of globin is taken up by cells of the distal limb of Henle's loop or distal tubules (lower nephron), a markedly deleterious effect is apparent and the cells die within a short time. The mixture of cell debris and precipitated globin forms plugs within the confines of the basement membranes of the former distal limbs and distal tubules. After a number of lower nephrons are plugged a disruption of proximal tubules is found, which apparently results from the effect of back pressure in the obstructed nephrons. We suggest that any amount in excess of a low threshold of globin, either alone or combined with heme or related material, has a toxic effect on lower nephron cells. Once initiated, the toxic effect is not reversible and the resulting plug of debris and precipitate will occlude the lumen. If a sufficient number of nephrons are made non-functional the animal becomes anuric; otherwise it is oliguric. A high rate of urine flow will protect against the excess absorption of material and thus against acute renal failure. PMID:14238931

  19. Newly Diagnosed Meniere's Disease: Clinical Course With Initiation of Noninvasive Treatment Including an Accounting of Vestibular Migraine.

    PubMed

    Sbeih, Firas; Christov, Florian; Gluth, Michael B

    2018-05-01

    To describe the course of Meniere's disease with noninvasive treatment during the first few years after initial diagnosis. A retrospective review of consecutive patients with newly diagnosed definite Meniere's disease between 2013 and 2016 and a minimum follow-up of 1 year. Patients received a written plan for low sodium, water therapy, and treatment with a diuretic and/or betahistine. Subjects were screened and treated for vestibular migraine as needed. Vertigo control and hearing status at most recent follow-up were assessed. Forty-four subjects had an average follow-up of 24.3 months. Thirty-four percent had Meniere's disease and vestibular migraine, and 84% had unilateral Meniere's disease. Seventy-five percent had vertigo well controlled at most recent follow-up, with only noninvasive treatments. Age, gender, body mass index, presence of vestibular migraine, bilateral disease, and duration of follow-up did not predict noninvasive treatment failure. Worse hearing threshold at 250 Hz and lower pure tone average (PTA) at the time of diagnosis did predict failure. Fifty-two percent of ears had improved PTA at most recent visit, 20% had no change, and 28% were worse. Conclusions: Encountering excellent vertigo control and stable hearing after a new diagnosis of Meniere's disease is possible with noninvasive treatments. Worse hearing status at diagnosis predicted treatment failure.

  20. Non-destructive measurement and role of surface residual stress monitoring in residual life assessment of a steam turbine blading material

    NASA Astrophysics Data System (ADS)

    Prabhu-Gaunkar, Gajanana; Rawat, M. S.; Prasad, C. R.

    2014-02-01

    Steam turbine blades in power generation equipment are made from martensitic stainless steels having high strength, good toughness and corrosion resistance. However, these steels are susceptible to pitting, which can promote early failures of blades in the turbines, particularly in the low pressure dry/wet areas, by stress corrosion and corrosion fatigue. Presence of tensile residual stresses is known to accelerate failures, whereas compressive stresses can help in delaying failures. Shot peening has been employed as an effective tool to induce compressive residual stresses which offset a part of the local tensile stresses in the surface layers of components. Maintaining local stresses at stress raisers, such as pits formed during service, below a threshold level can help in preventing the initiation of microcracks and failures. The thickness of the layer in compression will, however, depend on the shot peening parameters and should extend below the bottom of corrosion pits. The magnitude of the surface compressive stress drops progressively during service exposure, and over time the effectiveness of shot peening is lost, making the material susceptible to micro-crack initiation once again. Measurement and monitoring of surface residual stress therefore becomes important for assessing the residual life of components in service. This paper shows the applicability of surface stress monitoring to life assessment of steam turbine blade material based on data generated in the laboratory on residual surface stress measurements in relation to fatigue exposure. An empirical model is proposed to calculate the remaining life of shot peened steam turbine blades in service.

  1. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  2. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    DOE PAGES

    Williamson, R. L.; Capps, N. A.; Liu, W.; ...

    2016-09-27

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used in this paper to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. Finally, in comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  3. An economic evaluation of maxillary implant overdentures based on six vs. four implants.

    PubMed

    Listl, Stefan; Fischer, Leonhard; Giannakopoulos, Nikolaos Nikitas

    2014-08-18

    The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained, above which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients' denture satisfaction, the respective cost-effectiveness threshold varies substantially. The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes.

  4. An economic evaluation of maxillary implant overdentures based on six vs. four implants

    PubMed Central

    2014-01-01

    Background The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. Methods A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Results Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained, above which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients’ denture satisfaction, the respective cost-effectiveness threshold varies substantially. Conclusions The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes. PMID:25135370

  5. Continuous glucose monitoring: quality of hypoglycaemia detection.

    PubMed

    Zijlstra, E; Heise, T; Nosek, L; Heinemann, L; Heckermann, S

    2013-02-01

    To evaluate the accuracy of a (widely used) continuous glucose monitoring (CGM)-system and its ability to detect hypoglycaemic events. A total of 18 patients with type 1 diabetes mellitus used continuous glucose monitoring (Guardian REAL-Time CGMS) during two 9-day in-house periods. A hypoglycaemic threshold alarm alerted patients to sensor readings <70 mg/dl. Continuous glucose monitoring sensor readings were compared to laboratory reference measurements taken every 4 h and in case of a hypoglycaemic alarm. A total of 2317 paired data points were evaluated. Overall, the mean absolute relative difference (MARD) was 16.7%. The percentage of data points in the clinically accurate or acceptable Clarke Error Grid zones A + B was 94.6%. In the hypoglycaemic range, accuracy worsened (MARD 38.8%) leading to a failure to detect more than half of the true hypoglycaemic events (sensitivity 37.5%). Furthermore, more than half of the alarms that warn patients for hypoglycaemia were false (false alert rate 53.3%). Above the low alert threshold, the sensor confirmed 2077 of 2182 reference values (specificity 95.2%). Patients using continuous glucose monitoring should be aware of its limitation to accurately detect hypoglycaemia. © 2012 Blackwell Publishing Ltd.
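
    The accuracy measures quoted above (MARD, hypoglycaemia sensitivity, false alert rate) are straightforward to compute from paired sensor and reference readings. The Python sketch below uses made-up readings and the 70 mg/dl alarm threshold from the abstract.

        def cgm_metrics(sensor, reference, hypo_threshold=70.0):
            """MARD (%), hypoglycaemia sensitivity (%), and false alert rate (%)."""
            pairs = list(zip(sensor, reference))
            mard = 100.0 * sum(abs(s - r) / r for s, r in pairs) / len(pairs)
            true_hypo = [(s, r) for s, r in pairs if r < hypo_threshold]
            alarms = [(s, r) for s, r in pairs if s < hypo_threshold]
            sensitivity = 100.0 * sum(s < hypo_threshold for s, r in true_hypo) / len(true_hypo)
            false_alert = 100.0 * sum(r >= hypo_threshold for s, r in alarms) / len(alarms)
            return mard, sensitivity, false_alert

        sensor_mg_dl    = [62, 75, 90, 130, 55, 68, 180, 95]
        reference_mg_dl = [58, 66, 92, 128, 72, 65, 170, 99]
        print(cgm_metrics(sensor_mg_dl, reference_mg_dl))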

  6. Could persistency of current of injury forecast successful active-fixation pacing lead implantation?

    PubMed

    Shali, Shalaimaiti; Su, Yangang; Qin, Shengmei; Ge, Junbo

    2018-05-01

    Presence of adequate current of injury (COI) was recognized as a sign of favorable pacemaker lead outcome. Little is known regarding the value of its dynamic behavior. We sought to test whether persistency of COI could predict active-fixation pacing lead performance. COI was monitored up to 10 min after right ventricular (RV) pacing electrode fixation. COI persistency was defined as the percentage of COI magnitude relative to its initial measurement. An unacceptable pacing threshold (≥1.0 V in acute evaluation or ≥2.0 V over 2-year follow-up) with or without lead dislodgement was considered as lead failure. Lead implantation was attempted 217 times in 174 patients (age 66.3±7.8 years, 78 female). Acute lead failures occurred 43 times. Independent predictors of acute lead failure were RV enlargement (odds ratio [OR] 1.23, 95% confidence interval [CI] 1.11-2.04, P=0.033), absence of COI (OR 3.13, 95% CI 2.08-9.09, P=0.027), and COI persistency at 5 min (OR 0.32, 95% CI 0.20-0.69, P=0.001) and 10 min (OR 0.41, 95% CI 0.13-0.77, P=0.001). The optimal cutoffs were COI 5-min persistency ≥50% (sensitivity 81.4%; specificity 81.9%) and COI 10-min persistency ≥20% (sensitivity 86%; specificity 88.6%). There were 12 lead failures during 24.0±6.4 months of follow-up. Patients with COI 5-min persistency ≥50% had higher event-free survival compared to those with COI 5-min persistency <50% (hazard ratio 3.54, 95% CI 1.04-12.06, P=0.043). COI persistency appears to be a valuable indicator for both acute and long-term outcome of active-fixation pacemaker leads. A precipitous decline in COI may require more attention to ensure adequate lead performance. Copyright © 2018 Elsevier Ireland Ltd. All rights reserved.
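
    COI persistency as defined above is simply the COI magnitude at a given time expressed as a percentage of the initial measurement. The Python sketch below applies the 5-min ≥50% and 10-min ≥20% cutoffs reported in the abstract; combining them into a single pass/fail flag is only one illustrative way to use them.

        def persistency(coi_initial_mv, coi_at_t_mv):
            """COI magnitude at time t as a percentage of the initial measurement."""
            return 100.0 * coi_at_t_mv / coi_initial_mv

        def favorable_fixation(coi_initial, coi_5min, coi_10min):
            # Cutoffs from the abstract: >=50% persistency at 5 min, >=20% at 10 min.
            return (persistency(coi_initial, coi_5min) >= 50.0
                    and persistency(coi_initial, coi_10min) >= 20.0)

        print(favorable_fixation(coi_initial=8.0, coi_5min=5.0, coi_10min=2.5))   # True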

  7. Threshold network of a financial market using the P-value of correlation coefficients

    NASA Astrophysics Data System (ADS)

    Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun

    2015-06-01

    Threshold methods in financial networks are important tools for obtaining important information about the financial state of a market. Previously, absolute thresholds of correlation coefficients have been used; however, they have no relation to the length of time. We assign a threshold value depending on the size of the time window by using the P-value concept of statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties, such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We determine that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window when constructing the TN. We apply the same technique to Standard and Poor's 500 (S&P500) and observe similar results.
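
    The point of the P-value-based threshold is that, for a fixed significance level, the critical correlation coefficient depends on the window length n, so the same P-value implies a different absolute threshold for a short window than for a long one. A Python sketch of that calculation (using the usual t-statistic for a Pearson correlation, via scipy) is given below; the window lengths are arbitrary examples.

        from scipy.stats import t

        def critical_correlation(n, alpha=0.01):
            """Smallest |r| significant at level alpha (two-sided) for a window of n points."""
            tc = t.ppf(1.0 - alpha / 2.0, df=n - 2)
            return tc / (n - 2 + tc ** 2) ** 0.5

        for n in (20, 60, 250):          # short versus long time windows
            print(n, round(critical_correlation(n), 3))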

  8. Which is more valuable, longer survival or better quality of life? Israeli oncologists' and family physicians' attitudes toward the relative value of new cancer and congestive heart failure interventions.

    PubMed

    Greenberg, Dan; Hammerman, Ariel; Vinker, Shlomo; Shani, Adi; Yermiahu, Yuval; Neumann, Peter J

    2013-01-01

    We determined how Israeli oncologists and family physicians value life-prolongation versus quality-of-life (QOL)-enhancing outcomes attributable to cancer and congestive heart failure interventions. We presented physicians with two scenarios involving a hypothetical patient with metastatic cancer expected to survive 12 months with current treatment. In a life-prolongation scenario, we suggested that a new treatment increases survival at an incremental cost of $50,000 over the standard of care. Participants were asked what minimum improvement in median survival the new therapy would need to provide for them to recommend it over the standard of care. In the QOL-enhancing scenario, we asked the maximum willingness to pay for an intervention that leads to the same survival as the standard treatment, but increases patient's QOL from 50 to 75 (on a 0-100 scale). We replicated these scenarios by substituting a patient with congestive heart failure instead of metastatic cancer. We derived the incremental cost-effectiveness ratio per quality-adjusted life-year (QALY) gained threshold implied by each response. In the life-prolongation scenario, the cost-effectiveness thresholds implied by oncologists were $150,000/QALY and $100,000/QALY for cancer and CHF, respectively. Cost-effectiveness thresholds implied by family physicians were $50,000/QALY regardless of the disease type. Willingness to pay for the QOL-enhancing scenarios was $60,000/QALY and did not differ by physicians' specialty or disease. Our findings suggest that family physicians value life-prolonging and QOL-enhancing interventions roughly equally, while oncologists value interventions that extend survival more highly than those that improve only QOL. These findings may have important implications for coverage and reimbursement decisions of new technologies. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. WE-D-BRA-06: IMRT QA with ArcCHECK: The MD Anderson Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, M; Suh, Y; Chi, P

    Purpose: The objective of this project was to report our initial IMRT QA results and experience with the SunNuclear ArcCHECK. Methods: Three thousand one hundred and sixteen cases were treated with IMRT or VMAT at our institution between October 2013 and September 2014. All IMRT/VMAT treatment plans underwent Quality Assurance (QA) using ArcCHECK prior to therapy. For clinical evaluation, a Gamma analysis is performed following QA delivery using the SNC Patient software (Sun Nuclear Corp) at the 3%/3mm level. QA Gamma pass rates were analyzed based on categories of treatment site, technique, and type of MLCs. Our current clinical threshold for passing a QA (Tclin) is set at a Gamma pass rate greater than 90%. We recorded the percent of failures for each category, as well as the Gamma pass rate threshold that would result in 95% of QAs passing (T95). Results: Using Tclin, a failure rate of 5.9% over all QAs was observed. The highest failure rate was observed for gynecological (22%) and the lowest for CNS (0.9%) treatments. T95 was 91% over all QAs and ranged from 73% (gynecological) to 96.5% (CNS) for individual treatment sites. T95 was lower for IMRT and non-HD (high definition) MLCs at 88.5% and 94.5%, respectively, compared to 92.4% and 97.1% for VMAT and HD MLC treatments, respectively. There was a statistically significant difference between the passing rates for IMRT vs. VMAT and for HD MLCs vs. non-HD MLCs (p-values << 0.01). Gynecological, IMRT, and HD MLC treatments typically include more plans with larger field sizes. Conclusion: On average, Tclin with ArcCHECK was consistent with T95, as well as the 90% action level reported in TG-119. However, significant variations between the examined categories suggest a link between field size and QA passing rates and may warrant field size-specific passing rate thresholds.
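
    The two summary statistics used above, the failure rate at a clinical action level (Tclin) and the pass-rate threshold T95 below which 95% of QA deliveries would still pass, reduce to a comparison and a percentile. The Python sketch below uses made-up Gamma pass rates, not the reported data.

        import numpy as np

        # Made-up Gamma pass rates (%) for a set of QA deliveries.
        gamma_pass = np.array([98.2, 95.1, 88.7, 99.0, 92.4, 96.8, 85.3, 97.5, 93.9, 99.4])

        tclin = 90.0                                        # clinical action level (%)
        failure_rate = 100.0 * np.mean(gamma_pass < tclin)  # fraction of QAs below Tclin
        t95 = np.percentile(gamma_pass, 5)                  # 95% of plans pass above this level

        print(f"failure rate at Tclin={tclin}%: {failure_rate:.1f}%")
        print(f"T95 threshold: {t95:.1f}%")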

  10. Performance characteristics of finger-stick dried blood spots (DBS) on the determination of human immunodeficiency virus (HIV) treatment failure in a pediatric population in Mozambique

    PubMed Central

    Chang, Joy; de Sousa, Amina; Sabatier, Jennifer; Assane, Mariamo; Zhang, Guoqing; Bila, Dulce; Vaz, Paula; Alfredo, Charity; Cossa, Loide; Bhatt, Nilesh; Koumans, Emilia H.; Yang, Chunfu; Rivadeneira, Emilia; Jani, Ilesh; Houston, James C.

    2017-01-01

    Quantitative plasma viral load (VL) at 1000 copies/mL was recommended as the threshold to confirm antiretroviral therapy (ART) failure by the World Health Organization (WHO). Because of ongoing challenges of using plasma for VL testing in resource-limited settings (RLS), especially for children, this study collected 717 DBS and paired plasma samples from children receiving ART ≥1 year in Mozambique and compared the performance of DBS using Abbott's VL test with a paired plasma sample using Roche's VL test. At a cut-off of 1000 copies/mL, the sensitivity of DBS using the Abbott DBS VL test was 79.9%, better than 71.0% and 63.9% at 3000 and 5000 copies/mL, respectively. Specificities were 97.6%, 98.8%, and 99.3% at 1000, 3000, and 5000 copies/mL, respectively. The Kappa value at 1000 copies/mL, 0.80 (95% CI: 0.73, 0.87), was higher than 0.73 (95% CI: 0.66, 0.80) and 0.66 (95% CI: 0.59, 0.73) at 3000 and 5000 copies/mL, respectively, also indicating better agreement. The mean difference between the DBS and plasma VL tests with 95% limits of agreement by Bland-Altman was 0.311 (-0.908, 1.530). Among 73 children with plasma VL between 1000 and 5000 copies/mL, the DBS results were undetectable in 53 at the 1000 copies/mL threshold. While one DBS sample in the Abbott DBS VL test may be an alternative method to confirm ART failure at the 1000 copies/mL threshold when a plasma sample is not an option for treatment monitoring, because of sensitivity concerns between 1000 and 5000 copies/mL, two DBS samples may be preferred, accompanied by careful patient monitoring and repeat testing. PMID:28704560
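
    The agreement statistics reported above (sensitivity, specificity, Cohen's kappa) can be computed from a 2x2 table of DBS versus plasma results at a given viral-load threshold. The Python sketch below uses placeholder counts, not the study data.

        def agreement(tp, fp, fn, tn):
            """Sensitivity, specificity, and Cohen's kappa for DBS (test) vs plasma (reference)."""
            n = tp + fp + fn + tn
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            observed = (tp + tn) / n
            expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
            kappa = (observed - expected) / (1 - expected)
            return sensitivity, specificity, kappa

        print(agreement(tp=115, fp=14, fn=29, tn=559))   # placeholder counts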

  11. Correlation of gas exchange threshold and first muscle oxyhemoglobin inflection point with time-to-exhaustion during heavy-intensity exercise.

    PubMed

    Coquart, Jérémy B; Mucci, Patrick; L'hermette, Maxime; Chamari, Karim; Tourny, Claire; Garcin, Murielle

    2017-03-01

    The twofold aim of the study was to: 1) compare the gas exchange threshold (GET), the first oxyhemoglobin inflection point ([O2Hb]-T), and perceptual threshold as determined during an incremental exercise test, and 2) investigate the link between each threshold and time-to-exhaustion during heavy intensity exercise. Fourteen competitive cyclists performed an incremental exercise test to exhaustion on a cycloergometer to determine the different thresholds and peak workload (Wpeak). The participants then performed a sub-maximal constant workload test (90% Wpeak) to exhaustion to determine time-to-exhaustion. The thresholds were identified from: 1) the first breakpoint in the oxygen uptake vs. carbon dioxide output curve (GET), 2) the [O2Hb]-T, and 3) a rating of 13 in perceived exertion (perceptual threshold: RPE13-T). Oxygen uptake at the different thresholds was not significantly different (P>0.05). Moreover, GET and [O2Hb]-T were significantly correlated: 1) to each other (r≥0.79; P≤0.001), and 2) to time-to-exhaustion (r=0.81 and r=0.72, respectively; P<0.01). RPE13-T, however, was not significantly correlated with the time-to-exhaustion (P=0.148). The anaerobic threshold as identified from GET was concomitant to [O2Hb]-T. Both thresholds were correlated to time-to-exhaustion, and could therefore be used as a performance index in middle-duration events.

  12. Impact response of graphite-epoxy flat laminates using projectiles that simulate aircraft engine encounters

    NASA Technical Reports Server (NTRS)

    Preston, J. L., Jr.; Cook, T. S.

    1975-01-01

    An investigation of the response of a graphite-epoxy material to foreign object impact was made by impacting spherical projectiles of gelatin, ice, and steel normally on flat panels. The observed damage was classified as transverse (stress wave delamination and cracking), penetrative, or structural (gross failure): the minimum, or threshold, velocity to cause each class of damage was established as a function of projectile characteristics. Steel projectiles had the lowest transverse damage threshold, followed by gelatin and ice. Making use of the threshold velocities and assuming that the normal component of velocity produces the damage in nonnormal impacts, a set of impact angles and velocities was established for each projectile material which would result in damage to composite fan blades. Analysis of the operating parameters of a typical turbine fan blade shows that small steel projectiles are most likely to cause delamination and penetration damage to unprotected graphite-epoxy composite fan blades.
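
    The stated assumption that only the normal component of velocity produces damage in oblique impacts reduces to a cosine projection compared against each damage-class threshold. The Python sketch below illustrates this with placeholder threshold velocities, not the measured ones.

        import math

        # Placeholder threshold velocities (m/s) for two damage classes.
        thresholds = {"transverse": 120.0, "penetration": 250.0}

        def predicted_damage(speed, angle_from_normal_deg):
            """Damage classes whose threshold is met by the normal velocity component."""
            v_normal = speed * math.cos(math.radians(angle_from_normal_deg))
            return [kind for kind, v_thr in thresholds.items() if v_normal >= v_thr]

        print(predicted_damage(speed=300.0, angle_from_normal_deg=45.0))   # ['transverse']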

  13. A Common Fluence Threshold for First Positive and Second Positive Phototropism in Arabidopsis thaliana1

    PubMed Central

    Janoudi, Abdul; Poff, Kenneth L.

    1990-01-01

    The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10−5 to 6.5 × 10−3 micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system. PMID:11537470

  14. A longitudinal study on the ammonia threshold in junior cyclists

    PubMed Central

    Yuan, Y; Chan, K

    2004-01-01

    Objectives: To identify the effect of a one year non-specific training programme on the ammonia threshold of a group of junior cyclists and to correlate ammonia threshold with other common physiological variables. Methods: The cyclists performed tests at three time points (T1, T2, T3) during the year. Follow up tests were conducted every six months after the original test. Ammonia threshold was obtained from a graded exercise with four minute steps. Results: The relatively non-specific one year training programme was effective in inducing an increase in peak VO2 (60.6 (5.9), 65.9 (7.4), and 64.6 (6.5) ml/min/kg at T1, T2, and T3 respectively) and endurance time (18.3 (4.5), 20.1 (5.2), and 27.0 (6.1) minutes at T1, T2, and T3 respectively), but was not effective for the sprint related variables. Ammonia threshold, together with lactate threshold and ventilatory threshold, was not significantly different at the three test times. Only endurance time correlated significantly with ammonia threshold (r  =  0.915, p  =  0.001). Conclusions: The findings suggest that a relatively non-specific one year training programme does not modify the ammonia threshold of junior cyclists. The significant correlation between ammonia threshold and endurance time further confirms that ammonia threshold is a measure of the ability to sustain exercise at submaximal intensities. PMID:15039242

  15. ω3-Polyunsaturated fatty acids for heart failure: Effects of dose on efficacy and novel signaling through free fatty acid receptor 4.

    PubMed

    O'Connell, Timothy D; Block, Robert C; Huang, Shue P; Shearer, Gregory C

    2017-02-01

    Heart failure (HF) affects 5.7 million in the U.S., and despite well-established pharmacologic therapy, the 5-year mortality rate remains near 50%. Furthermore, the mortality rate for HF has not declined in years, highlighting the need for new therapeutic options. Omega-3 polyunsaturated fatty acids (ω3-PUFAs), eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), are important regulators of cardiovascular health. However, questions of efficacy and mechanism of action have made the use of ω3-PUFAs in all cardiovascular disease (CVD) controversial. Here, we review recent studies in animal models of HF indicating that ω3-PUFAs, particularly EPA, are cardioprotective, with the results indicating a threshold for efficacy. We also examine clinical studies suggesting that ω3-PUFAs improve outcomes in patients with HF. Due to the relatively small number of clinical studies of ω3-PUFAs in HF, we discuss EPA concentration-dependency on outcomes in clinical trials of CVD to gain insight into the perceived questionable efficacy of ω3-PUFAs clinically, with the results again indicating a threshold for efficacy. Ultimately, we suggest that the main failing of ω3-PUFAs in clinical trials might be a failure to reach a therapeutically effective concentration. We also examine mechanistic studies suggesting that ω3-PUFAs signal through free fatty acid receptor 4 (Ffar4), a G-protein coupled receptor (GPR) for long-chain fatty acids (FA), thereby identifying an entirely novel mechanism of action for ω3-PUFA mediated cardioprotection. Finally, based on mechanistic animal studies suggesting that EPA prevents interstitial fibrosis and diastolic dysfunction, we speculate about a potential benefit for EPA-Ffar4 signaling in heart failure preserved with ejection fraction. Copyright © 2016. Published by Elsevier Ltd.

  16. Rainfall-triggered shallow landslides at catchment scale: Threshold mechanics-based modeling for abruptness and localization

    NASA Astrophysics Data System (ADS)

    von Ruette, J.; Lehmann, P.; Or, D.

    2013-10-01

    Rainfall-induced shallow landslides may occur abruptly without distinct precursors and could span a wide range of soil mass released during a triggering event. We present a rainfall-induced landslide-triggering model for steep catchments with surfaces represented as an assembly of hydrologically and mechanically interconnected soil columns. The abruptness of failure was captured by defining local strength thresholds for mechanical bonds linking soil and bedrock and adjacent columns, whereby a failure of a single bond may initiate a chain reaction of subsequent failures, culminating in local mass release (a landslide). The catchment-scale hydromechanical landslide-triggering model (CHLT) was applied to results from two event-based landslide inventories triggered by two rainfall events in 2002 and 2005 in two nearby catchments located in the Prealps in Switzerland. Rainfall radar data, surface elevation and vegetation maps, and a soil production model for soil depth distribution were used for hydromechanical modeling of failure patterns for the two rainfall events at spatial and temporal resolutions of 2.5 m and 0.02 h, respectively. The CHLT model enabled systematic evaluation of the effects of soil type, mechanical reinforcement (soil cohesion and lateral root strength), and initial soil water content on landslide characteristics. We compared various landslide metrics and spatial distribution of simulated landslides in subcatchments with observed inventory data. Model parameters were optimized for the short but intense rainfall event in 2002, and the calibrated model was then applied for the 2005 rainfall, yielding reasonable predictions of landslide events and volumes and statistically reproducing localized landslide patterns similar to inventory data. The model provides a means for identifying local hot spots and offers insights into the dynamics of locally resolved landslide hazards in mountainous regions.

  17. Determination of a Testing Threshold for Lumbar Puncture in the Diagnosis of Subarachnoid Hemorrhage after a Negative Head Computed Tomography: A Decision Analysis.

    PubMed

    Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H

    2016-10-01

    The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings, being evaluated for SAH, after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease where the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the test threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included probability of aneurysmal versus nonaneurysmal SAH after negative head CT, probability of long-term morbidity from initial missed SAH, and probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT to be approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
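
    One way to picture how such a testing threshold is located is to scan the pretest probability and find where the expected utilities of the two strategies (perform LP versus no LP) cross. The Python sketch below does this with entirely made-up utilities and test characteristics, so the crossing point it prints is illustrative only and should not be read as the study's 4.3% result.

        def eu_no_lp(p, u_missed=0.60, u_well=1.00):
            """Expected utility of forgoing the LP at pretest probability p."""
            return p * u_missed + (1 - p) * u_well

        def eu_lp(p, sens=0.95, u_treated=0.90, u_missed=0.60, u_well=0.98):
            """Expected utility of performing the LP (u_well < 1 reflects test burden/harm)."""
            return p * sens * u_treated + p * (1 - sens) * u_missed + (1 - p) * u_well

        p = 0.0
        while p < 1.0 and eu_lp(p) <= eu_no_lp(p):
            p += 0.0001                  # scan pretest probability until the strategies cross
        print(f"testing threshold ~ {p:.2%}")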

  18. The Effect of Fin Pitch on Fluid Elastic Instability of Tube Arrays Subjected to Cross Flow of Water

    NASA Astrophysics Data System (ADS)

    Desai, Sandeep Rangrao; Pavitran, Sampat

    2018-02-01

    Failure of tubes in shell and tube exchangers is attributed to flow-induced vibrations of such tubes. There are different excitation mechanisms due to which flow-induced vibration occurs, and among them fluid elastic instability is the most prominent, as it causes the most violent vibrations and may lead to rapid tube failures within a short time. Fluid elastic instability is the fluid-structure interaction phenomenon which occurs when the energy input by the fluid force exceeds the energy expended in damping. This point is referred to as the instability threshold, and the corresponding velocity as the critical velocity. Once the flow velocity exceeds the critical flow velocity, the vibration amplitude increases very rapidly with flow velocity. An experimental program is carried out to determine the critical velocity at instability for plain and finned tube arrays subjected to cross flow of water. The tube array geometry is parallel triangular with a cantilever end condition, and the pitch ratios considered are 2.6 and 2.1. The objective of the research is to determine the effect of an increase in pitch ratio on the instability threshold for plain tube arrays and to assess the effect of the addition of fins, as well as an increase in fin density, on the instability threshold for finned tube arrays. Plain tube arrays with two different pitch ratios (2.1 and 2.6) and finned tube arrays with the same pitch ratio (2.6) but two different fin pitches, fine (10 fpi) and coarse (4 fpi), are considered for the experimentation. Connors' equation, which relates the critical velocity at instability to the different parameters on which instability depends, has been used as the basis for analysis, and the concept of effective diameter is used for the present investigation. The modal parameters are first suitably modified using a natural frequency reduction setup that was designed and developed to reduce the natural frequency and hence to achieve experimental simulation of fluid elastic instability within the limited flow capacity of the pump. The tests are carried out first on plain tube arrays to establish a datum case; the results are compared to known results for plain tube arrays, and hence the quality of the test rig is also assessed. The fluid elastic vibration tests are then carried out on finned tube arrays with coarse and fine fin pitches, and the effects of fins and fin pitch on the instability threshold are shown. The vibration response of the tube is recorded for gradually increasing flow rates of water until the instability point is reached. The parameters at instability are then presented in terms of dimensionless parameters to compare them with published results. It is concluded that arrays with higher pitch ratios are unstable at comparatively higher flow velocities and that the instability threshold for finned tube arrays is delayed by the addition of the fins. Further, it is concluded that the instability threshold for finned tube arrays with fine fin pitch is delayed compared to coarse fin pitch, and hence for increased fin density the instability threshold is delayed. The experimental results in terms of critical velocities obtained for different tube arrays subjected to water cross flow will serve as the base flow rates for air-water cross flow experiments to be conducted in the next phase.
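
    Connors' equation referred to above relates the critical (reduced) velocity to the mass-damping parameter: U_c / (f D) = K (m δ / (ρ D^2))^0.5, with f the natural frequency, D the tube (effective) diameter, m the tube mass per unit length, δ the logarithmic decrement of damping, ρ the fluid density, and K the Connors constant. A small Python sketch follows; the input values and the choice K = 3.0 are illustrative, not the experimental ones.

        import math

        def connors_critical_velocity(f_hz, d_m, m_kg_per_m, log_dec, rho_kg_m3, K=3.0):
            """Critical cross-flow velocity from Connors' relation (illustrative inputs)."""
            mass_damping = m_kg_per_m * log_dec / (rho_kg_m3 * d_m ** 2)
            return K * f_hz * d_m * math.sqrt(mass_damping)

        print(connors_critical_velocity(f_hz=12.0, d_m=0.019, m_kg_per_m=0.9,
                                        log_dec=0.05, rho_kg_m3=998.0))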

  19. Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin

    2015-11-01

    An important concern in hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a model to predict fracture initiation in hot compression of the burn-resistant beta-stabilized titanium alloy Ti-35V-15Cr-0.3Si-0.1C, using a combined approach of upsetting experiments, theoretical failure criteria, and finite element (FE) simulation. A series of isothermal compression experiments on cylindrical specimens was first conducted in the temperature range 900-1150 °C and at strain rates of 0.01-10 s^-1 to obtain fracture samples and primary reduction data. Based on these results, eight commonly used theoretical failure criteria were compared; the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By correlating the threshold values with the deformation parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential decay relationship between the threshold values and the Zener-Hollomon parameter (Z), and the relative error of the model is less than 15%. The model was then applied successfully to the cogging of Ti-35V-15Cr-0.3Si-0.1C billet.
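
    A minimal sketch of the ingredients of such a model is given below: the Zener-Hollomon parameter Z = ε̇·exp(Q/(RT)), the Oh criterion damage integral C = ∫(σ_max/σ̄) dε̄ accumulated along a deformation path, and an illustrative threshold curve C_f(Z) that decays with ln Z. The stress history, activation energy, and fit constants are placeholders, not the values determined in the paper.

      import numpy as np

      R = 8.314  # universal gas constant, J/(mol K)

      def zener_hollomon(strain_rate, temperature_K, Q):
          """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R*T))."""
          return strain_rate * np.exp(Q / (R * temperature_K))

      def oh_damage(max_principal_stress, effective_stress, eff_strain):
          """Oh criterion damage: C = integral of (sigma_max / sigma_eff) d(effective strain),
          accumulated here with a simple trapezoidal rule."""
          ratio = np.asarray(max_principal_stress) / np.asarray(effective_stress)
          return float(np.sum(0.5 * (ratio[1:] + ratio[:-1]) * np.diff(np.asarray(eff_strain))))

      # Hypothetical deformation history at one material point (e.g. exported from an FE run).
      eff_strain = np.linspace(0.0, 0.8, 50)
      effective_stress = 180.0 + 40.0 * eff_strain     # MPa, illustrative only
      max_principal = 0.45 * effective_stress          # MPa, illustrative only
      C = oh_damage(max_principal, effective_stress, eff_strain)

      # Illustrative threshold that decays with ln(Z); a and b are placeholder constants,
      # not the fitted values reported in the paper.
      Z = zener_hollomon(strain_rate=0.1, temperature_K=1273.0, Q=300e3)
      a, b = 1.2, 0.02
      C_threshold = a * np.exp(-b * np.log(Z))

      print(f"accumulated Oh damage C = {C:.3f}, threshold C_f(Z) = {C_threshold:.3f}")
      print("fracture predicted" if C >= C_threshold else "no fracture predicted")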

  20. Differential activation of stress-response signaling in load-induced cardiac hypertrophy and failure

    PubMed Central

    Rothermel, Beverly A.; Berenji, Kambeez; Tannous, Paul; Kutschke, William; Dey, Asim; Nolan, Bridgid; Yoo, Ki-Dong; Demetroulis, Elaine; Gimbel, Michael; Cabuay, Barry; Karimi, Mohsen; Hill, Joseph A.

    2014-01-01

    Hypertrophic growth of the myocardium occurs in most forms of heart failure and may contribute to the pathogenesis of the failure state. Little is known about the regulatory mechanisms governing the often-coexisting phenotypes of hypertrophy, systolic failure, and diastolic stiffness that characterize clinical disease. We hypothesized that intracellular signaling pathways are differentially activated by graded degrees of hemodynamic stress. To test this, we developed models of graded pressure stress in mice and used them to directly compare compensated hypertrophy and pressure-overload heart failure. Surgical interventions were designed to be similar, on either side of a threshold separating compensated from decompensated responses. Our findings revealed two dramatically different hypertrophic phenotypes with only modest differences in the activation of relevant intracellular signaling pathways. Furthermore, we uncovered a functional requirement of calcineurin signaling in each model such that calcineurin suppression blunted hypertrophic growth. Remarkably, in each case, suppression of calcineurin signaling was not associated with clinical deterioration or increased mortality. Profiles of stress-response signaling and Ca2+ handling differ between the steady-state, maintenance phases of load-induced cardiac hypertrophy and failure. This information may be useful in identifying novel targets of therapy in chronic disease. PMID:16033866

  1. Differential activation of stress-response signaling in load-induced cardiac hypertrophy and failure.

    PubMed

    Rothermel, Beverly A; Berenji, Kambeez; Tannous, Paul; Kutschke, William; Dey, Asim; Nolan, Bridgid; Yoo, Ki-Dong; Demetroulis, Elaine; Gimbel, Michael; Cabuay, Barry; Karimi, Mohsen; Hill, Joseph A

    2005-09-21

    Hypertrophic growth of the myocardium occurs in most forms of heart failure and may contribute to the pathogenesis of the failure state. Little is known about the regulatory mechanisms governing the often-coexisting phenotypes of hypertrophy, systolic failure, and diastolic stiffness that characterize clinical disease. We hypothesized that intracellular signaling pathways are differentially activated by graded degrees of hemodynamic stress. To test this, we developed models of graded pressure stress in mice and used them to directly compare compensated hypertrophy and pressure-overload heart failure. Surgical interventions were designed to be similar, on either side of a threshold separating compensated from decompensated responses. Our findings revealed two dramatically different hypertrophic phenotypes with only modest differences in the activation of relevant intracellular signaling pathways. Furthermore, we uncovered a functional requirement of calcineurin signaling in each model such that calcineurin suppression blunted hypertrophic growth. Remarkably, in each case, suppression of calcineurin signaling was not associated with clinical deterioration or increased mortality. Profiles of stress-response signaling and Ca2+ handling differ between the steady-state, maintenance phases of load-induced cardiac hypertrophy and failure. This information may be useful in identifying novel targets of therapy in chronic disease.

  2. A new temperature threshold detector - Application to missile monitoring

    NASA Astrophysics Data System (ADS)

    Coston, C. J.; Higgins, E. V.

    Comprehensive thermal surveys within the case of solid propellant ballistic missile flight motors are highly desirable. For example, a problem involving motor failures due to insulator cracking at motor ignition, which took several years to solve, could have been identified immediately on the basis of a suitable thermal survey. Using conventional point measurements, such as those utilizing typical thermocouples, for such a survey on a full scale motor is not feasible because of the great number of sensors and measurements required. An alternate approach recognizes that temperatures below a threshold (which depends on the material being monitored) are acceptable, but higher temperatures exceed design margins. In this case, hot spots can be located by a grid of wire-like sensors that are sensitive to temperatures above the threshold anywhere along the sensor. A new type of temperature threshold detector is being developed for flight missile use. The device consists of KNO3 separating copper and Constantan metals. Above the melting point of KNO3, galvanic action provides a voltage output of a few tenths of a volt.

  3. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    NASA Astrophysics Data System (ADS)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, perhaps most notably those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work still remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations, and while detection thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  4. 76 FR 71833 - Defense Federal Acquisition Regulation Supplement: Simplified Acquisition Threshold for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-18

    ...-AH29 Defense Federal Acquisition Regulation Supplement: Simplified Acquisition Threshold for... statutory authority to invoke a simplified acquisition threshold that is two times the normal amount to...) to invoke a simplified acquisition threshold that is two times the amount specified at 41 U.S.C 134...

  5. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L. K.; Vogel, R. M.

    2015-11-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series X, with corresponding failure time series T, should have application to a wide class of natural hazards with rich opportunities for future extensions.

  6. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
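
    For the generalized Pareto magnitude model, the hazard function of an exceedance X has the simple closed form h(x) = 1/(σ + ξx), where σ is the scale and ξ the shape parameter. The sketch below checks this form numerically against scipy and illustrates, with placeholder parameter values, how a drifting (nonstationary) scale parameter changes the hazard of exceeding a fixed critical level; it is not the paper's full derivation, which couples the magnitude series X with the failure-time series T.

      import numpy as np
      from scipy.stats import genpareto

      def gp_hazard(x, scale, shape):
          """Hazard of the generalized Pareto distribution: h(x) = f(x)/S(x) = 1/(scale + shape*x)."""
          return 1.0 / (scale + shape * x)

      # Cross-check the closed form against scipy's pdf / survival function.
      x = np.linspace(0.0, 5.0, 6)
      scale, shape = 2.0, 0.3
      closed_form = gp_hazard(x, scale, shape)
      numeric = genpareto.pdf(x, c=shape, scale=scale) / genpareto.sf(x, c=shape, scale=scale)
      assert np.allclose(closed_form, numeric)

      # Illustrative nonstationarity: a scale parameter that drifts upward in time,
      # lowering the hazard of exceeding a fixed critical level x_crit.
      x_crit = 3.0
      for year, sc in enumerate([2.0, 2.2, 2.4]):
          print(f"year {year}: h(x_crit) = {gp_hazard(x_crit, sc, shape):.3f}")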

  7. Asymmetric dimethylarginine may mediate increased heat pain threshold in experimental chronic kidney disease.

    PubMed

    Kielstein, Jan T; Suntharalingam, Mayuren; Perthel, Ronny; Rong, Song; Martens-Lobenhoffer, Jens; Jäger, Kristin; Bode-Böger, Stefanie M; Nave, Heike

    2012-03-01

    Thermal sensitivity in uraemia is decreased. Non-selective synthetic nitric oxide synthase (NOS) inhibitors significantly attenuate thermal hyperalgesia in preclinical models. The aim of our study was to evaluate the effect of experimental uraemia, which is associated with an increase of the endogenous NOS inhibitor asymmetric dimethylarginine (ADMA), on thermal sensitivity in rats. Furthermore, we intended to study the effect of chronic ADMA infusion alone on thermal sensitivity. Male Sprague-Dawley rats (n = 54), 10 weeks old, weighing 370-430 g, were randomly assigned to three groups that received (i) isotonic saline or (ii) ADMA via osmotic mini pumps, or (iii) underwent 5/6 nephrectomy (Nx). After 14 days, 50% of the animals from all groups underwent thermal sensitivity testing and terminal blood draw. After 28 days, the remaining animals underwent the same procedures. Thermal sensitivity was examined by the hot-plate test, measuring the time from heat exposure to first paw licking or jumping of the animal. While the median [interquartile range] latency time between heat exposure and first paw licking or jumping in the NaCl infusion group remained unchanged between Day 14 (8.4 [6.75-11.50] s) and Day 28 (7.35 [6.10-7.90] s), both ADMA infusion and 5/6 nephrectomy tended to increase the thermal pain threshold at Day 14 (9.25 [6.55-12.18] s and 9.50 [5.8-11.0] s, respectively) compared to NaCl on Day 14 (8.4 [6.75-11.50] s). This difference became statistically significant at Day 28, when the median latency time in the ADMA group (13.10 [11.85-15.95] s) and in the 5/6 Nx group (13.50 [10.85-17.55] s) was significantly higher than in the NaCl group (7.35 [6.10-7.90] s). Induction of progressive renal failure in rats by 5/6 nephrectomy, which is accompanied by a marked increase of the serum levels of the endogenous NOS inhibitor ADMA, leads to a significantly increased heat pain threshold at 28 days. The sole infusion of ADMA into healthy rats leads to the same increase in heat pain threshold.

  8. [Perioperative acute kidney injury and failure].

    PubMed

    Chhor, Vibol; Journois, Didier

    2014-04-01

    The perioperative period is particularly likely to lead to acute renal failure because anesthesia (general or perimedullary) and/or surgery can cause acute kidney injury. Characterization of acute renal failure is based on the serum creatinine level, which is imprecise during and following surgery. Studies are based on various definitions of acute renal failure with different thresholds, which skews comparisons between them. The RIFLE classification (risk, injury, failure, loss, end-stage kidney disease) allows clinicians to distinguish in a uniform manner between different stages of acute kidney injury rather than using a single definition of acute renal failure. Acute renal failure during the perioperative period can mainly be explained by iatrogenic, hemodynamic, or surgical causes and can result in increased morbidity and mortality. Prevention of this complication requires hemodynamic optimization (venous return, cardiac output, vascular resistance) and discontinuation of nephrotoxic drugs, but also knowledge of the different steps of the surgery, to avoid further degradation of renal perfusion. Diuretics do not prevent acute renal failure and may even precipitate it, especially during the perioperative period when venous return is already reduced. Edema or weight gain following surgery is not correlated with the vascular compartment volume, much less with renal perfusion. Treatment of perioperative acute renal failure is similar to that of other forms of acute renal failure. Renal replacement therapy must be mastered to prevent any additional risk of hemodynamic instability or hydro-electrolytic imbalance. Copyright © 2014 Association Société de néphrologie. Published by Elsevier SAS. All rights reserved.

  9. D-phenylalanine: a putative enkephalinase inhibitor studied in a primate acute pain model.

    PubMed

    Halpern, L M; Dong, W K

    1986-02-01

    D-Phenylalanine, along with morphine, acetylsalicylic acid and zomepirac sodium were evaluated for their antinociceptive actions in monkeys (M. fascicularis) trained to autoregulate nociceptive stimulation using a discrete-trials, aversive-threshold paradigm. Morphine sulfate produced dose-related increases in aversive threshold which were reversible after administration of naloxone (12.5 or 25 micrograms/kg i.m.). D-Phenylalanine (500 mg/kg p.o.) produced a small increase in aversive threshold which was not statistically significant and not naloxone reversible. Acetylsalicylic acid (200 mg/kg p.o.) but not zomepirac sodium (200 mg/kg p.o.) in combination with D-phenylalanine (500 mg/kg) produced a small statistically significant increase in aversive threshold. Our results argue against the hypothesis that D-phenylalanine is responsible for increasing aversive thresholds via opiate receptor mechanisms involving increased activity of enkephalins at synaptic loci. Previous studies by others in rats and mice showed that D-phenylalanine and acetylsalicylic acid produced increases in nociceptive thresholds which were naloxone reversible. Our failure to find opiate receptor mediated analgesia in a primate model with demonstrated opiate receptor selectivity and sensitivity is discussed in terms of previous basic and clinical research indicating an analgesic role for D-phenylalanine. Possible species difference in drug action is discussed in terms of inhibition by D-phenylalanine of carboxy-peptidase-like enkephalin processing enzymes as well as inhibition of carboxypeptidase-like enkephalin degrading enzymes.

  10. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  11. Development of a landslide EWS based on rainfall thresholds for Tuscany Region, Italy

    NASA Astrophysics Data System (ADS)

    Rosi, Ascanio; Segoni, Samuele; Battistini, Alessandro; Rossi, Guglielmo; Catani, Filippo; Casagli, Nicola

    2017-04-01

    We present the set-up of a landslide EWS based on rainfall thresholds for the Tuscany region (central Italy), which shows a heterogeneous distribution of relief and precipitation. The work started with the definition of a single set of thresholds for the whole region, but this proved unsuitable for EWS purposes because of the heterogeneity of the Tuscan territory and the non-repeatability of the analyses, which were affected by a high degree of subjectivity. To overcome this problem, we first implemented a software tool capable of objectively defining rainfall thresholds, since two of the main issues with such thresholds are the subjectivity of the analysis and therefore its non-repeatability. This software, named MaCumBA, is largely automated and can analyze, in a short time, a large number of rainfall events to define several parameters of the threshold, such as the intensity (I) and the duration (D) of the rainfall event, the no-rain time gap (NRG: how many hours without rain are needed to consider two events as separate), and the equation describing the threshold. The possibility of quickly performing several analyses led to the decision to divide the territory into 25 homogeneous areas (named alert zones, AZ), so that a single threshold could be defined for each AZ. For the definition of the thresholds, two independent datasets (of joint rainfall-landslide occurrences) were used: a calibration dataset (data from 2000 to 2007) and a validation dataset (2008-2009). Once the thresholds were defined, a WebGIS-based EWS was implemented. In this system it is possible to focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h; forecasting data are collected from LAMI (Limited Area Model Italy) rainfall forecasts. The EWS works on the basis of the threshold parameters defined by MaCumBA (I, D, NRG). An important feature of the warning system is that the visualization of the thresholds in the WebGIS interface may vary in time depending on when the starting time of the rainfall event is set. Therefore, the starting time of the rainfall event is treated as a variable by the system: whenever new rainfall data are available, a recursive algorithm identifies the starting time for which the rainfall path is closest to, or exceeds, the threshold. This is considered the most hazardous condition, and it is the one displayed by the WebGIS interface. A further issue that came to the surface after the EWS implementation was the time-limited validity of the thresholds. Although rainfall thresholds can give good results, their validity is limited in time because of several factors, such as changes in the pluviometric regime, land use, and urban development. Furthermore, the availability of new landslide data can lead to more robust results. For these reasons, some of the thresholds defined for the Tuscany region were updated using new landslide data (from 2010 to March 2013). A comparison between the updated and former thresholds clearly shows that the performance of an EWS can be enhanced if the thresholds are constantly updated.
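
    The variable-starting-time idea can be sketched compactly: for every admissible start hour within the current event, compute the duration and mean intensity of the rain accumulated since that start, compare the pair with an intensity-duration threshold, and keep the start that comes closest to (or exceeds) the threshold. The function names and parameter values below (threshold coefficients, the 12 h no-rain gap, the rainfall record) are illustrative placeholders, not MaCumBA's actual implementation.

      import numpy as np

      def intensity_duration_threshold(duration_h, a=20.0, b=-0.6):
          """Illustrative I-D power-law threshold, I = a * D**b (mm/h); a and b are placeholders."""
          return a * duration_h ** b

      def earliest_admissible_start(rain, no_rain_gap_h=12):
          """Candidate starts are confined to the current event: everything after the most
          recent run of at least `no_rain_gap_h` consecutive dry hours (the NRG)."""
          dry, bound = 0, 0
          for i, r in enumerate(rain):
              dry = dry + 1 if r == 0 else 0
              if dry >= no_rain_gap_h:
                  bound = i + 1
          return bound

      def most_hazardous_start(hourly_rain_mm, no_rain_gap_h=12):
          """Scan admissible start hours; for each, compute duration D and mean intensity I of
          the rain since that start, and keep the start with the highest I relative to the
          threshold I(D). A ratio >= 1 means the threshold is reached or exceeded."""
          rain = np.asarray(hourly_rain_mm, dtype=float)
          best_start, best_ratio = None, 0.0
          for start in range(earliest_admissible_start(rain, no_rain_gap_h), len(rain)):
              duration = len(rain) - start
              intensity = rain[start:].sum() / duration
              ratio = intensity / intensity_duration_threshold(duration)
              if ratio > best_ratio:
                  best_start, best_ratio = start, ratio
          return best_start, best_ratio

      # Hypothetical record of the last 36 hours (mm of rain per hour), ending "now".
      record = [0] * 20 + [2, 4, 6, 8, 10, 6, 4, 3, 2, 1, 0, 0, 1, 2, 3, 4]
      start, ratio = most_hazardous_start(record)
      print(f"most hazardous start index: {start}, threshold ratio: {ratio:.2f}")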

  12. A vector-based failure detection and isolation algorithm for a dual fail-operational redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R.; Bailey, Melvin L.

    1987-01-01

    A vector-based failure detection and isolation technique for a skewed array of two-degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A gyro bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but was not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
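
    The parity-equation idea can be illustrated with a deliberately simplified example: four redundant sensors measuring the same scalar quantity, a parity matrix whose rows annihilate the observation matrix so the residuals depend only on sensor errors, a threshold test for detection, and a voting rule for isolation. This is a hypothetical single-axis sketch, not the paper's skewed two-degree-of-freedom sensor array or its multi-level logic.

      import numpy as np

      # Hypothetical single-axis example: four redundant rate sensors all measuring the
      # same scalar rate, so the observation matrix is a column of ones.
      H = np.ones((4, 1))

      # Parity matrix: rows span the left null space of H (V @ H = 0), so the parity
      # vector depends only on sensor errors, not on the true rate.
      V = np.array([
          [1.0, -1.0,  0.0,  0.0],
          [0.0,  1.0, -1.0,  0.0],
          [0.0,  0.0,  1.0, -1.0],
      ])
      assert np.allclose(V @ H, 0.0)

      def detect_and_isolate(measurements, threshold):
          """Flag a failure when any parity equation exceeds the threshold, and isolate the
          sensor that appears in the most violated equations (simple voting logic)."""
          p = V @ np.asarray(measurements)
          violated = np.abs(p) > threshold
          if not violated.any():
              return None
          votes = np.zeros(H.shape[0])
          for eq, bad in enumerate(violated):
              if bad:
                  votes[np.nonzero(V[eq])[0]] += 1
          return int(np.argmax(votes))

      true_rate = 0.5                       # deg/s, unknown to the algorithm
      noise = [0.01, -0.02, 0.015, 0.0]
      z = [true_rate + n for n in noise]
      z[2] += 0.4                           # inject a bias-jump failure on sensor 2
      print("isolated sensor:", detect_and_isolate(z, threshold=0.1))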

  13. Causes of death from the randomized CoreValve US Pivotal High-Risk Trial.

    PubMed

    Gaudiani, Vincent; Deeb, G Michael; Popma, Jeffrey J; Adams, David H; Gleason, Thomas G; Conte, John V; Zorn, George L; Hermiller, James B; Chetcuti, Stan; Mumtaz, Mubashir; Yakubov, Steven J; Kleiman, Neal S; Huang, Jian; Reardon, Michael J

    2017-06-01

    The objective was to explore the causes and timing of death in the CoreValve US Pivotal High-Risk Trial. An independent clinical events committee adjudicated causes of death, followed by post hoc hierarchical classification. Baseline characteristics, early outcomes, and causes of death were evaluated for 3 time periods (selected based on the threshold of surgical 30-day mortality and on the differences in the continuous hazard between the 2 groups): early (0-30 days), recovery (31-120 days), and late (121-365 days). Differences in the rate of death were evident only during the recovery period (31-120 days), during which 15 patients undergoing transcatheter aortic valve replacement (TAVR) (4.0%) and 27 surgical aortic valve replacement (SAVR) patients (7.9%) died (P = .025). This mortality difference was largely driven by higher rates of technical failure, surgical complications, and lack of recovery following surgery. From 0 to 30 days, the causes of death were more technical failures in the TAVR group and lack of recovery in the SAVR group. Mortality in the late period (121-365 days) in both arms was most commonly ascribed to other circumstances, comprising death from medical complications of comorbid disease. Mortality at 1 year in the CoreValve US Pivotal High-Risk Trial favored TAVR over SAVR. The major contributor was that more SAVR patients died during the recovery period (31-120 days), likely affected by the overall influence of physical stress associated with surgery. Similar rates of technical failure and complications were observed between the 2 groups. This suggests that early TAVR results can improve with technical refinements and that high-risk surgical patients will benefit from reducing complications. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  14. PO-07 - Excluding pulmonary embolism in cancer patients using the Wells rule and age-adjusted D-dimer testing: an individual patient data meta-analysis.

    PubMed

    van Es, N; van der Hulle, T; van Es, J; den Exter, P L; Douma, R A; Goekoop, R J; Mos, I C M; Garcia, J G; Kamphuisen, P W; Huisman, M V; Klok, F A; Büller, H R; Bossuyt, P M

    2016-04-01

    Among patients with clinically suspected pulmonary embolism (PE), imaging and anticoagulant treatment can be safely withheld in approximately one-third of patients based on the combination of a "PE unlikely" Wells score and a D-dimer below the age-adjusted threshold. The clinical utility of this diagnostic approach in cancer patients is less clear. To evaluate the efficiency and failure rate of the original and simplified Wells rules in combination with age-adjusted D-dimer testing in patients with active cancer. Individual patient data were used from 6 large prospective studies in which the diagnostic management of PE was guided by the original Wells rule and D-dimer testing. Study physicians classified patients as having active cancer if they had new, recurrent, or progressive cancer (excluding basal-cell or squamous-cell skin carcinoma), or cancer requiring treatment in the last 6 months. We evaluated the dichotomous Wells rule and its simplified version (Table). The efficiency of the algorithm was defined as the proportion of patients with a "PE unlikely" Wells score and a negative age-adjusted D-dimer, defined by a D-dimer below the threshold of a patient's age times 10 μg/L in patients aged ≥51 years. A diagnostic failure was defined as a patient with a "PE unlikely" Wells score and negative age-adjusted D-dimer who had symptomatic venous thromboembolism during 3 months of follow-up. A one-stage random-effects meta-analysis was performed to estimate the efficiency and failure rate. The dataset comprised 938 patients with active cancer with a mean age of 63 years. The most frequent cancer types were breast (13%), gastrointestinal tract (11%), and lung (8%). The type of cancer was not specified in 42%. The pooled PE prevalence was 29% (95% CI 25-32). PE could be excluded in 122 patients based on a "PE unlikely" Wells score and a negative age-adjusted D-dimer (efficiency 13%; 95% CI 11-15). Two of the 122 patients were diagnosed with non-fatal symptomatic venous thromboembolism during follow-up (failure rate 1.5%; 95% CI 0.13-14.8). The simplified Wells score in combination with a negative age-adjusted D-dimer had an efficiency of 3.9% (95% CI 2.0-7.6) and a failure rate of 2.4% (95% CI 0.3-15). Among cancer patients with clinically suspected PE, imaging and anticoagulant treatment can be withheld in 1 out of every 8 patients by the original Wells rule and age-adjusted D-dimer testing. The simplified Wells rule was neither efficient nor safe in this population. © 2016 Elsevier Ltd. All rights reserved.
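
    The exclusion rule described in the abstract reduces to a small amount of logic, sketched below. The age-adjusted cut-off of age × 10 μg/L applies from age 51; the fixed 500 μg/L cut-off used below for younger patients is the conventional value and is an assumption here, since the abstract only specifies the age-adjusted part.

      def age_adjusted_ddimer_threshold(age_years):
          """Age-adjusted cut-off: age x 10 ug/L for patients aged >= 51; otherwise the
          conventional fixed cut-off of 500 ug/L (assumed, not stated in the abstract)."""
          return age_years * 10 if age_years >= 51 else 500

      def pe_excluded_without_imaging(wells_pe_unlikely, d_dimer_ug_per_L, age_years):
          """PE is considered excluded (no imaging, no anticoagulation) only when the Wells
          score is 'PE unlikely' AND the D-dimer is below the age-adjusted threshold."""
          return wells_pe_unlikely and d_dimer_ug_per_L < age_adjusted_ddimer_threshold(age_years)

      # Example: a 72-year-old with a 'PE unlikely' Wells score and a D-dimer of 650 ug/L
      # falls below the age-adjusted cut-off of 720 ug/L, so imaging could be withheld.
      print(pe_excluded_without_imaging(True, 650, 72))   # True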

  15. Usefulness of anaerobic threshold to peak oxygen uptake ratio to determine the severity and pathophysiological condition of chronic heart failure.

    PubMed

    Tomono, Junichi; Adachi, Hitoshi; Oshima, Shigeru; Kurabayashi, Masahiko

    2016-11-01

    Anaerobic threshold (AT) and peak oxygen uptake (V̇O2) are well known as indicators of severity and prognosis of heart failure. Since these parameters are regulated by many factors, and multiple organ dysfunction may occur in chronic heart failure, the two parameters would be expected to vary among patients. However, it is not clear whether AT and peak V̇O2 deteriorate similarly. Therefore, we planned to compare the degree of deterioration of these two parameters using a ratio of AT to peak V̇O2 (%AT/peak), and evaluated its significance in heart failure subjects. One hundred ninety-four stable heart failure patients who had received optimal medical treatment for at least 3 months were enrolled. Cardiopulmonary exercise testing, echocardiography, and blood sampling were performed within one week. Since %AT/peak varied from 50.3% to 108.5%, we divided patients into three groups of %AT/peak [Group A, 50.1-70.0 (n=112); Group B, 70.1-90.0 (n=64); Group C, 90.1-110.0 (n=18)], and compared factors relating to skeletal muscle and heart failure among these 3 groups. In Group A, the ratio of measured AT against predicted value (%AT) and of measured peak V̇O2 against predicted value (%peak V̇O2) were similar (80.3±19.0% and 80.4±17.1%, respectively). %Peak V̇O2 became lower as %AT/peak increased (Group B: 65.6±14.8%, p<0.01 vs. Group A; Group C: 38.3±9.7%, p<0.01 vs. Group B). On the other hand, %AT in Group B (77.1±18.5%) was similar to Group A, and diminished in Group C (58.0±8.2%, p<0.05 vs. Group B). Peak work rate and lean body mass were smaller in Group B than in Group A. Although left ventricular ejection fraction and E/E' deteriorated in Group B compared with Group A, plasma B-type natriuretic peptide and estimated glomerular filtration rate stayed constant in Group B and deteriorated in Group C. %AT/peak showed a negative correlation with peak V̇O2. In chronic heart failure, muscle weakness occurs at an early stage, and this can be evaluated using %AT/peak. Copyright © 2016 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  16. PCI fuel failure analysis: a report on a cooperative program undertaken by Pacific Northwest Laboratory and Chalk River Nuclear Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.

    Reactor fuel failure data sets in the form of initial power (P_i), final power (P_f), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models, a graphical concept called the PCI-OGRAM, and a nonlinear regression-based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain rate dependent stress-strain properties of the Zircaloy cladding.

  17. Reliability evaluation of CMOS RAMs

    NASA Astrophysics Data System (ADS)

    Salvo, C. J.; Sasaki, A. T.

    The results of an evaluation of the reliability of a 1K × 1 bit CMOS RAM and a 4K × 1 bit CMOS RAM for the USAF are reported. The tests consisted of temperature cycling, thermal shock, electrical overstress-static discharge, and accelerated life test cells. The study indicates that the devices have high reliability potential for military applications. Use-temperature failure rates at 100 °C were 0.54 × 10^-5 failures/hour for the 1K RAM and 0.21 × 10^-5 failures/hour for the 4K RAM. Only minimal electrostatic discharge damage was noted in the devices when they were subjected to multiple pulses at 1000 Vdc, and redesign of the 7 Vdc quiescent parameter of the 4K RAM is expected to raise its field threshold voltage.

  18. Threshold units: A correct metric for reaction time?

    PubMed Central

    Zele, Andrew J.; Cao, Dingcai; Pokorny, Joel

    2007-01-01

    Purpose: To compare reaction time (RT) to rod incremental and decremental stimuli expressed in physical contrast units or psychophysical threshold units. Methods: Rod contrast detection thresholds and suprathreshold RTs were measured for Rapid-On and Rapid-Off ramp stimuli. Results: Threshold sensitivity to Rapid-Off stimuli was higher than to Rapid-On stimuli. Suprathreshold RTs specified in Weber contrast for Rapid-Off stimuli were shorter than for Rapid-On stimuli. Reaction time data expressed in multiples of threshold reversed the outcomes: reaction times for Rapid-On stimuli were shorter than those for Rapid-Off stimuli. The use of alternative contrast metrics also failed to equate RTs. Conclusions: A case is made that the interpretation of RT data may be confounded when expressed in threshold units. Stimulus energy or contrast is the only metric common to the response characteristics of the cells underlying speeded responses. The use of threshold metrics for RT can confuse the interpretation of an underlying physiological process. PMID:17240416

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter

    A new method has been developed for assessment of the onset of degradation in solid state luminaires, to classify failure mechanisms by using metrics beyond the lumen degradation that is currently used for identification of failure. Luminous flux output and correlated color temperature data on Philips LED lamps have been gathered under 85°C/85%RH until lamp failure. Failure modes of the test population of the lamps have been studied to understand the failure mechanisms in the 85°C/85%RH accelerated test. Results indicate that the dominant failure mechanism is discoloration of the LED encapsulant inside the lamps, which is the likely cause for the luminous flux degradation and the color shift. The acquired data have been used in conjunction with Bayesian probabilistic models to identify luminaires with onset of degradation much prior to failure, through identification of decision boundaries between lamps with accrued damage and lamps beyond the failure threshold in the feature space. In addition, luminaires with different failure modes have been classified separately from healthy pristine luminaires. The α-λ plots have been used to evaluate the robustness of the proposed methodology. Results show that the predicted degradation for the lamps tracks the true degradation observed during the 85°C/85%RH accelerated life test fairly closely, within the ±20% confidence bounds. Correlation of model prediction with experimental results indicates that the presented methodology allows the early identification of the onset of failure much prior to development of complete failure distributions, and can be used for assessing the damage state of SSLs in fairly large deployments. It is expected that the new prediction technique will allow the development of failure distributions without testing to L70 life for the manifestation of failure.

  20. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
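
    One simple way to operationalize an "objective" threshold is to sweep candidate intensity values and score each against the observed outcomes, jointly penalizing false positives and false negatives. The sketch below uses the threat score as an illustrative objective and invented storm data; the paper's actual optimization criterion and dataset differ.

      import numpy as np

      def best_intensity_threshold(peak_intensities, debris_flow_occurred):
          """Sweep candidate thresholds over the observed intensities and return the one
          that maximizes the threat score TS = TP / (TP + FP + FN), an illustrative way of
          jointly penalizing Type I (false positive) and Type II (false negative) errors."""
          I = np.asarray(peak_intensities, dtype=float)
          y = np.asarray(debris_flow_occurred, dtype=bool)
          best_t, best_ts = None, -1.0
          for t in np.unique(I):
              predicted = I >= t
              tp = np.sum(predicted & y)
              fp = np.sum(predicted & ~y)
              fn = np.sum(~predicted & y)
              ts = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
              if ts > best_ts:
                  best_t, best_ts = t, ts
          return best_t, best_ts

      # Hypothetical 15-minute peak intensities (mm/h) and debris-flow outcomes per storm.
      intensities = [12, 18, 25, 30, 8, 40, 22, 35, 15, 28]
      outcomes    = [0,  0,  1,  1,  0, 1,  0,  1,  0,  1]
      threshold, score = best_intensity_threshold(intensities, outcomes)
      print(f"objective threshold: {threshold} mm/h (threat score {score:.2f})")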

  1. Rainfall thresholds for possible landslide occurrence in Italy

    NASA Astrophysics Data System (ADS)

    Peruccacci, Silvia; Brunetti, Maria Teresa; Gariano, Stefano Luigi; Melillo, Massimo; Rossi, Mauro; Guzzetti, Fausto

    2017-08-01

    The large physiographic variability and the abundance of landslide and rainfall data make Italy an ideal site to investigate variations in the rainfall conditions that can result in rainfall-induced landslides. We used landslide information obtained from multiple sources and rainfall data captured by 2228 rain gauges to build a catalogue of 2309 rainfall events with - mostly shallow - landslides in Italy between January 1996 and February 2014. For each rainfall event with landslides, we reconstructed the rainfall history that presumably caused the slope failure, and we determined the corresponding rainfall duration D (in hours) and cumulated event rainfall E (in mm). Adopting a power law threshold model, we determined cumulated event rainfall-rainfall duration (ED) thresholds, at 5% exceedance probability, and their uncertainty. We defined a new national threshold for Italy, and 26 regional thresholds for environmental subdivisions based on topography, lithology, land-use, land cover, climate, and meteorology, and we used the thresholds to study the variations of the rainfall conditions that can result in landslides in different environments, in Italy. We found that the national and the environmental thresholds cover a small part of the possible DE domain. The finding supports the use of empirical rainfall thresholds for landslide forecasting in Italy, but poses an empirical limitation to the possibility of defining thresholds for small geographical areas. We observed differences between some of the thresholds. With increasing mean annual precipitation (MAP), the thresholds become higher and steeper, indicating that more rainfall is needed to trigger landslides where the MAP is high than where it is low. This suggests that the landscape adjusts to the regional meteorological conditions. We also observed that the thresholds are higher for stronger rocks, and that forested areas require more rainfall than agricultural areas to initiate landslides. Finally, we observed that a 20% exceedance probability national threshold was capable of predicting all the rainfall-induced landslides with casualties between 1996 and 2014, and we suggest that this threshold can be used to forecast fatal rainfall-induced landslides in Italy. We expect the method proposed in this work to define and compare the thresholds to have an impact on the definition of new rainfall thresholds for possible landslide occurrence in Italy, and elsewhere.
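
    A common way to obtain a power-law ED threshold at a chosen exceedance probability is to fit the power law in log-log space and then shift the intercept to the corresponding quantile of the residuals, as sketched below with synthetic data. This frequentist shortcut is only illustrative of the idea; it is not necessarily the exact procedure used to derive the national and regional thresholds.

      import numpy as np

      def ed_threshold_power_law(durations_h, cumulated_mm, exceed_prob=0.05):
          """Fit log10(E) = log10(alpha50) + gamma * log10(D) by least squares, then shift
          the intercept down to the `exceed_prob` quantile of the residuals so that only
          that fraction of triggering events lies below the returned threshold curve."""
          logD = np.log10(np.asarray(durations_h, dtype=float))
          logE = np.log10(np.asarray(cumulated_mm, dtype=float))
          gamma, intercept50 = np.polyfit(logD, logE, 1)
          residuals = logE - (intercept50 + gamma * logD)
          intercept = intercept50 + np.quantile(residuals, exceed_prob)
          alpha = 10.0 ** intercept
          return alpha, gamma   # threshold curve: E = alpha * D**gamma

      # Hypothetical (D, E) pairs for rainfall events that triggered shallow landslides.
      rng = np.random.default_rng(0)
      D = 10 ** rng.uniform(0.3, 2.5, 200)                      # 2 to ~300 hours
      E = 8.0 * D ** 0.45 * 10 ** rng.normal(0.0, 0.15, 200)    # scatter around a power law
      alpha, gamma = ed_threshold_power_law(D, E)
      print(f"5% threshold: E = {alpha:.1f} * D^{gamma:.2f}  (mm, h)")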

  2. Statistical approaches for the definition of landslide rainfall thresholds and their uncertainty using rain gauge and satellite data

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-05-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.

  3. Statistical Approaches for the Definition of Landslide Rainfall Thresholds and their Uncertainty Using Rain Gauge and Satellite Data

    NASA Technical Reports Server (NTRS)

    Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.

    2017-01-01

    Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the 'ground' rainfall registered by rain gauges.

  4. Experimental investigation of time dependent behavior of welded Topopah Spring Tuff

    NASA Astrophysics Data System (ADS)

    Ma, Lumin

    Four types of laboratory tests have been performed. Specimens were obtained from four lithophysal zones of the welded Topopah Spring Tuff unit at Yucca Mountain, Nevada: the upper lithophysal, middle nonlithophysal, lower lithophysal, and lower nonlithophysal zones. Two types of tests were conducted to study time-dependent behavior: constant strain rate tests and creep tests. Sixty-five specimens from the middle nonlithophysal zone were tested at six strain rates: 10^-2, 10^-4, 10^-5, 10^-6, 10^-7, and 10^-8 s^-1. Test durations range from 2 seconds to 7 days. Fourteen specimens from the middle nonlithophysal, lower lithophysal, and lower nonlithophysal zones were creep tested by incremental stepwise loading. All tests were conducted under uniaxial compression at room temperature and humidity. Specimens exhibit extremely brittle fracture, fail by axial splitting, and show very little dilatancy, if any. It is assumed that microfracturing dominates the inelastic deformation and failure of the tuff. Nonlinear regression is applied to the results of the constant strain rate tests to estimate the relations between peak strength, peak axial strain, secant modulus, and strain rate ε̇. All three parameters decrease with decreasing strain rate and follow power functions: σ_peak = 271.37 ε̇^0.0212, ε_peak = 0.006 ε̇^0.0083, and E_S = 41985.4 ε̇^0.015. The secant modulus is introduced mainly as a tool to analyze strain rate dependent axial strain. Two threshold stresses define the creep behavior. Below about 50% of peak strength, a specimen does not creep. Above about 94% of peak strength, a specimen creeps at an accelerating rate. Between the two threshold stresses, a power law relates strain rate and stress. One hundred fifty-eight Brazilian (indirect tensile splitting) tests have been performed at six different constant strain rates. Nineteen lithophysal specimens were tested in uniaxial compression to study their fracture pattern. These specimens have a far less brittle failure mode: they slowly crumble, collapse, and maintain considerable relative strength beyond the peak. Due to the presence of multiple relatively large lithophysal cavities, they are far weaker and softer than the nonlithophysal specimens.
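
    The reconstructed power functions can be evaluated directly to see the (weak) strain-rate sensitivity they imply; the sketch below assumes that stress and modulus are in MPa and strain rate in s^-1, which the abstract does not state explicitly.

      # Evaluate the reported strain-rate power functions for the middle nonlithophysal tuff.
      # Units assumed: stress and modulus in MPa, strain rate in 1/s (not stated in the abstract).

      def peak_strength(strain_rate):      # sigma_peak = 271.37 * rate**0.0212
          return 271.37 * strain_rate ** 0.0212

      def peak_axial_strain(strain_rate):  # epsilon_peak = 0.006 * rate**0.0083
          return 0.006 * strain_rate ** 0.0083

      def secant_modulus(strain_rate):     # E_S = 41985.4 * rate**0.015
          return 41985.4 * strain_rate ** 0.015

      for rate in (1e-2, 1e-5, 1e-8):
          print(f"rate {rate:>7.0e} 1/s: sigma_peak={peak_strength(rate):6.1f}, "
                f"eps_peak={peak_axial_strain(rate):.5f}, E_S={secant_modulus(rate):8.1f}")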

  5. Association of ambient particulate matter with heart failure incidence and all-cause readmissions in Tasmania: an observational study.

    PubMed

    Huynh, Quan L; Blizzard, Christopher Leigh; Marwick, Thomas H; Negishi, Kazuaki

    2018-05-10

    We sought to investigate the relationship between air quality and heart failure (HF) incidence and rehospitalisation to elucidate whether there is a threshold in this relationship and whether this relationship differs for HF incidence and rehospitalisation. This retrospective observational study was performed in an Australian state-wide setting, where air pollution is mainly associated with wood-burning for winter heating. Data included all 1246 patients with a first-ever HF hospitalisation and their 3011 subsequent all-cause readmissions during 2009-2012. Daily particulate matter <2.5 µm (PM2.5), temperature, relative humidity and influenza infection were recorded. Poisson regression was used, with adjustment for time trend, public and school holiday and day of week. Tasmania has excellent air quality (median PM2.5 = 2.9 µg/m3 (IQR: 1.8-6.0)). Greater HF incidences and readmissions occurred in winter than in other seasons (p<0.001). PM2.5 was detrimentally associated with HF incidence (risk ratio (RR) = 1.29 (1.15-1.42)) and weakly so with readmission (RR = 1.07 (1.02-1.17)), with 1 day time lag. In multivariable analyses, PM2.5 significantly predicted HF incidence (RR = 1.12 (1.01-1.24)) but not readmission (RR = 0.96 (0.89-1.04)). HF incidence was similarly low when PM2.5 < 4 µg/m3 and only started to rise when PM2.5 ≥ 4 µg/m3. Stratified analyses showed that PM2.5 was associated with readmissions among patients not taking beta-blockers but not among those taking beta-blockers (p-interaction = 0.011). PM2.5 predicted HF incidence, independent of other environmental factors. A possible threshold of PM2.5 = 4 µg/m3 is far below the daily Australian national standard of 25 µg/m3. Our data suggest that beta-blockers might play a role in preventing an adverse association between air pollution and patients with HF. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Experimental study on the stability and failure of individual step-pool

    NASA Astrophysics Data System (ADS)

    Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin

    2018-06-01

    Step-pools are one of the most common bedforms in mountain streams, and their stability and failure play a significant role in riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool reached its maximum depth and then either exhibited relative stability for a period before step failure, which was called the stable phase, or collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. Variability of the stable phase duration ranged over one order of magnitude, whereas variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders of magnitude smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second captures step-pool failure conditions and corresponds to the discharge of an exceptional event. In the transitional stage between the two discharge regimes, pool and step adjustment magnitudes displayed relatively large variability, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, which was a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation to be the direct explanation of step-pool failure rather than pool scour, which displayed relative stability during step deformation in our experiments.

  7. Anodal right ventricular capture during left ventricular stimulation in CRT-implantable cardioverter defibrillators.

    PubMed

    Thibault, Bernard; Roy, Denis; Guerra, Peter G; Macle, Laurent; Dubuc, Marc; Gagné, Pierre; Greiss, Isabelle; Novak, Paul; Furlani, Aldo; Talajic, Mario

    2005-07-01

    Cardiac resynchronization therapy (CRT) has been shown to improve symptoms of patients with moderate to severe heart failure. Optimal CRT involves biventricular or left ventricular (LV) stimulation alone, atrio-ventricular (AV) delay optimization, and possibly interventricular timing adjustment. Recently, anodal capture of the right ventricle (RV) has been described for patients with CRT-pacemakers. It is unknown whether the same phenomenon exists in CRT systems associated with defibrillators (CRT-ICD). The RV leads used in these systems are different from pacemaker leads: they have a larger diameter and shocking coils, which may affect the occurrence of anodal capture. We looked for anodal RV capture during LV stimulation in 11 consecutive patients who received a CRT-ICD system with RV leads with a true bipolar design. Fifteen patients who had RV leads with an integrated design were used as controls. Anodal RV and LV thresholds were determined at pulse width (pw) durations of 0.2, 0.5, and 1.0 ms. RV anodal capture during LV pacing was found in 11/11 patients at some output with true bipolar RV leads versus 0/15 patients with RV leads with an integrated bipolar design. Anodal RV capture threshold was more affected by changes in pw duration than LV capture threshold. In CRT-ICD systems, RV leads with a true bipolar design with the proximal ring also used as the anode for LV pacing are associated with a high incidence of anodal RV capture during LV pacing. This may affect the clinical response to alternative resynchronization methods using single LV stimulation or interventricular delay programming.

  8. Donor Hemodynamics as a Predictor of Outcomes After Kidney Transplantation From Donors After Cardiac Death.

    PubMed

    Allen, M B; Billig, E; Reese, P P; Shults, J; Hasz, R; West, S; Abt, P L

    2016-01-01

    Donation after cardiac death is an important source of transplantable organs, but evidence suggests donor warm ischemia contributes to inferior outcomes. Attempts to predict recipient outcome using donor hemodynamic measurements have not yielded statistically significant results. We evaluated novel measures of donor hemodynamics as predictors of delayed graft function and graft failure in a cohort of 1050 kidneys from 566 donors. Hemodynamics were described using regression line slopes, areas under the curve, and time beyond thresholds for systolic blood pressure, oxygen saturation, and shock index (heart rate divided by systolic blood pressure). A logistic generalized estimation equation model showed that area under the curve for systolic blood pressure was predictive of delayed graft function (above median: odds ratio 1.42, 95% confidence interval [CI] 1.06-1.90). Multivariable Cox regression demonstrated that slope of oxygen saturation during the first 10 minutes after extubation was associated with graft failure (below median: hazard ratio 1.30, 95% CI 1.03-1.64), with 5-year graft survival of 70.0% (95%CI 64.5%-74.8%) for donors above the median versus 61.4% (95%CI 55.5%-66.7%) for those below the median. Among older donors, increased shock index slope was associated with increased hazard of graft failure. Validation of these findings is necessary to determine the utility of characterizing donor warm ischemia to predict recipient outcome. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
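
    The three hemodynamic summaries described (regression-line slope, area under the curve, and time beyond a threshold), together with the shock index, are straightforward to compute from a sampled trace; the sketch below uses an invented post-withdrawal blood pressure and heart rate series, not study data.

      import numpy as np

      def hemodynamic_summaries(times_min, values, threshold):
          """Three summaries used to characterize a donor's hemodynamic trace: the slope of a
          least-squares regression line, the trapezoidal area under the curve, and the total
          time spent beyond (here: below) a threshold, counting intervals whose endpoints are
          both beyond it."""
          t = np.asarray(times_min, dtype=float)
          v = np.asarray(values, dtype=float)
          slope = np.polyfit(t, v, 1)[0]
          auc = float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))
          below = v < threshold
          time_below = float(np.sum(np.diff(t)[below[:-1] & below[1:]]))
          return slope, auc, time_below

      # Hypothetical systolic blood pressure (mmHg) and heart rate (bpm) each minute after
      # withdrawal of support.
      minutes = np.arange(0, 11)
      sbp = np.array([120, 115, 108, 98, 90, 82, 74, 66, 58, 50, 44], dtype=float)
      heart_rate = np.array([95, 97, 100, 104, 108, 112, 115, 118, 120, 122, 124], dtype=float)

      slope, auc, t_below = hemodynamic_summaries(minutes, sbp, threshold=80.0)
      shock_index = heart_rate / sbp     # heart rate divided by systolic blood pressure
      print(f"SBP slope {slope:.1f} mmHg/min, AUC {auc:.0f} mmHg*min, time below 80 mmHg {t_below:.0f} min")
      print(f"final shock index {shock_index[-1]:.2f}")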

  9. Timing discriminator using leading-edge extrapolation

    DOEpatents

    Gottschalk, Bernard

    1983-01-01

    A discriminator circuit to recover timing information from slow-rising pulses by means of an output trailing edge, a fixed time after the starting corner of the input pulse, which is nearly independent of risetime and threshold setting. This apparatus comprises means for comparing pulses with a threshold voltage; a capacitor to be charged at a certain rate when the input signal is one-third threshold voltage, and at a lower rate when the input signal is two-thirds threshold voltage; current-generating means for charging the capacitor; means for comparing voltage capacitor with a bias voltage; a flip-flop to be set when the input pulse reaches threshold voltage and reset when capacitor voltage reaches the bias voltage; and a clamping means for discharging the capacitor when the input signal returns below one-third threshold voltage.

  10. Fuel Pin Behavior Under the Slow Power Ramp Transients in the CABRI-2 Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charpenel, Jean; Lemoine, Francette; Sato, Ikken

    Slow ramp-type transient-overpower tests were performed within the framework of the international CABRI-2 experimental program. The implemented power transients of ~1% nominal power/s correspond to a control rod withdrawal-type accident in a liquid-metal-cooled fast breeder reactor (FBR). The analysis of the tests includes the information elements derived from the hodoscope signals, which were assessed quantitatively and supported by destructive and nondestructive posttest examinations. These tests, performed with fuels of various geometries, demonstrated the high margin to failure of such FBR fuel pins within the expected power level before the emergency reactor shutdown. At the same time, these tests performed with high- and low-smear-density industrial pins led to clarification of the influence of pellet design on fuel pin behavior under high overpower condition. With the high-smear-density solid fuel pellet pin of high burnup level, the retained gaseous fission products played an important role in the solid fuel swelling, leading to clad deformation and failure at a maximum heating rate of 81 kW·m⁻¹, which is much greater than the end-of-life (EOL) linear rating of the pin. With the low smear-density annular pellet pin, an important fuel swelling takes place, leading to degradation of the fuel thermal conductivity. This effect was detected at the power level around 73 kW·m⁻¹, which is also much higher than the EOL value of the pin. Furthermore, the absence of clad deformation, and consequently of failure even at the power level going up to 134.7 kW·m⁻¹, confirmed the very high margin to failure. In consequence, it was clarified that gaseous fission products have significant effects on failure threshold as well as on thermal performance during overpower condition, and such effects are significantly dependent on fuel design and power operation conditions.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell K Meyer

    Blister–threshold testing of fuel plates is a standard method through which the safety margin for operation of plate-type fuel in research and test reactors is assessed. The blister-threshold temperature is indicative of the ability of fuel to operate at high temperatures for short periods of time (transient conditions) without failure. This method of testing was applied to the newly developed U-Mo monolithic fuel system. Blister annealing studies on the U-Mo monolithic fuel plates began in 2007, with the Reduced Enrichment for Research and Test Reactors (RERTR)-6 experiment, and they have continued as the U-Mo fuel system has evolved through the research and development process. Blister anneal threshold temperatures from early irradiation experiments (RERTR-6 through RERTR-10) ranged from 400 to 500°C. These temperatures were projected to be acceptable for NRC-licensed research reactors and the high-power Advanced Test Reactor (ATR) and the High Flux Isotope Reactor (HFIR) based on current safety-analysis reports (SARs). Initial blister testing results from the RERTR-12 experiment capsules X1 and X2 showed a decrease in the blister-threshold temperatures. Blister threshold temperatures from this experiment ranged from 300 to 400°C. Selected plates from the AFIP-4 experiment, which was fabricated using a process similar to that used to fabricate the RERTR-12 experiment, also underwent blister testing to determine whether results would be similar. The measured blister-threshold temperatures from the AFIP-4 plates fell within the same blister-threshold temperature range measured in the RERTR-12 plates. Investigation of the cause of this decrease in blister-threshold temperature is being conducted under the guidance of Idaho National Laboratory PLN-4155, "Analysis of Low Blister Threshold Temperatures in the RERTR-12 and AFIP-4 Experiments," and is driven by hypotheses. The main focus of the investigation is in the following areas: (1) fabrication variables, (2) pre-irradiation characterization, (3) irradiation conditions, (4) post-irradiation examination, (5) additional blister testing, and (6) mechanical modeling. This report documents the preliminary results of this investigation. Several hypotheses can be dismissed as a result of this investigation. Two primary categories of causes remain. The most prominent theory, supported by the data, is that low blister-threshold temperature is the result of mechanical energy imparted on the samples during the fabrication process (hot and cold rolling) without adequate post processing (annealing). The mechanisms are not clearly understood and require further investigation, but can be divided into two categories: residual stress, and undesirable interaction-boundary and/or U-Mo microstructure change. A secondary theory that cannot be dismissed with the information that is currently available is that a change in the test conditions has resulted in a statistically significant downward shift of measured blister temperature. This report outlines the results of the forensic investigations conducted to date. The data and conclusions presented in this report are preliminary. Definitive cause and effect relationships will be established by future experimental programs.

  12. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
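
    The excess-time statistics described above can be checked empirically with a short simulation of the shot-noise process; the pulse parameters and the threshold below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise(T=200.0, dt=0.01, tau_d=1.0, rate=5.0, mean_amp=1.0):
    """Superposition of uncorrelated one-sided exponential pulses: Poisson
    arrival times, exponentially distributed amplitudes, each pulse decaying
    with time constant tau_d. The degree of pulse overlap (the intermittency
    parameter) is set by rate * tau_d."""
    t = np.arange(0.0, T, dt)
    arrivals = rng.uniform(0.0, T, rng.poisson(rate * T))
    amps = rng.exponential(mean_amp, arrivals.size)
    x = np.zeros_like(t)
    for tk, ak in zip(arrivals, amps):
        mask = t >= tk
        x[mask] += ak * np.exp(-(t[mask] - tk) / tau_d)
    return t, x

def excess_time_stats(t, x, threshold):
    """Empirical rate of upward level crossings and the average time spent
    above the threshold per crossing."""
    above = x > threshold
    upcrossings = np.count_nonzero(~above[:-1] & above[1:])
    total_time_above = above.sum() * (t[1] - t[0])
    crossing_rate = upcrossings / (t[-1] - t[0])
    mean_excess_time = total_time_above / upcrossings if upcrossings else np.nan
    return crossing_rate, mean_excess_time

t, x = shot_noise()
print(excess_time_stats(t, x, threshold=x.mean() + x.std()))
```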

  13. Real time observation system for monitoring environmental impact on marine ecosystems from oil drilling operations.

    PubMed

    Godø, Olav Rune; Klungsøyr, Jarle; Meier, Sonnich; Tenningen, Eirik; Purser, Autun; Thomsen, Laurenz

    2014-07-15

    Environmental awareness and technological advances have spurred development of new monitoring solutions for the petroleum industry. This paper presents experience from a monitoring program off Norway. To maintain operation within the limits of the government regulations, Statoil tested a new monitoring concept. Multisensor data were cabled to surface buoys and transmitted to land via wireless communication. The system collected information about distribution of the drilling wastes and the welfare of the corals in relation to threshold values. The project experienced a series of failures, but the backup monitoring provided information to fulfil the requirements of the permit. The experience demonstrated the need for real time monitoring and how such systems enhance understanding of impacts on marine organisms. Also, drilling operations may improve by taking environmental information into account. The paper proposes to standardize and streamline monitoring protocols to maintain comparability during all phases of the operation and between drill sites. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Migratory gold resistive shorts - Chemical aspects of a failure mechanism

    NASA Technical Reports Server (NTRS)

    Grunthaner, F. J.; Griswold, T. W.; Clendening, P. J.

    1975-01-01

    Integrated-circuit devices using the Ti/W/Au metal system are subject to failure mechanisms based on electrolytic corrosion. The migratory gold resistive short (MGRS) failure mode is one example of this mechanism and results in the formation of filamentary or dendritic deposits of gold between adjacent stripes on the IC chip. This reaction requires the presence of a sufficient amount of water, a bias voltage between adjacent stripes, and the activation of the cathodic (-) stripe. Gold ions are transported from anode to cathode through a film of moisture adsorbed on the surface of the chip; halide ions are probably involved in the transfer. Their presence is verified experimentally by X-ray photoelectron spectroscopy. Some of the chemical and electrostatic factors involved in the MGRS mechanism are discussed in this paper, including the questions of a threshold level of moisture and contamination.

  15. Statistical Mechanics Model of Solids with Defects

    NASA Astrophysics Data System (ADS)

    Kaufman, M.; Walters, P. A.; Ferrante, J.

    1997-03-01

    Previously (M. Kaufman, J. Ferrante, NASA Tech. Memor., 1996), we examined the phase diagram for the failure of a solid under isotropic expansion and compression as a function of stress and temperature with the "springs" modelled by the universal binding energy relation (UBER) (J. H. Rose, J. R. Smith, F. Guinea, J. Ferrante, Phys. Rev. B 29, 2963 (1984)). In the previous calculation we assumed that the "springs" failed independently and that the strain is uniform. In the present work, we have extended this statistical model of mechanical failure by allowing for correlations between "springs" and for thermal fluctuations in strains. The springs are now modelled in the harmonic approximation with a failure threshold energy E0, as an intermediate step in future studies to reinclude the full non-linear dependence of the UBER for modelling the interactions. We use the Migdal-Kadanoff renormalization-group method to determine the phase diagram of the model and to compute the free energy.

  16. Creep test observation of viscoelastic failure of edible fats

    NASA Astrophysics Data System (ADS)

    Vithanage, C. R.; Grimson, M. J.; Smith, B. G.; Wills, P. R.

    2011-03-01

    A rheological creep test was used to investigate the viscoelastic failure of five edible fats. Butter, spreadable blend and spread were selected as edible fats because they belong to three different groups according to the Codex Alimentarius. Creep curves were analysed according to the Burger model. Results were fitted to a Weibull distribution representing the strain-dependent lifetime of putative fibres in the material. The Weibull shape and scale (lifetime) parameters were estimated for each substance. A comparison of the rheometric measurements of edible fats demonstrated a clear difference between the three different groups. Taken together the results indicate that butter has a lower threshold for mechanical failure than spreadable blend and spread. The observed behaviour of edible fats can be interpreted using a model in which there are two types of bonds between fat crystals; primary bonds that are strong and break irreversibly, and secondary bonds, which are weaker but break and reform reversibly.
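
    As a hedged illustration of the Weibull fitting step described above, the shape and scale (lifetime) parameters can be estimated from a set of failure times with a standard maximum-likelihood fit; the failure times below are invented, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical creep failure times (in seconds) for one fat sample at a fixed
# stress level; illustrative values only.
failure_times = np.array([12.0, 18.5, 22.1, 27.4, 33.0, 41.2, 55.8, 70.3])

# Two-parameter Weibull fit (location fixed at zero) giving the shape parameter
# and the scale, i.e. the characteristic "lifetime", by maximum likelihood.
shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0)
print(f"Weibull shape = {shape:.2f}, scale (lifetime) = {scale:.1f} s")
```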

  17. Atrial Fibrillation Burden: Moving Beyond Atrial Fibrillation as a Binary Entity: A Scientific Statement From the American Heart Association.

    PubMed

    Chen, Lin Y; Chung, Mina K; Allen, Larry A; Ezekowitz, Michael; Furie, Karen L; McCabe, Pamela; Noseworthy, Peter A; Perez, Marco V; Turakhia, Mintu P

    2018-05-15

    Our understanding of the risk factors and complications of atrial fibrillation (AF) is based mostly on studies that have evaluated AF in a binary fashion (present or absent) and have not investigated AF burden. This scientific statement discusses the published literature and knowledge gaps related to methods of defining and measuring AF burden, the relationship of AF burden to cardiovascular and neurological outcomes, and the effect of lifestyle and risk factor modification on AF burden. Many studies examine outcomes by AF burden classified by AF type (paroxysmal versus nonparoxysmal); however, quantitatively, AF burden can be defined by longest duration, number of AF episodes during a monitoring period, and the proportion of time an individual is in AF during a monitoring period (expressed as a percentage). Current guidelines make identical recommendations for anticoagulation regardless of AF pattern or burden; however, a review of recent evidence suggests that higher AF burden is associated with higher risk of stroke. It is unclear whether the risk increases continuously or whether a threshold exists; if a threshold exists, it has not been defined. Higher burden of AF is also associated with higher prevalence and incidence of heart failure and higher risk of mortality, but not necessarily lower quality of life. A structured and comprehensive risk factor management program targeting risk factors, weight loss, and maintenance of a healthy weight appears to be effective in reducing AF burden. Despite this growing understanding of AF burden, research is needed into validation of definitions and measures of AF burden, determination of the threshold of AF burden that results in an increased risk of stroke that warrants anticoagulation, and discovery of the mechanisms underlying the weak temporal correlations of AF and stroke. Moreover, developments in monitoring technologies will likely change the landscape of long-term AF monitoring and could allow better definition of the significance of changes in AF burden over time. © 2018 American Heart Association, Inc.

  18. Continuous Seismic Threshold Monitoring

    DTIC Science & Technology

    1992-05-31

    Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic

  19. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or risk of failure probability. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  20. Monitoring health interventions – who's afraid of LQAS?

    PubMed Central

    Pezzoli, Lorenzo; Kim, Sung Hye

    2013-01-01

    Lot quality assurance sampling (LQAS) is used to evaluate health services. Subunits of a population (lots) are accepted or rejected according to the number of failures in a random sample (N) of a given lot. If failures are greater than the decision value (d), we reject the lot and recommend corrective actions in the lot (i.e. intervention area); if they are equal to or less than d, we accept it. We used LQAS to monitor coverage during the last 3 days of a meningitis vaccination campaign in Niger. We selected one health area (lot) per day reporting the lowest administrative coverage in the previous 2 days. In the sampling plan we considered: N to be small enough to allow us to evaluate one lot per day, deciding to sample 16 individuals from the selected villages of each health area, using probability proportionate to population size; thresholds and d to vary according to administrative coverage reported; α ≤5% (meaning that, if we had conducted the survey 100 times, we would have accepted the lot up to five times when real coverage was at an unacceptable level) and β ≤20% (meaning that we would have rejected the lot up to 20 times when real coverage was equal to or above the satisfactory level). We classified all three lots as having acceptable coverage. LQAS appeared to be a rapid, simple, and statistically sound method for in-process coverage assessment. We encourage colleagues in the field to consider using LQAS as a complement to other monitoring techniques such as house-to-house monitoring. PMID:24206650
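
    The operating characteristics of an LQAS plan like the one described above can be checked with the binomial distribution. The coverage levels and decision value below are illustrative choices (they happen to satisfy the stated α ≤5% and β ≤20% criteria), not the plan actually used in Niger.

```python
from scipy.stats import binom

def lqas_risks(n, d, p_satisfactory, p_unacceptable):
    """Operating characteristics of an LQAS plan that accepts a lot when the
    number of failures (uncovered individuals) in a sample of n is <= d.
    alpha: P(accept the lot | coverage is at the unacceptable level)
    beta:  P(reject the lot | coverage is at the satisfactory level)"""
    fail_good = 1.0 - p_satisfactory        # failure probability at satisfactory coverage
    fail_bad = 1.0 - p_unacceptable         # failure probability at unacceptable coverage
    alpha = binom.cdf(d, n, fail_bad)       # accept a bad lot
    beta = 1.0 - binom.cdf(d, n, fail_good) # reject a good lot
    return alpha, beta

# Illustrative plan: n = 16, accept if at most 2 failures, 92% vs 60% coverage.
print(lqas_risks(n=16, d=2, p_satisfactory=0.92, p_unacceptable=0.60))
```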

  1. Monitoring health interventions--who's afraid of LQAS?

    PubMed

    Pezzoli, Lorenzo; Kim, Sung Hye

    2013-11-08

    Lot quality assurance sampling (LQAS) is used to evaluate health services. Subunits of a population (lots) are accepted or rejected according to the number of failures in a random sample (N) of a given lot. If failures are greater than the decision value (d), we reject the lot and recommend corrective actions in the lot (i.e. intervention area); if they are equal to or less than d, we accept it. We used LQAS to monitor coverage during the last 3 days of a meningitis vaccination campaign in Niger. We selected one health area (lot) per day reporting the lowest administrative coverage in the previous 2 days. In the sampling plan we considered: N to be small enough to allow us to evaluate one lot per day, deciding to sample 16 individuals from the selected villages of each health area, using probability proportionate to population size; thresholds and d to vary according to administrative coverage reported; α ≤5% (meaning that, if we had conducted the survey 100 times, we would have accepted the lot up to five times when real coverage was at an unacceptable level) and β ≤20% (meaning that we would have rejected the lot up to 20 times when real coverage was equal to or above the satisfactory level). We classified all three lots as having acceptable coverage. LQAS appeared to be a rapid, simple, and statistically sound method for in-process coverage assessment. We encourage colleagues in the field to consider using LQAS as a complement to other monitoring techniques such as house-to-house monitoring.

  2. Effects of Dopamine Donor Pretreatment on Graft Survival after Kidney Transplantation: A Randomized Trial.

    PubMed

    Schnuelle, Peter; Schmitt, Wilhelm H; Weiss, Christel; Habicht, Antje; Renders, Lutz; Zeier, Martin; Drüschler, Felix; Heller, Katharina; Pisarski, Przemyslaw; Banas, Bernhard; Krämer, Bernhard K; Jung, Matthias; Lopau, Kai; Olbricht, Christoph J; Weihprecht, Horst; Schenker, Peter; De Fijter, Johan W; Yard, Benito A; Benck, Urs

    2017-03-07

    Donor dopamine improves initial graft function after kidney transplantation due to antioxidant properties. We investigated if a 4 µg/kg per minute continuous dopamine infusion administered after brain-death confirmation affects long-term graft survival and examined the exposure-response relationship with treatment duration. Five-year follow-up of 487 renal transplant patients from 60 European centers who had participated in the randomized, multicenter trial of dopamine donor pretreatment between 2004 and 2007 (ClinicalTrials.gov identifier: NCT00115115). Follow-up was complete in 99.2%. Graft survival was 72.6% versus 68.7% (P=0.34), and 83.3% versus 80.4% (P=0.42) after death-censoring in treatment and control arms according to trial assignment. Although infusion times varied substantially in the treatment arm (range 0-32.2 hours), duration of the dopamine infusion and all-cause graft failure exhibited an exposure-response relationship (hazard ratio, 0.96; 95% confidence interval [95% CI], 0.92 to 1.00, per hour). Cumulative frequency curves of graft survival and exposure time of the dopamine infusion indicated a maximum response rate at 7.10 hours (95% CI, 6.99 to 7.21), which almost coincided with the optimum infusion time for improvement of early graft function (7.05 hours; 95% CI, 6.92 to 7.18). Taking an infusion time of 7.1 hours as the threshold in subsequent graft survival analyses indicated a relevant benefit: overall, 81.5% versus 68.5%, P=0.03; and 90.3% versus 80.2%, P=0.04 after death-censoring. We failed to show a significant graft survival advantage on intention-to-treat. Dopamine infusion time was very short in a considerable number of donors assigned to treatment. Our finding of a significant, nonlinear exposure-response relationship disclosed a threshold value of the dopamine infusion time that may improve long-term kidney graft survival. Copyright © 2017 by the American Society of Nephrology.

  3. Mechanical and biomechanical analysis of a linear piston design for angular-velocity-based orthotic control.

    PubMed

    Lemaire, Edward D; Samadi, Reza; Goudreau, Louis; Kofman, Jonathan

    2013-01-01

    A linear piston hydraulic angular-velocity-based control knee joint was designed for people with knee-extensor weakness to engage knee-flexion resistance when knee-flexion angular velocity reaches a preset threshold, such as during a stumble, but to otherwise allow free knee motion. During mechanical testing at the lowest angular-velocity threshold, the device engaged within 2 degrees knee flexion and resisted moment loads of over 150 Nm. The device completed 400,000 loading cycles without mechanical failure or wear that would affect function. Gait patterns of nondisabled participants were similar to normal at walking speeds that produced below-threshold knee angular velocities. Fast walking speeds, employed purposely to attain the angular-velocity threshold and cause knee-flexion resistance, reduced maximum knee flexion by approximately 25 degrees but did not lead to unsafe gait patterns in foot ground clearance during swing. In knee collapse tests, the device successfully engaged knee-flexion resistance and stopped knee flexion with peak knee moments of up to 235.6 Nm. The outcomes from this study support the potential for the linear piston hydraulic knee joint in knee and knee-ankle-foot orthoses for people with lower-limb weakness.

  4. Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.

    PubMed

    d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K

    2015-12-01

    Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds were different by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min⁻¹·100 g⁻¹ were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
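
    One common way to derive an "optimal" threshold from a receiver operating characteristic analysis is to maximize Youden's J; the abstract does not state which criterion the authors used, so the sketch below, with invented voxel-level data, is only a generic illustration.

```python
import numpy as np

def optimal_cbf_threshold(infarcted, cbf):
    """Youden-style cutoff: call a voxel 'will infarct' when CBF < t and pick
    the t that maximises sensitivity + specificity - 1 (Youden's J)."""
    infarcted = np.asarray(infarcted, bool)
    cbf = np.asarray(cbf, float)
    best_t, best_j = np.nan, -np.inf
    for t in np.unique(cbf):
        pred = cbf < t                       # predicted to infarct
        sens = pred[infarcted].mean()        # true-positive rate
        spec = (~pred[~infarcted]).mean()    # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented voxel-level example: 1 = infarcted on follow-up, CBF in mL/min/100 g.
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
cbf    = np.array([4.0, 6.5, 8.0, 12.0, 15.0, 22.0, 30.0, 41.0, 55.0, 60.0])
print(optimal_cbf_threshold(labels, cbf))
```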

  5. Hazard function theory for nonstationary natural hazards

    DOE PAGES

    Read, Laura K.; Vogel, Richard M.

    2016-04-11

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. As a result, our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
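
    As a rough illustration of the ingredients involved (not the paper's actual derivation), standard peaks-over-threshold reasoning links a generalized Pareto excess model to a per-period exceedance probability and hence to the hazard of the failure time T:

```latex
% Illustrative sketch only. If the excess magnitude in period t follows a
% generalized Pareto model with scale sigma_t > 0 and shape xi_t, and
% "failure" means exceeding a design level x_d, the per-period exceedance
% probability is
\[
  p_t \;=\; \Pr(X_t > x_d) \;=\; \left(1 + \frac{\xi_t\, x_d}{\sigma_t}\right)^{-1/\xi_t},
\]
% and the discrete hazard of the failure time T (the first period with an
% exceedance) is h(t) = p_t; in the stationary case p_t = p, the hazard is
% constant and the average return period reduces to the familiar 1/p.
```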

  6. Integrating Near-Real Time Hydrologic-Response Monitoring and Modeling for Improved Assessments of Slope Stability Along the Coastal Bluffs of the Puget Sound Rail Corridor, Washington State

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.

    2015-12-01

    Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.

  7. Hazard function theory for nonstationary natural hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Read, Laura K.; Vogel, Richard M.

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. As a result, our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.

  8. Early prediction of thiopurine-induced hepatotoxicity in inflammatory bowel disease.

    PubMed

    Wong, D R; Coenen, M J H; Derijks, L J J; Vermeulen, S H; van Marrewijk, C J; Klungel, O H; Scheffer, H; Franke, B; Guchelaar, H-J; de Jong, D J; Engels, L G J B; Verbeek, A L M; Hooymans, P M

    2017-02-01

    Hepatotoxicity, gastrointestinal complaints and general malaise are common limiting adverse reactions of azathioprine and mercaptopurine in IBD patients, often related to high steady-state 6-methylmercaptopurine ribonucleotide (6-MMPR) metabolite concentrations. To determine the predictive value of 6-MMPR concentrations 1 week after treatment initiation (T1) for the development of these adverse reactions, especially hepatotoxicity, during the first 20 weeks of treatment. The cohort study consisted of the first 270 IBD patients starting thiopurine treatment as part of the Dutch randomised-controlled trial evaluating pre-treatment thiopurine S-methyltransferase genotype testing (ClinicalTrials.gov NCT00521950). Blood samples for metabolite assessment were collected at T1. Hepatotoxicity was defined by alanine aminotransferase elevations >2 times the upper normal limit or a ratio of alanine aminotransferase/alkaline phosphatase ≥5. Forty-seven patients (17%) presented hepatotoxicity during the first 20 weeks of thiopurine treatment. A T1 6-MMPR threshold of 3615 pmol/8×10⁸ erythrocytes was defined. Analysis of patients on stable thiopurine dose (n = 174) showed that those exceeding the 6-MMPR threshold were at increased risk of hepatotoxicity: OR = 3.8 (95% CI: 1.8-8.0). Age, male gender and BMI were significant determinants. A predictive algorithm was developed based on these determinants and the 6-MMPR threshold to assess hepatotoxicity risk [AUC = 0.83 (95% CI: 0.75-0.91)]. 6-MMPR concentrations above the threshold also correlated with gastrointestinal complaints: OR = 2.4 (95% CI: 1.4-4.3), and general malaise: OR = 2.0 (95% CI: 1.1-3.7). In more than 80% of patients, thiopurine-induced hepatotoxicity could be explained by elevated T1 6-MMPR concentrations and the independent risk factors age, gender and BMI, allowing personalised thiopurine treatment in IBD to prevent early failure. © 2016 John Wiley & Sons Ltd.

  9. Design and fabrication of prototype system for early warning of impending bearing failure

    NASA Technical Reports Server (NTRS)

    Meacher, J.; Chen, H. M.

    1974-01-01

    A test program was conducted with the objective of developing a method and equipment for on-line monitoring of installed ball bearings to detect deterioration or impending failure of the bearings. The program was directed at the spin-axis bearings of a control moment gyro. The bearings were tested at speeds of 6000 and 8000 rpm, thrust loads from 50 to 1000 pounds, with a wide range of lubrication conditions, with and without a simulated fatigue spall implanted in the inner race ball track. It was concluded that a bearing monitor system based on detection and analysis of modulations of a fault indicating bearing resonance frequency can provide a low threshold of sensitivity.

  10. Complications with the MICRA TPS Pacemaker System: Persistent Complete Heart Block and Late Capture Failure.

    PubMed

    Holm, Niels; Müller, Andreas; Zbinden, Rainer

    2017-04-01

    A Medtronic MICRA transcatheter pacing system (Medtronic, Minneapolis, MN, USA) was implanted in an 86-year-old patient with sick sinus syndrome and left bundle branch block after transfemoral aortic valve implantation. During implantation she developed a persistent complete heart block due to manipulation with the large-bore delivery catheter. Two weeks later, acute pacemaker dysfunction occurred due to massive increase of pacing threshold and impedance without obvious pacemaker dislocation or myocardial perforation. Recurrent capture failure was seen with pacing output set at 5 V/1.0 ms. Hence, microdislocation or fixation of the tines in the right ventricular trabeculae has to be assumed. © 2016 Wiley Periodicals, Inc.

  11. Failure modes in electroactive polymer thin films with elastic electrodes

    NASA Astrophysics Data System (ADS)

    De Tommasi, D.; Puglisi, G.; Zurlo, G.

    2014-02-01

    Based on an energy minimization approach, we analyse the elastic deformations of a thin electroactive polymer (EAP) film sandwiched by two elastic electrodes with non-negligible stiffness. We analytically show the existence of a critical value of the electrode voltage for which non-homogeneous solutions bifurcate from the homogeneous equilibrium state, leading to the pull-in phenomenon. This threshold strongly decreases the limit value proposed in the literature considering only homogeneous deformations. We explicitly discuss the influence of geometric and material parameters together with boundary conditions in the attainment of the different failure modes observed in EAP devices. In particular, we obtain the optimum values of these parameters leading to the maximum activation performances of the device.

  12. Timing discriminator using leading-edge extrapolation

    DOEpatents

    Gottschalk, B.

    1981-07-30

    A discriminator circuit to recover timing information from slow-rising pulses by means of an output trailing edge, a fixed time after the starting corner of the input pulse, which is nearly independent of risetime and threshold setting is described. This apparatus comprises means for comparing pulses with a threshold voltage; a capacitor to be charged at a certain rate when the input signal is one-third threshold voltage, and at a lower rate when the input signal is two-thirds threshold voltage; current-generating means for charging the capacitor; means for comparing the capacitor voltage with a bias voltage; a flip-flop to be set when the input pulse reaches threshold voltage and reset when capacitor voltage reaches the bias voltage; and a clamping means for discharging the capacitor when the input signal returns below one-third threshold voltage.

  13. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used for example in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
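
    For reference, one of the elementary bounds such a framework can build on is Markov's inequality; the report's own formulas combine bounds of this kind over many computation steps, and the form below is only the textbook statement.

```latex
% Textbook statement only. For a non-negative accumulated rounding error E_n
% at step n and a threshold \varepsilon > 0, Markov's inequality gives
\[
  \Pr\left(E_n \ge \varepsilon\right) \;\le\; \frac{\mathbb{E}[E_n]}{\varepsilon},
\]
% so a bound on the expected accumulated error at step n directly bounds the
% probability that the error at that step reaches the threshold.
```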

  14. Absorbed dose thresholds and absorbed dose rate limitations for studies of electron radiation effects on polyetherimides

    NASA Technical Reports Server (NTRS)

    Long, Edward R., Jr.; Long, Sheila Ann T.; Gray, Stephanie L.; Collins, William D.

    1989-01-01

    The threshold values of total absorbed dose for causing changes in tensile properties of a polyetherimide film and the limitations of the absorbed dose rate for accelerated-exposure evaluation of the effects of electron radiation in geosynchronous orbit were studied. Total absorbed doses from 1 kGy to 100 MGy and absorbed dose rates from 0.01 MGy/hr to 100 MGy/hr were investigated, where 1 Gy equals 100 rads. Total doses less than 2.5 MGy did not significantly change the tensile properties of the film whereas doses higher than 2.5 MGy significantly reduced elongation-to-failure. There was no measurable effect of the dose rate on the tensile properties for accelerated electron exposures.

  15. Aerospect operations criteria for Mercury thresholds

    NASA Technical Reports Server (NTRS)

    Katz, S.

    1979-01-01

    The hazards anticipated from a large scale mercury spill during a possible failure in the preflight and early flight stages of the Space Shuttle were studied. Toxicity thresholds were investigated as well as other consequences of mercury interacting with the environment. Three sites of mercury spill were investigated: land, water, and atmosphere. A laboratory study of interactions between mercury vapor and ozone in a low pressure, high ultraviolet radiation environment approximated the conditions of a mercury vapor release in the ozone layer region of the stratosphere. Clear evidence of an interaction leading to the destruction of ozone by conversion to oxygen was obtained. The impact of a spill on the Earth's environment and methods of early detection of a developing hazard were of primary concern in the study.

  16. Pool desiccation and developmental thresholds in the common frog, Rana temporaria.

    PubMed

    Lind, Martin I; Persbo, Frida; Johansson, Frank

    2008-05-07

    The developmental threshold is the minimum size or condition that a developing organism must have reached in order for a life-history transition to occur. Although developmental thresholds have been observed for many organisms, inter-population variation among natural populations has not been examined. Since isolated populations can be subjected to strong divergent selection, population divergence in developmental thresholds can be predicted if environmental conditions favour fast or slow developmental time in different populations. Amphibian metamorphosis is a well-studied life-history transition, and using a common garden approach we compared the development time and the developmental threshold of metamorphosis in four island populations of the common frog Rana temporaria: two populations originating from islands with only temporary breeding pools and two from islands with permanent pools. As predicted, tadpoles from time-constrained temporary pools had a genetically shorter development time than those from permanent pools. Furthermore, the variation in development time among females from temporary pools was low, consistent with the action of selection on rapid development in this environment. However, there were no clear differences in the developmental thresholds between the populations, indicating that the main response to life in a temporary pool is to shorten the development time.

  17. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, the previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustments of thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine calculations for STTTA. Finally, the STTTA reliability is determined by comparing its results with those of the Mariani method, referenced as the timing analysis module (TAM), and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtain high reliability when compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
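
    A minimal sketch of the self-tuning idea, in which thresholds are re-derived from the recent range of the GCF signal, is shown below; the window length and the 25/50/75% placements are assumptions for illustration and are not the published STTTA update rules.

```python
import numpy as np

class SelfTuningThresholds:
    """Sketch of a self-tuning threshold detector for one FSR channel.
    Thresholds are re-derived from the recent GCF range; the fractions used
    here (25/50/75% of the range) are illustrative, not the published values."""

    def __init__(self, window=200):
        self.window = window
        self.history = []
        self.low = self.mid = self.high = 0.0

    def update(self, gcf):
        self.history.append(gcf)
        self.history = self.history[-self.window:]   # keep only recent samples
        lo, hi = min(self.history), max(self.history)
        span = hi - lo
        self.low = lo + 0.25 * span    # auxiliary thresholds (roles simplified here)
        self.mid = lo + 0.50 * span
        self.high = lo + 0.75 * span   # main on/off-ground threshold
        return gcf > self.high         # True = foot on ground

# Illustrative stream of ground contact forces from one FSR.
det = SelfTuningThresholds()
signal = 50 + 40 * np.clip(np.sin(np.linspace(0, 20, 500)), 0, None)
on_ground = [det.update(g) for g in signal]
print(sum(on_ground), "of", len(on_ground), "samples classified as on-ground")
```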

  18. Modeling cascading failures with the crisis of trust in social networks

    NASA Astrophysics Data System (ADS)

    Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo

    2015-10-01

    In social networks, some friends often post or disseminate malicious information, such as advertising messages, informal overseas purchasing messages, illegal messages, or rumors. Too much malicious information may cause a feeling of intense annoyance. When the feeling exceeds a certain threshold, it will lead social network users to distrust these friends, which we call the crisis of trust. The crisis of trust in social networks has already become a universal concern and an urgent unsolved problem. As a result of the crisis of trust, users will cut off their relationships with some of their untrustworthy friends. Once a few of these relationships are made unavailable, it is likely that other friends will decline trust, and a large portion of the social network will be influenced. The phenomenon in which the unavailability of a few relationships will trigger the failure of successive relationships is known as cascading failure dynamics. To our best knowledge, no one has formally proposed cascading failures dynamics with the crisis of trust in social networks. In this paper, we address this potential issue, quantify the trust between two users based on user similarity, and model the minimum tolerance with a nonlinear equation. Furthermore, we construct the processes of cascading failures dynamics by considering the unique features of social networks. Based on real social network datasets (Sina Weibo, Facebook and Twitter), we adopt two attack strategies (the highest trust attack (HT) and the lowest trust attack (LT)) to evaluate the proposed dynamics and to further analyze the changes of the topology, connectivity, cascading time and cascade effect under the above attacks. We numerically find that the sparse and inhomogeneous network structure in our cascading model can better improve the robustness of social networks than the dense and homogeneous structure. However, the network structure that seems like ripples is more vulnerable than the other two network structures. Our findings will be useful in further guiding the construction of social networks to effectively avoid the cascading propagation with the crisis of trust. Some research results can help social network service providers to avoid severe cascading failures.

  19. Dynamic Response of Monolithic and Laminate/Particulate Reactive Mixtures

    NASA Astrophysics Data System (ADS)

    Wei, Chung-Ting

    Two dynamic compression methods were applied to a monolithic metal and reactive mixtures to investigate their responses: (a) Dynamic experiments using a split Hopkinson pressure bar were applied to reactive mixtures densified by explosive consolidation in order to establish their mechanical response and failure mechanisms. (b) Laser compression and release, which can impart high stresses, up to hundreds of GPa, in times of nanoseconds and fractions thereof, was applied to establish the spalling strength of vanadium and the reaction threshold for Ni/Al laminates. The spallation and fragmentation exhibited by recovered mono- and poly-crystalline vanadium prove that the laser intensities and crystal structure play important roles in determining spall strength, fragmentation, and microstructural processes. Densified reactive mixtures with different microstructures (Ni, Mo, W, Nb and Ta with Al) were subjected to quasi-static and dynamic strain rates. Two distinct failure mechanisms, axial splitting and shear failure, were observed in the recovered specimens. Axial splitting occurred when the bonding between the powders was poor; shear failure was primarily associated with extensive deformation of continuous Ta and Nb phases. Finite element simulations provided valuable information in interpreting the experimental results and predicting failure mechanisms akin to those observed. Ni/Al laminates were subjected to laser compression. The strain rates varied from 10⁵ to 10⁸ s⁻¹, and the initial stress varied from 30 to ~300 GPa. It is found that the lamellar thickness and the interlaminar bonding strength are the two critical factors in determining mechanical failure. The intermetallic reactions leading to Ni3Al and NiAl were produced by the laser energies and laser pulse durations in direct laser shock experiments. Laser-driven compression was also applied to study the high temperature synthesis in nano-scale Ni/Al laminates with a bilayer thickness of 54 nm. Intermetallic phases, NiAl and NiAl3, were found on the plasma-stagnated laminates. However, the self-propagating high temperature synthesis (SHS) did not self-sustain in the micro-scale laminate because of the short duration of the pulse.

  20. Self-adjusting threshold mechanism for pixel detectors

    NASA Astrophysics Data System (ADS)

    Heim, Timon; Garcia-Sciveres, Maurice

    2017-09-01

    Readout chips of hybrid pixel detectors use a low power amplifier and threshold discrimination to process charge deposited in semiconductor sensors. Due to transistor mismatch, each pixel circuit needs to be calibrated individually to achieve response uniformity. Traditionally this is addressed by programmable threshold trimming in each pixel, but this requires robustness against radiation effects, temperature, and time. In this paper a self-adjusting threshold mechanism is presented, which corrects the threshold for both spatial inequality and time variation and maintains a constant response. It exploits the electrical noise as a relative measure for the threshold and automatically adjusts the threshold of each pixel to always achieve a uniform frequency of noise hits. A digital implementation of the method in the form of an up/down counter and combinatorial logic filter is presented. The behavior of this circuit has been simulated to evaluate its performance and compare it to traditional calibration results. The simulation results show that this mechanism can perform equally well, but eliminates instability over time and is immune to single event upsets.
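
    A toy software model of the up/down-counter idea is sketched below; the target noise-hit rate, counter limits, and step size are illustrative assumptions rather than parameters of the actual readout chip.

```python
import random

def tune_pixel_threshold(noise_sigma, target_rate=0.01, steps=20000,
                         start_threshold=5.0, step=0.05):
    """Toy model of a per-pixel self-adjusting threshold: in each time slice a
    noise hit occurs if Gaussian noise exceeds the current threshold, and an
    up/down counter nudges the threshold so the noise-hit frequency settles at
    target_rate. All numbers are illustrative."""
    threshold = start_threshold
    counter = 0.0
    for _ in range(steps):
        noise_hit = random.gauss(0.0, noise_sigma) > threshold
        # count up on a hit, down (slowly) otherwise; equilibrium at target_rate
        counter += 1.0 if noise_hit else -target_rate / (1.0 - target_rate)
        if counter >= 1.0:        # too many noise hits -> raise threshold
            threshold += step
            counter = 0.0
        elif counter <= -1.0:     # too few noise hits -> lower threshold
            threshold -= step
            counter = 0.0
    return threshold

# Pixels with different noise converge to thresholds giving a similar hit rate.
for sigma in (1.0, 1.5, 2.0):
    print(sigma, round(tune_pixel_threshold(sigma), 2))
```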

  1. Development of High Level Electrical Stress Failure Threshold and Prediction Model for Small Scale Junction Integrated Circuits

    DTIC Science & Technology

    1978-09-01

    AWACS EMP Guidelines presents two different models to predict the damage power of the device and the circuit damage EMP voltage ( VEMP ). Neither of...calculated as K P~ I V BD 6. The damage EMP voltage ( VEMP ) is calculated KZ EMP +IZ =D +BD VBD1F 7. The damage EMP voltage is calculated for collector

  2. Aspiration of capsule endoscope and successful bronchoscopic extraction.

    PubMed

    Girdhar, Ankur; Usman, Faisal; Bajwa, Abubakr

    2012-10-01

    Capsule endoscopy is a novel tool for the diagnosis of small intestinal disorders. Recently, a new complication of the procedure in the form of the capsule's aspiration into the lungs has been reported. The aspiration of capsule endoscope can lead to a variety of complications including respiratory failure. A low threshold to suspect this complication and urgent bronchoscopic extraction in appropriate patients can prevent serious sequelae.

  3. Determinants of hepatotoxicity after repeated supratherapeutic paracetamol ingestion: systematic review of reported cases.

    PubMed

    Acheampong, Paul; Thomas, Simon H L

    2016-10-01

    To evaluate the role of reported daily dose, age and other risk factors, and to assess the value of quantifying serum transaminase activity and paracetamol (acetaminophen) concentration at initial assessment for identifying patients at risk of hepatotoxicity following repeated supratherapeutic paracetamol ingestion (RSPI). Systematic literature review with collation and analysis of individual-level data from reported cases of RSPI associated with liver damage. In 199 cases meeting the selection criteria, severe liver damage (ALT/AST ≥1000 IU l⁻¹, liver failure or death) was reported in 186 (93%) cases including 77/78 (99%) children aged ≤6 years. Liver failure occurred in 127 (64%) cases; of these 49 (39%) died. Maximum ingested daily paracetamol doses were above UK recommendations in 143 (72%) patients. US-Australasian thresholds for repeated supratherapeutic ingestions requiring intervention were not met in 71 (36%) cases; of these 35 (49%) developed liver failure and 10 (14%) died. No cases developing liver damage had paracetamol concentration < 20 mg l⁻¹ and a normal ALT/AST on initial presentation or when RSPI was first suspected, but both of these values were only available for 79 (40%) cases. Severe liver damage is reported after RSPI in adults and children, sometimes involving reported doses below current thresholds for intervention. Paracetamol concentrations <20 mg l⁻¹ with normal serum ALT/AST activity on initial assessment suggest a low risk of subsequent liver damage. These findings are, however, limited by low patient numbers, publication bias and the accuracy of the histories in reported cases. © 2016 The British Pharmacological Society.

  4. A novel approach to estimation of the time to biomarker threshold: applications to HIV.

    PubMed

    Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc

    2016-11-01

    In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
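
    The closed-form expression is not reproduced in the abstract, but the quantity it targets can be illustrated with a simple Monte Carlo sketch for one subject with a linear trajectory and Gaussian measurement error; the trajectory, visit schedule, and threshold below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_time_to_threshold(intercept, slope, resid_sd, threshold=350,
                               visit_times=np.arange(0, 10.5, 0.5), n_sim=20000):
    """Monte Carlo estimate of the expected time at which two consecutive
    measured CD4 counts fall below the threshold, for one subject whose true
    trajectory is intercept + slope * t. Returns the mean event time over
    simulations in which the event occurs within the visit window."""
    times = np.asarray(visit_times, float)
    mean = intercept + slope * times
    hit_times = []
    for _ in range(n_sim):
        obs = mean + rng.normal(0.0, resid_sd, size=times.size)  # measurement error
        below = obs < threshold
        consec = below[:-1] & below[1:]                          # two low counts in a row
        if consec.any():
            hit_times.append(times[np.argmax(consec) + 1])       # time of the second low count
    return float(np.mean(hit_times)) if hit_times else float("nan")

# Illustrative subject: CD4 near 600 cells/uL at baseline, declining 40 per year,
# residual SD of 80, with a threshold of 350 cells/uL.
print(expected_time_to_threshold(600, -40, 80))
```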

  5. Threshold concepts as barriers to understanding climate science

    NASA Astrophysics Data System (ADS)

    Walton, P.

    2013-12-01

    Whilst the scientific case for current climate change is compelling, the consequences of climate change have largely failed to permeate through to individuals. This lack of public awareness of the science and the potential impacts could be considered a key obstacle to action. The possible reasons for such limited success centre on the issue that climate change is a complex subject, and that a wide ranging academic, political and social research literature on the science and wider implications of climate change has failed to communicate the key issues in an accessible way. These failures to adequately communicate both the science and the social science of climate change at a number of levels result in 'communication gaps' that act as fundamental barriers to both understanding and engagement with the issue. Meyer and Land (2003) suggest that learners can find certain ideas and concepts within a discipline difficult to understand and these act as a barrier to deeper understanding of a subject. To move beyond these threshold concepts, they suggest that the expert needs to support the learner through a range of learning experiences that allows the development of learning strategies particular to the individual. Meyer and Land's research into these threshold concepts has been situated within Economics, but has been suggested to be more widely applicable, though there has been no attempt to either define or evaluate threshold concepts in climate change science. By identifying whether common threshold concepts exist specifically in climate science for cohorts of either formal or informal learners, scientists will be better able to support the public in understanding these concepts by changing how the knowledge is communicated to help overcome these barriers to learning. This paper reports on the findings of a study that examined the role of threshold concepts as barriers to understanding climate science in a UK University and considers its implications for wider scientific engagement with the public to develop climate literacy. The analysis of the journals of 3 successive cohorts of students who followed the same degree module identified that threshold concepts do exist within the field, such as those related to: role of ocean circulation, use of proxy indicators, forcing factors and feedback mechanisms. Once identified, the study looked at possible strategies to overcome these barriers to support student climate literacy. It concluded that the use of threshold concepts could be problematic when trying to improve climate literacy, as each individual has their own concepts they find 'troublesome' that do not necessarily relate to others. For scientists this presents the difficulty of how to develop a strategy that supports the individual that is cost and time effective. However, the study identifies that eLearning can be used effectively to help people understand troublesome knowledge.

  6. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly, and these sources often dominate component-level risk. While the consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to inform the determination of whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system-level test data or operational data. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
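
    As a reminder of how predicted rates are typically converted into a probability of occurrence, a minimal sketch of the standard constant-failure-rate arithmetic follows. This is not the paper's method, and the numbers are illustrative; by construction it captures none of the system-integration risks the abstract warns about.

```python
# Hedged illustration of the arithmetic only (not the paper's method): converting a
# predicted constant failure rate into a probability of occurrence over a mission,
# assuming an exponential time-to-failure model.
import math

def prob_of_failure(failure_rate_per_hour: float, mission_hours: float) -> float:
    """P(failure within mission) = 1 - exp(-lambda * t) under a constant-rate model."""
    return 1.0 - math.exp(-failure_rate_per_hour * mission_hours)

# A handbook-predicted rate of 1e-6 failures/hour over a 10-hour mission:
print(prob_of_failure(1e-6, 10.0))   # ~1e-5; manufacturing and assembly risks are NOT captured
```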

  7. Subject specific finite element modeling of periprosthetic femoral fracture using element deactivation to simulate bone failure.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Walter, William L; Appleyard, Richard; Shi, Angela; Li, Qing; Ruys, Andrew J

    2015-06-01

    Subject-specific finite element (FE) modeling methodology could predict peri-prosthetic femoral fracture (PFF) for cementless hip arthroplasty in the early postoperative period. This study develops a methodology for subject-specific finite element modeling by using the element deactivation technique to simulate bone failure, validates it against experimental testing, and thereby predicts peri-prosthetic femoral fracture in the early postoperative period. Material assignments for biphasic and triphasic models were undertaken. Failure modeling with the element deactivation feature available in ABAQUS 6.9 was used to simulate crack initiation and propagation in the bony tissue based upon a threshold of fracture strain. The crack mode for the biphasic models was very similar to the experimental crack mode, with a similar shape and path of the crack. The fracture load is sensitive to the friction coefficient at the implant-bone interface. The development of a novel technique to simulate bone failure by element deactivation of subject-specific finite element models could aid prediction of fracture load in addition to fracture risk characterization for PFF. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presented a fault diagnosis method for key components of satellite, called Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters of state estimation. Then, using the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (R_X) and healthy residual value (R_L) of LIBs based on the state estimation of MSET, and, through these residual values (R_X and R_L), we detected the anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs and, according to its results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detective method (TDM). PMID:24587703
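
    A minimal sketch of the SPRT stage only, assuming Gaussian residuals with a known healthy variance and a hypothesized faulty mean shift; the MSET state-estimation stage and the authors' actual parameter choices are not reproduced here.

```python
# Minimal sketch of SPRT anomaly detection on residuals (not the authors'
# implementation): healthy residuals ~ N(0, sigma^2), faulty residuals ~ N(m, sigma^2).
import numpy as np

def sprt(residuals, sigma, m, alpha=0.01, beta=0.01):
    """Return 'faulty', 'healthy', or 'undecided' for a residual sequence."""
    upper = np.log((1 - beta) / alpha)      # accept the fault hypothesis above this
    lower = np.log(beta / (1 - alpha))      # accept the healthy hypothesis below this
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for N(m, sigma) versus N(0, sigma)
        llr += (m / sigma**2) * (r - m / 2.0)
        if llr >= upper:
            return "faulty"
        if llr <= lower:
            return "healthy"
    return "undecided"

rng = np.random.default_rng(1)
print(sprt(rng.normal(0.0, 0.1, 200), sigma=0.1, m=0.05))   # healthy-looking telemetry
print(sprt(rng.normal(0.08, 0.1, 200), sigma=0.1, m=0.05))  # drifted telemetry
```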

  9. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems and studying the underlying network model, their interactions and relationships, and how cascading failures occur in the interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Besides, based on percolation theory, we also study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse. We also determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
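
    A toy percolation experiment in the spirit of the described model shows the iterative cascade and the collapse beyond a critical fraction of failed nodes. The one-to-one dependency between two random graphs, and all sizes and degrees, are our simplifications, not the authors' smart grid topology or parameters.

```python
# Toy cascading-failure experiment on two interdependent random graphs: a node
# survives only if it lies in the giant component of its own network AND its
# one-to-one partner node in the other network also survives.
import networkx as nx
import random

def giant_component_nodes(g, alive):
    sub = g.subgraph(alive)
    if sub.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(sub), key=len))

def cascade(n=1000, avg_deg=4.0, attack_fraction=0.4, seed=0):
    random.seed(seed)
    a = nx.gnp_random_graph(n, avg_deg / n, seed=seed)
    b = nx.gnp_random_graph(n, avg_deg / n, seed=seed + 1)
    alive = set(random.sample(range(n), int(n * (1 - attack_fraction))))  # random attack
    while True:
        alive_a = giant_component_nodes(a, alive)     # functioning part of network A
        alive_b = giant_component_nodes(b, alive)     # functioning part of network B
        new_alive = alive_a & alive_b                 # dependency: must function in both
        if new_alive == alive:
            return len(alive) / n                     # mutually connected giant component
        alive = new_alive

for p_fail in (0.2, 0.4, 0.6):
    print(p_fail, cascade(attack_fraction=p_fail))    # collapse appears beyond a threshold
```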

  10. Have the temperature time series a structural change after 1998?

    NASA Astrophysics Data System (ADS)

    Werner, Rolf; Valev, Dimitare; Danov, Dimitar

    2012-07-01

    The global and hemispheric temperature GISS and HadCRUT3 time series were analysed for structural changes. We postulate that the piecewise temperature function is continuous in time. The slopes are calculated for a sequence of segments delimited by time thresholds. We used a standard method, restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds, and the thresholds are searched continuously within specified time intervals. The F-statistic is used to obtain the time points of the structural changes.
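
    A sketch of the underlying idea for a single threshold: the dummy-variable formulation with a continuity constraint reduces to a hinge term, the breakpoint is found by grid search, and the fit is compared to a single line with an F-statistic. The F-test below ignores the search over the threshold, so it is illustrative rather than the authors' exact test, and the data are synthetic.

```python
# Sketch of a one-threshold structural-change fit with a continuity constraint:
# y = b0 + b1*t + b2*max(t - tau, 0), with tau chosen by grid search.
import numpy as np
from scipy import stats

def fit_one_break(t, y, tau_grid):
    best = None
    for tau in tau_grid:
        X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[1]:
            best = (tau, rss, beta)
    tau, rss1, beta = best
    # F-test against the no-break model (a single straight line)
    X0 = np.column_stack([np.ones_like(t), t])
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    rss0 = float(np.sum((y - X0 @ b0) ** 2))
    df1, df2 = 2, len(t) - 4          # extra parameters: slope change and breakpoint
    F = ((rss0 - rss1) / df1) / (rss1 / df2)
    p = 1.0 - stats.f.cdf(F, df1, df2)   # approximate: ignores the breakpoint search
    return tau, beta, F, p

t = np.arange(1950.0, 2012.0)
rng = np.random.default_rng(2)
y = 0.005 * (t - 1950) + 0.02 * np.clip(t - 1998, 0, None) + rng.normal(0, 0.05, t.size)
print(fit_one_break(t, y, tau_grid=np.arange(1980.0, 2005.0, 0.5)))
```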

  11. Deep Brain Stimulation of the Subthalamic Nucleus Does Not Affect the Decrease of Decision Threshold during the Choice Process When There Is No Conflict, Time Pressure, or Reward.

    PubMed

    Leimbach, Friederike; Georgiev, Dejan; Litvak, Vladimir; Antoniades, Chrystalina; Limousin, Patricia; Jahanshahi, Marjan; Bogacz, Rafal

    2018-06-01

    During a decision process, the evidence supporting alternative options is integrated over time, and the choice is made when the accumulated evidence for one of the options reaches a decision threshold. Humans and animals have an ability to control the decision threshold, that is, the amount of evidence that needs to be gathered to commit to a choice, and it has been proposed that the subthalamic nucleus (STN) is important for this control. Recent behavioral and neurophysiological data suggest that, in some circumstances, the decision threshold decreases with time during choice trials, allowing overcoming of indecision during difficult choices. Here we asked whether this within-trial decrease of the decision threshold is mediated by the STN and if it is affected by disrupting information processing in the STN through deep brain stimulation (DBS). We assessed 13 patients with Parkinson disease receiving bilateral STN DBS six or more months after the surgery, 11 age-matched controls, and 12 young healthy controls. All participants completed a series of decision trials, in which the evidence was presented in discrete time points, which allowed more direct estimation of the decision threshold. The participants differed widely in the slope of their decision threshold, ranging from constant threshold within a trial to steeply decreasing. However, the slope of the decision threshold did not depend on whether STN DBS was switched on or off and did not differ between the patients and controls. Furthermore, there was no difference in accuracy and RT between the patients in the on and off stimulation conditions and healthy controls. Previous studies that have reported modulation of the decision threshold by STN DBS or unilateral subthalamotomy in Parkinson disease have involved either fast decision-making under conflict or time pressure or in anticipation of high reward. Our findings suggest that, in the absence of reward, decision conflict, or time pressure for decision-making, the STN does not play a critical role in modulating the within-trial decrease of decision thresholds during the choice process.

  12. Quantifying ecological thresholds from response surfaces

    Treesearch

    Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh

    2011-01-01

    Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...

  13. Empyema and effusion: outcome of image-guided small-bore catheter drainage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, A. N.; Leong, S.; Logan, P. M.

    Empyema and complicated pleural effusion represent common medical problems. Current treatment options are multiple. The purpose of this study was to assess the outcome of image-guided, small-bore catheter drainage of empyema and effusion. We evaluated 93 small-bore catheters in 82 patients with pleural effusion (n = 30) or empyema (n = 52), over a 2-year period. Image guidance was with ultrasound (US; n = 56) and CT (n = 37). All patients were followed clinically, with catheter dwell times, catheter outcome, pleural fluid outcome, reinsertion rates, and need for urokinase or surgery recorded. Ninety-three small-bore chest drains (mean = 10.2 Fr; range, 8.2-12.2 Fr) were inserted, with an average dwell time of 7.81 days for empyemas and 7.14 days for effusions (p > 0.05). Elective removal rates (73% empyema vs 86% effusions) and dislodgement rates (12% empyema vs 13% effusions) were similar for both groups. Eight percent of catheters became blocked and 17% necessitated reinsertion in empyemas, with no catheters blocked or requiring reinsertion in effusions (p < 0.05). Thirty-two patients (51%) required urokinase in the empyema group, versus 2 patients (6%) in the effusion group (p < 0.05). All treatment failures, requiring surgery, occurred in the empyema group (19%; n = 12; p < 0.05). In conclusion, noninfected pleural collections are adequately treated with small-bore catheters; however, empyemas have a failure rate of 19%. The threshold for using urokinase and larger-bore catheters should be low in empyema.

  14. Empyema and effusion: outcome of image-guided small-bore catheter drainage.

    PubMed

    Keeling, A N; Leong, S; Logan, P M; Lee, M J

    2008-01-01

    Empyema and complicated pleural effusion represent common medical problems. Current treatment options are multiple. The purpose of this study was to assess the outcome of image-guided, small-bore catheter drainage of empyema and effusion. We evaluated 93 small-bore catheters in 82 patients with pleural effusion (n = 30) or empyema (n = 52), over a 2-year period. Image guidance was with ultrasound (US; n = 56) and CT (n = 37). All patients were followed clinically, with catheter dwell times, catheter outcome, pleural fluid outcome, reinsertion rates, and need for urokinase or surgery recorded. Ninety-three small-bore chest drains (mean = 10.2 Fr; range, 8.2-12.2 Fr) were inserted, with an average dwell time of 7.81 days for empyemas and 7.14 days for effusions (p > 0.05). Elective removal rates (73% empyema vs 86% effusions) and dislodgement rates (12% empyema vs 13% effusions) were similar for both groups. Eight percent of catheters became blocked and 17% necessitated reinsertion in empyemas, with no catheters blocked or requiring reinsertion in effusions (p < 0.05). Thirty-two patients (51%) required urokinase in the empyema group, versus 2 patients (6%) in the effusion group (p < 0.05). All treatment failures, requiring surgery, occurred in the empyema group (19%; n = 12; p < 0.05). In conclusion, noninfected pleural collections are adequately treated with small-bore catheters; however, empyemas have a failure rate of 19%. The threshold for using urokinase and larger-bore catheters should be low in empyema.

  15. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
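
    A rough sketch of the scheme as described in the abstract (not the patented implementation): the trigger offsets follow periodic re-measurements of the quiescent level, and a qualification counter requires several consecutive exceedances before recording begins. The offsets, re-measurement interval and qualification width below are arbitrary.

```python
# Illustrative drift-compensated trigger with a qualification width counter.
import numpy as np

class DriftCompensatedTrigger:
    def __init__(self, upper_offset, lower_offset, recheck_every=1000, qual_width=3):
        self.upper_offset = upper_offset      # trigger offsets relative to the quiescent level
        self.lower_offset = lower_offset
        self.recheck_every = recheck_every    # samples between baseline re-measurements
        self.qual_width = qual_width          # consecutive exceedances required
        self.baseline = 0.0
        self._count = 0
        self._recent = []

    def update(self, sample, index):
        self._recent.append(sample)
        # Periodically re-measure the quiescent level and shift the thresholds with it.
        if index % self.recheck_every == 0 and index > 0:
            self.baseline = float(np.median(self._recent[-self.recheck_every:]))
        exceeded = (sample > self.baseline + self.upper_offset or
                    sample < self.baseline - self.lower_offset)
        self._count = self._count + 1 if exceeded else 0
        return self._count >= self.qual_width   # True -> start recording

rng = np.random.default_rng(3)
signal = 0.0001 * np.arange(5000) + rng.normal(0, 0.05, 5000)   # slowly drifting quiescent level
signal[4000:4005] += 1.0                                        # genuine event
trig = DriftCompensatedTrigger(upper_offset=0.5, lower_offset=0.5)
hits = [i for i, x in enumerate(signal) if trig.update(x, i)]
print(hits[:3])   # triggers only on the event, despite the baseline drift
```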

  16. Prognostic value of changes in galectin-3 levels over time in patients with heart failure: data from CORONA and COACH.

    PubMed

    van der Velde, A Rogier; Gullestad, Lars; Ueland, Thor; Aukrust, Pål; Guo, Yu; Adourian, Aram; Muntendam, Pieter; van Veldhuisen, Dirk J; de Boer, Rudolf A

    2013-03-01

    In several cross-sectional analyses, circulating baseline levels of galectin-3, a protein involved in myocardial fibrosis and remodeling, have been associated with increased risk for morbidity and mortality in patients with heart failure (HF). The importance and clinical use of repeated measurements of galectin-3 have not yet been reported. Plasma galectin-3 was measured at baseline and at 3 months in patients enrolled in the Controlled Rosuvastatin Multinational Trial in Heart Failure (CORONA) trial (n=1329), and at baseline and at 6 months in patients enrolled in the Coordinating Study Evaluating Outcomes of Advising and Counseling in Heart Failure (COACH) trial (n=324). Patient results were analyzed by categorical and percentage changes in galectin-3 level. A threshold value of 17.8 ng/mL or 15% change from baseline was used to categorize patients. Increasing galectin-3 levels over time, from a low to high galectin-3 category, were associated with significantly more HF hospitalization and mortality compared with stable or decreasing galectin-3 levels (hazard ratio in CORONA, 1.60; 95% confidence interval, 1.13-2.25; P=0.007; hazard ratio in COACH, 2.38; 95% confidence interval, 1.02-5.55; P=0.046). In addition, patients whose galectin-3 increased by >15% between measurements had a 50% higher relative hazard of an adverse event than those whose galectin-3 stayed within ±15% of the baseline value, independent of age, sex, diabetes mellitus, left ventricular ejection fraction, renal function, medication (β-blocker, angiotensin converting enzyme inhibitor, and angiotensin receptor blocker), and N-terminal pro-brain natriuretic peptide (hazard ratio in CORONA, 1.50; 95% confidence interval, 1.17-1.92; P=0.001). The impact of changing galectin-3 levels on other secondary end points was comparable. In 2 large cohorts of patients with chronic and acute decompensated HF, repeated measurements of galectin-3 level provided important and significant prognostic value in identifying patients with HF at elevated risk for subsequent HF morbidity and mortality.

  17. Effect of risk aversion on prioritizing conservation projects.

    PubMed

    Tulloch, Ayesha I T; Maloney, Richard F; Joseph, Liana N; Bennett, Joseph R; Di Fonzo, Martina M I; Probert, William J M; O'Connor, Shaun M; Densem, Jodie P; Possingham, Hugh P

    2015-04-01

    Conservation outcomes are uncertain. Agencies making decisions about what threat mitigation actions to take to save which species frequently face the dilemma of whether to invest in actions with high probability of success and guaranteed benefits or to choose projects with a greater risk of failure that might provide higher benefits if they succeed. The answer to this dilemma lies in the decision maker's aversion to risk, that is, their unwillingness to accept uncertain outcomes. Little guidance exists on how risk preferences affect conservation investment priorities. Using a prioritization approach based on cost effectiveness, we compared 2 approaches: a conservative probability threshold approach that excludes investment in projects with a risk of management failure greater than a fixed level, and a variance-discounting heuristic used in economics that explicitly accounts for risk tolerance and the probabilities of management success and failure. We applied both approaches to prioritizing projects for 700 of New Zealand's threatened species across 8303 management actions. Both decision makers' risk tolerance and our choice of approach to dealing with risk preferences drove the prioritization solution (i.e., the species selected for management). Use of a probability threshold minimized uncertainty, but more expensive projects were selected than with variance discounting, which maximized expected benefits by selecting the management of species with higher extinction risk and higher conservation value. Explicitly incorporating risk preferences within the decision making process reduced the number of species expected to be safe from extinction because lower risk tolerance resulted in more species being excluded from management, but the approach allowed decision makers to choose a level of acceptable risk that fit with their ability to accommodate failure. We argue for transparency in risk tolerance and recommend that decision makers accept risk in an adaptive management framework to maximize benefits and avoid potential extinctions due to inefficient allocation of limited resources. © 2014 Society for Conservation Biology.

  18. Characterizing the Effects of a Vertical Time Threshold for a Class of Well-Clear Definitions

    NASA Technical Reports Server (NTRS)

    Upchurch, Jason M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Consiglio, Maria C.; Chamberlain, James P.

    2015-01-01

    A fundamental requirement for the integration of unmanned aircraft into civil airspace is the capability of aircraft to remain well clear of each other and avoid collisions. This requirement has led to a broad recognition of the need for an unambiguous, formal definition of well clear. It is further recognized that any such definition must be interoperable with existing airborne collision avoidance systems (ACAS). A particular class of well-clear definitions uses logic checks of independent distance thresholds as well as independent time thresholds in the vertical and horizontal dimensions to determine if a well-clear violation is predicted to occur within a given time interval. Existing ACAS systems also use independent distance thresholds, however a common time threshold is used for the vertical and horizontal logic checks. The main contribution of this paper is the characterization of the effects of the decoupled vertical time threshold on a well-clear definition in terms of (1) time to well-clear violation, and (2) interoperability with existing ACAS. The paper provides governing equations for both metrics and includes simulation results to illustrate the relationships. In this paper, interoperability implies that the time of well-clear violation is strictly less than the time a resolution advisory is issued by ACAS. The encounter geometries under consideration in this paper are initially well clear and consist of constant-velocity trajectories resulting in near-mid-air collisions.
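
    A greatly simplified, illustrative check with independent horizontal and vertical thresholds for constant-velocity geometries follows; the formal well-clear definitions analyzed in the paper are more involved, and the numeric threshold values below are placeholders, not the standardized ones.

```python
# Simplified well-clear-style check (illustrative only, NOT the formal definition):
# independent horizontal and vertical distance/time thresholds, constant relative velocity.
import numpy as np

def horizontal_ok(s_xy, v_xy, dthr, tthr):
    """True if no horizontal proximity is current or predicted within tthr seconds."""
    if np.linalg.norm(s_xy) < dthr:
        return False
    speed2 = float(v_xy @ v_xy)
    t_cpa = -float(s_xy @ v_xy) / speed2 if speed2 > 0 else 0.0
    if 0.0 <= t_cpa <= tthr:
        return float(np.linalg.norm(s_xy + v_xy * t_cpa)) >= dthr
    return True

def vertical_ok(s_z, v_z, zthr, tcoa):
    """True if no vertical proximity is current or predicted within tcoa seconds."""
    if abs(s_z) < zthr:
        return False
    t_co = -s_z / v_z if v_z != 0 else float("inf")   # time to co-altitude
    return not (0.0 <= t_co <= tcoa)

def well_clear(s_xy, v_xy, s_z, v_z, dthr=4000.0, zthr=450.0, tthr=35.0, tcoa=20.0):
    # A violation requires BOTH checks to fail; tcoa, decoupled from tthr, plays the
    # role of the independent "vertical time threshold" studied in the paper.
    return horizontal_ok(s_xy, v_xy, dthr, tthr) or vertical_ok(s_z, v_z, zthr, tcoa)

# Head-on geometry (distances in ft, times in s): predicted violation, so prints False.
print(well_clear(np.array([9000.0, 0.0]), np.array([-300.0, 0.0]), 150.0, -10.0))
```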

  19. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan

    2018-01-01

    In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regards to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission, or worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system-level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize the probability of mission success and to reduce the probability of false positives (defined as cases in which the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and a performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.
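
    A generic genetic-algorithm sketch of the threshold-tuning idea follows. The fitness function below is a synthetic stand-in (it rewards flagging an injected error profile and penalizes flagging the nominal one); the paper's fitness, the mission probability of success over fault magnitudes and times, is not reproduced here.

```python
# Generic GA sketch for tuning a piecewise-constant detection threshold (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
T = 10                                    # threshold defined over 10 flight segments
nav_error = 0.2 + 0.05 * np.arange(T)     # nominal navigation error growth (synthetic)
fault_error = nav_error + 1.0             # error profile if a second fault occurs (synthetic)

def fitness(thresholds):
    detections = np.mean(fault_error > thresholds)   # fraction of faulty segments flagged
    false_pos = np.mean(nav_error > thresholds)      # nominal segments wrongly flagged
    return detections - 5.0 * false_pos               # weight false positives heavily

def ga(pop_size=60, generations=200, mut_sigma=0.1):
    pop = rng.uniform(0.0, 3.0, size=(pop_size, T))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]           # keep the better half
        n_child = pop_size - len(parents)
        idx1 = rng.integers(0, len(parents), n_child)
        idx2 = rng.integers(0, len(parents), n_child)
        mask = rng.random((n_child, T)) < 0.5
        children = np.where(mask, parents[idx1], parents[idx2])      # uniform crossover
        children = children + rng.normal(0.0, mut_sigma, children.shape)  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(np.round(ga(), 2))   # tuned thresholds sit between the nominal and faulty profiles
```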

  20. Diagnostic accuracy of point-of-care natriuretic peptide testing for chronic heart failure in ambulatory care: systematic review and meta-analysis.

    PubMed

    Taylor, Kathryn S; Verbakel, Jan Y; Feakins, Benjamin G; Price, Christopher P; Perera, Rafael; Bankhead, Clare; Plüddemann, Annette

    2018-05-21

    To assess the diagnostic accuracy of point-of-care natriuretic peptide tests in patients with chronic heart failure, with a focus on the ambulatory care setting. Systematic review and meta-analysis. Ovid Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Embase, Health Technology Assessment Database, Science Citation Index, and Conference Proceedings Citation Index until 31 March 2017. Eligible studies evaluated point-of-care natriuretic peptide testing (B-type natriuretic peptide (BNP) or N terminal fragment pro B-type natriuretic peptide (NTproBNP)) against any relevant reference standard, including echocardiography, clinical examination, or combinations of these, in humans. Studies were excluded if reported data were insufficient to construct 2×2 tables. No language restrictions were applied. 42 publications of 39 individual studies met the inclusion criteria and 40 publications of 37 studies were included in the analysis. Of the 37 studies, 30 evaluated BNP point-of-care testing and seven evaluated NTproBNP testing. 15 studies were done in ambulatory care settings in populations with a low prevalence of chronic heart failure. Five studies were done in primary care. At thresholds >100 pg/mL, the sensitivity of BNP, measured with the point-of-care index device Triage, was generally high and was 0.95 (95% confidence interval 0.90 to 0.98) at 100 pg/mL. At thresholds <100 pg/mL, sensitivity ranged from 0.46 to 0.97 and specificity from 0.31 to 0.98. Primary care studies that used NTproBNP testing reported a sensitivity of 0.99 (0.57 to 1.00) and specificity of 0.60 (0.44 to 0.74) at 135 pg/mL. No statistically significant difference in diagnostic accuracy was found between point-of-care BNP and NTproBNP tests. Given the lack of studies in primary care, the paucity of NTproBNP data, and potential methodological limitations in these studies, large scale trials in primary care are needed to assess the role of point-of-care natriuretic peptide testing and clarify appropriate thresholds to improve care of patients with suspected or chronic heart failure. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  1. Artificial Intelligence and Amikacin Exposures Predictive of Outcomes in Multidrug-Resistant Tuberculosis Patients.

    PubMed

    Modongo, Chawangwa; Pasipanodya, Jotam G; Magazi, Beki T; Srivastava, Shashikant; Zetola, Nicola M; Williams, Scott M; Sirugo, Giorgio; Gumbo, Tawanda

    2016-10-01

    Aminoglycosides such as amikacin continue to be part of the backbone of treatment of multidrug-resistant tuberculosis (MDR-TB). We measured amikacin concentrations in 28 MDR-TB patients in Botswana receiving amikacin therapy together with oral levofloxacin, ethionamide, cycloserine, and pyrazinamide and calculated areas under the concentration-time curves from 0 to 24 h (AUC0-24). The patients were followed monthly for sputum culture conversion based on liquid cultures. The median duration of amikacin therapy was 184 (range, 28 to 866) days, at a median dose of 17.30 (range 11.11 to 19.23) mg/kg. Only 11 (39%) patients had sputum culture conversion during treatment; the rest failed. We utilized classification and regression tree analyses (CART) to examine all potential predictors of failure, including clinical and demographic features, comorbidities, and amikacin peak concentrations (Cmax), AUC0-24, and trough concentrations. The primary node for failure had two competing variables, Cmax of <67 mg/liter and AUC0-24 of <568.30 mg · h/L; weight of >41 kg was a secondary node with a score of 35% relative to the primary node. The area under the receiver operating characteristic curve for the CART model was an R(2) = 0.90 on posttest. In patients weighing >41 kg, sputum conversion was 3/3 (100%) in those with an amikacin Cmax of ≥67 mg/liter versus 3/15 (20%) in those with a Cmax of <67 mg/liter (relative risk [RR] = 5.00; 95% confidence interval [CI], 1.82 to 13.76). In all patients who had both amikacin Cmax and AUC0-24 below the threshold, 7/7 (100%) failed, compared to 7/15 (47%) of those who had these parameters above threshold (RR = 2.14; 95% CI, 1.25 to 43.68). These amikacin dose-schedule patterns and exposures are virtually the same as those identified in the hollow-fiber system model. Copyright © 2016 Modongo et al.
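
    A sketch of a CART-style partition using scikit-learn on synthetic data follows; the study's actual 28-patient dataset and software are not reproduced, and the synthetic outcome is constructed around the reported Cmax and AUC0-24 breakpoints purely for illustration.

```python
# CART-style illustration with scikit-learn on synthetic exposure data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 200
cmax = rng.uniform(30, 100, n)                 # peak concentration, mg/L (synthetic)
auc = cmax * rng.uniform(6, 10, n)             # AUC0-24, mg*h/L (synthetic)
weight = rng.uniform(35, 70, n)                # body weight, kg (synthetic)
# Synthetic outcome: failure likely when Cmax < 67 mg/L and AUC0-24 < 568 mg*h/L.
failure = ((cmax < 67) & (auc < 568) & (rng.random(n) < 0.9)).astype(int)

X = np.column_stack([cmax, auc, weight])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, failure)
print(export_text(tree, feature_names=["Cmax", "AUC0_24", "weight_kg"]))
```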

  2. Cost-effectiveness of sacubitril/valsartan in the treatment of heart failure with reduced ejection fraction.

    PubMed

    McMurray, John J V; Trueman, David; Hancock, Elizabeth; Cowie, Martin R; Briggs, Andrew; Taylor, Matthew; Mumby-Croft, Juliet; Woodcock, Fionn; Lacey, Michael; Haroun, Rola; Deschaseaux, Celine

    2018-06-01

    Chronic heart failure with reduced ejection fraction (HF-REF) represents a major public health issue and is associated with considerable morbidity and mortality. We evaluated the cost-effectiveness of sacubitril/valsartan (formerly LCZ696) compared with an ACE inhibitor (ACEI) (enalapril) in the treatment of HF-REF from the perspective of healthcare providers in the UK, Denmark and Colombia. A cost-utility analysis was performed based on data from a multinational, Phase III randomised controlled trial. A decision-analytic model was developed based on a series of regression models, which extrapolated health-related quality of life, hospitalisation rates and survival over a lifetime horizon. The primary outcome was the incremental cost-effectiveness ratio (ICER). In the UK, the cost per quality-adjusted life-year (QALY) gained for sacubitril/valsartan (using cardiovascular mortality) was £17 100 (€20 400) versus enalapril. In Denmark, the ICER for sacubitril/valsartan was Kr 174 000 (€22 600). In Colombia, the ICER was COP$39.5 million (€11 200) per QALY gained. Deterministic sensitivity analysis showed that results were most sensitive to the extrapolation of mortality, duration of treatment effect and time horizon, but were robust to other structural changes, with most scenarios associated with ICERs below the willingness-to-pay threshold for all three country settings. Probabilistic sensitivity analysis suggested the probability that sacubitril/valsartan was cost-effective at conventional willingness-to-pay thresholds was 68%-94% in the UK, 84% in Denmark and 95% in Colombia. Our analysis suggests that, in all three countries, sacubitril/valsartan is likely to be cost-effective compared with an ACEI (the current standard of care) in patients with HF-REF. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
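
    The core incremental cost-effectiveness arithmetic behind such comparisons can be stated in a few lines; the numbers below are made up for illustration and are not trial results.

```python
# Illustrative ICER calculation: incremental cost per QALY gained versus a threshold.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=14000.0, qaly_new=6.2, cost_old=9000.0, qaly_old=5.9)  # lifetime horizon
wtp_threshold = 20000.0   # willingness-to-pay per QALY (illustrative)
print(f"ICER = {ratio:.0f} per QALY gained; cost-effective at threshold: {ratio < wtp_threshold}")
```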

  3. THRESH—Software for tracking rainfall thresholds for landslide and debris-flow occurrence, user manual

    USGS Publications Warehouse

    Baum, Rex L.; Fischer, Sarah J.; Vigil, Jacob C.

    2018-02-28

    Precipitation thresholds are used in many areas to provide early warning of precipitation-induced landslides and debris flows, and the software distribution THRESH is designed for automated tracking of precipitation, including precipitation forecasts, relative to thresholds for landslide occurrence. This software is also useful for analyzing multiyear precipitation records to compare timing of threshold exceedance with dates and times of historical landslides. This distribution includes the main program THRESH for comparing precipitation to several kinds of thresholds, two utility programs, and a small collection of Python and shell scripts to aid the automated collection and formatting of input data and the graphing and further analysis of output results. The software programs can be deployed on computing platforms that support Fortran 95, Python 2, and certain Unix commands. The software handles rainfall intensity-duration thresholds, cumulative recent-antecedent precipitation thresholds, and peak intensity thresholds as well as various measures of antecedent precipitation. Users should have predefined rainfall thresholds before running THRESH.
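
    An illustrative intensity-duration exceedance check of the kind THRESH automates is sketched below; this is not the USGS code, and the threshold coefficients are placeholders rather than published values.

```python
# Illustrative landslide rainfall threshold of the form I = a * D^b (mm/h vs hours).
def exceeds_intensity_duration(rain_mm, duration_hr, a=12.0, b=-0.6):
    """True if the mean intensity over `duration_hr` exceeds the I-D threshold a * D^b."""
    intensity = rain_mm / duration_hr
    return intensity > a * duration_hr ** b

# 60 mm of rain over the last 6 hours versus the threshold:
print(exceeds_intensity_duration(rain_mm=60.0, duration_hr=6.0))
```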

  4. Effects of global financial crisis on network structure in a local stock market

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2014-08-01

    This study considers the effects of the 2008 global financial crisis on threshold networks of a local Korean financial market around the time of the crisis. Prices of individual stocks belonging to KOSPI 200 (Korea Composite Stock Price Index 200) are considered for three time periods, namely before, during, and after the crisis. Threshold networks are constructed from fully connected cross-correlation networks, and thresholds of cross-correlation coefficients are assigned to obtain threshold networks. At the high threshold, only one large cluster consisting of firms in the financial sector, heavy industry, and construction is observed during the crisis. However, before and after the crisis, there are several fragmented clusters belonging to various sectors. The power law of the degree distribution in threshold networks is observed within the limited range of thresholds. Threshold networks are fatter during the crisis than before or after the crisis. The clustering coefficient of the threshold network follows the power law in the scaling range.
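
    A sketch of the threshold-network construction on synthetic returns follows (the study used KOSPI 200 constituents over three periods; the one-factor return model and the threshold value below are ours).

```python
# Build a threshold network from a cross-correlation matrix of synthetic stock returns.
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n_stocks, n_days = 50, 250
market = rng.normal(0, 0.01, n_days)                      # common market factor
returns = 1.5 * market + rng.normal(0, 0.01, (n_stocks, n_days))

corr = np.corrcoef(returns)                               # cross-correlation matrix
threshold = 0.6
g = nx.Graph()
g.add_nodes_from(range(n_stocks))
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        if corr[i, j] >= threshold:                       # keep only strongly correlated pairs
            g.add_edge(i, j)

clusters = [c for c in nx.connected_components(g) if len(c) > 1]
print(f"edges: {g.number_of_edges()}, clusters: {len(clusters)}, "
      f"largest cluster: {max((len(c) for c in clusters), default=0)}")
```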

  5. Non-Markovian Infection Spread Dramatically Alters the Susceptible-Infected-Susceptible Epidemic Threshold in Networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2013-03-01

    Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterize and precisely compute the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponents α. Hence, real epidemics that violate the exponential or Markovian assumption can behave seriously differently than anticipated based on Markov theory.

  6. Comparison of Ventilatory Measures and 20 km Time Trial Performance.

    PubMed

    Peveler, Willard W; Shew, Brandy; Johnson, Samantha; Sanders, Gabe; Kollock, Roger

    2017-01-01

    Performance threshold measures are used to predict cycling performance. Previous research has focused on long time trials (≥ 40 km), using power at the ventilatory threshold and the respiratory threshold to estimate time trial performance. As intensity differs greatly during shorter time trials, applying findings from longer time trials may not be appropriate. The use of heart rate measures to determine 20 km time trial performance has yet to be examined. The purpose of this study was to determine the effectiveness of heart rate measures at the ventilatory threshold (VE/VO2 plotted and VT determined by software) and the respiratory threshold (RER of 0.95, 1.00, and 1.05) in predicting 20 km time trial performance. Eighteen cyclists completed a VO2max protocol and two 20 km time trials. Average heart rates from the 20 km time trials were compared with heart rates from the performance threshold measures (VT plotted, VT software, and an RER of 0.95, 1.00, and 1.05) using repeated measures ANOVA. Significance was set a priori at P ≤ 0.05. The only measure not found to be significantly different from time trial performance was HR at an RER of 1.00 (166.61 ± 12.70 bpm vs. 165.89 ± 9.56 bpm, p = .671). VT plotting and VT determined by software were found to underestimate time trial performance by 3% and 8%, respectively. From these findings it is recommended to use heart rate at an RER of 1.00 in order to determine 20 km time trial intensity.

  7. Property Values Associated with the Failure of Individual Links in a System with Multiple Weak and Strong Links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.
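
    A minimal sampling-based sketch in the spirit of item (iv) follows, for a made-up system in which links fail as an environmental temperature ramps up; the distributions, link counts, and the specific property are illustrative and are not taken from the report.

```python
# Sampling-based approximation of the CDF of the temperature (the link "property"
# reached) at the time the FIRST link, weak or strong, fails under a monotone ramp.
import numpy as np

rng = np.random.default_rng(7)

def sample_first_failure_temperature(n_samples=10000):
    """Each link's failure temperature is drawn from its own aleatory distribution;
    under a monotone temperature ramp, the first failure occurs at the minimum draw."""
    weak = rng.normal(300.0, 20.0, (n_samples, 2))     # two weak links (fail at lower T)
    strong = rng.normal(450.0, 30.0, (n_samples, 3))   # three strong links
    all_links = np.hstack([weak, strong])
    return np.sort(all_links.min(axis=1))              # temperature at first link failure

temps = sample_first_failure_temperature()
# Empirical CDF: P(temperature at first failure <= x)
for x in (260.0, 280.0, 300.0):
    print(x, np.searchsorted(temps, x) / temps.size)
```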

  8. Olfactory Threshold of Chlorine in Oxygen.

    DTIC Science & Technology

    1977-09-01

    The odor threshold of chlorine in oxygen was determined. Measurements were conducted in an altitude chamber, which provided an odor-free and noise-free background. Human male volunteers, with no previous olfactory acuity testing experience, served as panelists. Threshold values were affected by time intervals between trials and by age differences. The mean threshold value for 11 subjects was 0.08 ppm, obtained by positive responses to the lowest detectable level of chlorine in oxygen 50% of the time. (Author)

  9. Kv1 channels control spike threshold dynamics and spike timing in cortical pyramidal neurones

    PubMed Central

    Higgs, Matthew H; Spain, William J

    2011-01-01

    Previous studies showed that cortical pyramidal neurones (PNs) have a dynamic spike threshold that functions as a high-pass filter, enhancing spike timing in response to high-frequency input. While it is commonly assumed that Na+ channel inactivation is the primary mechanism of threshold accommodation, the possible role of K+ channel activation in fast threshold changes has not been well characterized. The present study tested the hypothesis that low-voltage activated Kv1 channels affect threshold dynamics in layer 2–3 PNs, using α-dendrotoxin (DTX) or 4-aminopyridine (4-AP) to block these conductances. We found that Kv1 blockade reduced the dynamic changes of spike threshold in response to a variety of stimuli, including stimulus-evoked synaptic input, current steps and ramps of varied duration, and noise. Analysis of the responses to noise showed that Kv1 channels increased the coherence of spike output with high-frequency components of the stimulus. A simple model demonstrates that a dynamic spike threshold can account for this effect. Our results show that the Kv1 conductance is a major mechanism that contributes to the dynamic spike threshold and precise spike timing of cortical PNs. PMID:21911608

  10. Impacts of selected stimulation patterns on the perception threshold in electrocutaneous stimulation

    PubMed Central

    2011-01-01

    Background Consistency is one of the most important concerns to convey stable artificially induced sensory feedback. However, the constancy of perceived sensations cannot be guaranteed, as the artificially evoked sensation is a function of the interaction of stimulation parameters. The hypothesis of this study is that the selected stimulation parameters in multi-electrode cutaneous stimulation have significant impacts on the perception threshold. Methods The investigated parameters included the stimulated location, the number of active electrodes, the number of pulses, and the interleaved time between a pair of electrodes. Biphasic, rectangular pulses were applied via five surface electrodes placed on the forearm of 12 healthy subjects. Results Our main findings were: 1) the perception thresholds at the five stimulated locations were significantly different (p < 0.0001), 2) dual-channel simultaneous stimulation lowered the perception thresholds and led to smaller variance in perception thresholds compared to single-channel stimulation, 3) the perception threshold was inversely related to the number of pulses, and 4) the perception threshold increased with increasing interleaved time when the interleaved time between two electrodes was below 500 μs. Conclusions To maintain a consistent perception threshold, our findings indicate that dual-channel simultaneous stimulation with at least five pulses should be used, and that the interleaved time between two electrodes should be longer than 500 μs. We believe that these findings have implications for design of reliable sensory feedback codes. PMID:21306616

  11. The ventilatory anaerobic threshold is related to, but is lower than, the critical power, but does not explain exercise tolerance at this workrate.

    PubMed

    Okudan, N; Gökbel, H

    2006-03-01

    The aim of the present study was to investigate the relationships between critical power (CP), maximal aerobic power and the anaerobic threshold, and whether exercise time to exhaustion and work at the CP can be used as an index in the determination of endurance. An incremental maximal cycle exercise test was performed on 30 untrained males aged 18-22 years. Lactate analysis was carried out on capillary blood samples every 2 minutes. From gas exchange parameters, heart rate and lactate values, the ventilatory anaerobic thresholds, heart rate deflection point and the onset of blood lactate accumulation were calculated. CP was determined with the linear work-time method using 3 loads. The subjects exercised until they could no longer maintain a cadence above 24 rpm at their CP, and exercise time to exhaustion was determined. CP was lower than the power output corresponding to VO2max and higher than the power outputs corresponding to the anaerobic thresholds. CP was correlated with VO2max and the anaerobic threshold. Exercise time to exhaustion and work at CP were not correlated with VO2max or the anaerobic threshold. Because the CP correlated with VO2max and the anaerobic threshold, whereas exercise time to exhaustion and work at the CP did not correlate with these parameters, we conclude that exercise time to exhaustion and work at the CP cannot be used as an index in the determination of endurance.
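
    The linear work-time method mentioned above amounts to a straight-line fit of total work against time to exhaustion, with CP as the slope and the anaerobic work capacity W' as the intercept; a sketch with illustrative numbers follows.

```python
# Linear work-time estimate of critical power: W_lim = W' + CP * t_lim.
import numpy as np

t_lim = np.array([180.0, 420.0, 720.0])          # times to exhaustion at three loads, s
power = np.array([300.0, 250.0, 225.0])          # constant power of each trial, W
work = power * t_lim                              # total work of each trial, J

slope, intercept = np.polyfit(t_lim, work, 1)     # slope = CP, intercept = W'
print(f"CP ~= {slope:.1f} W, W' ~= {intercept / 1000:.1f} kJ")
```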

  12. A Bivariate return period for levee failure monitoring

    NASA Astrophysics Data System (ADS)

    Isola, M.; Caporali, E.

    2017-12-01

    Levee breaches are strongly linked to the interaction processes among water, soil and structure, so many factors affect breach development. One of the main factors is the hydraulic load, characterized by intensity and duration, i.e. by the flood event hydrograph. Levee design is generally based on the magnitude of the hydraulic load, without considering fatigue failure due to load duration. Moreover, in many cases levee breaches occur during floods of magnitude lower than the design flood. In order to implement flood risk management strategies, we build here a procedure based on a multivariate statistical analysis of flood peak and volume together with the analysis of past levee failure events. In particular, in order to define the probability of occurrence of the hydraulic load on a levee, a bivariate copula model is used to obtain the bivariate joint distribution of flood peak and volume. The flood peak expresses the load magnitude, while the volume expresses the stress over time. We consider the annual flood peak and the corresponding volume. The volume is given by the hydrograph area between the beginning and the end of the event. The beginning of the event is identified as an abrupt rise of the discharge by more than 20%. The end is identified as the point from which the receding limb is characterized by baseflow, using a nonlinear reservoir algorithm as the baseflow separation technique. On this basis, with the aim of defining warning thresholds, we consider the past levee failure events and their bivariate return period (BTr), compared with the estimate from a traditional univariate model. The discharge data of 30 hydrometric stations of the Arno River in Tuscany, Italy, in the period 1995-2016 are analysed. A database of levee failure events, recording for each event the location as well as the failure mode, is also created. The events were registered in the period 2000-2014 by the European Environment Agency (EEA), the Italian Civil Protection and ISPRA (the Italian National Institute for Environmental Protection and Research). Only two levee failure events, which occurred in the sub-basin of the Era River, have been detected and analysed. The return periods estimated with the univariate flood-peak model are greater than 2 and 5 years, while the BTr values are greater than 25 and 30 years, respectively.
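
    A hedged sketch of an AND-type bivariate return period computed through a Gumbel copula follows; the marginal distributions, copula family and parameter are illustrative, not those fitted to the Arno data.

```python
# Bivariate (AND-type) return period from a Gumbel copula, with illustrative marginals.
import math
from scipy import stats

def gumbel_copula(u, v, theta):
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def bivariate_return_period(peak, volume, peak_dist, vol_dist, theta, mu_years=1.0):
    """AND return period: mean interarrival time / P(peak exceeded AND volume exceeded)."""
    u = peak_dist.cdf(peak)
    v = vol_dist.cdf(volume)
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu_years / p_and

# Illustrative marginals for annual flood peak (m^3/s) and event volume (10^6 m^3):
peak_dist = stats.gumbel_r(loc=400.0, scale=150.0)
vol_dist = stats.gumbel_r(loc=30.0, scale=12.0)

print(bivariate_return_period(650.0, 55.0, peak_dist, vol_dist, theta=2.0))
# For comparison, the univariate return period of the peak alone:
print(1.0 / (1.0 - peak_dist.cdf(650.0)))
```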

  13. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
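
    The test-time arithmetic in the last sentences can be made explicit; a small sketch with illustrative rates follows.

```python
# Expected operating time needed to expose (and then remove) the next failure cause:
# roughly 1/lambda, so each halving of the residual failure rate doubles the test time.
def time_to_observe(failure_rate_per_khr: float) -> float:
    """Expected operating hours before a failure with the given rate (per 1000 h) occurs."""
    return 1000.0 / failure_rate_per_khr

for rate in (8.0, 4.0, 2.0, 1.0):     # successive residual failure rates per 1000 hours
    print(f"rate {rate:>4}/1000 h -> ~{time_to_observe(rate):,.0f} h of test to see the next failure")
```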

  14. Coupling between the spinal cord and cervical vertebral column under tensile loading.

    PubMed

    Kroeker, Shannon G; Ching, Randal P

    2013-02-22

    Current neck injury criteria are based on structural failure of the spinal (vertebral) column without consideration of injury to the spinal cord. Since one of the primary functions of the vertebral column is to protect the cord, it stands to reason that a more refined measure of neck injury threshold would be the onset of spinal cord injury (SCI). This study investigated the relationship between axial strains in the cervical vertebral column and the spinal cord using an in vitro primate model (n=10) under continuous tensile loading. Mean failure loads occurred at 1951.5±396N with failure strains in the vertebral column of 16±5% at the level of failure. Average tensile strains in the spinal cord at failure were 11±5% resulting in a mean coupling ratio of 0.54±0.17 between C1 and C7. The level of peak strain measured in the spinal cord did not always occur at the location of vertebral column failure. Spinal cord strains were less than spine strains and coupling ratios were not significantly different along the length of the spine. The largest coupling ratio was measured in the atlanto-occipital joint whereas the smallest coupling ratio occurred at the adjacent C1-C2 joint. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Thermal Effect on Fracture Integrity in Enhanced Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Zeng, C.; Deng, W.; Wu, C.; Insall, M.

    2017-12-01

    In enhanced geothermal systems (EGS), cold fluid is injected to be heated up for electricity generation, and pre-existing fractures are the major conduits for fluid transport. Because the injected fluid is relatively cold, the rock-fluid temperature difference induces thermal stress along the fracture wall. Such large thermal stress could cause the failure of self-propping asperities and therefore change the fracture integrity, which could affect the heat recovery efficiency and fluid recycling. To study the thermal effect on fracture integrity, two mechanisms pertinent to thermal stress are proposed to cause asperity contact failure: (1) crushing between two pairing asperities leads to failure at the contact area, and (2) thermal spalling expedites this process. Finite element modeling is utilized to investigate both failure mechanisms by idealizing the asperities as hemispheres. In the numerical analysis, we have implemented a meso-scale damage model to investigate the coupled failure mechanism induced by the thermomechanical stress field and the original overburden pressure in the vicinity of the contact point. Our results show that both the overburden pressure and a critical temperature determine the threshold of asperity failure. Since the overburden pressure implies the depth of fractures in EGS and the critical temperature implies the distance of fractures from the injection well, our ultimate goal is to locate a region of EGS where the fracture integrity is vulnerable to such thermal effects and to estimate the influences.

  16. Evidence accumulation in obsessive-compulsive disorder: the role of uncertainty and monetary reward on perceptual decision-making thresholds.

    PubMed

    Banca, Paula; Vestergaard, Martin D; Rankov, Vladan; Baek, Kwangyeol; Mitchell, Simon; Lapa, Tatyana; Castelo-Branco, Miguel; Voon, Valerie

    2015-03-13

    The compulsive behaviour underlying obsessive-compulsive disorder (OCD) may be related to abnormalities in decision-making. The inability to commit to ultimate decisions, for example, patients unable to decide whether their hands are sufficiently clean, may reflect failures in accumulating sufficient evidence before a decision. Here we investigate the process of evidence accumulation in OCD in perceptual discrimination, hypothesizing enhanced evidence accumulation relative to healthy volunteers. Twenty-eight OCD patients and thirty-five controls were tested with a low-level visual perceptual task (random-dot-motion task, RDMT) and two response conflict control tasks. Regression analysis across different motion coherence levels and Hierarchical Drift Diffusion Modelling (HDDM) were used to characterize response strategies between groups in the RDMT. Patients required more evidence under high uncertainty perceptual contexts, as indexed by longer response time and higher decision boundaries. HDDM, which defines a decision when accumulated noisy evidence reaches a decision boundary, further showed slower drift rate towards the decision boundary reflecting poorer quality of evidence entering the decision process in patients under low uncertainty. With monetary incentives emphasizing speed and penalty for slower responses, patients decreased the decision thresholds relative to controls, accumulating less evidence in low uncertainty. These findings were unrelated to visual perceptual deficits and response conflict. This study provides evidence for impaired decision-formation processes in OCD, with a differential influence of high and low uncertainty contexts on evidence accumulation (decision threshold) and on the quality of evidence gathered (drift rates). It further emphasizes that OCD patients are sensitive to monetary incentives heightening speed in the speed-accuracy tradeoff, improving evidence accumulation.

  17. Turbomachine Sealing and Secondary Flows. Part 2; Review of Rotordynamics Issues in Inherently Unsteady Flow Systems With Small Clearances

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Tam, L. T.; Muszynska, A.

    2004-01-01

    Today's computational methods enable the determination of forces in complex systems, but without field validation data, or feedback, there is a high risk of failure when the design envelope is challenged. The data of Childs and Bently and field data reported in NASA Conference Proceedings serve as sources of design information for the development of these computational codes. Over time all turbomachines degrade and instabilities often develop, requiring responsible, accurate turbomachine diagnostics with proper decisions to prevent failures. The numerical models of Tam et al. and the analytical models of Bently and Muszynska corroborate one another and indicate that destabilizing factors are related through increases in the fluid-force average circumferential velocity. The stability threshold can be controlled by external swirl and swirl brakes and by increases in radial fluid film stiffness (e.g., hydrostatic and ambient pressures) to enhance rotor stability. Also cited are drum rotor self-excited oscillations, where the classic fix is to add a split or severed damper ring or a cylindrical damper drum, and the Benkert-Wachter work that engendered swirl brake concepts. For a smooth-operating, reliable, long-lived machine, designers must pay very close attention to sealing dynamics and diagnostic methods. Correcting the seals enabled the space shuttle main engine high-pressure fuel turbopump (SSME HPFTP) to operate successfully.

  18. A tri-fold hybrid classification approach for diagnostics with unexampled faulty states

    NASA Astrophysics Data System (ADS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2015-01-01

    System health diagnostics provides diversified benefits such as improved safety, improved reliability and reduced costs for the operation and maintenance of engineered systems. Successful health diagnostics requires knowledge of system failures. However, with increasing system complexity, it is extraordinarily difficult to have a system so well tested that all potential faulty states have been realized and studied at the product testing stage. Thus, real-time health diagnostics requires automatic detection of unexampled system faulty states based upon sensory data to avoid sudden catastrophic system failures. This paper presents a tri-fold hybrid classification (THC) approach for structural health diagnosis with unexampled health states (UHS), which comprises preliminary UHS identification using a new thresholded Mahalanobis distance (TMD) classifier, UHS diagnostics using a two-class support vector machine (SVM) classifier, and exampled health states diagnostics using a multi-class SVM classifier. The proposed THC approach, which takes advantage of both TMD- and SVM-based classification techniques, is able to identify and isolate the unexampled faulty states by interactively detecting the deviation of sensory data from the exampled health states and forming new ones autonomously. The proposed THC approach is further extended to a generic framework for health diagnostics problems with unexampled faulty states and demonstrated with health diagnostics case studies for power transformers and rolling bearings.
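
    A minimal sketch of the first (TMD) stage on synthetic two-dimensional data follows; the two SVM stages and the authors' actual threshold choice are not reproduced here.

```python
# Thresholded Mahalanobis distance check to flag potential unexampled health states.
import numpy as np

rng = np.random.default_rng(8)

class TMDDetector:
    """Flags a sample as a potential unexampled health state if its Mahalanobis distance
    to every known (exampled) health state exceeds that state's distance threshold."""
    def fit(self, states):
        self.params = []
        for X in states:                                   # X: training samples of one known state
            mu = X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
            d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu))
            self.params.append((mu, cov_inv, np.percentile(d, 99)))   # per-state threshold
        return self

    def is_unexampled(self, x):
        for mu, cov_inv, thr in self.params:
            d = float(np.sqrt((x - mu) @ cov_inv @ (x - mu)))
            if d <= thr:
                return False                               # close enough to a known state
        return True

healthy = rng.normal([0, 0], 0.5, (500, 2))
known_fault = rng.normal([3, 3], 0.5, (500, 2))
det = TMDDetector().fit([healthy, known_fault])
print(det.is_unexampled(np.array([0.2, -0.1])))   # known behaviour -> False
print(det.is_unexampled(np.array([8.0, -6.0])))   # unexampled behaviour -> True
```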

  19. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    PubMed

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.

  20. Risk Factors for Free Flap Failure in 2,846 Patients With Head and Neck Cancer: A National Database Study in Japan.

    PubMed

    Ishimaru, Miho; Ono, Sachiko; Suzuki, Sayaka; Matsui, Hiroki; Fushimi, Kiyohide; Yasunaga, Hideo

    2016-06-01

    The risk factors for an unfavorable outcome after microvascular free flap reconstruction in head and neck cancer are not fully understood. We sought to identify factors affecting the occurrence of free flap failure. This was a retrospective cohort study using data from the national inpatient database in Japan between 2010 and 2012. We identified patients diagnosed with head and neck cancer who underwent tumor resection and consecutive free flap reconstruction. Cox proportional hazards regression was used to assess risk factors for free flap failure. The threshold for significance was P < .05. Missing data were imputed by using multiple imputation. We identified 2,846 eligible patients. The overall proportion of free flap failure was 3.3%. Free flap failure was associated with diabetes mellitus (hazard ratio [HR], 1.80; 95% confidence interval [95% CI], 1.18 to 2.76; P = .007), peripheral vascular disease (HR, 4.49; 95% CI, 1.61 to 12.52; P = .004), renal failure (HR, 3.67; 95% CI, 1.45 to 9.33; P = .006), preoperative radiotherapy (HR, 2.14; 95% CI, 1.11 to 4.13; P = .022), and duration of anesthesia greater than 18 hours (compared with <12 hours; HR, 2.72; 95% CI, 1.19 to 6.22; P = .018). Diabetes mellitus, peripheral vascular disease, renal failure, preoperative radiotherapy, and a longer duration of anesthesia were significant predictors of the occurrence of free flap failure. Copyright © 2016 The American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. MONITORING OF LARGE INSTABLE AREAS: system reliability and new tools.

    NASA Astrophysics Data System (ADS)

    Leandro, G.; Mucciarelli, M.; Pellicani, R.; Spilotro, G.

    2009-04-01

    The monitoring of unstable or potentially unstable areas is a necessary operation whenever the conditions of risk cannot be removed and mitigation measures cannot be applied. In the Italian Apennine regions there are many urban and extra-urban areas affected by instability for which it is impracticable to remove the hazard conditions, because of size and cost constraints. The technology that can be brought to bear on land-instability monitoring is evolving rapidly and allows the use of warning systems that were unthinkable just a few years ago. However, monitoring unstable or potentially unstable areas requires detailed knowledge of the site-specific problems, without which the reliability of the system may be dangerously overestimated. Movement may arise in areas not covered by the instrumentation, or in areas covered with vegetation that prevents the acquisition of both the reflected signals used in multi-beam laser techniques and radar signals. Environmental conditions (wind, concentrated light sources, temperature changes, presence of animals) may also degrade the accuracy of the measurements by introducing modulations or disturbances well above the alarm threshold, forcing the warning thresholds to be raised. The authors have gained long experience in the observation and monitoring of several large landslides in the Southern Apennines (Aliano, Buoninventre, Calciano, Carlantino, etc.) and of unstable areas at the regional scale. One of the most important lessons concerns landslides covering extensive areas, where unstable and stable zones coexist along the transverse and longitudinal axes. In many of these cases, accurate control of the movement at selected points is needed to evaluate the trend of displacement velocity, which can be achieved by means of a single-beam laser. The control of these movements, however, does not provide information on the stress pattern within the stable areas. Among the sensitive precursors, acoustic emission (AE) measurement has for decades constituted a monitoring approach able to define precisely, in the presence of a favourable geometry, the micro-crack pattern, describing both the propagation of cracks and the formation of failure surfaces. In stiff materials, deformation is associated with micro-distortions or micro-failures, with the release of vibrational energy in the acoustic frequency range. The detection of such precursors can therefore be extremely profitable for the early recognition of stress states evolving towards failure. For this reason, AE monitoring in geological materials has proved very useful not only for monitoring and forecasting landslide movements but also in other fields, such as the safety auscultation of mines, the stability of underground deposits of liquid and gaseous hydrocarbons and radioactive waste, and the forecasting of avalanches. Studies in the literature, especially in the mining field, have shown that the typical emission lies in the low-frequency range, where waves suffer limited attenuation as they propagate through rock. The generation of micro-fractures is, on the other hand, accompanied by acoustic emission at higher frequencies. In recent years the authors have explored the measurement of acoustic emission, finally arriving at the design and construction of a simple mono-channel device. Such a device cannot determine the location of the source of new failures, which is possible only with devices of more than three channels.
A mono-channel device can provide: (a) recognition of the phenomenon (definition of the AE frequency range of the rock mass under examination and its characteristic amplitude); (b) recognition of the physiological activity (number of events over time); and (c) separation of the significant signals from the physiological or occasional noise produced outside. The third point is achieved by transforming the sampled noise with an FFT, for which it is easier to define a threshold after an appropriate learning period, allowing the system to be configured as an expert system. The in situ selection of signals allows very short transmission reports, simply consisting of the events exceeding the threshold within the chosen time window, usually 1 hour. AE monitoring is currently in progress at numerous landslide sites in Southern Italy. The longest record comes from the Aliano landslide site, where part of the landslide body still translates at a velocity of 0.5 mm per day. At the Aliano site, part of the recordings coincided with a notable mobilization of the landslide, so it has been possible to compare the released acoustic emission with the recordings of the surface movements. Data analysis revealed that the first significant acoustic activity preceded the mass movements by almost 24 hours. This good performance confirms the potential of AE monitoring to retrieve early indicators, discriminated by wave type, form and intensity, that correlate well with the beginning of rupture of an unstable soil mass, well before the mass moves. The measurement technique is not affected by the typical problems of optical measurements, is not subject to surface reflection of the EM signal, and can be integrated into new-generation monitoring systems.
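
    The FFT-based screening in point (c) can be sketched in a few lines. The following is a minimal illustration under assumed values; the sampling rate, AE band and threshold are placeholders, not the device's calibrated settings. Each sampled window is transformed, its in-band energy is compared with the threshold learned during the training period, and only out-of-threshold events are counted for the hourly report.

    ```python
    import numpy as np

    FS = 10_000          # sampling rate in Hz (assumed)
    BAND = (100, 1_000)  # AE band of the rock mass in Hz (assumed, learned in situ)
    THRESHOLD = 5.0      # in-band energy threshold fixed after the learning period (assumed)

    def band_energy(window):
        """Energy of the FFT of one sampled noise window inside the learned AE band."""
        spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
        mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
        return float(np.sum(spectrum[mask] ** 2) / len(window))

    def hourly_event_count(windows):
        """Number of out-of-threshold events to report for the chosen time range."""
        return sum(band_energy(w) > THRESHOLD for w in windows)
    ```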

  2. Department of Defense Technology Transfer (T2) Program

    DTIC Science & Technology

    2014-04-08

    January February (1st Monday) Disposal System Performance Spec Production Representative Articles CARD – Cost Analysis Requirements Description CCE...Supportability Objectives Exit Criteria Met APB MTA FMECA FTA LORA RCM MS B Threshold/objective tradeoffs – Revised Performance Attributes MS C MS A...Evaluation FTA – Failure Tree Analysis IOT&E – Initial Operational Test & Evaluation ISR – In-Service Review ISP – Information Support Plan ITR – Initial

  3. Strategy Guideline: Modeling Enclosure Design in Above-Grade Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lstiburek, J.; Ueno, K.; Musunuru, S.

    2016-02-24

    The Strategy Guideline describes how to model above-grade walls and how to interpret the results of those models. The Measure Guideline analyzes the failure thresholds and criteria for above-grade walls. A library of above-grade walls with historically successful performance was used to calibrate WUFI (Wärme und Feuchte instationär) software models. The information is generalized for application to a broad population of houses within the limits of existing experience.

  4. Ice-Cliff Failure via Retrogressive Slumping

    NASA Astrophysics Data System (ADS)

    Parizek, B. R.; Christianson, K.; Alley, R. B.; Voytenko, D.; Vankova, I.; Dixon, T. H.; Holland, D.

    2016-12-01

    The magnitude and rate of future sea-level rise from warming-induced ice-sheet shrinkage remain notably uncertain. Removal of most of an ice sheet by surface melting alone requires centuries to millennia. Oceanic warming may accelerate loss by removing buttressing ice shelves and thereby speeding flow of non-floating ice into the ocean, but, until recently, modeled timescales for major dynamic ice-sheet shrinkage were centuries or longer. Beyond certain thresholds, however, observations show that warming removes floating ice shelves, leaving grounded ice cliffs from which icebergs break off directly. Cliffs higher than some limit experience rapid structural failure. Recent parameterization of this process in a comprehensive ice-flow model produced much faster sea-level rise from future rapid warming than in previous modeling studies, through formation and retreat of tall ice cliffs. Fully physical representations of this process are not yet available, however. Here, we use modeling guided by terrestrial radar data from Helheim Glacier, Greenland to show that cliffs will fail by slumping and trigger rapid retreat at a threshold height that, in crevassed ice with surface melting, may be only slightly above the 100-m maximum observed today, but may be roughly twice that (180-275 m) in mechanically-competent ice under well-drained or low-melt conditions.

  5. Utility of respiratory ward-based NIV in acidotic hypercapnic respiratory failure.

    PubMed

    Dave, Chirag; Turner, Alice; Thomas, Ajit; Beauchamp, Ben; Chakraborty, Biman; Ali, Asad; Mukherjee, Rahul; Banerjee, Dev

    2014-11-01

    We sought to elicit predictors of in-hospital mortality for first and subsequent admissions with acidotic hypercapnic respiratory failure (AHRF) in a cohort of chronic obstructive pulmonary disease patients who have undergone ward-based non-invasive ventilation (NIV), and to identify features associated with long-term survival. We analysed prospectively collected data on patients undergoing NIV for AHRF at a single centre between 2004 and 2009. Predictors of in-hospital mortality and intubation were sought by logistic regression and predictors of long-term survival by Cox regression. Initial pH exhibited a threshold effect for in-hospital mortality at pH 7.15. This relationship remained in patients undergoing their first episode of AHRF. In both first and subsequent admissions, a pH threshold of 7.25 at 4 h was associated with better prognosis (P = 0.02 and P = 0.04 respectively). In second or subsequent episodes of AHRF, mortality was lower and predicted only by age (P = 0.002) on multivariate analysis. NIV could be used on medical wards for patients with pH 7.16 or greater on their first admission, although more conservative values should continue to be used for those with second or subsequent episodes of AHRF. © 2014 Asian Pacific Society of Respirology.

  6. The effects of tensile preloads on the impact response of carbon/epoxy laminates

    NASA Technical Reports Server (NTRS)

    Nettles, Alan; Daniel, Vince; Branscomb, Caleb

    1995-01-01

    The effects of tensile preloads on the tension-after-impact (TAI) strength of composite laminates of IM7/8551-7 were examined. A failure threshold curve was first determined so that the most informative preload/impact energy combinations could be selected. The impact tests were instrumented so that the maximum impact load, as well as several other parameters, could be measured. The elastic response data indicate that as the tensile preload is increased, the maximum load of impact also increases. The damage data show that at low impact energies the damage/failure is an 'all-or-nothing' event, but at higher impact energies a region of preload values exists where the coupons could sustain damage yet not fail catastrophically.

  7. A bivariate model for analyzing recurrent multi-type automobile failures

    NASA Astrophysics Data System (ADS)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures, where failures can occur due to various failure modes and are repetitive, such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and the type of the failure serve as response variables. However, these two response variables are highly correlated with each other, since the timing of failures is associated with the mode of the failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model comprising time and type of failure as responses, adjusted for the correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by user-programming the appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and the bivariate model was found to perform better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.

  8. Central nervous system mechanisms linking the consumption of palatable high-fat diets to the defense of greater adiposity.

    PubMed

    Ryan, Karen K; Woods, Stephen C; Seeley, Randy J

    2012-02-08

    The central nervous system (CNS) plays a key role in the homeostatic regulation of body weight. Satiation and adiposity signals, providing acute and chronic information about available fuel, are produced in the periphery and act in the brain to influence energy intake and expenditure, resulting in the maintenance of stable adiposity. Diet-induced obesity (DIO) does not result from a failure of these central homeostatic circuits. Rather, the threshold for defended adiposity is increased in environments providing ubiquitous access to palatable, high-fat foods, making it difficult to achieve and maintain weight loss. Consequently, mechanisms by which nutritional environments interact with central homeostatic circuits to influence the threshold for defended adiposity represent critical targets for therapeutic intervention. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    PubMed

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
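
    Because the abstract defines the baseline measure explicitly, it can be restated as a short sketch. This is an illustrative reconstruction rather than the device algorithm: for each sensed beat, a 188-ms window of the far-field electrogram centered on the sense marker gives (sum of samples) x (standard deviation of samples), and lead-failure noise is declared when the minimum over the last 12 beats falls below 0.35 mV-mV. The sampling rate and function names are assumptions.

    ```python
    import numpy as np

    FS = 256                        # electrogram sampling rate in Hz (assumed)
    HALF = int(0.188 * FS / 2)      # half of the 188-ms analysis window, in samples

    def baseline_measure(far_field_egm, sense_idx):
        """Product of the sum and standard deviation (mV x mV) of the window samples."""
        window = far_field_egm[max(0, sense_idx - HALF): sense_idx + HALF]
        return float(np.sum(window) * np.std(window))

    def is_lead_failure_noise(far_field_egm, sense_indices, limit=0.35):
        """Flag the detected rhythm as lead-failure noise per the stated criterion."""
        measures = [baseline_measure(far_field_egm, i) for i in sense_indices[-12:]]
        return min(measures) < limit
    ```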

  10. Construct validity of the Heart Failure Screening Tool (Heart-FaST) to identify heart failure patients at risk of poor self-care: Rasch analysis.

    PubMed

    Reynolds, Nicholas A; Ski, Chantal F; McEvedy, Samantha M; Thompson, David R; Cameron, Jan

    2018-02-14

    The aim of this study was to psychometrically evaluate the Heart Failure Screening Tool (Heart-FaST) via: (1) examination of internal construct validity; (2) testing of scale function in accordance with design; and (3) recommendation for change/s, if items are not well adjusted, to improve psychometric credential. Self-care is vital to the management of heart failure. The Heart-FaST may provide a prospective assessment of risk, regarding the likelihood that patients with heart failure will engage in self-care. Psychometric validation of the Heart-FaST using Rasch analysis. The Heart-FaST was administered to 135 patients (median age = 68, IQR = 59-78 years; 105 males) enrolled in a multidisciplinary heart failure management program. The Heart-FaST is a nurse-administered tool for screening patients with HF at risk of poor self-care. A Rasch analysis of responses was conducted which tested data against Rasch model expectations, including whether items serve as unbiased, non-redundant indicators of risk and measure a single construct and that rating scales operate as intended. The results showed that data met Rasch model expectations after rescoring or deleting items due to poor discrimination, disordered thresholds, differential item functioning, or response dependence. There was no evidence of multidimensionality which supports the use of total scores from Heart-FaST as indicators of risk. Aggregate scores from this modified screening tool rank heart failure patients according to their "risk of poor self-care" demonstrating that the Heart-FaST items constitute a meaningful scale to identify heart failure patients at risk of poor engagement in heart failure self-care. © 2018 John Wiley & Sons Ltd.

  11. A window on perception: Response times of odontocete cetaceans in audiometric tests

    NASA Astrophysics Data System (ADS)

    Blackwood, Diane J.; Ridgway, Sam H.; Evans, William E.

    2002-05-01

    A standard psychometric measurement is response time, the interval elapsing between a stimulus and a response. While studies of response time have been published for humans and other terrestrial mammals, this study marks the first report of response times for odontocete cetaceans at threshold in an audiometric task. Two white whales (Delphinapterus leucas) and four Atlantic bottlenose dolphins (Tursiops truncatus) were given audiometric tests to determine masked hearing thresholds. Animals were tested at 26 frequencies over a range from 200 Hz to 100 kHz using pure tones. The test tone amplitudes covered a range of 20 dB re 1 microPascal including the hearing threshold of the animal at that frequency. Hearing thresholds varied from 87.5 dB to 125.5 dB depending on frequency, masking noise intensity and individual animal. Data were analyzed to determine characteristic relationships between response time and amplitude of test tone for each frequency and animal. The two whales responded significantly more slowly (640 ms, 0.001) than the four dolphins (430 ms). As in terrestrial animals, reaction time became shorter as stimulus strength increased. At threshold, median response time across frequencies within each animal varied about 150 ms.

  12. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and the future damage accumulation. Remaining life is computed as the difference between the instant when the extrapolated damage reaches the failure threshold and the instant when the prediction is made.
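
    A hedged sketch of the two-map decomposition follows; the regressors, the failure threshold and the operating profile are placeholders rather than the patented implementation. The offline feature-to-damage map gives the current damage state, the conditions-to-damage-rate map is integrated forward over an assumed future operating profile, and the RUL is the time at which the extrapolated damage crosses the failure threshold.

    ```python
    import numpy as np

    def estimate_rul(features, future_conditions, feature_to_damage, conditions_to_rate,
                     failure_threshold=1.0, dt=1.0):
        damage = feature_to_damage(features)          # current damage from sensor features
        t = 0.0
        for cond in future_conditions:                # assumed future operating profile
            if damage >= failure_threshold:
                return t
            damage += conditions_to_rate(cond) * dt   # extrapolate damage accumulation
            t += dt
        return None                                   # threshold not reached in the horizon

    # Toy usage with linear stand-in maps.
    rul = estimate_rul(
        features=np.array([0.4]),
        future_conditions=[1.0] * 1000,
        feature_to_damage=lambda f: float(f[0]),
        conditions_to_rate=lambda c: 0.001 * c,
        failure_threshold=1.0,
    )
    print(rul)  # about 600 time steps for these toy maps
    ```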

  13. Apology and forgiveness evolve to resolve failures in cooperative agreements.

    PubMed

    Martinez-Vaquero, Luis A; Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-06-09

    Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoners Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas.

  14. Apology and forgiveness evolve to resolve failures in cooperative agreements

    PubMed Central

    Martinez-Vaquero, Luis A.; Han, The Anh; Pereira, Luís Moniz; Lenaerts, Tom

    2015-01-01

    Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoners Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas. PMID:26057819

  15. High-power AlGaInN lasers for Blu-ray disc system

    NASA Astrophysics Data System (ADS)

    Takeya, Motonubu; Ikeda, Shinroh; Sasaki, Tomomi; Fujimoto, Tsuyoshi; Ohfuji, Yoshio; Mizuno, Takashi; Oikawa, Kenji; Yabuki, Yoshifumi; Uchida, Shiro; Ikeda, Masao

    2003-07-01

    This paper describes an improved laser structure for AlGaInN-based blue-violet lasers (BV-LDs). The design realizes a small beam divergence angle perpendicular to the junction plane and a high characteristic temperature without a significant increase in threshold current density (Jth), by optimizing the position of the Mg-doped layer and introducing an undoped AlGaN layer between the active layer and the Mg-doped electron-blocking layer. The mean time to failure (MTTF) of devices based on this design was found to be closely related to the dislocation density of the ELO-GaN basal layer. Under 50 mW CW operation at 70°C, an MTTF of over 5000 h was realized when the dark spot density (indicative of dislocation density) was less than ~5×10^6 cm^-2. Power consumption under 50 mW CW operation at 70°C was approximately 0.33 W, independent of the dislocation density.

  16. Primary HIV infection: a medical and public health emergency requiring rapid specialist management.

    PubMed

    Fidler, Sarah; Fox, Julie

    2016-04-01

    Primary HIV infection (PHI) refers to the first six months following HIV acquisition and represents a unique opportunity for expedited diagnosis, and consideration of rapid antiretroviral therapy (ART) initiation to improve immune function, reduce the size of the viral reservoir and limit the risk of onward viral transmission. Failure to diagnose and rapidly treat individuals with PHI has significant individual and public health implications. The Strategic Timing of AntiRetroviral Treatment trial recently identified a clinical benefit of immediate ART over deferral of treatment according to CD4 count threshold, and has led to rapid changes in World Health Organization and specialist national guidelines. For all individuals living with HIV, the offer of immediate therapy irrespective of CD4 count is now recommended. This paper summarises the presentation and management of PHI, incorporating current research and guideline changes and discusses the role of PHI in onward transmission. © 2016 Royal College of Physicians.

  17. Model for Predicting Passage of Invasive Fish Species Through Culverts

    NASA Astrophysics Data System (ADS)

    Neary, V.

    2010-12-01

    Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
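
    For the uniform-flow case, the exhaustion-threshold logic can be illustrated with a toy calculation; the fatigue-curve coefficients below are invented for illustration and are not the species curves used in the paper. With a fatigue curve T(U_s) giving time to exhaustion at swim speed U_s, the attainable ascent distance against a mean current U_w is (U_s - U_w)·T(U_s), and passage fails when no swim speed covers the required distance.

    ```python
    import numpy as np

    A, B = 2.5, 0.5   # assumed log-linear fatigue curve: T = 10**(A - B * U_s) seconds

    def time_to_exhaustion(swim_speed):
        return 10.0 ** (A - B * swim_speed)

    def max_ascent_distance(current_velocity, swim_speeds=np.linspace(0.1, 5.0, 500)):
        """Best attainable ascent distance (m) over all sustained swim speeds (m/s)."""
        ground_speed = swim_speeds - current_velocity
        distance = np.where(ground_speed > 0,
                            ground_speed * time_to_exhaustion(swim_speeds), 0.0)
        return float(distance.max())

    def passes(current_velocity, culvert_length):
        return max_ascent_distance(current_velocity) >= culvert_length

    print(passes(0.5, 30.0), passes(2.0, 30.0))  # True False for these toy coefficients
    ```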

  18. Detection of sub-threshold periodic signal by multiplicative and additive cross-correlated sine-Wiener noises in the FitzHugh-Nagumo neuron

    NASA Astrophysics Data System (ADS)

    Yao, Yuangen; Ma, Chengzhang; Wang, Canjun; Yi, Ming; Gui, Rong

    2018-02-01

    We study the effects of multiplicative and additive cross-correlated sine-Wiener (CCSW) noises on the performance of sub-threshold periodic signal detection in the FitzHugh-Nagumo (FHN) neuron by calculating the Fourier coefficient Q, which measures the synchronization between the sub-threshold input signal and the response of the system. CCSW noise-induced transitions of electrical activity in the FHN neuron model can be observed. Moreover, the best detection performance for the sub-threshold periodic signal is achieved at moderate noise strength, cross-correlation time and cross-correlation strength of the CCSW noises, which indicates the occurrence of CCSW noise-induced stochastic resonance. Furthermore, the detection performance is strongly sensitive to the cross-correlation time of the CCSW noises, so it can be effectively controlled by regulating that cross-correlation time. These results provide a possible mechanism for amplifying or detecting sub-threshold signals in the nervous system.
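
    The response measure named here, the Fourier coefficient Q at the driving frequency, is commonly computed from the sine and cosine projections of the output over an integer number of forcing periods. The sketch below uses that standard definition with a toy signal; the discretization details are assumptions rather than the authors' code.

    ```python
    import numpy as np

    def fourier_q(response, t, omega):
        """Q = sqrt(Q_sin^2 + Q_cos^2), assuming an integer number of periods."""
        q_sin = 2.0 * np.mean(response * np.sin(omega * t))
        q_cos = 2.0 * np.mean(response * np.cos(omega * t))
        return float(np.hypot(q_sin, q_cos))

    omega = 2.0 * np.pi * 0.1
    t = np.linspace(0.0, 100.0, 10_000)   # ten driving periods
    toy_response = 0.4 * np.sin(omega * t) + np.random.default_rng(3).normal(0.0, 0.2, t.size)
    print(round(fourier_q(toy_response, t, omega), 2))  # close to 0.4 for this toy signal
    ```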

  19. Analysis of Hepatitis C Virus Genotype 1b Resistance Variants in Japanese Patients Treated with Paritaprevir-Ritonavir and Ombitasvir.

    PubMed

    Krishnan, Preethi; Schnell, Gretja; Tripathi, Rakesh; Beyer, Jill; Reisch, Thomas; Zhang, Xinyan; Setze, Carolyn; Rodrigues, Lino; Burroughs, Margaret; Redman, Rebecca; Chayama, Kazuaki; Kumada, Hiromitsu; Collins, Christine; Pilot-Matias, Tami

    2016-02-01

    Treatment of HCV genotype 1b (GT1b)-infected Japanese patients with paritaprevir (NS3/4A inhibitor boosted with ritonavir) and ombitasvir (NS5A inhibitor) in studies M12-536 and GIFT-I demonstrated high sustained virologic response (SVR) rates. The virologic failure rate was 3% (13/436) across the two studies. Analyses were conducted to evaluate the impact of baseline resistance-associated variants (RAVs) on treatment outcome and the emergence and persistence of RAVs in patients experiencing virologic failure. Baseline paritaprevir resistance-conferring variants in NS3 were infrequent, while Y93H in NS5A was the most prevalent ombitasvir resistance-conferring variant at baseline. A comparison of baseline prevalence of polymorphisms in Japanese and western patients showed that Q80L and S122G in NS3 and L28M, R30Q, and Y93H in NS5A were significantly more prevalent in Japanese patients. In the GIFT-I study, the prevalence of Y93H in NS5A varied between 13% and 21% depending on the deep-sequencing detection threshold. Among patients with Y93H comprising <1%, 1 to 40%, or >40% of their preexisting viral population, the 24-week SVR (SVR24) rates were >99% (276/277), 93% (38/41), and 76% (25/33), respectively, indicating that the prevalence of Y93H within a patient's viral population is a good predictor of treatment response. The predominant RAVs at the time of virologic failure were D168A/V in NS3 and Y93H alone or in combination with other variants in NS5A. While levels of NS3 RAVs declined over time, NS5A RAVs persisted through posttreatment week 48. Results from these analyses are informative in understanding the resistance profile of an ombitasvir- plus paritaprevir/ritonavir-based regimen in Japanese GT1b-infected patients. Copyright © 2016 Krishnan et al.

  20. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies, which typically specify threshold models with an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device for modeling and forecasting the raw seismic data of the Hindu Kush region.
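
    As a pointer to what a threshold specification looks like, here is a minimal two-regime SETAR(2; 1, 1) sketch in which the autoregressive coefficient switches when the (internal) threshold variable crosses r. The coefficients, threshold and simulated series are illustrative and are not fitted to the Hindu Kush catalogue.

    ```python
    import numpy as np

    PHI_LOW, PHI_HIGH, R = 0.8, 0.3, 0.0   # regime coefficients and threshold (assumed)

    def setar_forecast(x_prev):
        """One-step-ahead point forecast from the two-regime SETAR model."""
        phi = PHI_LOW if x_prev <= R else PHI_HIGH
        return phi * x_prev

    rng = np.random.default_rng(2)
    x = [0.1]
    for _ in range(200):                    # simulate the regime-switching process
        x.append(setar_forecast(x[-1]) + rng.normal(scale=0.5))
    print(round(setar_forecast(x[-1]), 3))  # forecast for the next observation
    ```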

  1. The response time threshold for predicting favourable neurological outcomes in patients with bystander-witnessed out-of-hospital cardiac arrest.

    PubMed

    Ono, Yuichi; Hayakawa, Mineji; Iijima, Hiroaki; Maekawa, Kunihiko; Kodate, Akira; Sadamoto, Yoshihiro; Mizugaki, Asumi; Murakami, Hiromoto; Katabami, Kenichi; Sawamura, Atsushi; Gando, Satoshi

    2016-10-01

    It is well established that the period of time between a call being made to emergency medical services (EMS) and the time at which the EMS arrive at the scene (i.e. the response time) affects survival outcomes in patients who experience out-of-hospital cardiac arrest (OHCA). However, the relationship between the response time and favourable neurological outcomes remains unclear. We therefore aimed to determine a response time threshold in patients with bystander-witnessed OHCA that is associated with positive neurological outcomes and to assess the relationship between the response time and neurological outcomes in patients with OHCA. This study was a retrospective, observational analysis of data from 204,277 episodes of bystander-witnessed OHCA between 2006 and 2012 in Japan. We used classification and regression trees (CARTs) and receiver operating characteristic (ROC) curve analyses to determine the threshold of response time associated with favourable neurological outcomes (Cerebral Performance Category 1 or 2) 1 month after cardiac arrest. Both CART and ROC analyses indicated that a threshold of 6.5 min was associated with improved neurological outcomes in all bystander-witnessed OHCA events of cardiac origin. Furthermore, bystander cardiopulmonary resuscitation (CPR) prolonged the response time threshold by 1 min (up to 7.5 min). The adjusted odds ratio for favourable neurological outcomes in patients with OHCA who received care within ≤6.5 min was 1.935 (95% confidence interval: 1.834-2.041, P<0.001). A response time of ≤6.5 min was closely associated with favourable neurological outcomes in all bystander-witnessed patients with OHCA. Bystander CPR prolonged the response time threshold by 1 min. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
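
    The threshold search can be illustrated with a simple ROC-style cut-off selection. The sketch below maximizes Youden's J over candidate response-time cut-offs on synthetic data; it is only a stand-in for the CART and ROC analyses actually used, and the data are invented.

    ```python
    import numpy as np

    def youden_threshold(response_times, favourable):
        """Cut-off (minutes) maximizing sensitivity + specificity - 1, where a
        'positive' prediction means the response time is at or below the cut-off."""
        best_j, best_cut = -1.0, None
        for cut in np.unique(response_times):
            predicted_pos = response_times <= cut
            tpr = np.mean(predicted_pos[favourable == 1])
            fpr = np.mean(predicted_pos[favourable == 0])
            if tpr - fpr > best_j:
                best_j, best_cut = tpr - fpr, cut
        return best_cut, best_j

    rng = np.random.default_rng(1)
    times = rng.uniform(2.0, 15.0, 2000)
    # Synthetic outcomes: a favourable outcome is more likely when EMS arrive quickly.
    favourable = (rng.random(2000) < np.clip(0.9 - 0.1 * times, 0.02, 0.9)).astype(int)
    print(youden_threshold(times, favourable))
    ```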

  2. Reliability assessment of slender concrete columns at the stability failure

    NASA Astrophysics Data System (ADS)

    Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin

    2018-01-01

    The European Standard for designing concrete columns with the use of non-linear methods shows deficiencies in terms of global reliability in cases where the columns fail by loss of stability. Buckling failure is a brittle failure that occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering, SUT in Bratislava. The following article compares the global reliability of slender concrete columns with slenderness of 90 and higher. The columns were designed according to the methods offered by EN 1992-1-1 [1]. The experiments served as the basis for deterministic nonlinear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be used as thresholds for the loading of the produced structural elements, and they aim to present probabilistic design as less conservative than classic partial-safety-factor-based design and the alternative ECOV method.

  3. 50 CFR 452.03 - Threshold review and determinations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Threshold review and determinations. 452... PROCESS CONSIDERATION OF APPLICATION BY THE SECRETARY § 452.03 Threshold review and determinations. (a) Threshold determinations. Within 20 days after receiving an exemption application, or a longer time agreed...

  4. 50 CFR 452.03 - Threshold review and determinations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 11 2014-10-01 2014-10-01 false Threshold review and determinations. 452... PROCESS CONSIDERATION OF APPLICATION BY THE SECRETARY § 452.03 Threshold review and determinations. (a) Threshold determinations. Within 20 days after receiving an exemption application, or a longer time agreed...

  5. The Influence of Cement Morphology Parameters on the Strength of the Cement-Bone Interface in Tibial Tray Fixation.

    PubMed

    Nagel, Katrin; Bishop, Nicholas E; Schlegel, Ulf J; Püschel, Klaus; Morlock, Michael M

    2017-02-01

    The strength of the cement-bone interface in tibial component fixation depends on the morphology of the cement mantle. The purpose of this study was to identify thresholds of cement morphology parameters to maximize fixation strength using a minimum amount of cement. Twenty-three cadaveric tibiae were analyzed that had been implanted with tibial trays in previous studies and for which the pull-out strength of the tray had been measured. Specimens were separated into a group failing at the cement-bone interface (INTERFACE) and one failing in the bulk bone (BULK). Maximum pull-out strength corresponds to the ultimate strength of the bulk bone if the cement-bone interface is sufficiently strong. 3D models of the cement mantle in situ were reconstructed from computed tomography scans. The influences of bone mineral density and 6 cement morphology parameters (reflecting cement penetration, bone-cement interface, cement volume) on pull-out strength of the BULK group were determined using multiple regression analysis. The threshold of each parameter for classification of the specimens into either group was determined using receiver operating characteristic analysis. Cement penetration exceeding a mean of 1.1 mm or with a maximum of 5.6 mm exclusively categorized all BULK bone failure specimens. Failure strength of BULK failure specimens increased with bone mineral density (R² = 0.67, P < .001) but was independent of the cement morphology parameters. To maximize fixation strength, a mean cement penetration depth of at least 1.1 mm should be achieved during tibial tray cementing. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Dynamic stresses, coulomb failure, and remote triggering: corrected

    USGS Publications Warehouse

    Hill, David P.

    2012-01-01

    Dynamic stresses associated with crustal surface waves with 15-30 s periods and peak amplitudes <1 MPa are capable of triggering seismicity at sites remote from the generating mainshock under appropriate conditions. Coulomb failure models based on a frictional strength threshold offer one explanation for instances of rapid-onset triggered seismicity that develop during the surface-wave peak dynamic stressing. Evaluation of the triggering potential of surface-wave dynamic stresses acting on critically stressed faults using a Mohr's circle representation together with the Coulomb failure criteria indicates that Love waves should have a higher triggering potential than Rayleigh waves for most fault orientations and wave incidence angles. That (1) the onset of triggered seismicity often appears to begin during the Rayleigh wave rather than the earlier arriving Love wave, and (2) Love-wave amplitudes typically exceed those for Rayleigh waves suggests that the explanation for rapid-onset dynamic triggering may not reside solely with a simple static-threshold friction model. The results also indicate that normal faults should be more susceptible to dynamic triggering by 20-s Rayleigh-wave stresses than thrust faults in the shallow seismogenic crust (<10 km), while the advantage tips in favor of reverse faults at greater depths. This transition depth scales with wavelength and coincides roughly with the transition from retrograde-to-prograde particle motion. Locally elevated pore pressures may have a role in the observed prevalence of dynamic triggering in extensional regimes and geothermal/volcanic systems. The result is consistent with the apparent elevated susceptibility of extensional or transtensional tectonic regimes to remote triggering by Rayleigh-wave dynamic stresses compared with compressional or transpressional regimes.

  7. Cost-effectiveness of sacubitril/valsartan in chronic heart-failure patients with reduced ejection fraction.

    PubMed

    Ademi, Zanfina; Pfeil, Alena M; Hancock, Elizabeth; Trueman, David; Haroun, Rola Haroun; Deschaseaux, Celine; Schwenkglenks, Matthias

    2017-11-29

    We aimed to assess the cost effectiveness of sacubitril/valsartan compared with angiotensin-converting enzyme inhibitors (ACEIs) for the treatment of individuals with chronic heart failure with reduced ejection fraction (HFrEF) from the perspective of the Swiss health care system. The cost-effectiveness analysis was implemented as a lifelong regression-based cohort model. We compared sacubitril/valsartan with enalapril in chronic heart failure patients with HFrEF and New York Heart Association functional class II-IV symptoms. Regression models based on the randomised clinical phase III PARADIGM-HF trial were used to predict events (all-cause mortality, hospitalisations, adverse events and quality of life) for each treatment strategy modelled over the lifetime horizon, with adjustments for patient characteristics. Unit costs were obtained from Swiss public sources for the year 2014, and costs and effects were discounted by 3%. The main outcome of interest was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life year (QALY) gained. Deterministic sensitivity analysis (DSA) and scenario and probabilistic sensitivity analysis (PSA) were performed. In the base-case analysis, the sacubitril/valsartan strategy showed a decrease in the number of hospitalisations (6.0% absolute reduction per year) and an 8.0% decrease in lifetime hospital costs (discounted) when compared with enalapril. Sacubitril/valsartan was predicted to improve overall and quality-adjusted survival by 0.50 years and 0.42 QALYs, respectively. Additional net-total costs were CHF 10 926. This led to an ICER of CHF 25 684. In PSA, the probability of sacubitril/valsartan being cost-effective at a threshold of CHF 50 000 was 99.0%. The treatment of HFrEF patients with sacubitril/valsartan versus enalapril is cost effective if a willingness-to-pay threshold of CHF 50 000 per QALY gained is assumed.
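
    The headline figures can be reproduced with the basic ICER arithmetic; the small difference from the published CHF 25 684 comes from the rounding of the incremental QALYs in the abstract.

    ```python
    incremental_cost_chf = 10_926      # additional net-total costs from the abstract
    incremental_qalys = 0.42           # quality-adjusted survival gain from the abstract
    wtp_threshold_chf = 50_000         # willingness-to-pay threshold per QALY gained

    icer = incremental_cost_chf / incremental_qalys
    print(f"ICER ≈ CHF {icer:,.0f} per QALY; cost effective: {icer < wtp_threshold_chf}")
    ```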

  8. Development of a Numerical Model of Hypervelocity Impact into a Pressurized Composite Overwrapped Pressure Vessel

    NASA Technical Reports Server (NTRS)

    Garcia, M. A.; Davis, B. A.; Miller, J. E.

    2017-01-01

    As the outlook for space exploration becomes more ambitious and spacecraft travel deeper into space than ever before, it is increasingly important that propulsion systems perform reliably within the space environment. The increased reliability compels designers to increase design margin at the expense of system mass, which contrasts with the need to limit vehicle mass to maximize payload. Such are the factors that motivate the integration of high-specific-strength composite materials in the construction of pressure vessels commonly referred to as composite overwrapped pressure vessels (COPV). The COPV consists of a metallic liner for the inner shell of the COPV that is stiff, negates fluid permeation and serves as the anchor for composite laminates or filaments, but the liner itself cannot contain the stresses from the pressurant it holds. The composite-fiber reinforced polymer (CFRP) is wound around the liner using a combination of hoop (circumferential) and helical orientations. Careful consideration of wrap orientation allows the composite to evenly bear structural loading and creates the COPV's characteristic high strength-to-weight ratio. As the CFRP overwrap carries most of the stresses induced by pressurization, damage to the overwrap can affect mission duration and mission success and potentially cause loss of vehicle or loss of crew. For this reason, it is critical to establish a fundamental understanding of the mechanisms involved in the failure of a stressed composite such as that of the COPV. One of the greatest external threats to the integrity of a spacecraft's COPV is an impact from the meteoroid and orbital debris (MMOD) environments. These impacts, even from submillimeter particles, generate extremely high stress states in the CFRP that can damage numerous fibers. As a result of this possibility, initial assumptions in survivability analysis for some human-rated NASA spacecraft have treated any impact-induced alteration of the vessel as a catastrophic failure. This assumption is conservative and is made because of the lack of knowledge of the level of damage to the composite overwrap that can be sustained while still allowing successful completion of the mission. Quantifying the allowable damage level to the composite overwrap involves assessing the stress redistribution following damage as well as evaluating possible time-dependent mechanisms involved in the COPV response to an impact event. The limited published work on this subject has shown that COPVs can withstand at least some level of damage from high-energy impacts. These observations have been confirmed and expanded upon in recent experimental research performed by NASA. This research has demonstrated not only that a COPV has some robustness to compensate for CFRP damage, but has also identified two significant failure modes for pressurized COPVs. The lowest-threshold failure mode is perforation of the vessel, and the highest-threshold failure mode is catastrophic rupture. While both of these failure modes mean the loss of the COPV, system robustness affords some tolerance to venting as opposed to the more catastrophic rupture. As a consequence, it is necessary to understand the conditions that govern the transition between these failure modes. The aforementioned experimental research has been performed in both the unpressurized and pressurized conditions to identify the damage level that triggered the failure threshold.
This COPV test program was sponsored by the NASA Engineering and Safety Center (NESC), and tests were performed at the NASA White Sands Test Facility (WSTF). Planning and coordination were provided by the NASA JSC Hypervelocity Impact Technology (HVIT) group, and the COPVs were provided by the ISS Program. Unpressurized testing has been conducted at the pressure of the vacuum test chamber, while the pressurized testing has been conducted at 290 +/- 10 bar (4,200 +/- 100 psi) using nitrogen as the pressurizing gas, which corresponds to the design pressure for the target COPV. In this research, spherical aluminum projectiles of varying diameter have been chosen as the impactors. For the unpressurized COPV, the dependence of the damage, from penetration up to the hole size in the liner, on the impact conditions has been obtained. For the pressurized tests, the dependence of the damage, from penetration up to rupture, on the impact conditions has been obtained. Two representative post-test photographs of failed COPVs from a normal impact into the COPV surface are shown in Fig. 1. These images display the dramatic difference between the failure modes, venting (Fig. 1a) and rupture (Fig. 1b). For venting, liner perforation, severed composite fibers/tows and ply delamination are commonly observed damage characteristics of this COPV failure mode. In the case of rupture, the COPV typically experienced a separation of its domed regions and severe break-up of the cylindrical region. Fully understanding the transition from venting to rupture experimentally is costly and potentially unachievable for conditions that cannot be generated in the laboratory. These shortcomings have motivated three-dimensional numerical simulations to expand the existing experimental database. These simulations have been carried out with the nonlinear structural-dynamics analysis tool CTH. A typical pressure contour plot from an impact simulation of an entire COPV is shown in Fig. 2. To generate the COPV stress state without initiating a shock wave, the pressure in the simulated COPV is ramped up to the final pressure with nitrogen gas over one millisecond prior to impact of the projectile. Figure 2a shows the system in this initial condition. After one millisecond, a projectile is initiated into the simulation and impacts the COPV. Figure 2b shows the system after this impact. In the figure, the onset of venting is represented as the change in pressure (µbar), red to green, at the perforation site. Also seen in the figure is the eroded projectile that has passed into the COPV, with the generated shock wave in the pressurant propagating just ahead of the material. In this paper, pertinent experimental details and the development of the material constitutive models necessary for this work, along with the efforts to validate their use, are discussed. The simulation results are presented and compared with the NASA experimental observations. While work from this effort is ongoing, early observations pertinent to the failure threshold are presented.

  9. Loop Gain Predicts the Response to Upper Airway Surgery in Patients With Obstructive Sleep Apnea.

    PubMed

    Joosten, Simon A; Leong, Paul; Landry, Shane A; Sands, Scott A; Terrill, Philip I; Mann, Dwayne; Turton, Anthony; Rangaswamy, Jhanavi; Andara, Christopher; Burgess, Glen; Mansfield, Darren; Hamilton, Garun S; Edwards, Bradley A

    2017-07-01

    Upper airway surgery is often recommended to treat patients with obstructive sleep apnea (OSA) who cannot tolerate continuous positive airways pressure. However, the response to surgery is variable, potentially because it does not improve the nonanatomical factors (ie, loop gain [LG] and arousal threshold) causing OSA. Measuring these traits clinically might predict responses to surgery. Our primary objective was to test the value of LG and arousal threshold to predict surgical success defined as 50% reduction in apnea-hypopnea index (AHI) and AHI <10 events/hour post surgery. We retrospectively analyzed data from patients who underwent upper airway surgery for OSA (n = 46). Clinical estimates of LG and arousal threshold were calculated from routine polysomnographic recordings presurgery and postsurgery (median of 124 [91-170] days follow-up). Surgery reduced both the AHI (39.1 ± 4.2 vs. 26.5 ± 3.6 events/hour; p < .005) and estimated arousal threshold (-14.8 [-22.9 to -10.2] vs. -9.4 [-14.5 to -6.0] cmH2O) but did not alter LG (0.45 ± 0.08 vs. 0.45 ± 0.12; p = .278). Responders to surgery had a lower baseline LG (0.38 ± 0.02 vs. 0.48 ± 0.01, p < .05) and were younger (31.0 [27.3-42.5] vs. 43.0 [33.0-55.3] years, p < .05) than nonresponders. Lower LG remained a significant predictor of surgical success after controlling for covariates (logistic regression p = .018; receiver operating characteristic area under curve = 0.80). Our study provides proof-of-principle that upper airway surgery most effectively resolves OSA in patients with lower LG. Predicting the failure of surgical treatment, consequent to less stable ventilatory control (elevated LG), can be achieved in the clinic and may facilitate avoidance of surgical failures. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  10. Time Poverty Thresholds and Rates for the US Population

    ERIC Educational Resources Information Center

    Kalenkoski, Charlene M.; Hamrick, Karen S.; Andrews, Margaret

    2011-01-01

    Time constraints, like money constraints, affect Americans' well-being. This paper defines what it means to be time poor based on the concepts of necessary and committed time and presents time poverty thresholds and rates for the US population and certain subgroups. Multivariate regression techniques are used to identify the key variables…

  11. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
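
    Read literally, the claimed process lends itself to a toy calculation; the functional forms below are assumptions for illustration, not the patent's models. The unmitigated risk at a given time to the failure limit is the probability of the failure mode reaching that limit, the mitigated risk discounts it by the probability that mitigation succeeds, and the risk reduction is the difference.

    ```python
    import numpy as np

    def risk_reduction(p_failure, p_mitigation):
        """Difference between unmitigated and mitigated risk, element-wise over time."""
        return p_failure - p_failure * (1.0 - p_mitigation)

    t = np.linspace(0.0, 10.0, 6)              # time to failure limit (arbitrary units)
    p_fail = 1.0 - np.exp(-0.3 * t)            # assumed probability of reaching the limit
    p_mit = 1.0 / (1.0 + np.exp(-(t - 4.0)))   # assumed probability of mitigation in time
    print(np.round(risk_reduction(p_fail, p_mit), 3))
    ```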

  12. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. That method did not take link quality and delay requirements into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing the retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ · max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path, and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
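
    A simplified sketch of the dynamic-programming idea follows; it is not the paper's algorithm, and the link-quality model, slot accounting and bounds are assumptions. Each hop i is assigned a retransmission threshold u_i, a hop succeeds within u_i attempts with probability 1-(1-p_i)^u_i, and the recursion allocates the end-to-end delay budget across hops to maximize the product of the per-hop in-time success probabilities.

    ```python
    from functools import lru_cache

    LINK_QUALITY = [0.6, 0.8, 0.5]   # per-hop single-attempt success probabilities (assumed)
    UPPER = [5, 5, 5]                # upper bounds u_i on the retransmission thresholds (assumed)

    def hop_success(p, u):
        """Probability that a hop succeeds within at most u transmission attempts."""
        return 1.0 - (1.0 - p) ** u

    def best_thresholds(deadline):
        @lru_cache(maxsize=None)
        def dp(hop, budget):
            if hop == len(LINK_QUALITY):
                return 1.0, ()
            best = (0.0, ())
            for u in range(1, min(UPPER[hop], budget) + 1):   # reserve u slots for this hop
                tail_prob, tail_u = dp(hop + 1, budget - u)
                cand = hop_success(LINK_QUALITY[hop], u) * tail_prob
                if cand > best[0]:
                    best = (cand, (u,) + tail_u)
            return best
        return dp(0, deadline)

    print(best_thresholds(deadline=9))   # (in-time success probability, (u_1, u_2, u_3))
    ```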

  13. Real-Time Risk Prediction on the Wards: A Feasibility Study.

    PubMed

    Kang, Michael A; Churpek, Matthew M; Zadravecz, Frank J; Adhikari, Richa; Twu, Nicole M; Edelson, Dana P

    2016-08-01

    Failure to detect clinical deterioration in the hospital is common and associated with poor patient outcomes and increased healthcare costs. Our objective was to evaluate the feasibility and accuracy of real-time risk stratification using the electronic Cardiac Arrest Risk Triage score, an electronic health record-based early warning score. We conducted a prospective black-box validation study. Data were transmitted via HL7 feed in real time to an integration engine and database server wherein the scores were calculated and stored without visualization for clinical providers. The high-risk threshold was set a priori. Timing and sensitivity of electronic Cardiac Arrest Risk Triage score activation were compared with standard-of-care Rapid Response Team activation for patients who experienced a ward cardiac arrest or ICU transfer. Three general care wards at an academic medical center. A total of 3,889 adult inpatients. The system generated 5,925 segments during 5,751 admissions. The area under the receiver operating characteristic curve for electronic Cardiac Arrest Risk Triage score was 0.88 for cardiac arrest and 0.80 for ICU transfer, consistent with previously published derivation results. During the study period, eight of 10 patients with a cardiac arrest had high-risk electronic Cardiac Arrest Risk Triage scores, whereas the Rapid Response Team was activated on two of these patients (p < 0.05). Furthermore, electronic Cardiac Arrest Risk Triage score identified 52% (n = 201) of the ICU transfers compared with 34% (n = 129) by the current system (p < 0.001). Patients met the high-risk electronic Cardiac Arrest Risk Triage score threshold a median of 30 hours prior to cardiac arrest or ICU transfer versus 1.7 hours for standard Rapid Response Team activation. Electronic Cardiac Arrest Risk Triage score identified significantly more cardiac arrests and ICU transfers than standard Rapid Response Team activation and did so many hours in advance.

  14. 76 FR 44280 - Defense Federal Acquisition Regulation Supplement; Simplified Acquisition Threshold for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-25

    ...-AH29 Defense Federal Acquisition Regulation Supplement; Simplified Acquisition Threshold for.... 2302(7) to invoke a simplified acquisition threshold that is two times the amount specified at 41 U.S.C... acquisition threshold is $150,000 as specified in Federal Acquisition Regulation 2.101. DATES: Effective Date...

  15. The Cool Club: Creating engaging, experimental and creative encounters between young minds and polar researchers at SPRI

    NASA Astrophysics Data System (ADS)

    Weeks, S. M.; Pope, A.

    2011-12-01

    Whilst the scientific case for current climate change is compelling, the consequences of climate change have largely failed to permeate through to individuals. This lack of public awareness of the science and the potential impacts could be considered a key obstacle to action. The possible reasons for such limited success centre on the issue that climate change is a complex subject, and that a wide-ranging academic, political and social research literature on the science and wider implications of climate change has failed to communicate the key issues in an accessible way. These failures to adequately communicate both the science and the social science of climate change at a number of levels result in 'communication gaps' that act as fundamental barriers to both understanding and engagement with the issue. Meyer and Land (2003) suggest that learners can find certain ideas and concepts within a discipline difficult to understand, and these act as a barrier to deeper understanding of a subject. To move beyond these threshold concepts, they suggest that the expert needs to support the learner through a range of learning experiences that allows the development of learning strategies particular to the individual. Meyer and Land's research into these threshold concepts has been situated within Economics, but has been suggested to be more widely applicable, though there has been no attempt to either define or evaluate threshold concepts in climate change science. By identifying whether common threshold concepts exist specifically in climate science for cohorts of either formal or informal learners, scientists will be better able to support the public in understanding these concepts by changing how the knowledge is communicated to help overcome these barriers to learning. This paper reports on the findings of a study that examined the role of threshold concepts as barriers to understanding climate science in a UK University and considers its implications for wider scientific engagement with the public to develop climate literacy. Analysis of journals from 3 successive cohorts of students who followed the same degree module identified that threshold concepts do exist within the field, such as those related to the role of ocean circulation, the use of proxy indicators, forcing factors, and feedback mechanisms. Once identified, the study looked at possible strategies to overcome these barriers to support student climate literacy. It concluded that the use of threshold concepts could be problematic when trying to improve climate literacy, as each individual has their own concepts they find 'troublesome' that do not necessarily relate to others. For scientists, this presents the difficulty of developing a strategy that supports the individual while remaining cost- and time-effective. However, the study identifies that eLearning can be used effectively to help people understand troublesome knowledge.

  16. Cost-effectiveness of cetuximab and panitumumab for chemotherapy-refractory metastatic colorectal cancer.

    PubMed

    Carvalho, Adriana Camargo; Leal, Frederico; Sasse, Andre Deeke

    2017-01-01

    Cetuximab and panitumumab are monoclonal antibodies targeting the epidermal growth factor receptor. Both drugs are active against RAS wild-type metastatic colorectal cancer (mCRC) after chemotherapy failure, with similar efficacy and toxicity profiles. However, their cost and limited survival benefits may compromise incorporation into the Brazilian public healthcare system, the Unified Health System (Sistema Único de Saúde, SUS). A cost-effectiveness analysis was conducted using a Markov model from the Brazilian public health perspective and a lifetime horizon in patients with RAS-wt mCRC. Transition probabilities and mortality rates were extracted from randomized studies. Treatment costs were obtained from price tables regulated by the Brazilian Health Ministry. The World Health Organization recommendation of three times GDP per capita was used to define the cost-effectiveness threshold. The use of cetuximab or panitumumab for chemotherapy-refractory mCRC patients resulted in 0.22 additional life-years relative to best supportive care (BSC), with incremental cost-effectiveness ratios (ICERs) of $58,240 and $52,772 per life-year, respectively. Both exceed the pre-specified threshold for cost-effectiveness. Acquisition of the biological agents was the major driver of increased costs. Our economic evaluation demonstrates that neither cetuximab nor panitumumab is a cost-effective approach in RAS-wt mCRC patients. Discussion about drug prices should be prioritized to enable incorporation of these monoclonal antibodies into the SUS.
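
    As a quick orientation to the arithmetic behind the abstract, the sketch below reproduces the cetuximab figure of roughly $58,240 per life-year from made-up cost and survival inputs and compares it with a WHO-style threshold of three times GDP per capita; the GDP value and the individual cost/effect numbers are placeholders, and only the 0.22 life-year gain and the resulting ratio come from the abstract.

    ```python
    # Sketch of the incremental cost-effectiveness ratio (ICER) logic; inputs are placeholders
    # chosen so the ratio lands near the abstract's cetuximab value of ~$58,240 per life-year.
    def icer(cost_new, cost_ref, effect_new, effect_ref):
        return (cost_new - cost_ref) / (effect_new - effect_ref)

    ratio = icer(cost_new=15_813, cost_ref=3_000, effect_new=1.02, effect_ref=0.80)  # 0.22 LY gain
    threshold = 3 * 8_700   # WHO rule of thumb: three times GDP per capita (GDP value assumed)

    print(round(ratio), "USD per life-year;",
          "cost-effective" if ratio <= threshold else "not cost-effective")
    ```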

  17. Threshold responses of Amazonian stream fishes to timing and extent of deforestation.

    PubMed

    Brejão, Gabriel L; Hoeinghaus, David J; Pérez-Mayorga, María Angélica; Ferraz, Silvio F B; Casatti, Lilian

    2017-12-06

    Deforestation is a primary driver of biodiversity change through habitat loss and fragmentation. Stream biodiversity may not respond to deforestation in a simple linear relationship. Rather, threshold responses to extent and timing of deforestation may occur. Identification of critical deforestation thresholds is needed for effective conservation and management. We tested for threshold responses of fish species and functional groups to degree of watershed and riparian zone deforestation and time since impact in 75 streams in the western Brazilian Amazon. We used remote sensing to assess deforestation from 1984 to 2011. Fish assemblages were sampled with seines and dip nets in a standardized manner. Fish species (n = 84) were classified into 20 functional groups based on ecomorphological traits associated with habitat use, feeding, and locomotion. Threshold responses were quantified using threshold indicator taxa analysis. Negative threshold responses to deforestation were common and consistently occurred at very low levels of deforestation (<20%) and soon after impact (<10 years). Sensitive species were functionally unique and associated with complex habitats and structures of allochthonous origin found in forested watersheds. Positive threshold responses of species were less common and generally occurred at >70% deforestation and >10 years after impact. Findings were similar at the community level for both taxonomic and functional analyses. Because most negative threshold responses occurred at low levels of deforestation and soon after impact, even minimal change is expected to negatively affect biodiversity. Delayed positive threshold responses to extreme deforestation by a few species do not offset the loss of sensitive taxa and likely contribute to biotic homogenization. © 2017 Society for Conservation Biology.

  18. Determining decision thresholds and evaluating indicators when conservation status is measured as a continuum.

    PubMed

    Connors, B M; Cooper, A B

    2014-12-01

    Categorization of the status of populations, species, and ecosystems underpins most conservation activities. Status is often based on how a system's current indicator value (e.g., change in abundance) relates to some threshold of conservation concern. Receiver operating characteristic (ROC) curves can be used to quantify the statistical reliability of indicators of conservation status and evaluate trade-offs between correct (true positive) and incorrect (false positive) classifications across a range of decision thresholds. However, ROC curves assume a discrete, binary relationship between an indicator and the conservation status it is meant to track, which is a simplification of the more realistic continuum of conservation status, and may limit the applicability of ROC curves in conservation science. We describe a modified ROC curve that treats conservation status as a continuum rather than a discrete state. We explored the influence of this continuum and typical sources of variation in abundance that can lead to classification errors (i.e., random variation and measurement error) on the true and false positive rates corresponding to varying decision thresholds and the reliability of change in abundance as an indicator of conservation status, respectively. We applied our modified ROC approach to an indicator of endangerment in Pacific salmon (Oncorhynchus nerka) (i.e., percent decline in geometric mean abundance) and an indicator of marine ecosystem structure and function (i.e., detritivore biomass). Failure to treat conservation status as a continuum when choosing thresholds for indicators resulted in the misidentification of trade-offs between true and false positive rates and the overestimation of an indicator's reliability. We argue for treating conservation status as a continuum when ROC curves are used to evaluate decision thresholds in indicators for the assessment of conservation status. © 2014 Society for Conservation Biology.
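
    For orientation, the sketch below shows the conventional binary computation of true and false positive rates across decision thresholds when a continuous true status is observed with measurement error; the simulation, the 30% "concern" cut-off, and the noise level are illustrative assumptions, and the hard binarization step is exactly the simplification the paper's modified ROC curve is designed to avoid.

    ```python
    import numpy as np

    # Illustrative simulation (not the paper's data): the true conservation status is a
    # continuous percent decline; the indicator observes it with random error, and a hard
    # 30% cut-off binarizes "of concern" -- the simplification the paper argues against.
    rng = np.random.default_rng(0)
    true_decline = rng.uniform(0, 60, size=5000)             # true status on a continuum, %
    observed = true_decline + rng.normal(0, 10, size=5000)   # indicator with measurement error
    concern = true_decline > 30                               # binarized truth (simplification)

    for decision_threshold in (20, 30, 40):                   # candidate decision thresholds, %
        flagged = observed > decision_threshold
        tpr = np.mean(flagged[concern])                        # true positive rate
        fpr = np.mean(flagged[~concern])                       # false positive rate
        print(decision_threshold, round(float(tpr), 2), round(float(fpr), 2))
    ```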

  19. Lightning Pin Injection Testing on MOSFETS

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Nguyen, Truong X.; Szatkowski, George N.; Koppen, Sandra V.; Mielnik, John J.; Vaughan, Roger K.; Wysocki, Philip F.; Celaya, Jose R.; Saha, Sankalita

    2009-01-01

    Lightning transients were pin-injected into metal-oxide-semiconductor field-effect transistors (MOSFETs) to induce fault modes. This report documents the test process and results, and provides a basis for subsequent lightning tests. MOSFETs may be present in DC-DC power supplies and electromechanical actuator circuits that may be used on board aircraft. Results show that unprotected MOSFET Gates are susceptible to failure, even when installed in systems in well-shielded and partial-shielded locations. MOSFET Drains and Sources are significantly less susceptible. Device impedance decreased (current increased) after every failure. Such a failure mode may lead to cascading failures, as the damaged MOSFET may allow excessive current to flow through other circuitry. Preliminary assessments on a MOSFET subjected to 20-stroke pin-injection testing demonstrate that Breakdown Voltage, Leakage Current and Threshold Voltage characteristics show damage, while the device continues to meet manufacturer performance specifications. The purpose of this research is to develop validated tools, technologies, and techniques for automated detection, diagnosis and prognosis that enable mitigation of adverse events during flight, such as from lightning transients; and to understand the interplay between lightning-induced surges and aging (i.e., humidity, vibration, thermal stress, etc.) on component degradation.

  20. Investigation of damage mechanisms in a cross-ply metal-matrix composite under thermomechanical loading. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schubbe, J.J.

    1990-12-01

    Metal matrix composites (MMCs) are rapidly becoming strong candidates for high temperature and high stiffness structural applications such as the Advanced Tactical Fighter (ATF). This study systematically investigated the failure modes and associated damage in a cross-ply, (0/90)2s SCS6/Ti-15-3 metal matrix composite under in-phase and out-of-phase thermomechanical fatigue. Initiation and progression of fatigue damage were recorded and correlated to changes in Young's Modulus of the composite material. Experimental results show an internal stabilization of reaction zone size but degradation and separation from constituent materials under extended cyclic thermal loading. Critical to damage were transverse cracks initiating in the 90 degree plies, growing and coalescing from fiber/matrix interfaces internal to the specimen, and progressing outward through the 0 degree plies before failure. Maximum mechanical strain at failure was determined to be approximately 0.0075 mm/mm. A correlation was made relating maximum matrix stress to failure life, resulting in a fatigue threshold limit of 280 MPa. An attempt was made to correlate the degradation in Young's Modulus (Damage = 1 - E/Eo) with the applied life cycles from different TMF tests.

  1. Skeletal muscle mass and exercise performance in stable ambulatory patients with heart failure.

    PubMed

    Lang, C C; Chomsky, D B; Rayos, G; Yeoh, T K; Wilson, J R

    1997-01-01

    The purpose of this study was to determine whether skeletal muscle atrophy limits the maximal exercise capacity of stable ambulatory patients with heart failure. Body composition and maximal exercise capacity were measured in 100 stable ambulatory patients with heart failure. Body composition was assessed by using dual-energy X-ray absorptiometry. Peak exercise oxygen consumption (VO2peak) and the anaerobic threshold were measured by using a Naughton treadmill protocol and a Medical Graphics CardioO2 System. VO2peak averaged 13.4 +/- 3.3 ml.min-1.kg-1 or 43 +/- 12% of normal. Lean body mass averaged 52.9 +/- 10.5 kg and leg lean mass 16.5 +/- 3.6 kg. Leg lean mass correlated linearly with VO2peak (r = 0.68, P < 0.01), suggesting that exercise performance is influenced by skeletal muscle mass. However, lean body mass was comparable to levels noted in 1,584 normal control subjects, suggesting no decrease in muscle mass. Leg muscle mass was comparable to levels noted in 34 normal control subjects, further supporting this conclusion. These findings suggest that exercise intolerance in stable ambulatory patients with heart failure is not due to skeletal muscle atrophy.

  2. Prognostics Applied to Electric Propulsion UAV

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saha, Bhaskar

    2013-01-01

    Health management plays an important role in the operation of UAVs. If critical components malfunction, safe operation of the UAV may be compromised. A technology with particular promise in this arena is equipment prognostics. This technology provides a state assessment of the health of components of interest and, if a degraded state has been found, it estimates how long it will take before the equipment reaches a failure threshold, conditional on assumptions about future operating conditions and future environmental conditions. This chapter explores the technical underpinnings of how to perform prognostics and shows an implementation on the propulsion system of an electric UAV. A particle filter is shown as the method of choice in performing state assessment and predicting future degradation. The method is then applied to the batteries that provide power to the propeller motors. An accurate run-time battery life prediction algorithm is of critical importance to ensure the safe operation of the vehicle if one wants to maximize in-air time. Current reliability-based techniques are insufficient to manage the use of such batteries where loads vary frequently in uncertain environments.
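
    A compressed, illustrative version of the prognostic idea is sketched below: a bootstrap particle filter tracks a decaying health state from noisy measurements, and the remaining useful life (RUL) is read off as the time until the particles cross a failure threshold. The exponential capacity model, the noise levels, and the threshold are stand-in assumptions, not the chapter's battery model.

    ```python
    import numpy as np

    # Minimal bootstrap particle filter sketch for prognostics (illustrative model only):
    # state x_k is remaining capacity, assumed to decay exponentially at an unknown rate;
    # RUL is the number of steps until x crosses a failure threshold.
    rng = np.random.default_rng(1)
    n_particles, decay_true, threshold = 2000, 0.02, 0.5
    x = np.full(n_particles, 1.0)                       # capacity particles
    rate = rng.uniform(0.005, 0.05, n_particles)        # unknown decay-rate particles

    capacity, obs_noise = 1.0, 0.01
    for k in range(20):                                 # assimilate 20 noisy measurements
        capacity *= np.exp(-decay_true)                 # "true" system (simulated)
        z = capacity + rng.normal(0, obs_noise)
        x = x * np.exp(-rate)                            # propagate particles
        w = np.exp(-0.5 * ((z - x) / obs_noise) ** 2)    # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x, rate = x[idx], rate[idx]

    # Predict remaining useful life: steps until each particle crosses the threshold.
    rul = np.log(x / threshold) / rate
    print("median RUL:", round(float(np.median(rul)), 1), "steps")
    ```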

  3. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data.

    PubMed

    Golas, Sara Bersche; Shibahara, Takuma; Agboola, Stephen; Otaki, Hiroko; Sato, Jumpei; Nakae, Tatsuya; Hisamitsu, Toru; Kojima, Go; Felsted, Jennifer; Kakarmath, Sujay; Kvedar, Joseph; Jethwani, Kamal

    2018-06-22

    Heart failure is one of the leading causes of hospitalization in the United States. Advances in big data solutions allow for storage, management, and mining of large volumes of structured and semi-structured data, such as complex healthcare data. Applying these advances to complex healthcare data has led to the development of risk prediction models to help identify patients who would benefit most from disease management programs in an effort to reduce readmissions and healthcare cost, but the results of these efforts have been varied. The primary aim of this study was to develop a 30-day readmission risk prediction model for heart failure patients discharged from a hospital admission. We used longitudinal electronic medical record data of heart failure patients admitted within a large healthcare system. Feature vectors included structured demographic, utilization, and clinical data, as well as selected extracts of un-structured data from clinician-authored notes. The risk prediction model was developed using deep unified networks (DUNs), a new mesh-like network structure of deep learning designed to avoid over-fitting. The model was validated with 10-fold cross-validation and results compared to models based on logistic regression, gradient boosting, and maxout networks. Overall model performance was assessed using concordance statistic. We also selected a discrimination threshold based on maximum projected cost saving to the Partners Healthcare system. Data from 11,510 patients with 27,334 admissions and 6369 30-day readmissions were used to train the model. After data processing, the final model included 3512 variables. The DUNs model had the best performance after 10-fold cross-validation. AUCs for prediction models were 0.664 ± 0.015, 0.650 ± 0.011, 0.695 ± 0.016 and 0.705 ± 0.015 for logistic regression, gradient boosting, maxout networks, and DUNs respectively. The DUNs model had an accuracy of 76.4% at the classification threshold that corresponded with maximum cost saving to the hospital. Deep learning techniques performed better than other traditional techniques in developing this EMR-based prediction model for 30-day readmissions in heart failure patients. Such models can be used to identify heart failure patients with impending hospitalization, enabling care teams to target interventions at their most high-risk patients and improving overall clinical outcomes.
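
    The abstract's choice of a discrimination threshold by maximum projected cost saving can be illustrated with a toy calculation like the one below; the synthetic risk scores, the per-readmission saving, and the per-intervention cost are invented placeholders rather than the study's actual cost model, and only the roughly 23% readmission rate echoes the reported data.

    ```python
    import numpy as np

    # Hypothetical sketch: sweep classification thresholds and keep the one with the largest
    # projected saving, where each correctly flagged readmission avoids `saving` and each
    # flagged patient incurs `intervention_cost` (all figures illustrative).
    rng = np.random.default_rng(2)
    y = rng.random(10_000) < 0.23                                   # ~23% readmission rate
    score = np.clip(0.3 * y + rng.normal(0.4, 0.2, y.size), 0, 1)   # synthetic risk scores

    saving, intervention_cost = 12_000, 1_500
    best = max(
        (((score >= t) & y).sum() * saving - (score >= t).sum() * intervention_cost, t)
        for t in np.linspace(0, 1, 101)
    )
    print("best projected saving:", int(best[0]), "at threshold", round(float(best[1]), 2))
    ```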

  4. Precision pharmacology for Alzheimer's disease.

    PubMed

    Hampel, Harald; Vergallo, Andrea; Aguilar, Lisi Flores; Benda, Norbert; Broich, Karl; Cuello, A Claudio; Cummings, Jeffrey; Dubois, Bruno; Federoff, Howard J; Fiandaca, Massimo; Genthon, Remy; Haberkamp, Marion; Karran, Eric; Mapstone, Mark; Perry, George; Schneider, Lon S; Welikovitch, Lindsay A; Woodcock, Janet; Baldacci, Filippo; Lista, Simone

    2018-04-01

    The complex multifactorial nature of polygenic Alzheimer's disease (AD) presents significant challenges for drug development. AD pathophysiology is progressing in a non-linear dynamic fashion across multiple systems levels - from molecules to organ systems - and through adaptation, to compensation, and decompensation to systems failure. Adaptation and compensation maintain homeostasis: a dynamic equilibrium resulting from the dynamic non-linear interaction between genome, epigenome, and environment. An individual vulnerability to stressors exists on the basis of individual triggers, drivers, and thresholds accounting for the initiation and failure of adaptive and compensatory responses. Consequently, the distinct pattern of AD pathophysiology in space and time must be investigated on the basis of the individual biological makeup. This requires the implementation of systems biology and neurophysiology to facilitate Precision Medicine (PM) and Precision Pharmacology (PP). The regulation of several processes at multiple levels of complexity from gene expression to cellular cycle to tissue repair and system-wide network activation has different time delays (temporal scale) according to the affected systems (spatial scale). The initial failure might originate and occur at every level potentially affecting the whole dynamic interrelated systems within an organism. Unraveling the spatial and temporal dynamics of non-linear pathophysiological mechanisms across the continuum of hierarchical self-organized systems levels and from systems homeostasis to systems failure is key to understand AD. Measuring and, possibly, controlling space- and time-scaled adaptive and compensatory responses occurring during AD will represent a crucial step to achieve the capacity to substantially modify the disease course and progression at the best suitable timepoints, thus counteracting disrupting critical pathophysiological inputs. This approach will provide the conceptual basis for effective disease-modifying pathway-based targeted therapies. PP is based on an exploratory and integrative strategy to complex diseases such as brain proteinopathies including AD, aimed at identifying simultaneous aberrant molecular pathways and predicting their temporal impact on the systems levels. The depiction of pathway-based molecular signatures of complex diseases contributes to the accurate and mechanistic stratification of distinct subcohorts of individuals at the earliest compensatory stage when treatment intervention may reverse, stop, or delay the disease. In addition, individualized drug selection may optimize treatment safety by decreasing risk and amplitude of side effects and adverse reactions. From a methodological point of view, comprehensive "omics"-based biomarkers will guide the exploration of spatio-temporal systems-wide morpho-functional shifts along the continuum of AD pathophysiology, from adaptation to irreversible failure. The Alzheimer Precision Medicine Initiative (APMI) and the APMI cohort program (APMI-CP) have commenced to facilitate a paradigm shift towards effective drug discovery and development in AD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Study of Landslide Disaster Prevention System in Malaysia as a Disaster Mitigation Prototype for South East Asia Countries

    NASA Astrophysics Data System (ADS)

    Koay, Swee Peng; Fukuoka, Hiroshi; Tien Tay, Lea; Murakami, Satoshi; Koyama, Tomofumi; Chan, Huah Yong; Sakai, Naoki; Hazarika, Hemanta; Jamaludin, Suhaimi; Lateh, Habibah

    2016-04-01

    Every year, hundreds of landslides occur in Malaysia and other tropical monsoon South East Asia countries. Preventing casualties and economic losses caused by rain-induced slope failures is therefore a top priority for the governments of these countries. In Malaysia, millions of Malaysian Ringgit are allocated to slope monitoring and mitigation in every annual budget. Besides monitoring the slopes, we propose an IT system that provides hazard map information, landslide historical information, slope failure prediction, knowledge on natural hazards, and information on evacuation centres via the internet, so that users can understand the risk of landslides as well as floods. Moreover, users can obtain information on rainfall intensity at the monitoring sites to predict the occurrence of slope failure. Furthermore, we are working with PWD, Malaysia to set the threshold value for the landslide prediction system, which will alert officers if there is a risk of slope failure at the monitoring sites based on calculated rainfall intensity. Although IT plays a significant role in information dissemination, education is also important in disaster prevention: educating school students to be more alert to natural hazards creates a bottom-up approach that reaches parents through conversation among family members, as most parents are busy and may not have time to attend natural hazard workshops. Many ethnic groups live in Malaysia, as in most South East Asian countries, and it is not easy to educate them with a single education method because levels of living and education differ. We started landslide education workshops in primary schools in rural and urban areas of Malaysia. We found that we have to use the students' mother tongue while conducting natural hazard education for better understanding. We administered questionnaires to the students before and after each education workshop. The questionnaire results show that the students are more alert to natural disasters after attending the workshop than before.

  6. Association of daily asthma emergency department visits and hospital admissions with ambient air pollutants among the pediatric Medicaid population in Detroit: time-series and time-stratified case-crossover analyses with threshold effects.

    PubMed

    Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar

    2011-11-01

    Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes, which are of potential interest for exploring concentration-response relationships, have been relatively unexplored. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan, and concentrations of the pollutants fine particulate matter (PM2.5), CO, NO2, and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg m(-3) using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared with the standard linear relationship; for example, in the time-series analysis, an interquartile range increase (9.2 μg m(-3)) in PM2.5 (5-day-moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive models, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive models. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
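
    A stripped-down version of the profile-likelihood idea is sketched below: candidate thresholds are scanned on a grid, a hockey-stick exposure-response model is refit at each candidate, and the threshold with the best fit is kept. Ordinary least squares stands in for the paper's generalized additive and conditional logistic models, and the simulated PM2.5 data and true threshold of 13 μg m(-3) are illustrative.

    ```python
    import numpy as np

    # Illustrative profile-likelihood-style grid search for an exposure-response threshold;
    # least squares is used here as a simplified stand-in for the paper's models.
    rng = np.random.default_rng(3)
    pm25 = rng.uniform(2, 40, 1000)
    true_thresh = 13.0
    y = 5 + 0.3 * np.maximum(pm25 - true_thresh, 0) + rng.normal(0, 1, pm25.size)

    def rss_at(h):
        X = np.column_stack([np.ones_like(pm25), np.maximum(pm25 - h, 0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    grid = np.arange(5, 25, 0.5)
    profile = np.array([rss_at(h) for h in grid])   # lower RSS <=> higher profile likelihood
    print("estimated threshold:", grid[profile.argmin()])
    ```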

  7. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation

    NASA Astrophysics Data System (ADS)

    Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    Objective. For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days with two days in between from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data of the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018), and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two respectively, on the cross-days classification, the performance decreased. Significance. We have demonstrated that threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data- and subject-driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for practical use of pattern-recognition-based myoelectric control.

  8. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation.

    PubMed

    Kamavuako, Ernest Nlandu; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days with two days in between from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data of the other. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018), and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two respectively, on the cross-days classification, the performance decreased. We have demonstrated that threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data- and subject-driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for practical use of pattern-recognition-based myoelectric control.
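
    The two records above describe the same study; a minimal sketch of the two threshold-dependent features is given below, using the commonly cited Hudgins-style definitions of zero crossings and slope sign changes and setting the threshold to R times the root mean square of a resting baseline, as in the study. The synthetic signals and the particular R value are illustrative.

    ```python
    import numpy as np

    # Threshold-dependent EMG time domain features (commonly used definitions).
    def zero_crossings(x, thr):
        # Count sign changes whose amplitude step also exceeds the noise threshold.
        s = x[:-1] * x[1:]
        return int(np.sum((s < 0) & (np.abs(x[:-1] - x[1:]) >= thr)))

    def slope_sign_changes(x, thr):
        # Count local extrema where at least one adjacent slope exceeds the threshold.
        d1, d2 = x[1:-1] - x[:-2], x[1:-1] - x[2:]
        return int(np.sum((d1 * d2 > 0) & ((np.abs(d1) >= thr) | (np.abs(d2) >= thr))))

    rng = np.random.default_rng(4)
    rest = rng.normal(0, 0.01, 2000)               # baseline EMG at rest (synthetic)
    emg = rng.normal(0, 0.05, 2000)                # active-segment EMG (synthetic)

    R = 1.5                                        # one factor from the study's grid 0:0.01:4
    thr = R * np.sqrt(np.mean(rest ** 2))          # threshold = R x RMS of rest data
    print(zero_crossings(emg, thr), slope_sign_changes(emg, thr))
    ```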

  9. A multi-threshold sampling method for TOF-PET signal processing

    NASA Astrophysics Data System (ADS)

    Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.

    2009-04-01

    As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm3 LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ˜18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ˜9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ˜300 ps coincidence timing resolution, ˜14% energy resolution at 511 keV, and ˜5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
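
    As a rough illustration of how timing and energy can be recovered from a handful of threshold-crossing times, the toy sketch below simulates a fast-rise/slow-decay pulse, records where its leading edge crosses four user-defined thresholds, extrapolates the first two crossings back to zero amplitude for a start time, and scales a template to the sampled points for an amplitude estimate. The pulse shape, threshold values, and fitting procedure are all assumptions for illustration, not the prototype's actual processing.

    ```python
    import numpy as np

    # Toy multi-threshold sampling sketch: sample a pulse only at its threshold crossings,
    # then recover start time and amplitude from those few (time, amplitude) pairs.
    t = np.linspace(0, 200, 20001)                       # ns
    amp, t0, rise, decay = 1.8, 25.0, 3.0, 44.0
    pulse = amp * (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / decay)
    pulse[t < t0] = 0.0

    thresholds = np.array([0.1, 0.3, 0.6, 0.9])          # user-defined amplitude thresholds
    lead = np.array([t[np.argmax(pulse >= th)] for th in thresholds])  # leading-edge crossings

    # Timing: extrapolate the first two leading-edge samples back to zero amplitude.
    slope = (thresholds[1] - thresholds[0]) / (lead[1] - lead[0])
    t_start = lead[0] - thresholds[0] / slope

    # Energy proxy: least-squares scale of a unit-amplitude template at the sampled times.
    template = (1 - np.exp(-(lead - t_start) / rise)) * np.exp(-(lead - t_start) / decay)
    energy = np.dot(template, thresholds) / np.dot(template, template)
    print(round(float(t_start), 2), round(float(energy), 2))
    ```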

  10. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
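
    A hedged sketch of the decision problem follows: for a life test of n units run for time T under an assumed exponential lifetime, each candidate failure-count threshold c is scored by an expected cost that combines testing cost, the risk of rejecting a good batch, and the risk of accepting a bad one. The cost figures and failure rates are illustrative placeholders, and the binomial acceptance calculation is a simplification of the paper's frequentist decision model.

    ```python
    import math

    # Choose the failure-count threshold c for a life test of n units run for time T:
    # accept the batch if at most c failures are observed (exponential lifetimes assumed).
    def failure_prob(rate, T):
        return 1.0 - math.exp(-rate * T)

    def prob_accept(n, c, p):
        """P(at most c of n units fail within the test), binomial."""
        return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    n, T = 20, 100.0
    good_rate, bad_rate = 0.001, 0.01                 # acceptable vs. rejectable failure rates
    cost_reject_good, cost_accept_bad, cost_test = 5000.0, 8000.0, 10.0   # placeholder costs

    def expected_cost(c):
        p_g, p_b = failure_prob(good_rate, T), failure_prob(bad_rate, T)
        producer_risk = 1.0 - prob_accept(n, c, p_g)  # probability of rejecting a good batch
        consumer_risk = prob_accept(n, c, p_b)        # probability of accepting a bad batch
        return n * cost_test + producer_risk * cost_reject_good + consumer_risk * cost_accept_bad

    best_c = min(range(n + 1), key=expected_cost)
    print("optimal failure threshold c =", best_c)
    ```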

  11. PET-CMR in heart failure - synergistic or redundant imaging?

    PubMed

    Quail, Michael A; Sinusas, Albert J

    2017-07-01

    Imaging in heart failure (HF) provides data for diagnosis, prognosis and disease monitoring. Both MRI and nuclear imaging techniques have been successfully used for this purpose in HF. Positron Emission Tomography-Cardiac Magnetic Resonance (PET-CMR) is an example of a new multimodality diagnostic imaging technique with potential applications in HF. The threshold for adopting a new diagnostic tool to clinical practice must necessarily be high, lest they exacerbate costs without improving care. New modalities must demonstrate clinical superiority, or at least equivalence, combined with another important advantage, such as lower cost or improved patient safety. The purpose of this review is to outline the current status of multimodality PET-CMR with regard to HF applications, and determine whether the clinical utility of this new technology justifies the cost.

  12. Object tracking algorithm based on the color histogram probability distribution

    NASA Astrophysics Data System (ADS)

    Li, Ning; Lu, Tongwei; Zhang, Yanduo

    2018-04-01

    To resolve tracking failures caused by target occlusion and by distractor objects in the background that resemble the target, and to reduce the influence of illumination changes, this paper corrects the updated target center using the HSV and YCbCr color channels and continuously updates a self-adaptive image threshold for target detection. Clustering the initial obstacles into a rough range shortens the threshold range and maximizes target detection. To improve detector accuracy, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is added to estimate the target state under background color interference and to enhance the detector's ability to distinguish similar objects. Experimental results show that the improved algorithm is more accurate and processes frames faster.

  13. A Model of Risk Analysis in Analytical Methodology for Biopharmaceutical Quality Control.

    PubMed

    Andrade, Cleyton Lage; Herrera, Miguel Angel De La O; Lemes, Elezer Monte Blanco

    2018-01-01

    One key quality control parameter for biopharmaceutical products is the analysis of residual cellular DNA. To determine small amounts of DNA (around 100 pg) that may be in a biologically derived drug substance, an analytical method should be sensitive, robust, reliable, and accurate. In principle, three techniques have the ability to measure residual cellular DNA: radioactive dot-blot, a type of hybridization; threshold analysis; and quantitative polymerase chain reaction. Quality risk management is a systematic process for evaluating, controlling, and reporting risks that may affect method capabilities, and it supports a scientific and practical approach to decision making. This paper evaluates, by quality risk management, an alternative approach to assessing the performance risks associated with quality control methods used with biopharmaceuticals, using the tool hazard analysis and critical control points. This tool provides the possibility to find the steps in an analytical procedure with the highest impact on method performance. By applying these principles to DNA analysis methods, we conclude that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction, and threshold analysis. From the analysis of hazards (i.e., points of method failure) and the associated method procedure critical control points, we conclude that the analytical methodology with the lowest risk for performance failure for residual cellular DNA testing is quantitative polymerase chain reaction. LAY ABSTRACT: In order to mitigate the risk of adverse events caused by residual cellular DNA that is not completely cleared from downstream production processes, regulatory agencies have required the industry to guarantee a very low level of DNA in biologically derived pharmaceutical products. The technique historically used was radioactive blot hybridization. However, the technique is a challenging method to implement in a quality control laboratory: It is laborious, time consuming, semi-quantitative, and requires a radioisotope. Along with dot-blot hybridization, two alternative techniques were evaluated: threshold analysis and quantitative polymerase chain reaction. Quality risk management tools were applied to compare the techniques, taking into account the uncertainties, the possibility of circumstances or future events, and their effects upon method performance. By illustrating the application of these tools with DNA methods, we provide an example of how they can be used to support a scientific and practical approach to decision making and can assess and manage method performance risk using such tools. This paper discusses, considering the principles of quality risk management, an additional approach to the development and selection of analytical quality control methods using the risk analysis tool hazard analysis and critical control points. This tool provides the possibility to find the method procedural steps with the highest impact on method reliability (called critical control points). Our model concluded that the radioactive dot-blot assay has the largest number of critical control points, followed by quantitative polymerase chain reaction and threshold analysis. Quantitative polymerase chain reaction is shown to be the better alternative analytical methodology in residual cellular DNA analysis. © PDA, Inc. 2018.

  14. Cost-effectiveness analysis of timely dialysis referral after renal transplant failure in Spain.

    PubMed

    Villa, Guillermo; Sánchez-Álvarez, Emilio; Cuervo, Jesús; Fernández-Ortiz, Lucía; Rebollo, Pablo; Ortega, Francisco

    2012-08-16

    A cost-effectiveness analysis of timely dialysis referral after renal transplant failure was undertaken from the perspective of the Public Administration. The current Spanish situation, where all the patients undergoing graft function loss are referred back to dialysis in a late manner, was compared to an ideal scenario where all the patients are timely referred. A Markov model was developed in which six health states were defined: hemodialysis, peritoneal dialysis, kidney transplantation, late referral hemodialysis, late referral peritoneal dialysis and death. The model carried out a simulation of the progression of renal disease for a hypothetical cohort of 1,000 patients aged 40, who were observed in a lifetime temporal horizon of 45 years. In-depth sensitivity analyses were performed in order to ensure the robustness of the results obtained. Considering a discount rate of 3%, timely referral showed an incremental cost of 211 €, compared to late referral. This cost increase was however a consequence of the incremental survival observed. The incremental effectiveness was 0.0087 quality-adjusted life years (QALY). When comparing both scenarios, an incremental cost-effectiveness ratio of 24,390 €/QALY was obtained, meaning that timely dialysis referral might be an efficient alternative if a willingness-to-pay threshold of 45,000 €/QALY is considered. This result proved to be independent of the proportion of late referral patients observed. The acceptance probability of timely referral was 61.90%, while late referral was acceptable in 38.10% of the simulations. If we however restrict the analysis to those situations not involving any loss of effectiveness, the acceptance probability of timely referral was 70.10%, increasing twofold that of late referral (29.90%). Timely dialysis referral after graft function loss might be an efficient alternative in Spain, improving both patients' survival rates and health-related quality of life at an affordable cost. Spanish Public Health authorities might therefore promote the inclusion of specific recommendations for this group of patients within the existing clinical guidelines.
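
    The mechanics of a Markov cohort evaluation can be illustrated with the deliberately oversimplified sketch below (two states instead of the paper's six); every transition probability, cost, and utility weight is an invented placeholder, chosen only so that the timely-referral arm survives longer, costs more, and gains more QALYs, in the same direction as the published result.

    ```python
    # Two-state Markov cohort sketch (alive on dialysis / dead): accumulate discounted costs
    # and QALYs per annual cycle, then form an ICER between timely and late referral.
    def run(p_stay, yearly_cost, utility, years=45, discount=0.03):
        alive, total_cost, total_qaly = 1.0, 0.0, 0.0
        for year in range(years):
            d = (1 + discount) ** -year
            total_cost += d * alive * yearly_cost
            total_qaly += d * alive * utility
            alive *= p_stay                      # fraction of the cohort surviving the cycle
        return total_cost, total_qaly

    # Placeholder inputs: timely referral assumed to give better survival and quality of life.
    timely_cost, timely_qaly = run(p_stay=0.92, yearly_cost=40_000, utility=0.65)
    late_cost, late_qaly = run(p_stay=0.88, yearly_cost=40_000, utility=0.50)

    icer = (timely_cost - late_cost) / (timely_qaly - late_qaly)
    print(round(icer), "EUR per QALY;", "acceptable" if icer <= 45_000 else "above threshold")
    ```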

  15. Scenario-based risk analysis of winter snowstorms in the German lowlands

    NASA Astrophysics Data System (ADS)

    von Wulffen, Anja

    2014-05-01

    The northern German lowlands are not especially known for a high frequency of snowfall events. Nevertheless, under certain synoptic conditions, Lake-Effect-like phenomena caused by the proximity especially of the Baltic Sea can lead to significantly reinforced snowfall intensities that are often accompanied by rather high wind speeds. This makes for infrequent but potentially disastrous snowstorms in a region less accustomed to snow impacts. One possible consequence of an infrastructure failure cascade resulting from severe and longer-lasting snowstorms is a regional disruption of the food supply chain. In the context of "just-in-time" logistics and the accompanying decrease of storage capabilities, this poses a significant threat to the population's food security. Within the project NeuENV ("New strategies to ensure sufficient food supply in case of crisis in Germany") a snowstorm in the German lowlands involving widespread disruptions of the transportation infrastructure as well as power failures is therefore used as one model for future food supply chain disruptions. In order to obtain a reliable evaluation of the supply chain and crisis management resilience, a detailed snowstorm scenario is being developed. For this purpose, a database of impact reports of past snowstorm events is assembled and analysed to obtain a comprehensive overview of potential infrastructure impairments and failures. Examples of events analysed in this context include the winter 1978/79 with its disastrous snow drifts that commonly attained heights of 3 m to 5 m leading to a transportation infrastructure collapse across a wide area, the wet snow event in November 2005 in the Münsterland region that caused power failures for up to 250,000 homes, and more recent snowstorms such as Daisy in January 2010. A catalogue of thresholds for relevant parameters indicating when significant failures can be expected is then compiled through a comparison of impact reports with the detailed meteorological conditions. Based on these findings, an exemplary synoptic evolution of a snowstorm leading to representative infrastructure failure cascades is constructed. In a next step, an extrapolation of this obtained scenario to future climate and societal conditions as well as plausible more extreme but not yet observed meteorological conditions is planned in order to obtain a thorough analysis of possible threats to the German food distribution system and a strong foundation for future disaster mitigation planning efforts.

  16. Strength Restoration of Cracked Sandstone and Coal under a Uniaxial Compression Test and Correlated Damage Source Location Based on Acoustic Emissions.

    PubMed

    Feng, Xiaowei; Zhang, Nong; Zheng, Xigui; Pan, Dongjiang

    2015-01-01

    Underground rock masses have shown a general trend of natural balance over billions of years of ground movement. Nonetheless, man-made underground constructions disturb this balance and cause rock stability failure. Fractured rock masses are frequently encountered in underground constructions, and this study aims to restore the strength of rock masses that have experienced considerable fracturing under uniaxial compression. Coal and sandstone from a deep-buried coal mine were chosen as experimental subjects; they were crushed by uniaxial compression and then carefully restored by a chemical adhesive called MEYCO 364 with an innovative self-made device. Finally, the restored specimens were crushed once again by uniaxial compression. Axial stress, axial strain, circumferential strain, and volumetric strain data for the entire process were fully captured and are discussed here. An acoustic emission (AE) testing system was adopted to cooperate with the uniaxial compression system to provide better definitions for crack closure thresholds, crack initiation thresholds, crack damage thresholds, and three-dimensional damage source locations in intact and restored specimens. Several remarkable findings were obtained. The restoration effects of coal are considerably better than those of sandstone because the strength recovery coefficient of the former is 1.20, whereas that of the latter is 0.33, which indicates that MEYCO 364 is particularly valid for fractured rocks whose initial intact peak stress is less than that of MEYCO 364. Secondary cracked traces of restored sandstone almost follow the cracked traces of the initial intact sandstone, and the final failure is mainly caused by decoupling between the adhesive and the rock mass. However, cracked traces of restored coal only partially follow the traces of intact coal, with the final failure of the restored coal being caused by both bonding interface decoupling and self-breakage in coal. Three-dimensional damage source locations manifest such that AE events are highly correlated with a strength recovery coefficient; the AE events show a decreasing tendency when the coefficient is larger than 1, and vice versa. This study provides a feasible scheme for the reinforcement of fractured rock masses in underground constructions and reveals an internal mechanism of the crushing process for restored rock masses, which has certain instructive significance.

  17. Strength Restoration of Cracked Sandstone and Coal under a Uniaxial Compression Test and Correlated Damage Source Location Based on Acoustic Emissions

    PubMed Central

    Feng, Xiaowei; Zhang, Nong; Zheng, Xigui; Pan, Dongjiang

    2015-01-01

    Underground rock masses have shown a general trend of natural balance over billions of years of ground movement. Nonetheless, man-made underground constructions disturb this balance and cause rock stability failure. Fractured rock masses are frequently encountered in underground constructions, and this study aims to restore the strength of rock masses that have experienced considerable fracturing under uniaxial compression. Coal and sandstone from a deep-buried coal mine were chosen as experimental subjects; they were crushed by uniaxial compression and then carefully restored by a chemical adhesive called MEYCO 364 with an innovative self-made device. Finally, the restored specimens were crushed once again by uniaxial compression. Axial stress, axial strain, circumferential strain, and volumetric strain data for the entire process were fully captured and are discussed here. An acoustic emission (AE) testing system was adopted to cooperate with the uniaxial compression system to provide better definitions for crack closure thresholds, crack initiation thresholds, crack damage thresholds, and three-dimensional damage source locations in intact and restored specimens. Several remarkable findings were obtained. The restoration effects of coal are considerably better than those of sandstone because the strength recovery coefficient of the former is 1.20, whereas that of the latter is 0.33, which indicates that MEYCO 364 is particularly valid for fractured rocks whose initial intact peak stress is less than that of MEYCO 364. Secondary cracked traces of restored sandstone almost follow the cracked traces of the initial intact sandstone, and the final failure is mainly caused by decoupling between the adhesive and the rock mass. However, cracked traces of restored coal only partially follow the traces of intact coal, with the final failure of the restored coal being caused by both bonding interface decoupling and self-breakage in coal. Three-dimensional damage source locations manifest such that AE events are highly correlated with a strength recovery coefficient; the AE events show a decreasing tendency when the coefficient is larger than 1, and vice versa. This study provides a feasible scheme for the reinforcement of fractured rock masses in underground constructions and reveals an internal mechanism of the crushing process for restored rock masses, which has certain instructive significance. PMID:26714324

  18. The Evolution of Utilizing Manual Throttles to Avoid Excessively Low LH2 NPSP at the SSME Inlet

    NASA Technical Reports Server (NTRS)

    Henfling, Rick

    2011-01-01

    In the late 1970s, years before the Space Shuttle flew its maiden voyage, it was understood low liquid hydrogen (LH2) Net Positive Suction Pressure (NPSP) at the inlet to the Space Shuttle Main Engine (SSME) could have adverse effects on engine operation. A number of failures within both the External Tank (ET) and the Orbiter Main Propulsion System (MPS) could result in a low LH2 NPSP condition, which at extremely low levels can result in cavitation of SSME turbomachinery. Operational workarounds were developed to take advantage of the onboard crew's ability to manually throttle down the SSMEs (via the Pilot's Speedbrake/Throttle Controller), which alleviated the low LH2 NPSP condition. Manually throttling the SSME to a lower power level resulted in an increase in NPSP, mainly due to the reduction in frictional flow losses while at the lower throttle setting. Early in the Space Shuttle Program's history, the relevant Flight Rule for the Booster flight controllers in Mission Control did not distinguish between ET and Orbiter MPS failures and the same crew action was taken for both. However, after a review of all Booster operational techniques following the Challenger disaster in the late 1980s, it was determined manually throttling the SSME to a lower power was only effective for Orbiter MPS failures and the Flight Rule was updated to reflect this change. The Flight Rule and associated crew actions initially called for a single throttle step to minimum power level when a low threshold for NPSP was met. As engineers refined their understanding of the NPSP requirements for the SSME (through a robust testing program), the operational techniques evolved to take advantage of the additional capabilities. This paper will examine the evolution of the Flight Rule and associated procedure and how increases in knowledge about the SSME and the Space Shuttle vehicle as a whole have helped shape their development. What once was a single throttle step when NPSP decreased to a certain threshold has now become three throttle steps, each occurring at a lower NPSP threshold. Additionally the procedure, which for early Space Shuttle missions required a Return-to-Launch-Site abort, now results in a nominal Main Engine Cut Off and no loss of mission objectives.

  19. The Evolution of Utilizing Manual Throttling to Avoid Excessively Low LH2 NPSP at the SSME Inlet

    NASA Technical Reports Server (NTRS)

    Henfling, Rick

    2010-01-01

    In the late 1970s, years before the Space Shuttle flew its maiden voyage, it was understood low liquid hydrogen (LH2) Net Positive Suction Pressure (NPSP) at the inlet to the Space Shuttle Main Engine (SSME) could have adverse effects on engine operation. A number of failures within both the External Tank (ET) and the Orbiter Main Propulsion System (MPS) could result in a low LH2 NPSP condition, which at extremely low levels can result in cavitation of SSME turbomachinery. Operational workarounds were developed to take advantage of the onboard crew's ability to manually throttle down the SSMEs (via the Pilot's Speedbrake/Throttle Controller), which alleviated the low LH2 NPSP condition. Manually throttling the SSME to a lower power level resulted in an increase in NPSP, mainly due to the reduction in frictional flow losses while at the lower throttle setting. Early in the Space Shuttle Program's history, the relevant Flight Rule for the Booster flight controller in Mission Control did not distinguish between ET and Orbiter MPS failures and the same crew action was taken for both. However, after a review of all Booster operational techniques following the Challenger disaster in the late 1980s, it was determined manually throttling the SSME to a lower power was only effective for Orbiter MPS failures and the Flight Rule was updated to reflect this change. The Flight Rule and associated crew actions initially called for a single throttle step to minimum power level when a low threshold for NPSP was met. As engineers refined their understanding of the NPSP requirements for the SSME (through a robust testing program), the operational techniques evolved to take advantage of the additional capabilities. This paper will examine the evolution of the Flight Rule and associated procedure and how increases in knowledge about the SSME and the Space Shuttle vehicle as a whole have helped shape their development. What once was a single throttle step when NPSP decreased to a certain low threshold has now become three throttle steps, each occurring at a lower NPSP threshold. Additionally the procedure, which for early Space Shuttle missions required a Return-to-Launch-Site abort, now results in a nominal Main Engine Cut Off and no loss of mission objectives.

  20. Contextual Interactions in Grating Plaid Configurations Are Explained by Natural Image Statistics and Neural Modeling

    PubMed Central

    Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter

    2016-01-01

    Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions comprised medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076

  1. Threshold model of cascades in empirical temporal networks

    NASA Astrophysics Data System (ADS)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only contacts but also the time of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts’s classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. That is, the individuals are affected by contacts within a time window. In addition to thresholds in the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model’s behavior, we run the model on real and randomized empirical contact datasets.
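
    A minimal sketch of the temporal threshold model described above, assuming a sorted contact list of (time, i, j) events and a fractional threshold applied only to contacts that fall inside a sliding time window; the function name, parameter values, and toy contact sequence are illustrative assumptions, not taken from the paper.

        from collections import deque

        def temporal_threshold_cascade(contacts, n_nodes, window, phi, seeds):
            """Fractional threshold model on a temporal contact list.

            contacts : iterable of (t, i, j) tuples sorted by time t
            window   : only contacts within the last `window` time units influence a node
            phi      : fraction of recent contacts with adopters needed to adopt
            seeds    : initially adopted nodes
            """
            adopted = set(seeds)
            recent = {v: deque() for v in range(n_nodes)}  # per-node (t, neighbor) history

            for t, i, j in contacts:
                for node, other in ((i, j), (j, i)):
                    hist = recent[node]
                    hist.append((t, other))
                    # drop contacts that fell out of the time window
                    while hist and hist[0][0] < t - window:
                        hist.popleft()
                    if node not in adopted and hist:
                        frac = sum(1 for _, u in hist if u in adopted) / len(hist)
                        if frac >= phi:
                            adopted.add(node)
            return adopted

        # toy usage with a hypothetical contact sequence
        events = [(0, 0, 1), (1, 1, 2), (2, 2, 3), (3, 1, 3), (4, 3, 4)]
        print(temporal_threshold_cascade(events, 5, window=2.0, phi=0.5, seeds={0}))

    Replacing the fraction `frac` with the raw count of adopter contacts in the window gives the absolute-number variant of influence that the abstract also mentions.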

  2. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
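
    A sampling-based sketch of the delayed-failure mechanic in the abstract: each link's precursor time is drawn from a distribution, an aleatory delay is added, and loss of assured safety is counted under an assumed criterion. The distributions and the LOAS criterion used here (every strong link fails before any weak link fails) are illustrative assumptions, not the paper's formal representation.

        import numpy as np

        rng = np.random.default_rng(0)

        def ploas_estimate(n_samples=100_000):
            # Assumed precursor-occurrence distributions (illustrative only):
            # two weak links (WL) and two strong links (SL), lognormal precursor times.
            wl_precursor = rng.lognormal(mean=1.0, sigma=0.3, size=(n_samples, 2))
            sl_precursor = rng.lognormal(mean=1.3, sigma=0.3, size=(n_samples, 2))

            # Aleatory uncertainty in the delay between precursor and actual failure.
            wl_delay = rng.exponential(scale=0.2, size=(n_samples, 2))
            sl_delay = rng.exponential(scale=0.2, size=(n_samples, 2))

            wl_fail = wl_precursor + wl_delay     # actual WL failure times
            sl_fail = sl_precursor + sl_delay     # actual SL failure times

            # Illustrative LOAS criterion (an assumption, not the paper's definition):
            # every SL fails before any WL has failed.
            loas = sl_fail.max(axis=1) < wl_fail.min(axis=1)
            return loas.mean()

        print(f"estimated PLOAS: {ploas_estimate():.4f}")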

  3. Assessment of Heart Transplant Waitlist Time and Pre- and Post-transplant Failure: A Mixed Methods Approach.

    PubMed

    Goldstein, Benjamin A; Thomas, Laine; Zaroff, Jonathan G; Nguyen, John; Menza, Rebecca; Khush, Kiran K

    2016-07-01

    Over the past two decades, there have been increasingly long waiting times for heart transplantation. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine the risk that conservative donor heart acceptance practices confer in terms of increasing the risk of failure among patients awaiting transplantation. We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan-Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time. Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the greatest risk of failure is during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% (1.01, 1.20) increase in the odds of failure within 1 year after transplantation. Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.

  4. Evolution of strain localization in variable-width three-dimensional unsaturated laboratory-scale cut slopes

    USGS Publications Warehouse

    Morse, Michael S.; Lu, Ning; Wayllace, Alexandra; Godt, Jonathan W.

    2017-01-01

    To experimentally validate a recently developed theory for predicting the stability of cut slopes under unsaturated conditions, the authors measured increasing strain localization in unsaturated slope cuts prior to abrupt failure. Cut slope width and moisture content were controlled and varied in a laboratory, and a sliding door that extended the height of the free face of the slope was lowered until the cut slope failed. A particle image velocimetry tool was used to quantify soil displacement in the x-y (horizontal) and x-z (vertical) planes, and strain was calculated from the displacement. Areas of maximum strain localization prior to failure were shown to coincide with the location of the eventual failure plane. Experimental failure heights agreed with the recently developed stability theory for unsaturated cut slopes (within 14.3% relative error) for a range of saturation and cut slope widths. A theoretical threshold for sidewall influence on cut slope failures was also proposed to quantify the relationship between normalized sidewall width and critical height. The proposed relationship was consistent with the cut slope experiment results, and is intended for consideration in future geotechnical experiment design. The experimental data of evolution of strain localization presented herein provide a physical basis from which future numerical models of strain localization can be validated.
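
    A small sketch of going from a PIV displacement field to strain, as in the localization analysis above: displacements on a regular grid are differentiated numerically and combined into small-strain components. The grid spacing and the synthetic displacement field below are assumptions for illustration; in practice the fields come from the PIV tool.

        import numpy as np

        # Synthetic horizontal/vertical displacement fields u(x, z), w(x, z) on a 1 mm grid
        dx = dz = 1.0e-3                      # grid spacing in metres (assumed)
        x = np.arange(0, 0.2, dx)
        z = np.arange(0, 0.1, dz)
        X, Z = np.meshgrid(x, z, indexing="ij")
        u = 1e-4 * X * Z                      # placeholder displacement data (from PIV in practice)
        w = -5e-5 * Z**2

        # Small-strain components from displacement gradients
        du_dx, du_dz = np.gradient(u, dx, dz)
        dw_dx, dw_dz = np.gradient(w, dx, dz)
        eps_xx = du_dx
        eps_zz = dw_dz
        eps_xz = 0.5 * (du_dz + dw_dx)        # engineering shear strain would be 2 * eps_xz

        # Maximum shear strain is one simple scalar used to spot localization bands
        max_shear = np.sqrt(((eps_xx - eps_zz) / 2.0) ** 2 + eps_xz**2)
        i, j = np.unravel_index(np.argmax(max_shear), max_shear.shape)
        print(f"largest shear strain {max_shear[i, j]:.2e} at x={x[i]*1e3:.0f} mm, z={z[j]*1e3:.0f} mm")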

  5. Biomechanical Cadaveric Evaluation of Partial Acute Peroneal Tendon Tears.

    PubMed

    Wagner, Emilio; Wagner, Pablo; Ortiz, Cristian; Radkievich, Ruben; Palma, Felipe; Guzmán-Venegas, Rodrigo

    2018-06-01

    No clear guideline or solid evidence exists for peroneal tendon tears to determine when to repair, resect, or perform a tenodesis on the damaged tendon. The objective of this study was to analyze the mechanical behavior of cadaveric peroneal tendons artificially damaged and tested in a cyclic and failure mode. The hypothesis was that no failure would be observed in the cyclic phase. Eight cadaveric long leg specimens were tested on a specially designed frame. A longitudinal full thickness tendon defect was created, 3 cm in length, behind the tip of the fibula, compromising 66% of the visible width of the peroneal tendons. Cyclic testing was initially performed between 50 and 200 N, followed by a load-to-failure test. Tendon elongation and load to rupture were measured. No tendon failed or lengthened during cyclic testing. The mean load to failure for peroneus brevis was 416 N (95% confidence interval, 351-481 N) and for the peroneus longus was 723 N (95% confidence interval, 578-868 N). All failures were at the level of the defect created. In a cadaveric model of peroneal tendon tears, 33% of remaining peroneal tendon could resist high tensile forces, above the physiologic threshold. Some peroneal tendon tears can be treated conservatively without risking spontaneous ruptures. When surgically treating a symptomatic peroneal tendon tear, increased efforts may be undertaken to repair tears previously considered irreparable.

  6. Effect of low-velocity or ballistic impact damage on the strength of thin composite and aluminum shear panels

    NASA Technical Reports Server (NTRS)

    Farley, G. L.

    1985-01-01

    Impact tests were conducted on shear panels fabricated from 6061-T6 aluminum and from woven fabric prepreg of Du Pont Kevlar fiber/epoxy resin and graphite fiber/epoxy resin. The shear panels consisted of three different composite laminates and one aluminum material configuration. Three panel aspect ratios were evaluated for each material configuration. Composite panels were impacted with a 1.27-cm (0.50-in) diameter aluminum sphere at low velocities of 46 m/sec (150 ft/sec) and 67 m/sec (220 ft/sec). Ballistic impact conditions consisted of a tumbled 0.50-caliber projectile impacting loaded composite and aluminum shear panels. The results of these tests indicate that ballistic threshold load (the lowest load which will result in immediate failure upon penetration by the projectile) varied between 0.44 and 0.61 of the average failure load of undamaged panels. The residual strengths of the panels after ballistic impact varied between 0.55 and 0.75 of the average failure strength of the undamaged panels. The low velocity impacts at 67 m/sec (220 ft/sec) caused a 15 to 20 percent reduction in strength, whereas the impacts at 46 m/sec (150 ft/sec) resulted in negligible strength loss. Good agreement was obtained between the experimental failure strengths and the predicted strength with the point stress failure criterion.

  7. Local Bifurcations and Optimal Theory in a Delayed Predator-Prey Model with Threshold Prey Harvesting

    NASA Astrophysics Data System (ADS)

    Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel

    2015-06-01

    We investigate the effects of time delay and piecewise-linear threshold policy harvesting for a delayed predator-prey model. This is the first time that a Holling type III response function and this threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; optimal harvesting is also investigated. Numerical simulations are provided in order to illustrate each result.
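
    A sketch of the kind of model in the abstract: a predator-prey system with a Holling type III functional response, a delay in the predator's numerical response, and a piecewise-linear threshold harvesting policy on the prey, integrated with a fixed-step Euler scheme. All parameter values, and the exact placement of the delay and harvesting terms, are illustrative assumptions rather than the authors' formulation.

        import numpy as np

        def harvest(x, T1, T2, hmax):
            """Piecewise-linear threshold policy: no harvest below T1, ramp up to hmax at T2."""
            if x <= T1:
                return 0.0
            if x >= T2:
                return hmax
            return hmax * (x - T1) / (T2 - T1)

        def simulate(r=1.0, K=10.0, a=1.0, b=4.0, c=0.5, d=0.4,
                     tau=1.5, T1=3.0, T2=6.0, hmax=0.8,
                     x0=5.0, y0=1.0, dt=0.01, t_end=200.0):
            n = int(t_end / dt)
            lag = int(tau / dt)
            x = np.full(n + 1, x0)   # prey
            y = np.full(n + 1, y0)   # predator
            for k in range(n):
                x_lag = x[k - lag] if k >= lag else x0      # constant history before t = 0
                p_now = a * x[k]**2 / (b + x[k]**2)         # Holling type III response
                p_lag = a * x_lag**2 / (b + x_lag**2)
                dx = r * x[k] * (1 - x[k] / K) - p_now * y[k] - harvest(x[k], T1, T2, hmax)
                dy = c * p_lag * y[k] - d * y[k]            # delayed numerical response
                x[k + 1] = max(x[k] + dt * dx, 0.0)
                y[k + 1] = max(y[k] + dt * dy, 0.0)
            return x, y

        x, y = simulate()
        print(f"final prey {x[-1]:.2f}, final predator {y[-1]:.2f}")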

  8. CORRELATIONS IN LIGHT FROM A LASER AT THRESHOLD,

    DTIC Science & Technology

    Temporal correlations in the electromagnetic field radiated by a laser in the threshold region of oscillation (from one tenth of threshold intensity...to ten times threshold) were measured by photoelectron counting techniques. The experimental results were compared with theoretical predictions based...shows that the intensity fluctuations at about one tenth threshold are nearly those of a Gaussian field and continuously approach those of a constant amplitude field as the intensity is increased. (Author)

  9. 30 CFR 71.700 - Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Inhalation hazards; threshold limit values for... SURFACE WORK AREAS OF UNDERGROUND COAL MINES Airborne Contaminants § 71.700 Inhalation hazards; threshold... containing quartz, and asbestos dust) in excess of, on the basis of a time-weighted average, the threshold...

  10. CMDS9: Continuum Mechanics and Discrete Systems 9, Istanbul Technical University, Macka. Abstracts.

    DTIC Science & Technology

    1998-07-01

    that can only be achieved via cooperative behavior of the cells. It can be viewed as the action of a singular feedback between the micro-level (the...optimal micro-geometries of multicomponent mixtures. Also, we discuss dynamics of a transition in natural unstable systems that leads to a micro ...failure process. This occurs once the impact load reaches a critical threshold level and results in a collection of oriented matrix micro-cracks

  11. Fracture Mechanics Testing of Titanium 6AL-4V in AF-M315E

    NASA Technical Reports Server (NTRS)

    Sampson, J. W.; Martinez, J.; McLean, C.

    2016-01-01

    The Green Propellant Infusion Mission (GPIM) will demonstrate the performance of AF-M315E monopropellant on orbit. Flight certification requires a safe-life analysis of the titanium alloy fuel tank to ensure inherent processing flaws will not cause failure during the design life of the tank. Material property inputs for this analysis require testing to determine the stress intensity factor for environment-assisted cracking (KEAC) of Ti 6Al-4V in combination with the AF-M315E monopropellant. Testing of single-edge notched, or SE(B), specimens representing the bulk tank membrane and weld material was performed in accordance with ASTM E1681. Specimens with fatigue pre-cracks were loaded into test fixtures so that the crack tips were exposed to AF-M315E at 50 °C for a duration of 1,000 hours. Specimens that did not fail during exposure were opened to inspect the crack surfaces for evidence of crack growth. The threshold stress intensity value, KEAC, is the highest applied stress intensity that produced neither a failure of the specimen during the exposure nor showed evidence of crack growth. The threshold stress intensity factor for environment-assisted cracking of the Ti 6Al-4V forged tank material was found to be at least 22 ksi√in and at least 31 ksi√in for the weld material when exposed to AF-M315E monopropellant.

  12. Anti-Müllerian hormone as a predictor of polycystic ovary syndrome treated with clomiphene citrate.

    PubMed

    Hestiantoro, Andon; Negoro, Yuwono Sri; Afrita, Yohana; Wiweko, Budi; Sumapradja, Kanadi; Natadisastra, Muharam

    2016-12-01

    This study aimed to determine the threshold of anti-Müllerian hormone (AMH) as predictor of follicular growth failure in polycystic ovary syndrome (PCOS) patients treated with clomiphene citrate (CC). Fifty female subjects with PCOS were recruited and divided into two groups based on successful and unsuccessful follicular growth. Related variables such as age, infertility duration, cigarette smoking, use of Moslem hijab, sunlight exposure, fiber intake, body mass index, waist circumference, AMH level, 25-hydroxy vitamin D level, and growth of dominant follicles were obtained, assessed, and statistically analyzed. The AMH levels of patients with successful follicular growth were significantly lower (p = 0.001) than those with unsuccessful follicular growth (6.10±3.52 vs. 10.43±4.78 ng/mL). A higher volume of fiber intake was also observed in the successful follicular growth group compared to the unsuccessful follicular growth group (p = 0.001). Our study found the probability of successful follicle growth was a function of AMH level and the amount of fiber intake, expressed as Y = -2.35 - 0.312 × (AMH level) + 0.464 × (fiber intake) (area under the curve, 0.88; 95% confidence interval, 0.79-0.98; p < 0.001). The optimal threshold of AMH level in predicting the failure of follicle growth in patients with PCOS treated with CC was 8.58 ng/mL.
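
    If Y above is read as the linear predictor of a logistic regression (an assumption; the abstract does not state the link function explicitly), the predicted probability of successful follicular growth follows directly. The fiber-intake units and the example inputs below are hypothetical.

        import math

        def follicle_growth_probability(amh_ng_ml: float, fiber_intake: float) -> float:
            """Probability of successful follicular growth from the reported linear predictor
            Y = -2.35 - 0.312*AMH + 0.464*fiber, passed through a logistic link (assumed)."""
            y = -2.35 - 0.312 * amh_ng_ml + 0.464 * fiber_intake
            return 1.0 / (1.0 + math.exp(-y))

        # Hypothetical patient: AMH at the reported 8.58 ng/mL cut-off, arbitrary fiber score of 7
        print(f"{follicle_growth_probability(8.58, 7.0):.2f}")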

  13. Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.

    PubMed

    Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang

    2005-09-01

    Quantitative sensory testing has become a common approach to evaluate thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 +/- 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between these two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. Age was consistently and significantly correlated with sensory thresholds of all tested modalities measured by both algorithms on multivariate regression analysis compared with other factors, including gender, body height, body weight, and body mass index. When thresholds were plotted against age, slopes differed between sensory thresholds of the hand and those of the foot: for the foot, slopes were steeper compared with those for the hand for each sensory modality. Sensory thresholds of both test sites measured by Level were highly correlated with those measured by Limits, and thresholds measured by Limits were higher than those measured by Level. Differences in sensory thresholds between the two algorithms were also correlated with age: thresholds of the foot were higher than those of the hand for each sensory modality. This difference in thresholds (measured with both Level and Limits) between the hand and foot was also correlated with age. These findings suggest that age is the most significant factor in determining sensory thresholds compared with the other factors of gender and anthropometric parameters, and this provides a foundation for investigating the neurobiologic significance of aging on the processing of sensory stimuli.

  14. Corrective response times in a coordinated eye-head-arm countermanding task.

    PubMed

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.

  15. Real-time diagnostics of the reusable rocket engine using on-line system identification

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1990-01-01

    A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.

  16. Anik-E1 and E2 satellite failures of January 1994 revisited

    NASA Astrophysics Data System (ADS)

    Lam, H.-L.; Boteler, D. H.; Burlton, B.; Evans, J.

    2012-10-01

    The consecutive failures of the geosynchronous Anik-E1 communication satellite on January 20, 1994, and Anik-E2 about nine hours later on January 21 (both incidents occurred on January 20 local time) received considerable publicity because the malfunctions of the satellites disrupted television and computer data transmissions across Canada, as well as telephone services to remote northern communities for hours. This often-cited event is revisited here with materials not covered before. Using publicly available information, Anik-E failure details, media coverage, recovery effort and cost incurred are first presented. This is then followed by scrutiny of space weather conditions pertinent to the occurrences of the Anik-E upsets. We trace the space weather episode's inception on the Sun, propagation through interplanetary medium, and manifestation in magnetic field variations as well as in energetic electron flux increases, and its eventual impact on the Anik-Es. The genesis of the energetic electron enhancements that have been blamed for the satellite malfunctions is thus traceable via high-speed solar wind stream with Alfven wave fluctuations to a longitudinally wide coronal hole on the Sun. Furthermore, strong magnetic pulsations preceding electron flux peaks indicate Pc5 ULF (Ultra Low Frequency) waves as a probable acceleration mechanism for the energetic electron flux enhancement that resulted in the internal charging of the Anik-Es. The magnetic fluctuations may even be possible triggers for the subsequent discharge that caused the satellites to malfunction. This incident illustrates that satellite operators should be on alert for elevated high-energy electron environment that is above established thresholds, as specifications in satellite design may not render a satellite immune from internal charging.

  17. An abdominal active can defibrillator may facilitate a successful generator change when a lead failure is present.

    PubMed

    Solomon, A J; Moubarak, J B; Drood, J M; Tracy, C M; Karasik, P E

    1999-10-01

    Defibrillator generator changes are frequently performed on patients with an implantable cardioverter defibrillator in an abdominal pocket. These patients usually have epicardial patches or older endocardial lead systems. At the time of a defibrillator generator change, defibrillation may be unsuccessful as a result of lead failure. We tested the hypothesis that an active can defibrillator implanted in the abdominal pocket could replace a non-functioning endocardial lead or epicardial patch. An abdominal defibrillator generator change was performed in 10 patients (mean age = 67 +/- 13 years, nine men). Initially, a defibrillation threshold (DFT) was obtained using a passive defibrillator and the chronic endocardial or epicardial lead system. DFTs were then performed using an active can emulator and one chronic lead to simulate endocardial or epicardial lead failure. We tested 30 lead configurations (nine endocardial and 21 epicardial). Although a DFT of 7.3 +/- 4.2 joules was obtained with the intact chronic lead system, the active can emulator and one endocardial or epicardial lead still yielded an acceptable DFT of 19.9 +/- 6.1 joules. In addition, a successful implant (DFT < or = 24 joules) could have been accomplished in 28 of 30 (93%) lead configurations. An active can defibrillator in an abdominal pocket may allow for a successful generator change in patients with defibrillator lead malfunction. This would be simpler than abandoning the abdominal implant and moving to a new pectoral device and lead or tunnelling a new endocardial electrode. However, loss of defibrillation capability with a particular complex lead may be a warning of impending loss of other functions (e.g., sensing and/or pacing).

  18. Clinical Evaluation of an Affordable Qualitative Viral Failure Assay for HIV Using Dried Blood Spots in Uganda.

    PubMed

    Balinda, Sheila N; Ondoa, Pascale; Obuku, Ekwaro A; Kliphuis, Aletta; Egau, Isaac; Bronze, Michelle; Kasambula, Lordwin; Schuurman, Rob; Spieker, Nicole; Rinke de Wit, Tobias F; Kityo, Cissy

    2016-01-01

    WHO recommends regular viral load (VL) monitoring of patients on antiretroviral therapy (ART) for timely detection of virological failure, prevention of acquired HIV drug resistance (HIVDR) and avoiding unnecessary switching to second-line ART. However, the cost and complexity of routine VL testing remains prohibitive in most resource limited settings (RLS). We evaluated a simple, low-cost, qualitative viral-failure assay (VFA) on dried blood spots (DBS) in three clinical settings in Uganda. We conducted a cross-sectional diagnostic accuracy study in three HIV/AIDS treatment centres at the Joint Clinical Research Centre in Uganda. The VFA employs semi-quantitative detection of HIV-1 RNA amplified from the LTR gene. We used paired dried blood spot (DBS) and plasma with the COBAS AmpliPrep/COBAS TaqMan, Roche version 2 (VLref) as the reference assay. We used the VFA at two viral load thresholds (>5,000 or >1,000 copies/ml). 496 paired VFA and VLref results were available for comparative analysis. Overall, VFA demonstrated 78.4% sensitivity (95% CI: 69.7%-87.1%), 93% specificity (95% CI: 89.7%-96.4%), 89.3% accuracy (95% CI: 85%-92%) and an agreement kappa = 0.72 as compared to the VLref. The predictive values of positivity and negativity among patients on ART for >12 months were 72.7% and 99.3%, respectively. The VFA allowed correct classification of viral failure in 89% of cases. Only 11% of the patients were misclassified with the potential of unnecessary or late switch to second-line ART. Our findings present an opportunity to roll out simple and affordable VL monitoring for HIV-1 treatment in RLS.

  19. Passive seismic monitoring of natural and induced earthquakes: case studies, future directions and socio-economic relevance

    USGS Publications Warehouse

    Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg

    2010-01-01

    An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger earthquakes. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great earthquakes occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. In this paper we review selected case studies and future directions of PSM efforts across a wide range of scales and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-scale experiments under controlled boundary conditions and large-scale field studies. The common motivation for all studies is to refine the understanding of how earthquakes nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude scale, i.e., for large devastating earthquakes due to their severe socio-economic impact.

  20. Fatigue Resistance of CAD/CAM Resin Composite Molar Crowns

    PubMed Central

    Shembish, Fatma A.; Tong, Hui; Kaizer, Marina; Janal, Malvin N.; Thompson, Van P.; Opdam, Niek J.; Zhang, Yu

    2016-01-01

    Objective To demonstrate the fatigue behavior of CAD/CAM resin composite molar crowns using a mouth-motion step-stress fatigue test. Monolithic leucite-reinforced glass-ceramic crowns were used as a reference. Methods Fully anatomically shaped monolithic resin composite molar crowns (Lava Ultimate, n = 24) and leucite reinforced glass-ceramic crowns (IPS Empress CAD, n = 24) were fabricated using CAD/CAM systems. Crowns were cemented on aged dentin-like resin composite tooth replicas (Filtek Z100) with resin-based cements (RelyX Ultimate for Lava Ultimate or Multilink Automix for IPS Empress). Three step-stress profiles (aggressive, moderate and mild) were employed for the accelerated sliding-contact mouth-motion fatigue test. Twenty one crowns from each group were randomly distributed among these three profiles (1:2:4). Failure was designated as chip-off or bulk fracture. Optical and electronic microscopes were used to examine the occlusal surface and subsurface damages, as well as the material microstructures. Results The resin composite crowns showed only minor occlusal damage during mouth-motion step-stress fatigue loading up to 1700 N. Cross-sectional views revealed contact-induced cone cracks in all specimens, and flexural radial cracks in 2 crowns. Both cone and radial cracks were relatively small compared to the crown thickness. Extending these cracks to the threshold for catastrophic failure would require much higher indentation loads or more loading cycles. In contrast, all of the glass-ceramic crowns fractured, starting at loads of approximately 450 N. Significance Monolithic CAD/CAM resin composite crowns endure, with only superficial damage, fatigue loads 3 – 4 times higher than those causing catastrophic failure in glass-ceramic CAD crowns. PMID:26777092

  1. The effect of the impactor diameter and temperature on low velocity impact behavior of CFRP laminates

    NASA Astrophysics Data System (ADS)

    Evci, C.; Uyandıran, I.

    2017-02-01

    Impact damage is one of the major concerns that should be taken into account with the new aircraft and spacecraft structures which employ ever-growing use of composite materials. Considering the thermal loads encountered at different altitudes, both low and high temperatures can affect the properties and impact behavior of composite materials. This study aims to investigate the effect of temperature and impactor diameter on the impact behavior and damage development in balanced and symmetrical CFRP laminates which were manufactured by a vacuum bagging process with autoclave cure. An instrumented drop-weight impact testing system is used to perform the low velocity impact tests over a range of temperatures from +60 °C down to -50 °C. Impact tests for each temperature level were conducted using three different hemispherical impactor diameters varying from 10 to 20 mm. The energy profile method is employed to determine the impact threshold energies for damage evolution. The level of impact damage is determined from the dent depth on the impacted face and the delamination damage detected using the ultrasonic C-Scan technique. Test results reveal that the threshold of penetration energy, main failure force and delamination area increase with impactor diameter at all temperature levels. No clear influence of temperature on the critical force thresholds could be derived. However, penetration threshold energy decreased as the temperature was lowered. The drop in the penetration threshold was more pronounced at very low temperatures. Delamination damage area increased as the temperature decreased from +60 °C to -50 °C.

  2. Redundancy Technology With A Focused Ion Beam

    NASA Astrophysics Data System (ADS)

    Komano, Haruki; Hashimoto, Kazuhiko; Takigawa, Tadahiro

    1989-08-01

    Fuse cutting with a focused ion beam to activate redundancy circuits is proposed. In order to verify its potential usefulness, experiments have been performed. Fuse-cutting time was evaluated using aluminum fuses with a thin passivation layer, which are difficult to cut by conventional laser-beam technology due to the material's high reflectivity. The fuse width and thickness were 2 and 0.8 μm, respectively. The fuse was cut in 5 seconds with a 30 keV focused ion beam of 0.3 A/cm² current density. Since the fuses used in DRAMs will be smaller, their cutting time will become shorter by scanning an ion beam on narrower areas. Moreover, it can be shortened by increasing current density. Fuses for redundancy technology in 256 k CMOS SRAMs were cut with a focused ion beam. The operation of the memories was checked with a memory tester. It was confirmed that memories which had failure cells operated normally after focused-ion-beam fuse-cutting. Focused ion beam irradiation effects upon a device have been studied. When a 30 keV gallium focused ion beam was irradiated near the gate of MOSFETs, a threshold voltage shift was not observed at an ion dose of 0.3 C/cm², which corresponded to the ion dose in cutting a fuse. However, when irradiated on the gate, a threshold voltage shift was observed at ion doses of more than 8 × 10⁻⁴ C/cm². The voltage shift was caused by the charge of ions within the passivation layer. It is necessary at least not to irradiate a focused ion beam on a device in cutting fuses. It is concluded that the focused-ion-beam method will be advantageous for future redundancy technology application.

  3. Simulated microgravity [bed rest] has little influence on taste, odor or trigeminal sensitivity

    NASA Technical Reports Server (NTRS)

    Vickers, Z. M.; Rice, B. L.; Rose, M. S.; Lane, H. W.

    2001-01-01

    Anecdotal evidence suggests that astronauts' perceptions of foods in space flight may differ from their perceptions of the same foods on Earth. Fluid shifts toward the head experienced in space may alter the astronauts' sensitivity to odors and tastes, producing altered perceptions. Our objective was to determine whether head-down bed rest, which produces similar fluid shifts, would produce changes in sensitivity to taste, odor or trigeminal sensations. Six subjects were tested three times prior to bed rest, three times during bed rest and two times after bed rest to determine their threshold sensitivity to the odors isoamylbutyrate and menthone, the tastants sucrose, sodium chloride, citric acid, quinine and monosodium glutamate, and to capsaicin. Thresholds were measured using a modified staircase procedure. Self-reported congestion was also recorded at each test time. Thresholds for monosodium glutamate were slightly higher during bed rest. None of the other thresholds were altered by bed rest.
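
    A generic sketch of a staircase threshold procedure of the kind mentioned above (simple 1-up/1-down rule, threshold taken as the mean of the last few reversal points); the simulated observer and all parameter values are assumptions, and the authors' modified staircase may use different step rules.

        import random

        def staircase_threshold(respond, start=8.0, step=1.0, n_reversals=8):
            """1-up/1-down staircase: decrease the level after a detection, increase after a miss.
            `respond(level)` returns True if the stimulus was detected."""
            level, direction = start, -1
            reversals = []
            last_detected = None
            while len(reversals) < n_reversals:
                detected = respond(level)
                if last_detected is not None and detected != last_detected:
                    reversals.append(level)          # response changed: record a reversal
                last_detected = detected
                direction = -1 if detected else +1
                level = max(level + direction * step, 0.0)
            return sum(reversals[-6:]) / len(reversals[-6:])

        # Simulated observer with a true threshold of 3.0 and a little response noise (assumed)
        true_threshold = 3.0
        observer = lambda lvl: lvl + random.gauss(0, 0.3) > true_threshold
        print(f"estimated threshold: {staircase_threshold(observer):.2f}")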

  4. On the mixing time of geographical threshold graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan

    In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
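
    A short sketch of generating a 2-dimensional geographical threshold graph as described above: nodes receive random positions and weights, and an edge is created when a threshold function of the two weights and the inter-node distance is exceeded. The particular threshold function (weight sum divided by squared distance) and the parameters are common illustrative choices, not necessarily those analyzed in the report.

        import numpy as np

        def geographical_threshold_graph(n, theta, rng=np.random.default_rng(1)):
            """Nodes at uniform positions in the unit square, exponential weights;
            edge (i, j) exists if (w_i + w_j) / dist(i, j)**2 >= theta."""
            pos = rng.random((n, 2))
            w = rng.exponential(1.0, n)
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    d2 = np.sum((pos[i] - pos[j]) ** 2)
                    if d2 > 0 and (w[i] + w[j]) / d2 >= theta:
                        edges.append((i, j))
            return pos, w, edges

        pos, w, edges = geographical_threshold_graph(200, theta=80.0)
        print(f"{len(edges)} edges among 200 nodes")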

  5. Systemic inflammatory response syndrome criteria in defining severe sepsis.

    PubMed

    Kaukonen, Kirsi-Maija; Bailey, Michael; Pilcher, David; Cooper, D Jamie; Bellomo, Rinaldo

    2015-04-23

    The consensus definition of severe sepsis requires suspected or proven infection, organ failure, and signs that meet two or more criteria for the systemic inflammatory response syndrome (SIRS). We aimed to test the sensitivity, face validity, and construct validity of this approach. We studied data from patients from 172 intensive care units in Australia and New Zealand from 2000 through 2013. We identified patients with infection and organ failure and categorized them according to whether they had signs meeting two or more SIRS criteria (SIRS-positive severe sepsis) or less than two SIRS criteria (SIRS-negative severe sepsis). We compared their characteristics and outcomes and assessed them for the presence of a step increase in the risk of death at a threshold of two SIRS criteria. Of 1,171,797 patients, a total of 109,663 had infection and organ failure. Among these, 96,385 patients (87.9%) had SIRS-positive severe sepsis and 13,278 (12.1%) had SIRS-negative severe sepsis. Over a period of 14 years, these groups had similar characteristics and changes in mortality (SIRS-positive group: from 36.1% [829 of 2296 patients] to 18.3% [2037 of 11,119], P<0.001; SIRS-negative group: from 27.7% [100 of 361] to 9.3% [122 of 1315], P<0.001). Moreover, this pattern remained similar after adjustment for baseline characteristics (odds ratio in the SIRS-positive group, 0.96; 95% confidence interval [CI], 0.96 to 0.97; odds ratio in the SIRS-negative group, 0.96; 95% CI, 0.94 to 0.98; P=0.12 for between-group difference). In the adjusted analysis, mortality increased linearly with each additional SIRS criterion (odds ratio for each additional criterion, 1.13; 95% CI, 1.11 to 1.15; P<0.001) without any transitional increase in risk at a threshold of two SIRS criteria. The need for two or more SIRS criteria to define severe sepsis excluded one in eight otherwise similar patients with infection, organ failure, and substantial mortality and failed to define a transition point in the risk of death. (Funded by the Australian and New Zealand Intensive Care Research Centre.).
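
    A minimal sketch of the classification used in the study: count how many SIRS criteria a patient meets and label infected patients with organ failure as SIRS-positive or SIRS-negative severe sepsis at the two-criteria cut-off. The numeric cut-offs follow the standard SIRS definition (the immature band-form criterion is omitted), and the field names and example values are assumptions.

        def sirs_count(temp_c, heart_rate, resp_rate, wbc_k_per_uL):
            """Number of SIRS criteria met (standard cut-offs; band-form criterion omitted)."""
            return sum([
                temp_c > 38.0 or temp_c < 36.0,
                heart_rate > 90,
                resp_rate > 20,
                wbc_k_per_uL > 12.0 or wbc_k_per_uL < 4.0,
            ])

        def classify_severe_sepsis(has_infection, has_organ_failure, **vitals):
            if not (has_infection and has_organ_failure):
                return "not severe sepsis"
            return ("SIRS-positive severe sepsis" if sirs_count(**vitals) >= 2
                    else "SIRS-negative severe sepsis")

        # Hypothetical patient with infection and organ failure but only one SIRS criterion
        print(classify_severe_sepsis(True, True, temp_c=37.2, heart_rate=84,
                                     resp_rate=22, wbc_k_per_uL=11.0))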

  6. Congestive kidney failure in cardiac surgery: the relationship between central venous pressure and acute kidney injury.

    PubMed

    Gambardella, Ivancarmine; Gaudino, Mario; Ronco, Claudio; Lau, Christopher; Ivascu, Natalia; Girardi, Leonard N

    2016-11-01

    Acute kidney injury (AKI) in cardiac surgery has traditionally been linked to reduced arterial perfusion. There is ongoing evidence that central venous pressure (CVP) has a pivotal role in precipitating acute renal dysfunction in cardiac medical and surgical settings. We can regard this AKI driven by systemic venous hypertension as 'kidney congestive failure'. In the cardiac surgery population as a whole, when the CVP value reaches the threshold of 14 mmHg in postoperative period, the risk of AKI increases 2-fold with an odds ratio (OR) of 1.99, 95% confidence interval (95% CI) of 1.16-3.40. In cardiac surgery subsets where venous hypertension is a hallmark feature, the incidence of AKI is higher (tricuspid disease 30%, carcinoid valve disease 22%). Even in the non-chronically congested coronary artery bypass population, CVP measured 6 h postoperatively showed significant association to renal failure: risk-adjusted OR for AKI was 5.5 (95% CI 1.93-15.5; P = 0.001) with every 5 mmHg rise in CVP for patients with CVP <9 mmHg; for CVP increments of 5 mmHg above the threshold of 9 mmHg, the risk-adjusted OR for AKI was 1.3 (95% CI 1.01-1.65; P = 0.045). This and other clinical evidence are discussed along with the underlying pathophysiological mechanisms, involving the supremacy of volume receptors in regulating the autonomic output in hypervolaemia, and the regional effect of venous congestion on the nephron. The effect of CVP on renal function was found to be modulated by ventricular function class, aetiology and acuity of venous congestion. Evidence suggests that acute increases of CVP should be actively treated to avoid a deterioration of the renal function, particularly in patients with poor ventricular fraction. Besides, the practice of treating right heart failure with fluid loading should be avoided in favour of other ways to optimize haemodynamics in this setting, because of the detrimental effects on the kidney function. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  7. Treatment of High Flow Arteriovenous Fistulas after Successful Renal Transplant Using a Simple Precision Banding Technique.

    PubMed

    Gkotsis, Georgios; Jennings, William C; Malik, Jan; Mallios, Alexandros; Taubman, Kevin

    2016-02-01

    Observation versus ligation of a functional arteriovenous fistula (AVF) after successful renal transplantation (SRT) has been a controversial topic of debate. Congestive heart failure and pulmonary hypertension are common in dialysis patients, and more frequent when vascular access flow is excessive. Renal transplant failure may occur in up to 34% of patients after 5 years, therefore maintaining a moderate flow AVF appears warranted. We review SRT patients with high flow-AVFs (HF-AVF) and clinical signs of heart failure where a modified precision banding procedure was used for access flow reduction. Patients referred for HF-AVF evaluation after SRT were identified and records reviewed retrospectively. In addition to recording clinical signs of heart failure, each patient had ultrasound AVF flow measurement before and after temporary AVF occlusion of the access by digital compression. Pulse rate and the presence or absence of a cardiac murmur was noted before and after AVF compression. Adequacy of access flow restriction was evaluated intraoperatively using ultrasound flow measurements, adjusting the banding diameter in 0.5 mm increments to achieve the targeted AVF flow. Twelve patients were evaluated over a 19-month period. Eight (66%) were male and one (8%) obese. Ages were 15-73 years (mean = 42). The AVFs were established 24-86 months previously. The mean pulse rate declined after AVF compression from 90/min to 72/min (range 110-78). Six patients had a precompression cardiac flow murmur that disappeared with temporary AVF compression. One patient with poor cardiac function underwent immediate AVF ligation with dramatic improvement in cardiac status. All other patients underwent a precision banding procedure with real-time flow monitoring. Mean access flow was 2,280 mL/min (1,148-3,320 mL/min) before access banding and was 598 mL/min (481-876) after flow reduction. The clinical signs of heart failure disappeared in all patients. All AVFs remained patent although one individual later requested ligation for cosmesis. Two patients had renal transplant failure and later successfully used the AVF. Follow-up postbanding was 1-18 months (mean = 12). Patients with successful renal transplants and HF-AVFs had resolution of heart failure findings and maintenance of access patency using a modified precision banding procedure. Flow reduction in symptomatic renal transplant patients with elevated access flow is recommended. Further study is warranted to substantiate these recommendations and clarify the appropriate thresholds for such interventions. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Analysis of Station Quality Issues from EarthScope's Transportable Array

    NASA Astrophysics Data System (ADS)

    Pfeifer, C.; Barstow, N.; Busby, R.; Hafner, K.

    2008-12-01

    160 of the first 400 EarthScope USArray Transportable Array (TA) stations have completed their first two-year deployment and are being moved to their next locations. Over the past 4 years the majority of stations have run with few interruptions in the transfer of real time data to the Array Network Facility (ANF) at the University of California, San Diego and near real time data to the IRIS Data Management System (DMS). The combination of telemetered data and dedicated people reviewing the waveforms and state of health data has revealed several conditions that can affect the data quality or cause loss of data. The data problems fall into three broad categories: station power, equipment malfunction, and communication failures. Station power issues have been implicated in several types of noise seen in the seismic data (as well as causing station failures and resultant data gaps). The most common type of equipment problem that has been found to degrade data quality is caused by sensor problems, and has affected all 3 types of sensors used in the TA to varying degrees. While communication problems can cause real time data loss, they do not cause a degradation of the quality of the data, and any gaps in the real time data due solely to communications problems are filled in later with the continuous data recorded to disk at each TA station. Over the past 4 years the TA team has recognized a number of noise sources and has made several design changes to minimize the effects on data quality. Design/procedural changes include: stopping water incursion into the stations, power conditioning, changing mass re-center voltage thresholds. Figures that demonstrate examples are provided. Changes have created better data quality and improved the station performance. Vigilance and deployment of service teams to reestablish communications, replace noisy sensors, and troubleshoot problems is also key to maintaining the high-quality TA network.

  9. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    NASA Astrophysics Data System (ADS)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time for an asset fault. Most statistical approaches rely on historical failure data that might not be available in several practical situations. To address this issue, practitioners might require the use of self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use Autoregressive (AR) models for this purpose that are adequate when the asset operating context is constant, however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated following the case of the aluminum crack growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimations were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintain the focus on the current degradation level of an asset.
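
    A compact sketch of the adaptive idea described above: an ARX(1,1) crack-growth model whose coefficients are updated by recursive least squares with a forgetting factor, then extrapolated until an assumed failure threshold to give a point estimate of the remaining useful life (RUL). The model order, the synthetic crack and load histories, and the critical crack length are illustrative assumptions, not the authors' model.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.98):
            """One recursive-least-squares step with forgetting factor lam."""
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)
            theta = theta + (k * (y - phi.T @ theta)).ravel()
            P = (P - k @ phi.T @ P) / lam
            return theta, P

        # Synthetic crack-length history driven by an exogenous load amplitude u[k] (assumed data)
        rng = np.random.default_rng(2)
        n = 300
        u = 1.0 + 0.2 * np.sin(np.arange(n) / 15.0)          # operating-condition input
        a = np.empty(n); a[0] = 1.0
        for k in range(1, n):
            a[k] = 1.005 * a[k - 1] + 0.01 * u[k - 1] + rng.normal(0, 0.01)

        # Fit a[k] = theta0 * a[k-1] + theta1 * u[k-1] online
        theta = np.zeros(2)
        P = np.eye(2) * 1e3
        for k in range(1, n):
            theta, P = rls_update(theta, P, np.array([a[k - 1], u[k - 1]]), a[k])

        # Extrapolate to an assumed critical crack length to estimate RUL (in cycles)
        a_crit, a_hat, u_future, rul = 25.0, a[-1], u[-1], 0
        while a_hat < a_crit and rul < 10_000:
            a_hat = theta[0] * a_hat + theta[1] * u_future
            rul += 1
        print(f"identified theta = {theta.round(3)}, estimated RUL ~ {rul} cycles")

    The forgetting factor lam < 1 down-weights old data, which is what lets the coefficients track a change in operating conditions; interval estimates would come from bootstrapping the residuals, as the abstract notes.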

  10. High Strain Rate Tensile Testing of Silver Nanowires: Rate-Dependent Brittle-to-Ductile Transition.

    PubMed

    Ramachandramoorthy, Rajaprakash; Gao, Wei; Bernal, Rodrigo; Espinosa, Horacio

    2016-01-13

    The characterization of nanomaterials under high strain rates is critical to understand their suitability for dynamic applications such as nanoresonators and nanoswitches. It is also of great theoretical importance to explore nanomechanics with dynamic and rate effects. Here, we report in situ scanning electron microscope (SEM) tensile testing of bicrystalline silver nanowires at strain rates up to 2/s, which is 2 orders of magnitude higher than previously reported in the literature. The experiments are enabled by a microelectromechanical system (MEMS) with fast response time. It was identified that the nanowire plastic deformation has a small activation volume (<10b³), suggesting dislocation nucleation as the rate controlling mechanism. Also, a remarkable brittle-to-ductile failure mode transition was observed at a threshold strain rate of 0.2/s. Transmission electron microscopy (TEM) revealed that along the nanowire, dislocation density and spatial distribution of plastic regions increase with increasing strain rate. Furthermore, molecular dynamics (MD) simulations show that deformation mechanisms such as grain boundary migration and dislocation interactions are responsible for such ductility. Finally, the MD and experimental results were interpreted using dislocation nucleation theory. The predicted yield stress values are in agreement with the experimental results for strain rates above 0.2/s when ductility is pronounced. At low strain rates, random imperfections on the nanowire surface trigger localized plasticity, leading to a brittle-like failure.

  11. ZERODUR - bending strength: review of achievements

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2017-08-01

    Increased demand for using the glass ceramic ZERODUR® with high mechanical loads called for strength data based on larger statistical samples. Design calculations for a failure probability target value below 1:100 000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid etched for full microcrack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is more reasonable considering that the microcracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which breakage probability is zero. This minimum strength allows calculating minimum lifetimes. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces Weibull statistics fails. The precondition of the Weibull distribution, the existence of one unique failure mechanism, is not given anymore. ZERODUR® with fully etched surfaces free from damage introduced after etching easily endures 100 MPa tensile stress. The possibility to use ZERODUR® for combined high precision and high stress application was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the gravitational wave antenna satellite array eLISA.
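
    A worked sketch of the three-parameter Weibull idea discussed above: below the threshold stress the failure probability is exactly zero, and above it the probability follows the usual Weibull form. The parameter values below are invented for illustration and are not ZERODUR® data.

        import math

        def weibull3_failure_probability(sigma, sigma_threshold, sigma_scale, m):
            """Three-parameter Weibull CDF for breakage stress:
            F(sigma) = 0 for sigma <= sigma_threshold,
            F(sigma) = 1 - exp(-((sigma - sigma_threshold)/sigma_scale)**m) otherwise."""
            if sigma <= sigma_threshold:
                return 0.0
            return 1.0 - math.exp(-((sigma - sigma_threshold) / sigma_scale) ** m)

        # Illustrative (made-up) parameters for a ground surface condition
        for stress_mpa in (30, 50, 80, 120):
            p = weibull3_failure_probability(stress_mpa, sigma_threshold=45.0,
                                             sigma_scale=60.0, m=5.0)
            print(f"{stress_mpa:>4d} MPa -> failure probability {p:.3e}")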

  12. Evidence for thermally assisted threshold switching behavior in nanoscale phase-change memory cells

    NASA Astrophysics Data System (ADS)

    Le Gallo, Manuel; Athmanathan, Aravinthan; Krebs, Daniel; Sebastian, Abu

    2016-01-01

    In spite of decades of research, the details of electrical transport in phase-change materials are still debated. In particular, the so-called threshold switching phenomenon that allows the current density to increase steeply when a sufficiently high voltage is applied is still not well understood, even though there is wide consensus that threshold switching is solely of electronic origin. However, the high thermal efficiency and fast thermal dynamics associated with nanoscale phase-change memory (PCM) devices motivate us to reassess a thermally assisted threshold switching mechanism, at least in these devices. The time/temperature dependence of the threshold switching voltage and current in doped Ge2Sb2Te5 nanoscale PCM cells was measured over 6 decades in time at temperatures ranging from 40 °C to 160 °C. We observe a nearly constant threshold switching power across this wide range of operating conditions. We also measured the transient dynamics associated with threshold switching as a function of the applied voltage. By using a field- and temperature-dependent description of the electrical transport combined with a thermal feedback, quantitative agreement with experimental data of the threshold switching dynamics was obtained using realistic physical parameters.

  13. Technical Note: An operational landslide early warning system at regional scale based on space-time variable rainfall thresholds

    NASA Astrophysics Data System (ADS)

    Segoni, S.; Battistini, A.; Rossi, G.; Rosi, A.; Lagomarsino, D.; Catani, F.; Moretti, S.; Casagli, N.

    2014-10-01

    We set up an early warning system for rainfall-induced landslides in Tuscany (23 000 km2). The system is based on a set of state-of-the-art intensity-duration rainfall thresholds (Segoni et al., 2014b) and makes use of LAMI rainfall forecasts and real-time rainfall data provided by an automated network of more than 300 rain-gauges. The system was implemented in a WebGIS to ease the operational use in civil protection procedures: it is simple and intuitive to consult and it provides different outputs. Switching among different views, the system is able to focus both on monitoring of real time data and on forecasting at different lead times up to 48 h. Moreover, the system can switch between a very straightforward view where a synoptic scenario of the hazard can be shown all over the region and a more in-depth view where the rainfall path of rain-gauges can be displayed and constantly compared with rainfall thresholds. To better account for the high spatial variability of the physical features, which affects the relationship between rainfall and landslides, the region is subdivided into 25 alert zones, each provided with a specific threshold. The warning system reflects this subdivision: using a network of 332 rain gauges, it allows monitoring each alert zone separately and warnings can be issued independently from one alert zone to another. An important feature of the warning system is the use of thresholds that may vary in time, adapting to the conditions of the rainfall path recorded by the rain-gauges. Depending on when the starting time of the rainfall event is set, the comparison with the threshold may produce different outcomes. Therefore, a recursive algorithm was developed to check and compare with the thresholds all possible starting times, highlighting the worst scenario and showing in the WebGIS interface at what time and how much the rainfall path has exceeded or will exceed the most critical threshold. Besides forecasting and monitoring the hazard scenario over the whole region with hazard levels differentiated for 25 distinct alert zones, the system can be used to gather, analyze, visualize, explore, interpret and store rainfall data, thus representing a potential support to both decision makers and scientists.
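
    A simplified sketch of the "check all possible starting times" idea described above: for every candidate event start in a recorded rainfall series, cumulate the rainfall, convert it to a mean intensity, compare against an intensity-duration threshold of the form I = a * D**b, and keep the worst (largest) exceedance. The threshold coefficients and the hourly rainfall series are assumptions for illustration, not the Tuscany thresholds.

        def worst_threshold_exceedance(hourly_rain_mm, a=10.0, b=-0.5):
            """Scan all candidate event starting times; return the largest ratio of
            mean intensity to the intensity-duration threshold I(D) = a * D**b (D in hours)."""
            worst = (0.0, None, None)                 # (ratio, start index, duration)
            n = len(hourly_rain_mm)
            for start in range(n):                    # candidate starting time of the event
                cum = 0.0
                for end in range(start, n):
                    cum += hourly_rain_mm[end]
                    duration = end - start + 1
                    intensity = cum / duration        # mean intensity in mm/h
                    threshold = a * duration ** b
                    ratio = intensity / threshold
                    if ratio > worst[0]:
                        worst = (ratio, start, duration)
            return worst

        # Hypothetical 12-hour rainfall record (mm per hour)
        record = [0, 2, 8, 15, 11, 4, 0, 0, 6, 3, 1, 0]
        ratio, start, dur = worst_threshold_exceedance(record)
        print(f"worst case: {ratio:.2f}x threshold, start hour {start}, duration {dur} h")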

  14. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
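
    A sampling-based approximation of one of the margin CDFs listed above, item (iii), can be sketched as follows. The exponential failure-time distributions and their scales are placeholders, not the report's link models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t_wl = rng.exponential(scale=1.0, size=n)   # time at which the weak-link (WL) system fails
t_sl = rng.exponential(scale=3.0, size=n)   # time at which the strong-link (SL) system fails
margin = t_sl - t_wl                        # failure time margin, item (iii)

# Empirical CDF of the margin; P(margin <= 0) is the probability that LOAS occurs
grid = np.linspace(margin.min(), margin.max(), 200)
cdf = np.searchsorted(np.sort(margin), grid, side="right") / n
print("P(margin <= 0) ~", (margin <= 0).mean())
```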

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early stage breast cancer treatment, has been recently installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (each ranging 1–10). An RPN higher than 150 was set as the threshold for identifying failure modes of concern. For these high-risk failure modes, potential quality assurance procedures and risk control techniques have been proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the stages of simulation, treatment planning, and delivery, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. In treatment planning, 34 potential failure modes were analyzed, with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, re-assessed with the recommended actions in place, were below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
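
    The RPN bookkeeping described above reduces to a simple product and threshold test; a toy sketch follows. The listed failure modes and scores are illustrative placeholders, not values from the GammaPod™ analysis.

```python
# RPN = severity x occurrence x detectability, each scored 1-10; 150 is the action threshold.
failure_modes = [
    # (description, severity, occurrence, detectability) -- placeholder scores
    ("breast-cup pressure loss",      8, 7, 7),
    ("incorrect target localization", 9, 6, 8),
    ("wrong plan transferred",        7, 2, 3),
]

RPN_THRESHOLD = 150
for name, sev, occ, det in failure_modes:
    rpn = sev * occ * det
    action = "mitigate" if rpn > RPN_THRESHOLD else "accept"
    print(f"{name:30s} RPN={rpn:4d} -> {action}")
```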

  16. Himalayan Sackung and Associations to Regional Structure

    NASA Astrophysics Data System (ADS)

    Shroder, J. F.; Bishop, M. P.; Olsenholler, J.

    2003-12-01

    Recognition of sackung slope failure or deep-seated, rock-slope deformation in the Himalaya has been rather limited, in part because: (1) many geoscientists do not recognize its characteristics; (2) large-scale aerial photographs and topographic maps used to identify the characteristic surficial, topographic manifestations of the failure type are commonly low-level state secrets in that region; and (3) no systematic survey for sackung has ever been made in the Himalaya. In the Pakistani-controlled, western Himalaya, some unconventional access to aerial photographs in the Kaghan and Nanga Parbat areas allowed first recognition of several characteristic ridge-top grabens and anti-slope scarps. Later release of declassified, stereo imagery from the CORONA and KEYHOLE satellite series enabled discovery of other examples in the K2 region. Comparison of mapped sackung failures with geologic base maps has demonstrated some coincidence of sackung with various structural trends, including synformal structures in upper thrust plates or along the traces of high-angle faults. In all probability these structural trends have provided plentiful ancillary planes of weakness along which gravitationally driven sackung is facilitated. Sackung failure in the Himalaya appears to be a spatially scale-dependent manifestation of a gravitational-collapse continuum of the brittle, upper crust, mainly involving mountain ridges. In contrast, gravitational collapse of the whole range may involve some similar failures but also include listric faulting, as well as subsidence movement into zones of ductility at depth. Temporal scale dependence of sackung may also be threshold dominated, wherein initial long-continued, slow failure ultimately leads to the commonly catastrophic rock-slope collapses recently recognized throughout the western Himalaya and now differentiated from their original mismapping as glacial moraines. Such sackung in Himalayan terrain undergoing active deglaciation from global warming may increase catastrophic slope-failure hazard.

  17. Predicting Failure Under Laboratory Conditions: Learning the Physics of Slow Frictional Slip and Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.

    2016-12-01

    Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts the upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the times of the beginning and end of the next slip event. The prediction improves as failure is approached, as additional data features contribute. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build-up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information. [1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
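
    The windowed-feature Random Forest regression described above can be sketched roughly as below, assuming scikit-learn and a synthetic stand-in for the acoustic-emission record; the window length, feature set, and labels are illustrative, not those of the study.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
signal = rng.normal(size=200_000)                       # stand-in for the continuous AE record
time_to_failure = np.linspace(12.0, 0.0, signal.size)   # synthetic labels (s)

win = 2_000
X, y = [], []
for i in range(0, signal.size - win, win):
    w = signal[i:i + win]
    # statistical features of the AE window (variance, higher moments, extremes, ...)
    X.append([w.var(), stats.skew(w), stats.kurtosis(w), w.max(), w.min()])
    y.append(time_to_failure[i + win])                  # time remaining at the end of the window

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("predicted time to failure for the last window:", model.predict([X[-1]])[0])
```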

  18. Big brown bats (Eptesicus fuscus) maintain hearing sensitivity after exposure to intense band-limited noise.

    PubMed

    Simmons, Andrea Megela; Hom, Kelsey N; Simmons, James A

    2017-03-01

    Thresholds to short-duration narrowband frequency-modulated (FM) sweeps were measured in six big brown bats (Eptesicus fuscus) in a two-alternative forced choice passive listening task before and after exposure to band-limited noise (lower and upper frequencies between 10 and 50 kHz, 1 h, 116-119 dB sound pressure level root mean square; sound exposure level 152 dB). At recovery time points of 2 and 5 min post-exposure, thresholds varied from -4 to +4 dB from pre-exposure threshold estimates. Thresholds after sham (control) exposures varied from -6 to +2 dB from pre-exposure estimates. The small differences in thresholds after noise and sham exposures support the hypothesis that big brown bats do not experience significant temporary threshold shifts under these experimental conditions. These results confirm earlier findings showing stability of thresholds to broadband FM sweeps at longer recovery times after exposure to broadband noise. Big brown bats may have evolved a lessened susceptibility to noise-induced hearing losses, related to the special demands of echolocation.

  19. New developments in supra-threshold perimetry.

    PubMed

    Henson, David B; Artes, Paul H

    2002-09-01

    To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.

  20. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
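
    The classify-then-regress hierarchy described above can be sketched with ordinary scikit-learn models standing in for Treed Gaussian Processes; the inputs, labels, and time-to-failure relation below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 6))                              # controller-parameter inputs
unstable = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)      # synthetic stable/unstable label
ttf = 10.0 * (2.0 - X[:, 0])                                # synthetic time-to-failure (s)

clf = RandomForestClassifier(random_state=0).fit(X, unstable)
reg = RandomForestRegressor(random_state=0).fit(X[unstable == 1], ttf[unstable == 1])

x_new = rng.uniform(size=(1, 6))
if clf.predict(x_new)[0] == 1:          # step 1: will this flight become unstable?
    print("predicted failure, time-to-failure ~", reg.predict(x_new)[0], "s")
else:
    print("predicted stable flight")
```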

  1. Towards a debris-flow warning system based on hydrological measurements of the triggering conditions. A study of El Rebaixader catchment (Central Pyrenees, Spain)

    NASA Astrophysics Data System (ADS)

    Abancó, Clàudia; Hürlimann, Marcel; Moya, José

    2014-05-01

    Debris flows represent a risk to society due to their high destructive power. Rainfall is the main debris-flow triggering factor. Rainfall thresholds are generally used for warning of debris-flow occurrence in susceptible catchments. However, the efficiency of such thresholds for real-time hazard assessment is often conditioned by many factors, such as the location and number of the rain gauges used (both to define the thresholds and to set off warnings), the temporal and spatial evolution of convective rainfall cells, or the effect of snow-cover melting. These factors affect the length of the warning time, which is of crucial importance for issuing alert messages or alarms to the people and infrastructures at risk. The Rebaixader catchment (Central Pyrenees, Spain) has been monitored since 2009 by six stations: four recording information on initiation and two on flow detection and kinematic behaviour. Until December 2013, 7 debris flows, 17 debris floods and 4 rockfalls had been recorded. The objectives of this work were: a) the definition of rainfall thresholds at two different rain gauges; b) the analysis of the infiltration patterns in order to define their potential use for warning systems; and c) preliminary testing of the rainfall thresholds' efficiency in terms of warning time in this catchment. This last goal consisted of comparing the time elapsed between exceedance of the rainfall threshold and detection of the event by the stations in the channel area. The results suggest that the intensity-duration rainfall thresholds sometimes provide warning times that would be too short for an adequate reaction in the Rebaixader catchment (less than 10 minutes). Combining such rainfall thresholds with infiltration measurements is useful to increase the warning time. This is especially true for events triggered in spring, when snowmelt plays an important role in the triggering conditions. However, the effects of infiltration associated with summer convective rainfall are almost imperceptible; therefore their importance in warning systems decreases.

  2. Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr

    Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared with more traditional ones.

  3. Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons

    PubMed Central

    Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo

    2018-01-01

    Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (less than a fraction 1/10^8 of missed spikes) in biologically relevant settings. PMID:29379430
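
    The missed-spike problem described above can be illustrated with a toy leaky integrate-and-fire neuron driven by an exponentially decaying current: the membrane potential can cross the threshold inside a checkpoint interval and return below it before the next checkpoint. This is only an illustration of the failure of an endpoint-only test, not the paper's algebraic inequality test; all parameters are arbitrary.

```python
import numpy as np

tau_m, tau_s, theta = 10.0, 2.0, 1.0   # membrane/synapse time constants (ms), spike threshold
v0, i0, h = 0.2, 8.0, 10.0             # state at the checkpoint and checkpoint spacing (ms)

def v_of_t(t):
    """Analytic solution of dV/dt = (-V + I)/tau_m with I(t) = i0*exp(-t/tau_s)."""
    a = i0 * tau_s / (tau_s - tau_m)                   # coefficient of the synaptic mode
    return (v0 - a) * np.exp(-t / tau_m) + a * np.exp(-t / tau_s)

# Imperfect test: only check the membrane potential at the next checkpoint
print("endpoint test fires:", v_of_t(h) >= theta)                        # False: spike missed
# Retrospective test: check the whole interval (dense grid standing in
# for the paper's exact state-space inequalities)
t = np.linspace(0.0, h, 1_000)
print("within-interval test fires:", bool(np.any(v_of_t(t) >= theta)))   # True
```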

  4. Nociception, pain, negative moods and behavior selection

    PubMed Central

    Baliki, Marwan N.; Apkarian, A. Vania

    2015-01-01

    Recent neuroimaging studies suggest that the brain adapts with pain, as well as imparts risk for developing chronic pain. Within this context we revisit the concepts of nociception, acute and chronic pain, and negative moods relative to behavior selection. We redefine nociception as the mechanism protecting the organism from injury, acute pain as the failure of avoidant behavior, and propose a mesolimbic threshold process that gates the transformation of nociceptive activity into conscious pain. Adaptations in this threshold process are envisioned to be critical for the development of chronic pain. We deconstruct chronic pain into four distinct phases, each with specific mechanisms, and outline the current state of knowledge regarding these mechanisms: the limbic brain imparts risk, while mesolimbic learning processes reorganize the neocortex into a chronic pain state. Moreover, pain and negative moods are envisioned as a continuum of aversive behavioral learning, which enhances survival by protecting against threats. PMID:26247858

  5. Urine benzodiazepines screening of involuntarily drugged and robbed or raped patients.

    PubMed

    Boussairi, A; Dupeyron, J P; Hernandez, B; Delaitre, D; Beugnet, L; Espinoza, P; Diamant-Berger, O

    1996-01-01

    This study involved 35 patients who claimed to have been drugged before being robbed or raped, despite negative urine toxicologic screening by immunoenzymatic methods. The urine samples were frozen for further investigation, including enzymatic hydrolysis of urinary conjugates, liquid-solid extraction and, finally, immunoenzymatic screening of the concentrated urine extract. Urine benzodiazepines were analyzed by immunoenzymatic assay before and after enzymatic hydrolysis combined with extraction. On direct immunoenzymatic screening, 17 of the 35 urine samples were benzodiazepine positive. Enrichment of the preserved specimens improved the detection threshold from 200 ng/mL to 50 ng/mL, and 10 of the 18 negative samples became positive. This method allowed us to demonstrate the presence of benzodiazepines in roughly half of the previously negative urine samples. Benzodiazepine screening is particularly problematic because of low dosage, rapid elimination, the failure of immunoenzymatic reagents to detect conjugated metabolites, and high detection thresholds for certain substances.

  6. Tightness of Salt Rocks and Fluid Percolation

    NASA Astrophysics Data System (ADS)

    Lüdeling, C.; Minkley, W.; Brückner, D.

    2016-12-01

    Salt formations are used for storage of oil and gas and as waste repositories because of their excellent barrier properties. We summarise the current knowledge regarding fluid tightness of saliferous rocks, in particular rock salt. Laboratory results, in-situ observations and natural analogues, as well as theoretical and numerical investigations, indicate that pressure-driven percolation is the most important mechanism for fluid transport: if the fluid pressure exceeds the percolation threshold, i.e. the minor principal stress, the fluid can open up grain boundaries, create connected flow paths and initiate directed migration in the direction of the major principal stress. Hence, this mechanism provides the main failure mode for rock salt barriers, whose integrity can be lost if the minor principal stress is lowered, e.g. due to excavations or thermomechanical uplift. We present new laboratory experiments showing that there is no fluid permeation below the percolation threshold, even at high temperatures and pressures, contrary to recent claims in the literature.

  7. Rotor-to-stator rub vibration in centrifugal compressor

    NASA Technical Reports Server (NTRS)

    Gao, J. J.; Qi, Q. M.

    1985-01-01

    One example of excessive vibration encountered during loading of a centrifugal compressor train (H-type compressor with HP casing) is discussed. An investigation was made of the effects of the dynamic load on the bearing stiffness and the rotor-bearing system critical speed. The high vibration occurred at a "threshold load," but the machine did not run smoothly due to rubs even after it had passed through the threshold load. The field data are presented and discussed, along with a case history that uses background information to identify the malfunction conditions. The analysis shows that the failures, including full reverse precession rub and exact one-half subharmonic vibration, were caused by oversized bearings and displacement of the rotor center due to foundation deformation and misalignment between gear shafts, etc. The corrective actions taken to alleviate the excessive vibration and the problems that remain to be solved are also presented.

  8. Saturation-state sensitivity of marine bivalve larvae to ocean acidification

    NASA Astrophysics Data System (ADS)

    Waldbusser, George G.; Hales, Burke; Langdon, Chris J.; Haley, Brian A.; Schrader, Paul; Brunner, Elizabeth L.; Gray, Matthew W.; Miller, Cale A.; Gimenez, Iria

    2015-03-01

    Ocean acidification results in co-varying inorganic carbon system variables. Of these, an explicit focus on pH and organismal acid-base regulation has failed to distinguish the mechanism of failure in highly sensitive bivalve larvae. With unique chemical manipulations of seawater we show definitively that larval shell development and growth are dependent on seawater saturation state, and not on carbon dioxide partial pressure or pH. Although other physiological processes are affected by pH, mineral saturation state thresholds will be crossed decades to centuries ahead of pH thresholds owing to nonlinear changes in the carbonate system variables as carbon dioxide is added. Our findings were repeatable for two species of bivalve larvae, could resolve discrepancies in experimental results, are consistent with a previous model of ocean acidification impacts due to rapid calcification in bivalve larvae, and suggest a fundamental ocean acidification bottleneck at early life history for some marine keystone species.

  9. Regression analysis of clustered failure time data with informative cluster size under the additive transformation models.

    PubMed

    Chen, Ling; Feng, Yanqin; Sun, Jianguo

    2017-10-01

    This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in situations both with and without informative cluster size. They are applied to a dental study that motivated this investigation.
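
    The inverse-cluster-size weighting idea described above can be sketched with a weighted Cox fit as a convenient stand-in for the paper's additive transformation models; lifelines and the synthetic clustered data below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
rows = []
for cluster in range(200):
    frailty = rng.gamma(2.0, 0.5)                  # shared within-cluster risk
    size = 1 + rng.poisson(frailty)                # cluster size related to risk (informative)
    for _ in range(size):
        x = rng.normal()
        t = rng.exponential(1.0 / (frailty * np.exp(0.5 * x)))
        rows.append({"time": t, "event": 1, "x": x, "w": 1.0 / size})

df = pd.DataFrame(rows)
cph = CoxPHFitter()
# weights_col carries the 1/cluster-size weights into the estimating equations
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```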

  10. Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)

    2002-01-01

    To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to understanding and predicting the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant-load-rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.

  11. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihoods of occurrence, detection, and correction of the human error are identified, as is the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is determined and compared with a risk threshold to identify whether corrective measures are appropriate.
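
    A minimal sketch of the bookkeeping implied above is given below; the verb-to-error lookup, the scores, and the threshold are hypothetical placeholders, not the patented tool's actual tables.

```python
from dataclasses import dataclass

VERB_TO_ERRORS = {                     # hypothetical verb -> potential human error lookup
    "connect": ["connects to wrong port", "fails to connect"],
    "read":    ["misreads value", "skips reading"],
}

@dataclass
class HumanError:
    description: str
    p_occurrence: float                # likelihood the error occurs
    p_undetected: float                # likelihood it is neither detected nor corrected
    severity: int                      # 1 (negligible) .. 10 (catastrophic)

    def risk(self) -> float:
        return self.p_occurrence * self.p_undetected * self.severity

RISK_THRESHOLD = 2.0                   # assumed cutoff above which corrective measures are needed

for verb, errors in VERB_TO_ERRORS.items():
    for desc in errors:
        e = HumanError(desc, p_occurrence=0.1, p_undetected=0.5, severity=8)
        action = "corrective measures" if e.risk() > RISK_THRESHOLD else "acceptable"
        print(f"{verb}: {e.description:25s} risk={e.risk():.2f} -> {action}")
```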

  12. Integrating real-time subsurface hydrologic monitoring with empirical rainfall thresholds to improve landslide early warning

    USGS Publications Warehouse

    Mirus, Benjamin B.; Becker, Rachel E.; Baum, Rex L.; Smith, Joel B.

    2018-01-01

    Early warning for rainfall-induced shallow landsliding can help reduce fatalities and economic losses. Although these commonly occurring landslides are typically triggered by subsurface hydrological processes, most early warning criteria rely exclusively on empirical rainfall thresholds and other indirect proxies for subsurface wetness. We explore the utility of explicitly accounting for antecedent wetness by integrating real-time subsurface hydrologic measurements into landslide early warning criteria. Our efforts build on previous progress with rainfall thresholds, monitoring, and numerical modeling along the landslide-prone railway corridor between Everett and Seattle, Washington, USA. We propose a modification to previously established recent versus antecedent (RA) cumulative rainfall thresholds by replacing the antecedent 15-day rainfall component with the average saturation observed over the same timeframe. We calculate this antecedent saturation with real-time telemetered measurements from five volumetric water content probes installed in the shallow subsurface within a steep vegetated hillslope. Our hybrid rainfall versus saturation (RS) threshold still relies on the same recent 3-day rainfall component as the existing RA thresholds, to facilitate ready integration with quantitative precipitation forecasts. During the 2015–2017 monitoring period, this hybrid RS approach yields more true positives and fewer false positives and false negatives than the previous rainfall-only RA thresholds. We also demonstrate that alternative hybrid threshold formats could be even more accurate, which suggests that further development and testing during future landslide seasons is needed. The positive results confirm that accounting for antecedent wetness conditions with direct subsurface hydrologic measurements can improve thresholds for alert systems and early warning of rainfall-induced shallow landsliding.
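
    The hybrid recent-rainfall versus antecedent-saturation (RS) test described above can be sketched as a simple comparison against a threshold line; the linear form and the coefficients below are assumptions for illustration, not the calibrated values for the Everett-Seattle corridor.

```python
def rs_alert(recent_3day_rain_mm: float, saturation_15day_avg: float,
             a: float = 120.0, b: float = 100.0) -> bool:
    """Issue an alert when the point (antecedent saturation, recent rainfall)
    plots above an assumed linear threshold: recent > a - b * saturation."""
    return recent_3day_rain_mm > a - b * saturation_15day_avg

# recent 3-day rainfall from the forecast, antecedent saturation from the probes
print(rs_alert(recent_3day_rain_mm=60.0, saturation_15day_avg=0.75))  # True: wet antecedent
print(rs_alert(recent_3day_rain_mm=30.0, saturation_15day_avg=0.40))  # False: drier antecedent
```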

  13. Measure of robustness for complex networks

    NASA Astrophysics Data System (ADS)

    Youssef, Mina Nabil

    Critical infrastructures are repeatedly attacked by external triggers, causing tremendous amounts of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade the network performance. These attacks and failures are considered as disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and the cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the Viral Conductance (VC_SIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) epidemic approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state over all possible effective infection strengths. Through examples, VC_SIS provides more insight about the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabasi-Albert preferential attachment networks and the effect of the topology on the steady-state infection are studied, to show the importance of quantifying the robustness of networks. Second, a new metric, VC_SIR, is introduced to assess the robustness of networks with respect to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VC_SIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VC_SIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VC_SIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among the individuals. Thus, we consider the spread of an epidemic as a dynamical system, and the total number of infection cases as the state of the system, while the weight reduction in the social network is the controller variable used to slow or reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find near-optimal solutions by reducing the contacts among the individuals in a decentralized way. Finally, the cascading failures that can take place in power grids and have recently caused several blackouts are studied. We propose a new metric to assess the robustness of the power grid with respect to cascading failures. The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation strategies. In summary, our work advances the network science field in assessing the robustness of complex networks with respect to various disturbing dynamics.
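
    The viral-conductance idea described above, integrating the steady-state infected fraction over all effective infection strengths, can be illustrated with a toy homogeneous mean-field closure; the dissertation's network-based formulation differs in detail.

```python
import numpy as np

def steady_state_fraction(s):
    """Homogeneous mean-field SIS: y* = 1 - 1/s above the epidemic threshold s = 1."""
    return np.maximum(0.0, 1.0 - 1.0 / s)

s = np.linspace(1e-3, 10.0, 10_000)                  # effective infection strengths
y = steady_state_fraction(s)
vc = float(np.sum(y) * (s[1] - s[0]))                # area under the curve ("viral conductance")
print(f"toy viral conductance: {vc:.2f}")
```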

  14. Stress Transmission and Failure in Disordered Porous Media

    NASA Astrophysics Data System (ADS)

    Laubie, Hadrien; Radjai, Farhang; Pellenq, Roland; Ulm, Franz-Josef

    2017-08-01

    By means of extensive lattice-element simulations, we investigate stress transmission and its relation with failure properties in increasingly disordered porous systems. We observe a non-Gaussian broadening of stress probability density functions under tensile loading with increasing porosity and disorder, revealing a gradual transition from a state governed by single-pore stress concentration to a state controlled by multipore interactions and metric disorder. This effect is captured by the excess kurtosis of stress distributions and shown to be nicely correlated with the second moment of local porosity fluctuations, which appears thus as a (dis)order parameter for the system. By generating statistical ensembles of porous textures with varying porosity and disorder, we derive a general expression for the fracture stress as a decreasing function of porosity and disorder. Focusing on critical sites where the local stress is above the global fracture threshold, we also analyze the transition to failure in terms of a coarse-graining length. These findings provide a general framework which can also be more generally applied to multiphase and structural heterogeneous materials.
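
    The excess-kurtosis diagnostic mentioned above can be illustrated by comparing a near-Gaussian and a heavy-tailed local-stress sample; the synthetic distributions below are placeholders, not lattice-element output.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
stress_ordered = rng.normal(loc=1.0, scale=0.10, size=50_000)       # near-Gaussian local stresses
stress_disordered = rng.laplace(loc=1.0, scale=0.15, size=50_000)   # heavy-tailed local stresses

# scipy's kurtosis() returns the excess kurtosis (0 for a Gaussian) by default
print("ordered sample   :", kurtosis(stress_ordered))
print("disordered sample:", kurtosis(stress_disordered))            # ~3 for a Laplace distribution
```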

  15. High-Temperature Creep Degradation of the AM1/NiAlPt/EBPVD YSZ System

    NASA Astrophysics Data System (ADS)

    Riallant, Fanny; Cormier, Jonathan; Longuet, Arnaud; Milhet, Xavier; Mendez, José

    2014-01-01

    The failure mechanisms of a NiAlPt/electron beam physical vapor deposition yttria-stabilized-zirconia thermal barrier coating system deposited on the AM1 single crystalline substrate have been investigated under pure creep conditions in the temperature range from 1273 K to 1373 K (1000 °C to 1100 °C) and for durations up to 1000 hours. Doubly tapered specimens were used, allowing the analysis of different stress states and different accumulated viscoplastic strains for a given creep condition. In these experiments, two kinds of damage mechanisms were observed. Under low applied stress conditions (i.e., long creep tests), microcracking is localized in the vicinity of the thermally grown oxide (TGO). Under high applied stress conditions, an unconventional failure mechanism at the substrate/bond coat interface is observed because of large creep strains and fast creep deformation, leading to limited TGO growth. This unconventional failure mechanism is observed even though the interfacial bond coat/top coat TGO thickening is accelerated by the applied mechanical stress beyond a given stress threshold.

  16. Origin of anomalous inverse notch effect in bulk metallic glasses

    NASA Astrophysics Data System (ADS)

    Pan, J.; Zhou, H. F.; Wang, Z. T.; Li, Y.; Gao, H. J.

    2015-11-01

    Understanding notch-related failure is crucial for the design of reliable engineering structures. However, substantial controversies exist in the literature on the notch effect in bulk metallic glasses (BMGs), and the underlying physical mechanism responsible for the apparent confusion is still poorly understood. Here we investigate the physical origin of an inverse notch effect in a Zr-based metallic glass, where the tensile strength of the material is dramatically enhanced, rather than decreased (as expected from the stress concentration point of view), by introduction of a notch. Our experiments and molecular dynamics simulations show that the seemingly anomalous inverse notch effect is in fact caused by a transition in failure mechanism from shear banding at the notch tip to cavitation and void coalescence. Based on our theoretical analysis, the transition occurs as the stress triaxiality in the notched sample exceeds a material-dependent threshold value. Our results fill the gap in the current understanding of BMG strength and failure mechanism by resolving the conflicts on notch effects and may inspire re-interpretation of previous reports on BMG fracture toughness where pre-existing notches were routinely adopted.

  17. A Modified Split Hopkinson Pressure Bar Approach for Mimicking Dynamic Oscillatory Stress Fluctuations During Earthquake Rupture

    NASA Astrophysics Data System (ADS)

    Braunagel, M. J.; Griffith, W. A.

    2017-12-01

    Past experimental work has demonstrated that rock failure at high strain rates occurs by fragmentation rather than discrete fracture and is accompanied by a dramatic increase in rock strength. However, these observations are difficult to reconcile with the assertion that pulverized rocks in fault zones are the product of impulsive stresses during the passage of earthquake ruptures, as some pulverized rocks lie too far from the principal slip zones for strain rates there to have exceeded the fragmentation transition. One suggested explanation for this paradox is that repeated loading over the course of multiple earthquake ruptures may gradually reduce the pulverization threshold, in terms of both strain rate and strength. We propose that oscillatory loading during a single earthquake rupture may further lower these pulverization thresholds, and that traditional dynamic experimental approaches, such as the Split Hopkinson Pressure Bar (SHPB), wherein load is applied as a single, smooth, sinusoidal compressive wave, may not reflect natural loading conditions. To investigate the effects of oscillatory compressive loading expected during earthquake rupture propagation, we develop a controlled cyclic loading method on an SHPB apparatus utilizing two striker bars connected by an elastic spring. Unlike traditional SHPB experiments that utilize a gas gun to fire a projectile bar and generate a single compressive wave on impact with the incident bar, our modified striker bar assembly oscillates while moving down the gun barrel and generates two separate compressive pulses separated by a lag time. By modeling the modified assembly as a mass-spring-mass system accelerating due to the force of the released gas, we can predict the compression time of the spring upon impact and therefore the time delay between the generation of the first and second compressive waves. This allows us to predictably control load cycles with durations of only a few hundred microseconds. Initial experimental results demonstrate that fragmentation of Westerly Granite samples occurs at lower stresses and strain rates than those expected from traditional SHPB experiments.
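
    Under the mass-spring-mass idealization described above, the lag between the two compressive pulses is roughly half the oscillation period of the bars' relative coordinate; the sketch below uses assumed mass and stiffness values, not the actual apparatus parameters.

```python
import numpy as np

m1, m2 = 1.5, 1.5                      # striker bar masses (kg), assumed values
k = 8.0e7                              # spring stiffness (N/m), assumed value

mu = m1 * m2 / (m1 + m2)               # reduced mass of the two-bar system
lag = np.pi * np.sqrt(mu / k)          # half period of the relative oscillation (s)
print(f"predicted lag between pulses: {lag * 1e6:.0f} microseconds")
```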

  18. Where do we stand after twenty years of dynamic triggering studies? (Invited)

    NASA Astrophysics Data System (ADS)

    Prejean, S. G.; Hill, D. P.

    2013-12-01

    In the past two decades, remote dynamic triggering of earthquakes by other earthquakes has been explored in a variety of physical environments with a wide array of observation and modeling techniques. These studies have significantly refined our understanding of the state of the crust and the physical conditions controlling earthquake nucleation. Despite an ever growing database of dynamic triggering observations, significant uncertainties remain and vigorous debate in almost all aspects of the science continues. For example, although dynamic earthquake triggering can occur with peak dynamic stresses as small as 1 kPa, triggering thresholds and their dependence on local stress state, hydrological environment, and frictional properties of faults are not well understood. Some studies find a simple threshold based on the peak amplitude of shaking while others find dependencies on frequency, recharge time, and other parameters. Considerable debate remains over the range of physical processes responsible for dynamic triggering, and the wide variation in dynamic triggering responses and time scales suggests triggering by multiple physical processes. Although Coulomb shear failure with various friction laws can often explain dynamic triggering, particularly instantaneous triggering, delayed dynamic triggering may be dependent on fluid transport and other slowly evolving aseismic processes. Although our understanding of the global distribution of dynamic triggering has improved, it is far from complete due to spatially uneven monitoring. A major challenge involves establishing statistical significance of potentially triggered earthquakes, particularly if they are isolated events or time-delayed with respect to triggering stresses. Here we highlight these challenges and opportunities with existing data. We focus on environmental dependence of dynamic triggering by large remote earthquakes particularly in volcanic and geothermal systems, as these systems often have high rates of background seismicity. In many volcanic and geothermal systems, such as the Geysers in Northern California, dynamic triggering of micro-earthquakes is frequent and predictable. In contrast, most active and even erupting volcanoes in Alaska (with the exception of the Katmai Volcanic Cluster) do not experience dynamic triggering. We explore why.

  19. Strategy Guideline. Modeling Enclosure Design in Above-Grade Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Lstiburek; Ueno, K.; Musunuru, S.

    2016-02-01

    The Strategy Guideline, written by the U.S. Department of Energy's research team Building Science Corporation, 1) describes how to model and interpret results of models for above-grade walls, and 2) analyzes the failure thresholds and criteria for above-grade walls. A library of above-grade walls with historically successful performance was used to calibrate WUFI (Wärme und Feuchte instationär) software models. The information is generalized for application to a broad population of houses within the limits of existing experience.

  20. Expanded envelope concepts for aircraft control-element failure detection and identification

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1988-01-01

    The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.
