DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional onmore » time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.« less
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for system reliability, RY(t), and mean time to system failure, MTTF are derived. Sensitivity analysis, relative sensitivity analysis of the system reliability and the mean time to failure, with respect to system parameters are also investigated.
A bivariate model for analyzing recurrent multi-type automobile failures
NASA Astrophysics Data System (ADS)
Sunethra, A. A.; Sooriyarachchi, M. R.
2017-09-01
The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures where failures can occur due to various multi-type failure modes and these failures are repetitive such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and type of the failure serve as response variables. However, these two response variables are highly correlated with each other since the timing of failures has an association with the mode of the failure. When there are more than one correlated response variables, the fitting of a multivariate model is more preferable than separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent data because failure instances of a single automobile are correlated with each other while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting time and type of failure as responses adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models for joining the responses and for representing the correlated data respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were Weibull distribution and multinomial distribution respectively. The proposed bivariate model was programmed in SAS Procedure Proc NLMIXED by user programming appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses and it was identified that better performance is secured by the bivariate model. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as amore » function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the life time for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.« less
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-27
... discussion of analyses of the initial submissions of data concerning mechanical fitting failures in... information. The DIMP rule also required distribution pipeline operators to report failures of mechanical... mechanical fitting failure reporting will be preliminary at this time. They will be based on a limited set of...
NASA Astrophysics Data System (ADS)
Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc
2004-09-01
High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction defined as the time at which a parameter reaches its maximum acceptable shirt still stays the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both strong test time reduction and long-term reliability prediction. Unfortunately, in the case of mature technology, there is a growing complexity to calculate average lifetime and failure rates (FITs) using ageing tests in particular due to extremely low failure rates. For present laser diode technologies, time to failure tend to be 106 hours aged under typical conditions (Popt=10 mW and T=80°C). These ageing tests must be performed on more than 100 components aged during 10000 hours mixing different temperatures and drive current conditions conducting to acceleration factors above 300-400. These conditions are high-cost, time consuming and cannot give a complete distribution of times to failure. A new approach consists in use statistic computations to extrapolate lifetime distribution and failure rates in operating conditions from physical parameters of experimental degradation laws. In this paper, Distributed Feedback single mode laser diodes (DFB-LD) used for 1550 nm telecommunication network working at 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistic calculations and equations of drift mechanisms versus time fitted from experimental measurements.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However a few research papers exist in literature. This paper proposes a methodology for assessing the reliability of 33/11 Kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimation for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% in each 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems for both the maintenance of power systems models and preventive maintenance models.
Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However a few research papers exist in literature. This paper proposes a methodology for assessing the reliability of 33/11 Kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimation for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three parameter Dagum distribution with a scale parameter and shape parameters and . Our analysis reveals that the reliability value decreased by 38.2% in each 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems for both the maintenance of power systems models and preventive maintenance models. PMID:23936346
[Hazard function and life table: an introduction to the failure time analysis].
Matsushita, K; Inaba, H
1987-04-01
Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
A Novel Solution-Technique Applied to a Novel WAAS Architecture
NASA Technical Reports Server (NTRS)
Bavuso, J.
1998-01-01
The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study regards the modeling and solution of a candidate system that allows a novel cold sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time to failure distribution. Moreover, the cold spare satellite build-time is Weibull and upon launch is considered to be a good-as-new system with an increasing failure rate and a Weibull time to failure distribution as well. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure for the launched spare satellite. A powerful dynamic fault tree modeling notation and Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10 year mission.
NASA Technical Reports Server (NTRS)
Schneeweiss, W.
1977-01-01
It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintenanced at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.
Komada, Fusao
2018-01-01
The aim of this study was to investigate the time-to-onset of drug-induced interstitial lung disease (DILD) following the administration of small molecule molecularly-targeted drugs via the use of the spontaneous adverse reaction reporting system of the Japanese Adverse Drug Event Report database. DILD datasets for afatinib, alectinib, bortezomib, crizotinib, dasatinib, erlotinib, everolimus, gefitinib, imatinib, lapatinib, nilotinib, osimertinib, sorafenib, sunitinib, temsirolimus, and tofacitinib were used to calculate the median onset times of DILD and the Weibull distribution parameters, and to perform the hierarchical cluster analysis. The median onset times of DILD for afatinib, bortezomib, crizotinib, erlotinib, gefitinib, and nilotinib were within one month. The median onset times of DILD for dasatinib, everolimus, lapatinib, osimertinib, and temsirolimus ranged from 1 to 2 months. The median onset times of the DILD for alectinib, imatinib, and tofacitinib ranged from 2 to 3 months. The median onset times of the DILD for sunitinib and sorafenib ranged from 8 to 9 months. Weibull distributions for these drugs when using the cluster analysis showed that there were 4 clusters. Cluster 1 described a subgroup with early to later onset DILD and early failure type profiles or a random failure type profile. Cluster 2 exhibited early failure type profiles or a random failure type profile with early onset DILD. Cluster 3 exhibited a random failure type profile or wear out failure type profiles with later onset DILD. Cluster 4 exhibited an early failure type profile or a random failure type profile with the latest onset DILD.
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/???2 ??? 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step pertubations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
Voltage stress effects on microcircuit accelerated life test failure rates
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface related failure data resulted in bimodal failure distributions comprising two lognormal distributions; a 'freak' distribution observed early in time, and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems which are placed on test at time zero, function for a period and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slight biased, and the asymptotic covariances are rapidly approached.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
Flexural Progressive Failure of Carbon/Glass Interlayer and Intralayer Hybrid Composites.
Wang, Qingtao; Wu, Weili; Gong, Zhili; Li, Wei
2018-04-17
The flexural progressive failure modes of carbon fiber and glass fiber (C/G) interlayer and intralayer hybrid composites were investigated in this work. Results showed that the bending failure modes for interlayer hybrid composites are determined by the layup structure. Besides, the bending failure is characterized by the compression failure of the upper layer, when carbon fiber tends to distribute in the upper layer, the interlayer hybrid composite fails early, the failure force is characterized by a multi-stage slightly fluctuating decline and the fracture area exhibits a diamond shape. While carbon fiber distributes in the middle or bottom layer, the failure time starts late, and the failure process exhibits one stage sharp force/stress drop, the fracture zone of glass fiber above the carbon layers presents an inverted trapezoid shape, while the fracture of glass fiber below the carbon layers exhibits an inverted triangular shape. With regards to the intralayer hybrid composites, the C/G hybrid ratio plays a dominating role in the bending failure which could be considered as the mixed failures of four structures. The bending failure of intralayer hybrid composites occurs in advance since carbon fiber are located in each layer; the failure process shows a multi-stage fluctuating decline, and the decline slows down as carbon fiber content increases, and the fracture sound release has the characteristics of a low intensity and high frequency for a long time. By contrast, as glass fiber content increases, the bending failure of intralayer composites is featured with a multi-stage cliff decline with a high amplitude and low frequency for a short-time fracture sound release.
Application of Weibull analysis to SSME hardware
NASA Technical Reports Server (NTRS)
Gray, L. A. B.
1986-01-01
Generally, it has been documented that the wearing of engine parts forms a failure distribution which can be approximated by a function developed by Weibull. The purpose here is to examine to what extent the Weibull distribution approximates failure data for designated engine parts of the Space Shuttle Main Engine (SSME). The current testing certification requirements will be examined in order to establish confidence levels. An examination of the failure history of SSME parts/assemblies (turbine blades, main combustion chamber, or high pressure fuel pump first stage impellers) which are limited in usage by time or starts will be done by using updated Weibull techniques. Efforts will be made by the investigator to predict failure trends by using Weibull techniques for SSME parts (turbine temperature sensors, chamber pressure transducers, actuators, and controllers) which are not severely limited by time or starts.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
Flexural Progressive Failure of Carbon/Glass Interlayer and Intralayer Hybrid Composites
Wu, Weili; Gong, Zhili
2018-01-01
The flexural progressive failure modes of carbon fiber and glass fiber (C/G) interlayer and intralayer hybrid composites were investigated in this work. Results showed that the bending failure modes for interlayer hybrid composites are determined by the layup structure. Besides, the bending failure is characterized by the compression failure of the upper layer, when carbon fiber tends to distribute in the upper layer, the interlayer hybrid composite fails early, the failure force is characterized by a multi-stage slightly fluctuating decline and the fracture area exhibits a diamond shape. While carbon fiber distributes in the middle or bottom layer, the failure time starts late, and the failure process exhibits one stage sharp force/stress drop, the fracture zone of glass fiber above the carbon layers presents an inverted trapezoid shape, while the fracture of glass fiber below the carbon layers exhibits an inverted triangular shape. With regards to the intralayer hybrid composites, the C/G hybrid ratio plays a dominating role in the bending failure which could be considered as the mixed failures of four structures. The bending failure of intralayer hybrid composites occurs in advance since carbon fiber are located in each layer; the failure process shows a multi-stage fluctuating decline, and the decline slows down as carbon fiber content increases, and the fracture sound release has the characteristics of a low intensity and high frequency for a long time. By contrast, as glass fiber content increases, the bending failure of intralayer composites is featured with a multi-stage cliff decline with a high amplitude and low frequency for a short-time fracture sound release. PMID:29673236
Applications of crude incidence curves.
Korn, E L; Dorey, F J
1992-04-01
Crude incidence curves display the cumulative number of failures of interest as a function of time. With competing causes of failure, they are distinct from cause-specific incidence curves that treat secondary types of failures as censored observations. After briefly reviewing their definition and estimation, we present five applications of crude incidence curves to show their utility in a broad range of studies. In some of these applications it is helpful to model survival-time distributions with use of two different time metameters, for example, time from diagnosis and age of the patient. We describe how one can incorporate published vital statistics into the models when secondary types of failure correspond to common causes of death.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximationmore » and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.« less
Snow fracture: From micro-cracking to global failure
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2017-04-01
Slab avalanches are caused by a crack forming and propagating in a weak layer within the snow cover, which eventually causes the detachment of the overlying cohesive slab. The gradual damage process leading to the nucleation of the initial failure is still not entirely understood. Therefore, we studied the damage process preceding snow failure by analyzing the acoustic emissions (AE) generated by bond failure or micro-cracking. The AE allow studying the ongoing progressive failure in a non-destructive way. We performed fully load-controlled failure experiments on snow samples presenting a weak layer and recorded the generated AE. The size and frequency of the generated AE increased before failure revealing an acceleration of the damage process with increased size and frequency of damage and/or microscopic cracks. The AE energy was power-law distributed and the exponent (b-value) decreased approaching failure. The waiting time followed an exponential distribution with increasing exponential coefficient λ before failure. The decrease of the b-value and the increase of λ correspond to a change in the event distribution statistics indicating a transition from homogeneously distributed uncorrelated damage producing mostly small AE to localized damage, which cause larger correlated events which leads to brittle failure. We observed brittle failure for the fast experiment and a more ductile behavior for the slow experiments. This rate dependence was reflected also in the AE signature. In the slow experiments the b value and λ were almost constant, and the energy rate increase was moderate indicating that the damage process was in a stable state - suggesting the damage and healing processes to be balanced. On a shorter time scale, however, the AE parameters varied indicating that the damage process was not steady but consisted of a sum of small bursts. We assume that the bursts may have been generated by cascades of correlated micro-cracks caused by localization of stresses at a small scale. The healing process may then have prevented the self-organization of this small scale damage and, therefore, the total failure of the sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property valuemore » at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.« less
Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo
2014-01-01
As a mean of supplying water, Water distribution system (WDS) is one of the most important complex infrastructures. The stability and reliability are critical for urban activities. WDSs can be characterized by networks of multiple nodes (e.g. reservoirs and junctions) and interconnected by physical links (e.g. pipes). Instead of analyzing highest failure rate or highest betweenness, reliability of WDS is evaluated by introducing hydraulic analysis and cascading failures (conductive failure pattern) from complex network. The crucial pipes are identified eventually. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and the persistent time of the cascading failures in its propagation in WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. There is a threshold of tolerance parameter exists. When the tolerance parameter is less than the threshold, the time period with the highest system reliability does not meet minimum value of demand multiplier. The results indicate that the system reliability should be evaluated with the properties of WDS and the characteristics of cascading failures, so as to improve its ability of resisting disasters. PMID:24551102
Frequency-Magnitude relationships for Underwater Landslides of the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Urgeles, R.; Gràcia, E.; Lo Iacono, C.; Sànchez-Serra, C.; Løvholt, F.
2017-12-01
An updated version of the submarine landslide database of the Mediterranean Sea contains 955 MTDs and 2608 failure scars showing that submarine landslides are ubiquitous features along Mediterranean continental margins. Their distribution reveals that major deltaic wedges display the larger submarine landslides, while seismically active margins are characterized by relatively small failures. In all regions, landslide size distributions display power law scaling for landslides > 1 km3. We find consistent differences on the exponent of the power law depending on the geodynamic setting. Active margins present steep slopes of the frequency-magnitude relationship whereas passive margins tend to display gentler slopes. This pattern likely responds to the common view that tectonically active margins have numerous but small failures, while passive margins have larger but fewer failures. Available age information suggests that failures exceeding 1000 km3 are infrequent and may recur every 40 kyr. Smaller failures that can still cause significant damage might be relatively frequent, with failures > 1 km3 likely recurring every 40 years. The database highlights that our knowledge of submarine landslide activity with time is limited to a few tens of thousand years. Available data suggest that submarine landslides may preferentially occur during lowstand periods, but no firm conclusion can be made on this respect, as only 149 landslides (out of 955 included in the database) have relatively accurate age determinations. The timing and regional changes in the frequency-magnitude distribution suggest that sedimentation patterns and pore pressure development have had a major role in triggering slope failures and control the sediment flux from mass wasting to the deep basin.
1984-10-26
test for independence; ons i ser, -, of the poduct life estimator; dependent risks; 119 ASRACT Coniinue on ’wme-se f nereiary-~and iaen r~f> by Worst...the failure times associated with different failure - modes when we really should use a bivariate (or multivariate) distribution, then what is the...dependencies may be present, then what is the magnitude of the estimation error? S The third specific aim will attempt to obtain bounds on the
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-11-01
This paper investigates the fault-tolerant time-varying formation control problems for high-order linear multi-agent systems in the presence of actuator failures. Firstly, a fully distributed formation control protocol is presented to compensate for the influences of both bias fault and loss of effectiveness fault. Using the adaptive online updating strategies, no global knowledge about the communication topology is required and the bounds of actuator failures can be unknown. Then an algorithm is proposed to determine the control parameters of the fault-tolerant formation protocol, where the time-varying formation feasible conditions and an approach to expand the feasible formation set are given. Furthermore, the stability of the proposed algorithm is proven based on the Lyapunov-like theory. Finally, two simulation examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines.The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines. The model can describe them only qualitatively. In our model, both the probability function of the failure unit in single grain segments and polygrain segments are considered instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreements and our model could explain the previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
NASA Astrophysics Data System (ADS)
Belapurkar, Rohit K.
Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
An investigation of gear mesh failure prediction techniques. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.
1989-01-01
A study was performed in which several gear failure prediction methods were investigated and applied to experimental data from a gear fatigue test apparatus. The primary objective was to provide a baseline understanding of the prediction methods and to evaluate their diagnostic capabilities. The methods investigated use the signal average in both the time and frequency domain to detect gear failure. Data from eleven gear fatigue tests were recorded at periodic time intervals as the gears were run from initiation to failure. Four major failure modes, consisting of heavy wear, tooth breakage, single pits, and distributed pitting were observed among the failed gears. Results show that the prediction methods were able to detect only those gear failures which involved heavy wear or distributed pitting. None of the methods could predict fatigue cracks, which resulted in tooth breakage, or single pits. It is suspected that the fatigue cracks were not detected because of limitations in data acquisition rather than in methodology. Additionally, the frequency response between the gear shaft and the transducer was found to significantly affect the vibration signal. The specific frequencies affected were filtered out of the signal average prior to application of the methods.
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly-accurate geo-location data for the static/mobile objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depends on two factors; its timeliness in broadcasting signals and the knowledge of its geometry, i.e., locations and distances of the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a scheme similar to a master-slave by designating a particular beacon as the master and other beacons synchronize to it, resulting in a single point of failure. Another drawback of existing positioning systems is their lack of addressing various fault manifestations, in particular, communication link failures, which, as in wireless networks, are increasingly dominating the process failures and are typically transient and mobile, in the sense that they typically affect different messages to/from different processes over time.
Failure-Time Distribution Of An m-Out-of-n System
NASA Technical Reports Server (NTRS)
Scheuer, Ernest M.
1988-01-01
Formulas for reliability extended to more general cases. Useful in analyses of reliabilities of practical systems and structures, especially of redundant systems of identical components, among which operating loads distributed equally.
The implementation and use of Ada on distributed systems with reliability requirements
NASA Technical Reports Server (NTRS)
Reynolds, P. F.; Knight, J. C.; Urquhart, J. I. A.
1983-01-01
The issues involved in the use of the programming language Ada on distributed systems are discussed. The effects of Ada programs on hardware failures such as loss of a processor are emphasized. It is shown that many Ada language elements are not well suited to this environment. Processor failure can easily lead to difficulties on those processors which remain. As an example, the calling task in a rendezvous may be suspended forever if the processor executing the serving task fails. A mechanism for detecting failure is proposed and changes to the Ada run time support system are suggested which avoid most of the difficulties. Ada program structures are defined which allow programs to reconfigure and continue to provide service following processor failure.
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses the basic survival model estimates to obtain the average predictive value of lamp failure time. This estimate is for the parametric model, General Composite Hazard Level Model. The random time variable model used is the exponential distribution model, as the basis, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. To estimate this model is done by estimating model parameters, through the construction of survival function and empirical cumulative function. The model obtained, will then be used to predict the average failure time of the model, for the type of lamp. By grouping the data into several intervals and the average value of failure at each interval, then calculate the average failure time of a model based on each interval, the p value obtained from the tes result is 0.3296.
A real-time diagnostic and performance monitor for UNIX. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dong, Hongchao
1992-01-01
There are now over one million UNIX sites and the pace at which new installations are added is steadily increasing. Along with this increase, comes a need to develop simple efficient, effective and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often un-identifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4's as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
Failure Forecasting in Triaxially Stressed Sandstones
NASA Astrophysics Data System (ADS)
Crippen, A.; Bell, A. F.; Curtis, A.; Main, I. G.
2017-12-01
Precursory signals to fracturing events have been observed to follow power-law accelerations in spatial, temporal, and size distributions leading up to catastrophic failure. In previous studies this behavior was modeled using Voight's relation of a geophysical precursor in order to perform `hindcasts' by solving for failure onset time. However, performing this analysis in retrospect creates a bias, as we know an event happened, when it happened, and we can search data for precursors accordingly. We aim to remove this retrospective bias, thereby allowing us to make failure forecasts in real-time in a rock deformation laboratory. We triaxially compressed water-saturated 100 mm sandstone cores (Pc= 25MPa, Pp = 5MPa, σ = 1.0E-5 s-1) to the point of failure while monitoring strain rate, differential stress, AEs, and continuous waveform data. Here we compare the current `hindcast` methods on synthetic and our real laboratory data. We then apply these techniques to increasing fractions of the data sets to observe the evolution of the failure forecast time with precursory data. We discuss these results as well as our plan to mitigate false positives and minimize errors for real-time application. Real-time failure forecasting could revolutionize the field of hazard mitigation of brittle failure processes by allowing non-invasive monitoring of civil structures, volcanoes, and possibly fault zones.
Failure rate analysis of Goddard Space Flight Center spacecraft performance during orbital life
NASA Technical Reports Server (NTRS)
Norris, H. P.; Timmins, A. R.
1976-01-01
Space life performance data on 57 Goddard Space Flight Center spacecraft are analyzed from the standpoint of determining an appropriate reliability model and the associated reliability parameters. Data from published NASA reports, which cover the space performance of GSFC spacecraft launched in the 1960-1970 decade, form the basis of the analyses. The results of the analyses show that the time distribution of 449 malfunctions, of which 248 were classified as failures (not necessarily catastrophic), follow a reliability growth pattern that can be described with either the Duane model or a Weibull distribution. The advantages of both mathematical models are used in order to: identify space failure rates, observe chronological trends, and compare failure rates with those experienced during the prelaunch environmental tests of the flight model spacecraft.
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, S.R.; Ban, V.S.; Gasparian, G.
1988-05-01
The authors measured the mean time to failure (MTTF) for a statistically significant population of planar In/sub 0.53/Ga/sub 0.47/As/InP heterostructure p-i-n photodetectors at several elevated temperatures. The probability for failure is fit to a log-normal distribution, with the result that the width of the failure distribution is sigma = 0.55 +- 0.2, and is roughly independent of temperature. From the temperature dependence of the MTFF data, they find that the failure mechanism is thermally activated, with an activation energy of 1.5 +- 0.2 eV measured in the temperature range of 170 - 250/sup 0/C. This extrapolates to a MTTF ofmore » less than 0.1 failure in 10/sup 9/ h (or < 0.1 FIT) at 70/sup 0/C, indicating that such devices are useful for systems requiring extremely high reliable components, even if operated at elevated temperatures for significant time periods. To the authors' knowledge, this activation energy is the highest value reported for In/sub 0.53/Ga/sub 0.47/As/InP photodetectors, and is significantly higher than the energies of -- 0.85 eV often suspected to these devices.« less
NASA Technical Reports Server (NTRS)
Behbehani, K.
1980-01-01
A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquest Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
Failure Time Distributions: Estimates and Asymptotic Results.
1980-01-01
of the models. A parametric family of distributions is proposed for approximating life distri- butions whose hazard rate is bath-tub shaped, this...of the limiting dirtributions of the models. A parametric family of distributions is proposed for approximating life distribution~s whose hazard rate...12. always justified. But, because of this gener- ality, the possible limit laws for the maximum form a very large family . The
Processor tradeoffs in distributed real-time systems
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.
1987-01-01
The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
A Weibull distribution accrual failure detector for cloud computing
Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be rerofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.
2013-01-01
In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
Riding the Right Wavelet: Quantifying Scale Transitions in Fractured Rocks
NASA Astrophysics Data System (ADS)
Rizzo, Roberto E.; Healy, David; Farrell, Natalie J.; Heap, Michael J.
2017-12-01
The mechanics of brittle failure is a well-described multiscale process that involves a rapid transition from distributed microcracks to localization along a single macroscopic rupture plane. However, considerable uncertainty exists regarding both the length scale at which this transition occurs and the underlying causes that prompt this shift from a distributed to a localized assemblage of cracks or fractures. For the first time, we used an image analysis tool developed to investigate orientation changes at different scales in images of fracture patterns in faulted materials, based on a two-dimensional continuous wavelet analysis. We detected the abrupt change in the fracture pattern from distributed tensile microcracks to localized shear failure in a fracture network produced by triaxial deformation of a sandstone core plug. The presented method will contribute to our ability of unraveling the physical processes at the base of catastrophic rock failure, including the nucleation of earthquakes, landslides, and volcanic eruptions.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability of many materials, particularly brittle and quasibrittle materials. For example, a coefficient of variation reaches 100% or even more. Its distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that, so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ⁄2, and is ~ ~ 2 ⁄ μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Croke, J. C.; Grove, J. R.
2012-04-01
Non-linearity in physical systems provides a conceptual framework to explain complex patterns and form that are derived from complex internal dynamics rather than external forcings, and can be used to inform modeling and improve landscape management. One process that has been investigated previously to explore the existence of self-organised critical system (SOC) in river systems at the basin-scale is bank failure. Spatial trends in bank failure have been previously quantified to determine if the distribution of bank failures at the basin scale exhibit the necessary power law magnitude/frequency distributions. More commonly bank failures are investigated at a small-scale using several cross-sections with strong emphasis on local-scale factors such as bank height, cohesion and hydraulic properties. Advancing our understanding of non-linearity in such processes, however, requires many more studies where both the spatial and temporal measurements of the process can be used to investigate the existence or otherwise of non-linearity and self-organised criticality. This study presents measurements of bank failure throughout the Lockyer catchment in southeast Queensland, Australia, which experienced an extreme flood event in January 2011 resulting in the loss of human lives and geomorphic channel change. The most dominant form of fluvial adjustment consisted of changes in channel geometry and notably widespread bank failures, which were readily identifiable as 'scalloped' shaped failure scarps. The spatial extents of these were mapped using high-resolution LiDAR derived digital elevation model and were verified by field surveys and air photos. Pre-flood event LiDAR coverage for the catchment also existed allowing direct comparison of the magnitude and frequency of bank failures from both pre and post-flood time periods. Data were collected and analysed within a GIS framework and investigated for power-law relationships. Bank failures appeared random and occurred throughout the basin but plots of magnitude and frequency did display power-law scaling of failures. In addition, there was a lack of site specific correlations between bank failure and other factors such channel width, bank height and stream power. The data are used here to discuss the existence of SOC in fluvial systems and the relative role of local and basin-wide processes in influencing their distribution in space and time.
NASA Technical Reports Server (NTRS)
Delucia, R. A.; Mangano, G. J.
1977-01-01
Statistics on gas turbine rotor failures that have occurred in U.S. commercial aviation during 1975 are presented. The compiled data were analyzed to establish: (1) The incidence of rotor failures and the number of contained and uncontained rotor bursts; (2) The distribution of rotor bursts with respect to engine rotor component; i.e., fan, compressor or turbine; (3) The type of rotor fragment (disk, rim or blade) typically generated at burst; (4) The cause of failure; (5) The type of engines involved; and (6) The flight condition at the time of failure.
Monitoring Distributed Real-Time Systems: A Survey and Future Directions
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Pike, Lee
2010-01-01
Runtime monitors have been proposed as a means to increase the reliability of safety-critical systems. In particular, this report addresses runtime monitors for distributed hard real-time systems. This class of systems has had little attention from the monitoring community. The need for monitors is shown by discussing examples of avionic systems failure. We survey related work in the field of runtime monitoring. Several potential monitoring architectures for distributed real-time systems are presented along with a discussion of how they might be used to monitor properties of interest.
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
Shen, H; Xu, Y; Dickinson, B T
2014-11-18
Inspired by sensing strategies observed in birds and bats, a new attitude control concept of directly using real-time pressure and shear stresses has recently been studied. It was shown that with an array of onboard airflow sensors, small unmanned aircraft systems can promptly respond to airflow changes and improve flight performances. In this paper, a mapping function is proposed to compute aerodynamic moments from the real-time pressure and shear data in a practical and computationally tractable formulation. Since many microscale airflow sensors are embedded on the small unmanned aircraft system surface, it is highly possible that certain sensors may fail. Here, an adaptive control system is developed that is robust to sensor failure as well as other numerical mismatches in calculating real-time aerodynamic moments. The advantages of the proposed method are shown in the following simulation cases: (i) feedback pressure and wall shear data from a distributed array of 45 airflow sensors; (ii) 50% failure of the symmetrically distributed airflow sensor array; and (iii) failure of all the airflow sensors on one wing. It is shown that even if 50% of the airflow sensors have failures, the aircraft is still stable and able to track the attitude commands.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, whichmore » can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.« less
User-Perceived Reliability of M-for-N (M: N) Shared Protection Systems
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
In this paper we investigate the reliability of general type shared protection systems i.e. M for N (M: N) that can typically be applied to various telecommunication network devices. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives the closed-form solution of the availability, the recursive computing algorithm of the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and the design of not only the telecommunication network devices but also other general shared protection systems that are subject to service level agreements (SLA) involving user-perceived reliability measures.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geomorphic information system ) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Spatio-temporal changes in river bank mass failures in the Lockyer Valley, Queensland, Australia
NASA Astrophysics Data System (ADS)
Thompson, Chris; Croke, Jacky; Grove, James; Khanal, Giri
2013-06-01
Wet-flow river bank failure processes are poorly understood relative to the more commonly studied processes of fluvial entrainment and gravity-induced mass failures. Using high resolution topographic data (LiDAR) and near coincident aerial photography, this study documents the downstream distribution of river bank mass failures which occurred as a result of a catastrophic flood in the Lockyer Valley in January 2011. In addition, this distribution is compared with wet flow mass failure features from previous large floods. The downstream analysis of these two temporal data sets indicated that they occur across a range of river lengths, catchment areas, bank heights and angles and do not appear to be scale-dependent or spatially restricted to certain downstream zones. The downstream trends of each bank failure distribution show limited spatial overlap with only 17% of wet flows common to both distributions. The modification of these features during the catastrophic flood of January 2011 also indicated that such features tend to form at some 'optimum' shape and show limited evidence of subsequent enlargement even when flow and energy conditions within the banks and channel were high. Elevation changes indicate that such features show evidence for infilling during subsequent floods. The preservation of these features in the landscape for a period of at least 150 years suggests that the seepage processes dominant in their initial formation appear to have limited role in their continuing enlargement over time. No evidence of gully extension or headwall retreat is evident. It is estimated that at least 12 inundation events would be required to fill these failures based on the average net elevation change recorded for the 2011 event. Existing conceptual models of downstream bank erosion process zones may need to consider a wider array of mass failure processes to accommodate for wet flow failures.
Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric
2014-05-01
Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage during a time period of the order of a few LHC turns and considering the significant stored energy in the HL-LHC beam, CC failures represent a serious threat in regard to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes on a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particles losses for the HL-LHC. These distributions produce beam losses below the safe operation threshold for Gaussian tails, while, for non-Gaussian tails are on the same order of the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by the CC failures.
Market failure, policy failure and other distortions in chronic disease markets
Watts, Jennifer J; Segal, Leonie
2009-01-01
Background The increasing prevalence of chronic disease represents a significant burden on most health systems. This paper explores the market failures and policy failures that exist in the management of chronic diseases. Discussion There are many sources of market failure in health care that undermine the efficiency of chronic disease management. These include incomplete information as well as information asymmetry between providers and consumers, the effect of externalities on consumer behaviour, and the divergence between social and private time preference rates. This has seen government and policy interventions to address both market failures and distributional issues resulting from the inability of private markets to reach an efficient and equitable distribution of resources. However, these have introduced a series of policy failures such as distorted re-imbursement arrangements across modalities and delivery settings. Summary The paper concludes that market failure resulting from a preference of individuals for 'immediate gratification' in the form of health care and disease management, rather than preventative services, where the benefits are delayed, has a major impact on achieving an efficient allocation of resources in markets for the management of chronic diseases. This distortion is compounded by government health policy that tends to favour medical and pharmaceutical interventions further contributing to distortions in the allocation of resources and inefficiencies in the management of chronic disease. PMID:19534822
CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM
NASA Technical Reports Server (NTRS)
Mccluney, K.
1994-01-01
In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allow the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C-language and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however, a sample makefile is included. Sample input files are also included. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. This program was developed in 1992.
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.
2011-08-01
Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
Semiparametric regression analysis of interval-censored competing risks data.
Mao, Lu; Lin, Dan-Yu; Zeng, Donglin
2017-09-01
Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to identically independent distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.; Gregory, S. T.; Urquhart, J. I. A.
1985-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. In particular, the concept that a distributed system may be programmed entirely in Ada so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware was examined. Progress is discussed for the following areas: continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada so that it might more easily cope with the failure of interest; and design of new approaches to fault-tolerant software in real-time systems, and integration of these ideas into Ada.
Nucleation, growth and localisation of microcracks: implications for predictability of rock failure
NASA Astrophysics Data System (ADS)
Main, I. G.; Kun, F.; Pál, G.; Jánosi, Z.
2016-12-01
The spontaneous emergence of localized co-operative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uniaxial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of pre-existing heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
2017-01-01
Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (that only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard based statistical model—the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter being chosen because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chances of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
Guest Editor's Introduction: Special section on dependable distributed systems
NASA Astrophysics Data System (ADS)
Fetzer, Christof
1999-09-01
We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. References [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of ACM 34 (2) 56-78 [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
Ouwens, Mario; Hauch, Ole; Franzén, Stefan
2018-05-01
The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
NASA Astrophysics Data System (ADS)
Zaccaria, V.; Tucker, D.; Traverso, A.
2016-09-01
Solid oxide fuel cells are characterized by very high efficiency, low emissions level, and large fuel flexibility. Unfortunately, their elevated costs and relatively short lifetimes reduce the economic feasibility of these technologies at the present time. Several mechanisms contribute to degrade fuel cell performance during time, and the study of these degradation modes and potential mitigation actions is critical to ensure the durability of the fuel cell and their long-term stability. In this work, localized degradation of a solid oxide fuel cell is modeled in real-time and its effects on various cell parameters are analyzed. Profile distributions of overpotential, temperature, heat generation, and temperature gradients in the stack are investigated during degradation. Several causes of failure could occur in the fuel cell if no proper control actions are applied. A local analysis of critical parameters conducted shows where the issues are and how they could be mitigated in order to extend the life of the cell.
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
NASA Astrophysics Data System (ADS)
Zhou, Peng
2013-06-01
As temperature increases, it is suggested that atoms on lattice sites serve as dynamic defects and cause a much more homogeneous distribution of the Maxwell stress throughout the crystal lattice compared with that caused by static defects. Though this stressing effect mostly leads to Joule heating, it also results in distortion of the crystal lattice, which leads to a decrease in the activation energy for atomic diffusion and causes enhancements in the phase growth rates at both interfaces of diffusion couples. Due to this stressing effect, the decrease in the activation energy is proportional to a square term of the current density J. A mean-time-to-failure analysis is performed for failure caused by excessive growth of intermediate phases, and a mean-time-to-failure (MTTF) equation is found. This equation appears similar to Black's equation but with an extra exponential term arising from the stressing effect of the crystal lattice.
NASA Astrophysics Data System (ADS)
Yuan-hui, Li; Gang, Lei; Shi-da, Xu; Da-wei, Wu
2018-07-01
Under high stress and blasting disturbance, the failure of deep rock masses is a complex, dynamic evolutionary process. To reveal the relation between macroscopic failure of deep rock masses and spatial-temporal evolution law of micro-cracking within, the initiation, extension, and connection of micro-cracks under blasting disturbance and the deformation and failure mechanism of deep rock masses were studied. The investigation was carried out using the microseismic (MS) monitoring system established in the deep mining area of Ashele Copper Mine (Xinjiang Uygur Autonomous Region, China). The results showed that the failure of the deep rock masses is a dynamic process accompanied with stress release and stress adjustment. It is not only related to the blasting-based mining, but also associated with zones of stress concentration formed due to the mining. In that space, the concentrated area in the cloud chart for the distribution of MS event density before failure of the rocks shows the basically same pattern with the damaged rocks obtained through scanning of mined-out areas, which indicates that the cloud chart can be used to determine potential risk areas of rocks in the spatial domain. In the time domain, relevant parameters of MS events presented different changes before the failure of the rocks: the energy index decreased while the cumulative apparent volume gradually increased, the magnitude distribution of microseismic events decreased rapidly, and the fractal dimension decreased at first and then remained stable. This demonstrates that the different changes in relevant MS parameters allow researchers to predict the failure time of the rocks. By analysing the dynamic evolution process of the failure of the deep rock masses, areas at potential risk can be predicted spatially and temporally. The result provides guidance for those involved in the safe production and management of underground engineering and establishes a theoretical basis for the study on the stability of deep rock masses.
A study of total space life performance of GSFC spacecraft
NASA Technical Reports Server (NTRS)
Timmins, A. R.
1975-01-01
The space life performance of 57 Goddard Space Flight Center spacecraft is given. The time distribution of 449 malfunctions, of which 248 were classified as failures, is presented. Test data were available for 39 of the spacecraft and permitted a comparison of system test performance with the first-day, first-month, and total space life performance. The failures per spacecraft for the system environmental tests and the three time periods in space were 12, 0.9, 1.7, and 5.0, respectively. Relevance of the data to the pre-shuttle and shuttle eras is discussed. Classifications of failures by type of device and spacecraft subsystem are included. A continuation of the Goddard philosophy of requiring a system-level environmental test program is justified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time000.
A method for developing design diagrams for ceramic and glass materials using fatigue data
NASA Technical Reports Server (NTRS)
Heslin, T. M.; Magida, M. B.; Forrest, K. A.
1986-01-01
The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress whose plot is parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test that is used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution and using either median value or homologous ratio analysis of the test results.
Rocket Science for the Internet
NASA Technical Reports Server (NTRS)
2000-01-01
Rainfinity, a company resulting from the commercialization of Reliable Array of Independent Nodes (RAIN), produces the product, Rainwall. Rainwall runs a cluster of computer workstations, creating a distributed Internet gateway. When Rainwall detects a failure in software or hardware, traffic is shifted to a healthy gateway without interruptions to Internet service. It more evenly distributes workload across servers, providing less down time.
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidences of HIV infection and its manifestation to Aids. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of onset distribution including immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers and examined the applicability of a few standard probability models.
A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematic expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitors reliability responds to external stresses such as applied voltage and temperature (All units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitors reliability as a function of time. The acceleration function that relates a capacitors reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential law relationship to the applied electrical field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
Time-frequency distributions for propulsion-system diagnostics
NASA Astrophysics Data System (ADS)
Griffin, Michael E.; Tulpule, Sharayu
1991-12-01
The Wigner distribution and its smoothed versions, i.e., Choi-Williams and Gaussian kernels, are evaluated for propulsion system diagnostics. The approach is intended for off-line kernel design by using the ambiguity domain to select the appropriate Gaussian kernel. The features produced by the Wigner distribution and its smoothed versions correlate remarkably well with documented failure indications. The selection of the kernel on the other hand is very subjective for our unstructured data.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
NASA Astrophysics Data System (ADS)
Sawant, M.; Christou, A.
2012-12-01
While use of LEDs in Fiber Optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: active layer (where electron-hole recombination occurs to emit light), electrodes (provides electrical contact to the semiconductor chip), Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), plastic encapsulation (protective polymer layer) and packaging failures (bond wires, heat sink separation). A FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), failure mode ratio (α), failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generation of prior time to failure distribution and comparing with our accelerated life test data. To generate the prior distributions, data and results from previous investigations were utilized [5-33] where reliability test results of similar LEDs were reported. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), the Semiconductor Structures (DH, MQW) and the mode of testing (DC, Pulsed) was carried out. The data was categorized according to the materials system and LED structure such as AlGaInP-DH-DC, AlGaInP-MQW-DC, GaN-DH-DC, and GaN-DH-DC. Although the reported testing was carried out at different temperature and current, the reported data was converted to the present application conditions of the medical environment. Comparisons between the model data and accelerated test results carried out in the present are reported. The use of accelerating agent modeling and regression analysis was also carried out. We have used the Inverse Power Law model with the current density J as the accelerating agent and the Arrhenius model with temperature as the accelerating agent. Finally, our reported methodology is presented as an approach for analyzing LED suitability for the target medical diagnostic applications.
Experimental study on infrared radiation temperature field of concrete under uniaxial compression
NASA Astrophysics Data System (ADS)
Lou, Quan; He, Xueqiu
2018-05-01
Infrared thermography, as a nondestructive, non-contact and real-time monitoring method, has great significance in assessing the stability of concrete structure and monitoring its failure. It is necessary to conduct in depth study on the mechanism and application of infrared radiation (IR) of concrete failure under loading. In this paper, the concrete specimens with size of 100 × 100 × 100 mm were adopted to carry out the uniaxial compressions for the IR tests. The distribution of IR temperatures (IRTs), surface topography of IRT field and the reconstructed IR images were studied. The results show that the IRT distribution follows the Gaussian distribution, and the R2 of Gaussian fitting changes along with the loading time. The abnormities of R2 and AE counts display the opposite variation trends. The surface topography of IRT field is similar to the hyperbolic paraboloid, which is related to the stress distribution in the sample. The R2 of hyperbolic paraboloid fitting presents an upward trend prior to the fracture which enables to change the IRT field significantly. This R2 has a sharp drop in response to this large destruction. The normalization images of IRT field, including the row and column normalization images, were proposed as auxiliary means to analyze the IRT field. The row and column normalization images respectively show the transverse and longitudinal distribution of the IRT field, and they have clear responses to the destruction occurring on the sample surface. In this paper, the new methods and quantitative index were proposed for the analysis of IRT field, which have some theoretical and instructive significance for the analysis of the characteristics of IRT field, as well as the monitoring of instability and failure for concrete structure.
to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting inconsistent estimators with asymptotically normal distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems atmore » US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica{reg_sign} computer programs which are provided.« less
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structure, a distributed collaborative probabilistic design method-based fuzzy neural network of regression (FR) (called as DCFRM) is proposed with the integration of distributed collaborative response surface method and fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea with DCFRM is introduced. The probabilistic analysis of turbine blisk involving multi-failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and overall failure mode on turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengine. Through the comparison of methods shows that the DCFRM reshapes the probability of probabilistic analysis for multi-failure structure and improves the computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers a useful insight for reliability-based design optimization of multi-failure structure and thereby also enriches the theory and method of mechanical reliability design.
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
National Aeronautics and Space Administration s (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool to not only quantify risk for specific conditions, but more importantly comparing different operational and management options to determine the lowest risk option and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probability of failures of the basic events employed in the PRA model of the ISS. The paper will show how a Bayesian approach was used with different sources of data including the actual ISS on orbit failures to enhance the confidence in results of the PRA. As time progresses and more meaningful data is gathered from on orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems such as propulsion, thermal control, or power generation into event sequences diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement units (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from the experts in the field. For example, data was gathered for the solenoid valves present in the propulsion system of the ISS. However valves themselves are composed of parts and the individual failure of these parts was not accounted for in the PRA model. In other words the failure of a spring within a valve was considered a failure of the valve itself.
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos; Chatzopoulos, George
2014-05-01
Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates which described by a power law time to failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as the energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power-law of: a) cumulative number of earthquakes, b) cumulative Benioff strain and c) cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power-law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are function of the energy supplied to the system and the size of the precursory volume. In addition the q-exponential distribution which describes the fault system is a crucial factor on the appearance of power-law acceleration in the seismicity. Our results based on Tsallis entropy and the energy conservation gives a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds
Sparacia, Gianvincenzo; Cannella, Roberto; Lo Re, Vincenzina; Gambino, Angelo; Mamone, Giuseppe; Miraglia, Roberto
2018-02-17
Cerebral microbleeds (CMBs) are small rounded lesions representing cerebral hemosiderin deposits surrounded by macrophages that results from previous microhemorrhages. The aim of this study was to review the distribution of cerebral microbleeds in patients with end-stage organ failure and their association with specific end-stage organ failure risk factors. Between August 2015 and June 2017, we evaluated 15 patients, 9 males, and 6 females, (mean age 65.5 years). Patients population was subdivided into three groups according to the organ failure: (a) chronic kidney failure (n = 8), (b) restrictive cardiomyopathy undergoing heart transplantation (n = 1), and (c) end-stage liver failure undergoing liver transplantation (n = 6). The MR exams were performed on a 3T MR unit and the SWI sequence was used for the detection of CMBs. CMBs were subdivided in supratentorial lobar distributed, supratentorial non-lobar distributed, and infratentorial distributed. A total of 91 microbleeds were observed in 15 patients. Fifty-nine CMBs lesions (64.8%) had supratentorial lobar distribution, 17 CMBs lesions (18.8%) had supratentorial non-lobar distribution and the remaining 15 CMBs lesions (16.4%) were infratentorial distributed. An overall predominance of supratentorial multiple lobar localizations was found in all types of end-stage organ failure. The presence of CMBs was significantly correlated with age, hypertension, and specific end-stage organ failure risk factors (p < 0.001). CMBs are mostly founded in supratentorial lobar localization in end-stage organ failure. The improved detection of CMBs with SWI sequences may contribute to a more accurate identification of patients with cerebral risk factors to prevent complications during or after the organ transplantation.
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, the analytical probability models for an automated serial production system, bufferless that consists of n-machines in series with common transfer mechanism and control system was developed. Both time to failure and time to repair a failure are assumed to follow exponential distribution. Applying those models, the effect of system parameters on system performance in actual croissant production line was studied. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line have been developed using Markov process. The strength of this study is in the classification of the whole system in states, representing failures of different machines. Failure and repair data from the actual production environment have been used to estimate reliability and maintainability for each machine, workstation, and the entire line is based on analytical models. The analysis provides a useful insight into the system's behaviour, helps to find design inherent faults and suggests optimal modifications to upgrade the system and improve its performance.
Statistical study of the reliability of oxide-defined stripe cw lasers of (AlGa)As
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettenberg, M.
1979-03-01
In this report, we describe a statistical study of the reliability of oxide-defined stripe-contact cw injection lasers of (AlGa)As. These devices have one facet coated with Al/sub 2/O/sub 3/ and one facet coated with an Al/sub 2/O/sub 3//Si dichroic reflector; the lasers are optimized for cw low-threshold currents at room temperature, with values typically about 50 mA. Lifetests were carried out at 70 /sup 0/C ambient, in the cw mode of operation with about 5 mW output. Previous lifetests showed that the degradation rate followed a 0.95-eV activation energy so the 70 /sup 0/C environment provides a degradation acceleration factormore » of 190 over that at room temperature. We have found that the device failures follow a log-normal distribution, characterized by a mean time before failure of 4200 h and a standard deviation of 1.3. This corresponds to a mean time to failure (MTTF) of 10/sup 6/ h at room temperature. Failure is defined here as the inability of the device to emit 1 mW of stimulated cw output at 70 /sup 0/C, and assumes that optical feedback will be employed to adjust the laser current during operation. If a constant-current drive is envisioned, the failures for a 3-dB drop in light output also follow a log-normal distribution with a similar slope (standard deviation=1.1) and a MTTF of 2000 h at 70 /sup 0/C (500 000 h at room temperature). The failures were found to be mainly due to bulk gradual degradation and not facet or contact failure. Careful study of lasers before and after lifetest showed a significant increase in contact thermal resistance. However, this increase accounts for only a small portion of the nearly 70% increase in room-temperature cw threshold after failure at 70 /sup 0/C. After failure at 70 /sup 0/C, we also noted a degradation in the near-field and associated far-field pattern of the laser.« less
A Custom Data Logger for Real-Time Remote Field Data Collections
2017-03-01
ERDC/CHL CHETN-VI-46 March 2017 Approved for public release; distribution is unlimited. A Custom Data Logger for Real- Time Remote Field Data...Field Research Facility (FRF), for remote real- time data collections. This custom data logger is compact and energy efficient but has the same...INTRODUCTION: Real- time data collections offer many advantages: 1. Instrument failures can be rapidly detected and repaired, thereby minimizing
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After the processing a customer may either rejoin the queue (feedback) with probability q, or definitely leave the system with probability 1 - q. The system of integral equations for transient queue-size distribution, conditioned by the initial level of buffer saturation, is build. The solution of the corresponding system written for Laplace transforms is found using the linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be considered as a typical fraction of items demanding corrections. Morever, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
NASA Astrophysics Data System (ADS)
Liu, Shuhuan; Du, Xuecheng; Du, Xiaozhi; Zhang, Yao; Mubashiru, Lawal Olarewaju; Luo, Dongyang; yuan, Yuan; Deng, Tianxiang; Li, Zhuoqi; Zang, Hang; Li, Yonghong; He, Chaohui; Ma, Yingqi; Shangguan, Shipeng
2017-09-01
The impacts of the external dynamic memory (DDR3) failures on the performance of 28 nm Xilinx Zynq-7010 SoC based system (MicroZed) were investigated with two sets of 1064 nm laser platforms. The failure sensitive area distributionsons on the back surface of the test DDR3 were primarily localized with a CW laser irradiation platform. During the CW laser scanning on the back surface of the DDR3 of the test board system, various failure modes except SEU and SEL (MBU, SEFI, data storage address error, rebooting, etc) were found in the testing embedded modules (ALU, PL, Register, Cache and DMA, etc) of SoC. Moreover, the experimental results demonstrated that there were 16 failure sensitive blocks symmetrically distributed on the back surface of the DDR3 with every sensitive block area measured was about 1 mm × 0.5 mm. The influence factors on the failure modes of the embedded modules were primarily analyzed and the SEE characteristics of DDR3 induced by the picoseconds pulsed laser were tested. The failure modes of DDR3 found were SEU, SEFI, SEL, test board rebooting by itself, unknown data, etc. Furthermore, the time interval distributions of failure occurrence in DDR3 changes with the pulsed laser irradiation energy and the CPU operating frequency were measured and compared. Meanwhile, the failure characteristics of DDR3 induced by pulsed laser irradiation were primarily explored. The measured results and the testing techniques designed in this paper provide some reference information for evaluating the reliability of the test system or other similar electronic system in harsh environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris
Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affectmore » the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.« less
Olalla, Carlos; Maksimovic, Dragan; Deline, Chris; ...
2017-04-26
Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affectmore » the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.« less
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing risks' situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
NASA Technical Reports Server (NTRS)
Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural application such as in advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life, of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.
1998-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilized commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design programs, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benechmark problems and engine components are included.
Deviation from Power Law Behavior in Landslide Phenomenon
NASA Astrophysics Data System (ADS)
Li, L.; Lan, H.; Wu, Y.
2013-12-01
Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquake, floods, tornadoes, and forest fires). Landslide is unique as the size distribution of landslide is characterized by a power law decrease with a rollover in the small size end. Yet, the emergence of the rollover, i.e., the deviation from power law behavior for small size landslides, remains a mystery. In this contribution, we grouped the forces applied on landslide bodies into two categories: 1) the forces proportional to the volume of failure mass (gravity and friction), and 2) the forces proportional to the area of failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the failure volume to failure surface area ratio must exceed a corresponding threshold to guarantee a failure. Assuming all landslides share a uniform shape, which means the volume to surface area ratio of landslide regularly increase with the landslide volume, a cutoff of landslide volume distribution in the small size end can be defined. However, in realistic landslide phenomena, where heterogeneities of landslide shape and mechanical configuration are existent, a simple cutoff of landslide volume distribution does not exist. The stochasticity of landslide shape introduce a probability distribution of the volume to surface area ratio with regard to landslide volume, with which the probability that the volume to surface ratio exceed the threshold can be estimated regarding values of landslide volume. An experiment based on empirical data showed that this probability can induce the power law distribution of landslide volume roll down in the small size end. We therefore proposed that the constraints on the failure volume to failure surface area ratio together with the heterogeneity of landslide geometry and mechanical configuration attribute for the deviation from power law behavior in landslide phenomenon. Figure shows that a rollover of landslide size distribution in the small size end is produced as the probability for V/S (the failure volume to failure surface ratio of landslide) exceeding the mechanical threshold applied to the power law distribution of landslide volume.
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: a. Fixed; b. Variable Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report such as MTBF do not follow a linear path and they normally fall within the distribution curves which are discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, shall translate in tremendous cost savings and improved availability all around.
NASA Technical Reports Server (NTRS)
Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.
2014-01-01
A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 degC. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries are inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations.models that yield accurate yet tractable results.
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have loading as in the framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults as in background and aftershock seismicity and (2) changes in estimates of the conditional probability of failures of different members of a the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function of PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
Two-sided Topp-Leone Weibull distribution
NASA Astrophysics Data System (ADS)
Podeang, Krittaya; Bodhisuwan, Winai
2017-11-01
In this paper, we introduce a general class of lifetime distributions, called the two-sided Topp-Leone generated family of distribution. A special case of new family is the two-sided Topp-Leone Weibull distribution. This distribution used the two-sided Topp-Leone distribution as a generator for the Weibull distribution. The two-sided Topp-Leone Weibull distribution is presented in several shapes of distributions such as decreasing, unimodal, and bimodal which make this distribution more than flexible than the Weibull distribution. Its quantile function is presented. The parameter estimation method by using maximum likelihood estimation is discussed. The proposed distribution is applied to the strength data set, remission times of bladder cancer patients data set and time to failure of turbocharger data set. We compare the proposed distribution to the Topp-Leone Generated Weibull distribution. In conclusion, the two-sided Topp-Leone Weibull distribution performs similarly as the Topp-Leone Generated Weibull distribution in the first and second data sets. However, the proposed distribution can perform better than fit to Topp-Leone Generated Weibull distribution for the other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum s User Level Failure Mitigation proposal has introduced an operation, MPI Comm shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI Comm shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that inmore » all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.« less
NASA Technical Reports Server (NTRS)
Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.
1997-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which, the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
The microscopic basis for strain localisation in porous media
NASA Astrophysics Data System (ADS)
Main, Ian; Kun, Ferenz; Pal, Gergo; Janosi, Zoltan
2017-04-01
The spontaneous emergence of localized cooperative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uni-axial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of preexisting heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
Bond strength with various etching times on young permanent teeth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, W.N.; Lu, T.C.
1991-07-01
Tensile bond strengths of an orthodontic resin cement were compared for 15-, 30-, 60-, 90-, or 120-second etching times, with a 37% phosphoric acid solution on the enamel surfaces of young permanent teeth. Fifty extracted premolars from 9- to 16-year-old children were used for testing. An orthodontic composite resin was used to bond the bracket directly onto the buccal surface of the enamel. The tensile bond strengths were tested with an Instron machine. Bond failure interfaces between bracket bases and teeth surfaces were examined with a scanning electron microscope and calculated with mapping of energy-dispersive x-ray spectrometry. The results ofmore » tensile bond strength for 15-, 30-, 60-, or 90-second etching times were not statistically different. For the 120-second etching time, the decrease was significant. Of the bond failures, 43%-49% occurred between bracket and resin interface, 12% to 24% within the resin itself, 32%-40% between resin and tooth interface, and 0% to 4% contained enamel fragments. There was no statistical difference in percentage of bond failure interface distribution between bracket base and resin, resin and enamel, or the enamel detachment. Cohesive failure within the resin itself at the 120-second etching time was less than at other etching times, with a statistical significance. To achieve good retention, to decrease enamel loss, and to reduce moisture contamination in the clinic, as well as to save chairside time, a 15-second etching time is suggested for teenage orthodontic patients.« less
Model-centric distribution automation: Capacity, reliability, and efficiency
Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...
2016-02-26
A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditionalmore » system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.« less
Model-centric distribution automation: Capacity, reliability, and efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onen, Ahmet; Jung, Jaesung; Dilek, Murat
A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditionalmore » system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.« less
NASA Astrophysics Data System (ADS)
Lakowicz, Joseph R.; Szmacinski, Henryk; Johnson, Michael L.
1990-05-01
We examined the time -dependent donor decays of 2 - amino purine (2 -APU) , in the presence of increasing amounts of acceptor 2-aminobenzophenine (2-ABP). As the concentration of 2-ABP increases, the frequency-responses diverge from that predicted by Forster. The data were found to be consistent with modified Forster equations, but at this time we do not state that these modified expressions provide a correct molecular description of this donor-acceptor system. To the best of our knowledge this is the first paper which reports a failure of the Forster theory for randomly distributed donors and acceptors.
A fuzzy set approach for reliability calculation of valve controlling electric actuators
NASA Astrophysics Data System (ADS)
Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.
2017-02-01
The oil and gas equipment and electric actuators in particular frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters could be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuators failure data in accordance with the amount of work, completed before the failure, instead of failure time. Also, this paper provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming Kumaraswamy complementary Weibull geometric distribution as a lifetime (reliability) model for electric actuators.
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
A framework for building real-time expert systems
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1991-01-01
The Space Station Freedom is an example of complex systems that require both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada should be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX) which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition,more » substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.« less
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum s User Level Failure Mitigation proposal has introduced an operation, MPI Comm shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI Comm shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that inmore » all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.« less
Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.
Schmidt, Michael; Obermaisser, Roman
2018-04-01
Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable to manage distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g. when going for a walk or having sports) as well. Within this paper we will introduce a novel architecture for distributed AAL solutions whose design follows a modern Microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration, and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state-of-the-art by introducing a fault-tolerance model for our architecture on the basis of a fault-hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates in order to support safety-critical AAL applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publically available and establish a forum for the development and exchange of new agents.
Degradation modeling of mid-power white-light LEDs by using Wiener process.
Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q
2015-07-27
The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. As it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, although it is much more relevant in reliability analysis. Furthermore, the TM-21-11 only considers lumen maintenance for lifetime prediction. Color shift, as another important performance characteristic of LED devices, may also render significant degradation during service life, even though the lumen maintenance has not reached the critical threshold. In this study, a modified Wiener process has been employed for the modeling of the degradation of LED devices. By using this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can be easily accounted for. With a mild assumption, the parameter estimation accuracy has been improved by including more information into the likelihood function while neglecting the dependency between the random variables. As a consequence, the mean time to failure (MTTF) has been obtained and shows comparable result with IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution was presented corresponding to different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices could be modeled by simply considering their lumen maintenance and color shift as two independent variables.
Robustness and Vulnerability of Networks with Dynamical Dependency Groups.
Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi
2016-11-28
The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.
Mohamed, Moumouni Guero; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael
2017-01-01
With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED’s optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life. PMID:28773176
Fan, Jiajie; Mohamed, Moumouni Guero; Qian, Cheng; Fan, Xuejun; Zhang, Guoqi; Pecht, Michael
2017-07-18
With the expanding application of light-emitting diodes (LEDs), the color quality of white LEDs has attracted much attention in several color-sensitive application fields, such as museum lighting, healthcare lighting and displays. Reliability concerns for white LEDs are changing from the luminous efficiency to color quality. However, most of the current available research on the reliability of LEDs is still focused on luminous flux depreciation rather than color shift failure. The spectral power distribution (SPD), defined as the radiant power distribution emitted by a light source at a range of visible wavelength, contains the most fundamental luminescence mechanisms of a light source. SPD is used as the quantitative inference of an LED's optical characteristics, including color coordinates that are widely used to represent the color shift process. Thus, to model the color shift failure of white LEDs during aging, this paper first extracts the features of an SPD, representing the characteristics of blue LED chips and phosphors, by multi-peak curve-fitting and modeling them with statistical functions. Then, because the shift processes of extracted features in aged LEDs are always nonlinear, a nonlinear state-space model is then developed to predict the color shift failure time within a self-adaptive particle filter framework. The results show that: (1) the failure mechanisms of LEDs can be identified by analyzing the extracted features of SPD with statistical curve-fitting and (2) the developed method can dynamically and accurately predict the color coordinates, correlated color temperatures (CCTs), and color rendering indexes (CRIs) of phosphor-converted (pc)-white LEDs, and also can estimate the residual color life.
NASA Astrophysics Data System (ADS)
Wang, Xiao Juan; Guo, Shi Ze; Jin, Lei; Chen, Mo
We study the structural robustness of the scale free network against the cascading failure induced by overload. In this paper, a failure mechanism based on betweenness-degree ratio distribution is proposed. In the cascading failure model we built the initial load of an edge which is proportional to the node betweenness of its ends. During the edge random deletion, we find a phase transition. Then based on the phase transition, we divide the process of the cascading failure into two parts: the robust area and the vulnerable area, and define the corresponding indicator to measure the performance of the networks in both areas. From derivation, we find that the vulnerability of the network is determined by the distribution of betweenness-degree ratio. After that we use the connection between the node ability coefficient and distribution of betweenness-degree ratio to explain the cascading failure mechanism. In simulations, we verify the correctness of our derivations. By changing connecting preferences, we find scale free networks with a slight assortativity, which performs better both in robust area and vulnerable area.
Wilson, R.C.; Wieczorek, G.F.; Keefer, D.K.; Harp, E.L.; Tannaci, N.E.
1985-01-01
Information about the individual ground failures may be obtained from the map and the brief descriptions in table 1. The following text is a general discussion of the distribution and the mechanisms of the ground failures, followed by a discussion of the effects of wet winter conditions and of topographic amplification on the distribution and mechanisms of slope failure, and it concludes with a description of our (unsuccessful) efforts to locate any ground failures due to liquefaction. The discussion is intended not only to describe the GMDES slope failures but also to place them into the larger general context of seismically induced slope failures.
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random. the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms in the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent and geometric discrete random variable,. we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
A failure management prototype: DR/Rx
NASA Technical Reports Server (NTRS)
Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.
1991-01-01
This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.
Comparing Different Fault Identification Algorithms in Distributed Power System
NASA Astrophysics Data System (ADS)
Alkaabi, Salim
A power system is a huge complex system that delivers the electrical power from the generation units to the consumers. As the demand for electrical power increases, distributed power generation was introduced to the power system. Faults may occur in the power system at any time in different locations. These faults cause a huge damage to the system as they might lead to full failure of the power system. Using distributed generation in the power system made it even harder to identify the location of the faults in the system. The main objective of this work is to test the different fault location identification algorithms while tested on a power system with the different amount of power injected using distributed generators. As faults may lead the system to full failure, this is an important area for research. In this thesis different fault location identification algorithms have been tested and compared while the different amount of power is injected from distributed generators. The algorithms were tested on IEEE 34 node test feeder using MATLAB and the results were compared to find when these algorithms might fail and the reliability of these methods.
Military Health Service System Ambulatory Work Unit (AWU).
1988-04-01
E-40 BBC-4 Ambulatory Work Unit Distribution Screen Passes BBC - Neurosurgery Clinic .... ............. . E-40 BBD -I Initial Record...Screen Failures BBD - Ophthalmology Clinic ... ............ E-41 BBD -2 Distribution Screen Failures BBD - Ophthalmology Clinic ............ E-41 BBD -3...Descriptive Statistics Distribution Screen Passes BBD - Ophthalmology Clinic ............ E-42 BBD -4 Ambulatory Work Unit Distribution Screen Passes BBD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performedmore » for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.« less
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89??km2 and 2410??km2 and volumes between 0.002??km3 and 179??km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86??km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1??km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ?? 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ?? 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. ?? 2008 Elsevier B.V.
Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S
2017-05-30
We studied the problem of testing a hypothesized distribution in survival regression models when the data is right censored and survival times are influenced by covariates. A modified chi-squared type test, known as Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for hypertabastic survival model and four other unimodal hazard rate functions. The results of simulation study showed that the hypertabastic distribution can be used as an alternative to log-logistic and log-normal distribution. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft
NASA Technical Reports Server (NTRS)
Kleiner, Steven C.
1998-01-01
This 2-year report describes the progress made to date on the project to package and distribute the planning and scheduling toolkit for the SWAS astronomical spacecraft. SWAS was scheduled to be launched on a Pegasus XL vehicle in fall 1995. Three separate failures in the launch vehicle have delayed the SWAS launch. The researchers have used this time to continue developing scheduling algorithms and GUI design. SWAS is expected to be launched this year.
A Monte Carlo Risk Analysis of Life Cycle Cost Prediction.
1975-09-01
process which occurs with each FLU failure. With this in mind there is no alternative other than the binomial distribution. 24 GOR/SM/75D-6 With all of...Weibull distribution of failures as selected by user. For each failure of the ith FLU, the model then samples from the binomial distribution to deter- mine...which is sampled from the binomial . Neither of the two conditions for normality are met, i.e., that RTS Ie close to .5 and the number of samples close
NASA Astrophysics Data System (ADS)
Bao, Yi; Hoehler, Matthew S.; Smith, Christopher M.; Bundy, Matthew; Chen, Genda
2017-10-01
In this study, Brillouin scattering-based distributed fiber optic sensor is implemented to measure temperature distributions and detect cracks in concrete structures subjected to fire for the first time. A telecommunication-grade optical fiber is characterized as a high temperature sensor with pulse pre-pump Brillouin optical time domain analysis (PPP-BODTA), and implemented to measure spatially-distributed temperatures in reinforced concrete beams in fire. Four beams were tested to failure in a natural gas fueled compartment fire, each instrumented with one fused silica, single-mode optical fiber as a distributed sensor and four thermocouples. Prior to concrete cracking, the distributed temperature was validated at locations of the thermocouples by a relative difference of less than 9%. The cracks in concrete can be identified as sharp peaks in the temperature distribution since the cracks are locally filled with hot air. Concrete cracking did not affect the sensitivity of the distributed sensor but concrete spalling broke the optical fiber loop required for PPP-BOTDA measurements.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying of the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution gain importance in survival analysis for the similarity of exponential distribution and allowance for the different shapes of hazard function. Frailty models provide an alternative to proportional hazards model where misspecified or omitted covariates are described by an unobservable random variable. Despite of the distribution of the frailty is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions In some circumstances. In this paper, frailty models with discrete compound Poisson process for the Lindley distributed failure time are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to the earthquake data set of Turkey are examined.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project is distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Statistical analysis of lithium iron sulfide status cell cycle life and failure mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, E.C.; Battles, J.E.; Miller, W.E.
1983-08-01
A statistical model was developed for life cycle testing of electrochemical cell life cycle trials and verified experimentally. The Weibull distribution was selected to predict the end of life for a cell, based on a 20 percent loss of initial stabilized capacity or a decrease to less than 95 percent coulombic efficiency. Groups of 12 or more Li-alloy/FeS cells were cycled to determine the mean time to failure (MTTF) and also to identify the failure modes. The cells were all full size electric vehicle batteries with 150-350 A-hr capacity. The Weibull shape factors were determined and verified in prediction ofmore » the number of cell failures in two 10 cell modules. The short circuit failure in the cells with BN-felt and MgO powder separators were found to be caused by the formation of Li-Al protrusions that penetrated the BN-felt separators, and the extrusion of active material at the edge of the electrodes.« less
NASA Astrophysics Data System (ADS)
Basavalingappa, Adarsh
Copper interconnects are typically polycrystalline and follow a lognormal grain size distribution. Polycrystalline copper interconnect microstructures with a lognormal grain size distribution were obtained with a Voronoi tessellation approach. The interconnect structures thus obtained were used to study grain growth mechanisms, grain boundary scattering, scattering dependent resistance of interconnects, stress evolution, vacancy migration, reliability life times, impact of orientation dependent anisotropy on various mechanisms, etc. In this work, the microstructures were used to study the impact of microstructure and elastic anisotropy of copper on thermal and electromigration induced failure. A test structure with copper and bulk moduli values was modeled to do a comparative study with the test structures with textured microstructure and elastic anisotropy. By subjecting the modeled test structure to a thermal stress by ramping temperature down from 400 °C to 100 °C, a significant variation in normal stresses and pressure were observed at the grain boundaries. This variation in normal stresses and hydrostatic stresses at the grain boundaries was found to be dependent on the orientation, dimensions, surroundings, and location of the grains. This may introduce new weak points within the metal line where normal stresses can be very high depending on the orientation of the grains leading to delamination and accumulation sites for vacancies. Further, the hydrostatic stress gradients act as a driving force for vacancy migration. The normal stresses can exceed certain grain orientation dependent critical threshold values and induce delamination at the copper and cap material interface, thereby leading to void nucleation and growth. Modeled test structures were subjected to a series of copper depositions at 250 °C followed by copper etch at 25 °C to obtain initial stress conditions. Then the modeled test structures were subjected to 100,000 hours ( 11.4 years) of simulated thermal stress at an elevated temperature of 150 °C. Vacancy migration due to concentration gradients, thermal gradients, and mechanical stress gradients were considered under the applied thermal stress. As a result, relatively high concentrations of vacancies were observed in the test structure due to a driving force caused by the pressure gradients resulting from the elastic anisotropy of copper. The grain growth mechanism was not considered in these simulations. Studies with two grain analysis demonstrated that the stress gradients developed will be severe when (100) grains are adjacent to (111) grains, therefore making them the weak points for potentially reliability failures. Ilan Blech discovered that electromigration occurs above a critical product of the current density and metal length, commonly referred as Blech condition. Electromigration stress simulations in this work were carried out by subjecting test structures to scaled current densities to overcome the Blech condition of (jL)crit for small dimensions of test structure and the low temperature stress condition used. Vacancy migration under the electromigration stress conditions was considered along with the vacancy migration induced stress evolution. A simple void growth model was used which assumes voids start to form when vacancies reach a critical level. Increase of vacancies in a localized region increases the resistance of the metal line. Considering a 10% increase in resistance as a failure criterion, the distributions of failure times were obtained for given electromigration stress conditions. Bimodal/multimodal failure distributions were obtained as a result. The sigma values were slightly lower than the ones commonly observed from experiments. The anisotropy of the elastic moduli of copper leads to the development of significantly different stress values which are dependent on the orientation of the grains. This results in some grains having higher normal stress than the others. This grain orientation dependent normal stress can reach a critical stress necessary to induce delamination at the copper and cap interface. Time taken to reach critical stress was considered as time to fail and distributions of failure times were obtained for structures with different grain orientations in the microstructure for different critical stress values. The sigma values of the failure distributions thus obtained for different constant critical stress values had a strong dependence of on the critical stress. It is therefore critical to use the appropriate critical stress value for the delamination of copper and cap interface. The critical stress necessary to overcome the local adhesion of the copper and the cap material interface is dependent on grain orientation of the copper. Simulations were carried out by considering grain orientation dependent critical normal stress values as failure criteria. The sigma value thus obtained with selected critical stress values were comparable to sigma values commonly observed from experiments.
ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model
NASA Technical Reports Server (NTRS)
Hoffman, David J.; Viterna, Larry A.
1991-01-01
A user's manual describing an interactive, menu-driven, personal computer based Monte Carlo reliability, availability, and maintainability simulation program called event time availability reliability (ETARA) is discussed. Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (percentage of time as a particular output state capability), continuous state duration and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures are tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Also, maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can be calculated over a cumulative period of time or at specific points in time.
Analysis of particulates on tape lift samples
NASA Astrophysics Data System (ADS)
Moision, Robert M.; Chaney, John A.; Panetta, Chris J.; Liu, De-Ling
2014-09-01
Particle counts on tape lift samples taken from a hardware surface exceeded threshold requirements in six successive tests despite repeated cleaning of the surface. Subsequent analysis of the particle size distributions of the failed tests revealed that the handling and processing of the tape lift samples may have played a role in the test failures. In order to explore plausible causes for the observed size distribution anomalies, scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), and time-of-flight secondary ion mass spectrometry (ToF-SIMS) were employed to perform chemical analysis on collected particulates. SEM/EDX identified Na and S containing particles on the hardware samples in a size range identified as being responsible for the test failures. ToF-SIMS was employed to further examine the Na and S containing particulates and identified the molecular signature of sodium alkylbenzene sulfonates, a common surfactant used in industrial detergent. The root cause investigation suggests that the tape lift test failures originated from detergent residue left behind on the glass slides used to mount and transport the tape following sampling and not from the hardware surface.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with this sensing system data indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple port sensing system is more tolerant to small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. How system and individual port failures may be detected using chi sq. analysis is shown. Once identified, the effects of failures are eliminated using weighted least squares.
Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela
2015-03-05
This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations.
Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela
2015-01-01
This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations. PMID:25751079
Numerical Simulation on the Dynamic Splitting Tensile Test of reinforced concrete
NASA Astrophysics Data System (ADS)
Zhao, Zhuan; Jia, Haokai; Jing, Lin
2018-03-01
The research for crack resistance was of RC was based on the split Hopkinson bar and numerical simulate software LS-DYNA3D. In the research, the difference of dynamic splitting failure modes between plane concrete and reinforced concrete were completed, and the change rule of tensile stress distribution with reinforcement ratio was studied; also the effect rule with the strain rate and the crack resistance was also discussed by the radial tensile stress time history curve of RC specimen under different loading speeds. The results shows that the reinforcement in the concrete can impede the crack extension, defer the failure time of concrete, increase the tension intensity of concrete; with strain rate of concrete increased, the crack resistance of RC increased.
Modelling of Rainfall Induced Landslides in Puerto Rico
NASA Astrophysics Data System (ADS)
Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.
2010-12-01
We performed an island-wide determination of static landslide susceptibility and hazard assessment as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of the susceptibility to landslides is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), tRIBS-VEGGIE, has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to tRIBS-VEGGIE’s framework, after analyzing several failure criterions to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.
Continuum Damage Mechanics Used to Predict the Creep Life of Monolithic Ceramics
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Jadaan, Osama M.
1998-01-01
Significant improvements in propulsion and power generation for the next century will require revolutionary advances in high-temperature materials and structural design. Advanced ceramics are candidate materials for these elevated temperature applications. High-temperature and long-duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. An analytical methodology in the form of the integrated design program-Ceramics Analysis and Reliability Evaluation of Structures/Creep (CARES/Creep) has been developed by the NASA Lewis Research Center to predict the life of ceramic structural components subjected to creep rupture conditions. This program utilizes commercially available finite element packages and takes into account the transient state of stress and creep strain distributions (stress relaxation as well as the asymmetric response to tension and compression). The creep life of a component is discretized into short time steps, during which the stress distribution is assumed constant. Then, the damage is calculated for each time step on the basis of a modified Monkman-Grant (MMG) creep rupture criterion. The cumulative damage is subsequently calculated as time elapses in a manner similar to Miner's rule for cyclic fatigue loading. Failure is assumed to occur when the normalized cumulative damage at any point in the component reaches unity. The corresponding time is the creep rupture life for that component.
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigted. A distributed system, programmed entirely in Ada, was studied to assess the use of individual tasks without concern for the processor used. Continued development and testing of the fault tolerant Ada testbed; development of suggested changes to Ada to cope with the failures of interest; design of approaches to fault tolerant software in real time systems, and the integration of these ideas into Ada; and the preparation of various papers and presentations were discussed.
NASA Astrophysics Data System (ADS)
Jakovetic, Dusan; Xavier, João; Moura, José M. F.
2011-08-01
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x=x^\\star. The objective function of the corresponding optimization problem is the sum of private (known only by a node,) convex, nodes' objectives and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping,) and to its variants as AL-MG (augmented Lagrangian multi neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping.) The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network and is resilient to random link failures. For networks with reliable communication (i.e., no failures,) the simplified, AL-BG (augmented Lagrangian broadcast gossiping) algorithm reduces communication, computation and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations the effectiveness on two applications: l_1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.
Structural Constraints On The Spatial Distribution of Aftershocks
NASA Astrophysics Data System (ADS)
McCloskey, J.; Nalbant, S. S.; Steacy, S.; Nostro, C.; Scotti, O.; Baumont, D.
Real-time, forward modelling of spatial distributions of potentially damaging after- shocks by calculating stress perturbations due to large earthquakes may produce so- cially useful, time- dependent hazard estimates in the foreseeable future. Such calcula- tions, however, rely on the resolution of a stress perturbation tensor (SPT) onto planes whose geometry is unknown and decisions as to the orientations of these planes have a first order effect on the geometry of the resulting hazard distributions. Commonly, these decisions are based on the assumption that structures optimally oriented for fail- ure in the regional stress field, exist everywhere and stress maps are produced by resolving onto these orientations. Here we investigate this proposition using a 3D cal- culation for the optimally oriented planes (OOPs) for the 1992 Landers earthquake (M = 7.3). We examine the encouraged mechanisms as a function of location and show that enhancement for failure exists over a much wider area than in the equivalent, and more usual, 2.5D calculations. Mechanisms predicted in these areas are not consistent with the local structural geology, however, and corresponding aftershocks are gener- ally not observed. We argue that best hazard estimates will result from geometrically restricted versions of the OOP concept in which observed structure constrains possible orientations for failure.
Accelerated life testing and reliability of high K multilayer ceramic capacitors
NASA Technical Reports Server (NTRS)
Minford, W. J.
1981-01-01
The reliability of one lot of high K multilayer ceramic capacitors was evaluated using accelerated life testing. The degradation in insulation resistance was characterized as a function of voltage and temperature. The times to failure at a voltage-temperature stress conformed to a lognormal distribution with a standard deviation approximately 0.5.
A Byzantine-Fault Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2006-01-01
Embedded distributed systems have become an integral part of safety-critical computing applications, necessitating system designs that incorporate fault tolerant clock synchronization in order to achieve ultra-reliable assurance levels. Many efficient clock synchronization protocols do not, however, address Byzantine failures, and most protocols that do tolerate Byzantine failures do not self-stabilize. Of the Byzantine self-stabilizing clock synchronization algorithms that exist in the literature, they are based on either unjustifiably strong assumptions about initial synchrony of the nodes or on the existence of a common pulse at the nodes. The Byzantine self-stabilizing clock synchronization protocol presented here does not rely on any assumptions about the initial state of the clocks. Furthermore, there is neither a central clock nor an externally generated pulse system. The proposed protocol converges deterministically, is scalable, and self-stabilizes in a short amount of time. The convergence time is linear with respect to the self-stabilization period. Proofs of the correctness of the protocol as well as the results of formal verification efforts are reported.
Reliability demonstration test for load-sharing systems with exponential and Weibull components
Hu, Qingpei; Yu, Dan; Xie, Min
2017-01-01
Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
Reliability demonstration test for load-sharing systems with exponential and Weibull components.
Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min
2017-01-01
Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, the integrated computing environment based on optical network, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In optical grid, the faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With the optical network based distributed computing systems extensive applied in the processing of data, the requirement of the application failure probability have been an important indicator of the quality of application and an important aspect the operators consider. This paper will present a task-based analysis method of the application failure probability in optical grid. Then the failure probability of the entire application can be quantified, and the performance of reducing application failure probability in different backup strategies can be compared, so that the different requirements of different clients can be satisfied according to the application failure probability respectively. In optical grid, when the application based DAG (directed acyclic graph) is executed in different backup strategies, the application failure probability and the application complete time is different. This paper will propose new multi-objective differentiated services algorithm (MDSA). New application scheduling algorithm can guarantee the requirement of the failure probability and improve the network resource utilization, realize a compromise between the network operator and the application submission. Then differentiated services can be achieved in optical grid.
NASA Technical Reports Server (NTRS)
Wolitz, K.; Brockmann, W.; Fischer, T.
1979-01-01
Acoustic emission analysis as a quasi-nondestructive test method makes it possible to differentiate clearly, in judging the total behavior of fiber-reinforced plastic composites, between critical failure modes (in the case of unidirectional composites fiber fractures) and non-critical failure modes (delamination processes or matrix fractures). A particular advantage is that, for varying pressure demands on the composites, the emitted acoustic pulses can be analyzed with regard to their amplitude distribution. In addition, definite indications as to how the damages occurred can be obtained from the time curves of the emitted acoustic pulses as well as from the particular frequency spectrum. Distinct analogies can be drawn between the various analytical methods with respect to whether the failure modes can be classified as critical or non-critical.
[Low Fidelity Simulation of a Zero-Y Robot
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical freeflying robot designed for use in zero gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant. This system was modeled using Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of mnemonic rule and these equations are solved with Runga-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
NASA Technical Reports Server (NTRS)
Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael
2012-01-01
There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading- overly conservative or optimistic. 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. There are two major categories of uncertainty that impact Sparing Assessment: (a) Aleatory Uncertainty: Natural variability in distribution of actual failures around an Mean Time Between Failure (MTBF) (b) Epistemic Uncertainty : Lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF We propose an approach to revise confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.
Use of Life-Test Data Analysis Methodology for Analyzing Undesirable Habitual Behavior
1979-07-06
of times to failu’e for the hardware subjected to life test. In the present context it has been round that detoxification provides a time that one may...the fact that resulting data generated by a single individual tend to belong to a classical distribution of times to failure. Detoxification is used...drug and its immediate toxic effects from his or her body and brain. Detoxification in this sense may result from inpatient or outpatient treatment at a
Overview of the Smart Network Element Architecture and Recent Innovations
NASA Technical Reports Server (NTRS)
Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.
2008-01-01
In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. By distributing health management processing to lower levels of the architecture, there is less bandwidth required for ISHM, enhanced data fusion, make systems and processes more robust, and improved resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at sensors and actuators' level in support of ISHM.
Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use inmore » NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings; and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.« less
NASA Astrophysics Data System (ADS)
Jelani, Mohsan; Li, Zewen; Shen, Zhonghua; Sardar, Maryam; Tabassum, Aasma
2017-05-01
The present work reports the investigation of the thermal and mechanical behaviour of aluminium alloys under the combined action of tensile loading and laser irradiations. The two types of aluminium alloys (Al-1060 and Al-6061) are used for the experiments. The continuous wave Ytterbium fibre laser (wavelength 1080 nm) was employed as irradiation source, while tensile loading was provided by tensile testing machine. The effects of various pre-loading and laser power densities on the failure time, temperature distribution and on deformation behaviour of aluminium alloys are analysed. The experimental results represents the significant reduction in failure time and temperature for higher laser powers and for high load values, which implies that preloading may contribute a significant role in the failure of the material at elevated temperature. The reason and characterization of material failure by tensile and laser loading are explored in detail. A comparative behaviour of under tested materials is also investigated. This work suggests that, studies considering only combined loading are not enough to fully understand the mechanical behaviour of under tested materials. For complete characterization, one must consider the effect of heating as well as loading rate.
Joint scale-change models for recurrent events and failure time.
Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun
2017-01-01
Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
Development of a Distributed Crack Sensor Using Coaxial Cable.
Zhou, Zhi; Jiao, Tong; Zhao, Peng; Liu, Jia; Xiao, Hai
2016-07-29
Cracks, the important factor of structure failure, reflect structural damage directly. Thus, it is significant to realize distributed, real-time crack monitoring. To overcome the shortages of traditional crack detectors, such as the inconvenience of installation, vulnerability, and low measurement range, etc., an improved topology-based cable sensor with a shallow helical groove on the outside surface of a coaxial cable is proposed in this paper. The sensing mechanism, fabrication method, and performances are investigated both numerically and experimentally. Crack monitoring experiments of the reinforced beams are also presented in this paper, illustrating the utility of this sensor in practical applications. These studies show that the sensor can identify a minimum crack width of 0.02 mm and can measure multiple cracks with a spatial resolution of 3 mm. In addition, it is also proved that the sensor performs well to detect the initiation and development of cracks until structure failure.
Development of a Distributed Crack Sensor Using Coaxial Cable
Zhou, Zhi; Jiao, Tong; Zhao, Peng; Liu, Jia; Xiao, Hai
2016-01-01
Cracks, the important factor of structure failure, reflect structural damage directly. Thus, it is significant to realize distributed, real-time crack monitoring. To overcome the shortages of traditional crack detectors, such as the inconvenience of installation, vulnerability, and low measurement range, etc., an improved topology-based cable sensor with a shallow helical groove on the outside surface of a coaxial cable is proposed in this paper. The sensing mechanism, fabrication method, and performances are investigated both numerically and experimentally. Crack monitoring experiments of the reinforced beams are also presented in this paper, illustrating the utility of this sensor in practical applications. These studies show that the sensor can identify a minimum crack width of 0.02 mm and can measure multiple cracks with a spatial resolution of 3 mm. In addition, it is also proved that the sensor performs well to detect the initiation and development of cracks until structure failure. PMID:27483280
Substantial vertebral body osteophytes protect against severe vertebral fractures in compression
Aubin, Carl-Éric; Chaumoître, Kathia; Mac-Thiong, Jean-Marc; Ménard, Anne-Laure; Petit, Yvan; Garo, Anaïs; Arnoux, Pierre-Jean
2017-01-01
Recent findings suggest that vertebral osteophytes increase the resistance of the spine to compression. However, the role of vertebral osteophytes on the biomechanical response of the spine under fast dynamic compression, up to failure, is unclear. Seventeen human spine specimens composed of three vertebrae (from T5-T7 to T11-L1) and their surrounding soft tissues were harvested from nine cadavers, aged 77 to 92 years. Specimens were imaged using quantitative computer tomography (QCT) for medical observation, classification of the intervertebral disc degeneration (Thomson grade) and measurement of the vertebral trabecular density (VTD), height and cross-sectional area. Specimens were divided into two groups (with (n = 9) or without (n = 8) substantial vertebral body osteophytes) and compressed axially at a dynamic displacement rate of 1 m/s, up to failure. Normalized force-displacement curves, videos and QCT images allowed characterizing failure parameters (force, displacement and energy at failure) and fracture patterns. Results were analyzed using chi-squared tests for sampling distributions and linear regression for correlations between VTD and failure parameters. Specimens with substantial vertebral body osteophytes present higher stiffness (2.7 times on average) and force at failure (1.8 times on average) than other segments. The presence of osteophytes significantly influences the location, pattern and type of fracture. VTD was a good predictor of the dynamic force and energy at failure for specimens without substantial osteophytes. This study also showed that vertebral body osteophytes provide a protective mechanism to the underlying vertebra against severe compression fractures. PMID:29065144
Flood impacts on a water distribution network
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Tarani, Fabio; Vicario, Enrico; Castelli, Fabio
2017-12-01
Floods cause damage to people, buildings and infrastructures. Water distribution systems are particularly exposed, since water treatment plants are often located next to the rivers. Failure of the system leads to both direct losses, for instance damage to equipment and pipework contamination, and indirect impact, since it may lead to service disruption and thus affect populations far from the event through the functional dependencies of the network. In this work, we present an analysis of direct and indirect damages on a drinking water supply system, considering the hazard of riverine flooding as well as the exposure and vulnerability of active system components. The method is based on interweaving, through a semi-automated GIS procedure, a flood model and an EPANET-based pipe network model with a pressure-driven demand approach, which is needed when modelling water distribution networks in highly off-design conditions. Impact measures are defined and estimated so as to quantify service outage and potential pipe contamination. The method is applied to the water supply system of the city of Florence, Italy, serving approximately 380 000 inhabitants. The evaluation of flood impact on the water distribution network is carried out for different events with assigned recurrence intervals. Vulnerable elements exposed to the flood are identified and analysed in order to estimate their residual functionality and to simulate failure scenarios. Results show that in the worst failure scenario (no residual functionality of the lifting station and a 500-year flood), 420 km of pipework would require disinfection with an estimated cost of EUR 21 million, which is about 0.5 % of the direct flood losses evaluated for buildings and contents. Moreover, if flood impacts on the water distribution network are considered, the population affected by the flood is up to 3 times the population directly flooded.
77 FR 34457 - Pipeline Safety: Mechanical Fitting Failure Reports
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-11
... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... notice provides clarification to owners and operators of gas distribution pipeline facilities when... of a gas distribution pipeline facility to file a written report for any mechanical fitting failure...
NASA Astrophysics Data System (ADS)
Cohen, D.; Michlmayr, G.; Or, D.
2012-04-01
Shearing of dense granular materials appears in many engineering and Earth sciences applications. Under a constant strain rate, the shearing stress at steady state oscillates with slow rises followed by rapid drops that are linked to the build up and failure of force chains. Experiments indicate that these drops display exponential statistics. Measurements of acoustic emissions during shearing indicates that the energy liberated by failure of these force chains has power-law statistics. Representing force chains as fibers, we use a stick-slip fiber bundle model to obtain analytical solutions of the statistical distribution of stress drops and failure energy. In the model, fibers stretch, fail, and regain strength during deformation. Fibers have Weibull-distributed threshold strengths with either quenched and annealed disorder. The shape of the distribution for drops and energy obtained from the model are similar to those measured during shearing experiments. This simple model may be useful to identify failure events linked to force chain failures. Future generalizations of the model that include different types of fiber failure may also allow identification of different types of granular failures that have distinct statistical acoustic emission signatures.
Moussawi, A; Derzsy, N; Lin, X; Szymanski, B K; Korniss, G
2017-09-15
Cascading failures are a critical vulnerability of complex information or infrastructure networks. Here we investigate the properties of load-based cascading failures in real and synthetic spatially-embedded network structures, and propose mitigation strategies to reduce the severity of damages caused by such failures. We introduce a stochastic method for optimal heterogeneous distribution of resources (node capacities) subject to a fixed total cost. Additionally, we design and compare the performance of networks with N-stable and (N-1)-stable network-capacity allocations by triggering cascades using various real-world node-attack and node-failure scenarios. We show that failure mitigation through increased node protection can be effectively achieved against single-node failures. However, mitigating against multiple node failures is much more difficult due to the combinatorial increase in possible sets of initially failing nodes. We analyze the robustness of the system with increasing protection, and find that a critical tolerance exists at which the system undergoes a phase transition, and above which the network almost completely survives an attack. Moreover, we show that cascade-size distributions measured in this region exhibit a power-law decay. Finally, we find a strong correlation between cascade sizes induced by individual nodes and sets of nodes. We also show that network topology alone is a weak predictor in determining the progression of cascading failures.
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
Kantor, Rami; Smeaton, Laura; Vardhanabhuti, Saran; Hudelson, Sarah E.; Wallis, Carol L.; Tripathy, Srikanth; Morgado, Mariza G.; Saravanan, Shanmugham; Balakrishnan, Pachamuthu; Reitsma, Marissa; Hart, Stephen; Mellors, John W.; Halvas, Elias; Grinsztejn, Beatriz; Hosseinipour, Mina C.; Kumwenda, Johnstone; La Rosa, Alberto; Lalloo, Umesh G.; Lama, Javier R.; Rassool, Mohammed; Santos, Breno R.; Supparatpinyo, Khuanchai; Hakim, James; Flanigan, Timothy; Kumarasamy, Nagalingeswaran; Campbell, Thomas B.; Eshleman, Susan H.
2015-01-01
Background. Evaluation of pretreatment HIV genotyping is needed globally to guide treatment programs. We examined the association of pretreatment (baseline) drug resistance and subtype with virologic failure in a multinational, randomized clinical trial that evaluated 3 antiretroviral treatment (ART) regimens and included resource-limited setting sites. Methods. Pol genotyping was performed in a nested case-cohort study including 270 randomly sampled participants (subcohort), and 218 additional participants failing ART (case group). Failure was defined as confirmed viral load (VL) >1000 copies/mL. Cox proportional hazards models estimated resistance–failure association. Results. In the representative subcohort (261/270 participants with genotypes; 44% women; median age, 35 years; median CD4 cell count, 151 cells/µL; median VL, 5.0 log10 copies/mL; 58% non-B subtypes), baseline resistance occurred in 4.2%, evenly distributed among treatment arms and subtypes. In the subcohort and case groups combined (466/488 participants with genotypes), used to examine the association between resistance and treatment failure, baseline resistance occurred in 7.1% (9.4% with failure, 4.3% without). Baseline resistance was significantly associated with shorter time to virologic failure (hazard ratio [HR], 2.03; P = .035), and after adjusting for sex, treatment arm, sex–treatment arm interaction, pretreatment CD4 cell count, baseline VL, and subtype, was still independently associated (HR, 2.1; P = .05). Compared with subtype B, subtype C infection was associated with higher failure risk (HR, 1.57; 95% confidence interval [CI], 1.04–2.35), whereas non-B/C subtype infection was associated with longer time to failure (HR, 0.47; 95% CI, .22–.98). Conclusions. In this global clinical trial, pretreatment resistance and HIV-1 subtype were independently associated with virologic failure. Pretreatment genotyping should be considered whenever feasible. Clinical Trials Registration. NCT00084136. PMID:25681380
Caballero Morales, Santiago Omar
2013-01-01
The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices to achieve high product quality, small frequency of failures, and cost reduction in a production process. However there are some points that have not been explored in depth about its joint application. First, most SPC is performed with the X-bar control chart which does not fully consider the variability of the production process. Second, many studies of design of control charts consider just the economic aspect while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates and reductions in the sampling frequency of units for testing under SPC. PMID:23527082
Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine
NASA Astrophysics Data System (ADS)
Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.
2018-04-01
The model with failure rate function of bathtub-shaped is helpful in reliability analysis of any system and particularly in reliability associated privative maintenance. The usual Weibull distribution is, however, not capable to model the complete lifecycle of the any with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in surface mine is presented and also to improve the reliability and decrease the failure rate of each subsystem of the shovel based on the preventive maintenance. The model of the bathtub-shaped for shovel can also be seen as a simplification of the Weibull distribution.
The resilient hybrid fiber sensor network with self-healing function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shibo, E-mail: Shibo-Xu@tju.edu.cn; Liu, Tiegen; Ge, Chunfeng
This paper presents a novel resilient fiber sensor network (FSN) with multi-ring architecture, which could interconnect various kinds of fiber sensors responsible for more than one measurands. We explain how the intelligent control system provides sensors with self-healing function meanwhile sensors are working properly, besides each fiber in FSN is under real-time monitoring. We explain the software process and emergency mechanism to respond failures or other circumstances. To improve the efficiency in the use of limited spectrum resources in some situations, we have two different structures to distribute the light sources rationally. Then, we propose a hybrid sensor working inmore » FSN which is a combination of a distributed sensor and a FBG (Fiber Bragg Grating) array fused in a common fiber sensing temperature and vibrations simultaneously with neglectable crosstalk to each other. By making a failure to a working fiber in experiment, the feasibility and effectiveness of the network with a hybrid sensor has been demonstrated, hybrid sensors could not only work as designed but also survive from destructive failures with the help of resilient network and smart and quick self-healing actions. The network has improved the viability of the fiber sensors and diversity of measurands.« less
Approach to failure in porous granular materials under compression
NASA Astrophysics Data System (ADS)
Kun, Ferenc; Varga, Imre; Lennartz-Sassinek, Sabine; Main, Ian G.
2013-12-01
We investigate the approach to catastrophic failure in a model porous granular material undergoing uniaxial compression. A discrete element computational model is used to simulate both the microstructure of the material and the complex dynamics and feedbacks involved in local fracturing and the production of crackling noise. Under strain-controlled loading, microcracks initially nucleate in an uncorrelated way all over the sample. As loading proceeds the damage localizes into a narrow damage band inclined at 30∘-45∘ to the load direction. Inside the damage band the material is crushed into a poorly sorted mixture of mainly fine powder hosting some larger fragments. The mass probability density distribution of particles in the damage zone is a power law of exponent 2.1, similar to a value of 1.87 inferred from observations of the length distribution of wear products (gouge) in natural and laboratory faults. Dynamic bursts of radiated energy, analogous to acoustic emissions observed in laboratory experiments on porous sedimentary rocks, are identified as correlated trails or cascades of local ruptures that emerge from the stress redistribution process. As the system approaches macroscopic failure consecutive bursts become progressively more correlated. Their size distribution is also a power law, with an equivalent Gutenberg-Richter b value of 1.22 averaged over the whole test, ranging from 3 to 0.5 at the time of failure, all similar to those observed in laboratory tests on granular sandstone samples. The formation of the damage band itself is marked by a decrease in the average distance between consecutive bursts and an emergent power-law correlation integral of event locations with a correlation dimension of 2.55, also similar to those observed in the laboratory (between 2.75 and 2.25).
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used reliability analysis. This distribution is very suitable for representing the lengths of life of many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is the lower rank of the Weibull distributions. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using Bayesian analysis approach and presenting their analytic methods. The cases are limited to the models with independent causes of failure. A non-informative prior distribution is used in our analysis. This model describes the likelihood function and follows with the description of the posterior function and the estimations of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
Modeling the roles of damage accumulation and mechanical healing on rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Fan, Linfeng; Lehmann, Peter; Or, Dani
2014-05-01
The abrupt release of rainfall-induced shallow landslides is preceded by local failures that may abruptly coalesce and form a continuous failure plane within a hillslope. The mechanical status of hillslopes reflects a competition between the extent of severity of accumulated local damage during prior rainfall events and the rates of mechanically healing (i.e. regaining of strength) by closure of micro-cracks, regrowth of roots, etc. The interplay of these processes affects the initial conditions for landslide modeling and shapes potential failure patterns during future rainfall events. We incorporated these competing mechanical processes in a hydro-mechanical landslide triggering model subjected to a sequence of rainfall scenarios. The model employs the Fiber Bundle Model (FBM) with bonds (fiber bundle) with prescribed threshold linking adjacent soil columns and soil to bedrock. Prior damage was represented by a fraction of broken fibers during previous rainfall events, and the healing of broken fibers was described by strength regaining models for soil and roots at different characteristic time scales. Results show that prior damage and healing introduce highly nonlinear response to landslide triggering. For small prior damage, mechanical bonds at soil-bedrock interface may fail early in next rainfall event but lead to small perturbations onto lateral bonds without triggering a landslide. For more severe damage weakening lateral bonds, excess load due to failure at soil-bedrock interface accumulates at downslope soil columns resulting in early soil failure with patterns strongly correlated with prior damage distribution. Increasing prior damage over the hillslope decreases the volume of first landslide and prolongs the time needed to trigger the second landslide due to mechanical relaxation of the system. The mechanical healing of fibers diminishes effects of prior damage on the time of failure, and shortens waiting time between the first and second landslides. These findings highlight the need to improve definition of initial conditions and the shortcomings of assuming pristine hillslopes.
Liao, Baopeng; Yan, Meichen; Zhang, Weifang; Zhou, Kun
2017-01-01
Due to the increase in working hours, the reliability of rubber O-ring seals used in hydraulic systems of transfer machines will change. While traditional methods can only analyze one of the material properties or seal properties, the failure of the O-ring is caused by these two factors together. In this paper, two factors are mainly analyzed: the degradation of material properties and load randomization by processing technology. Firstly, the two factors are defined in terms of material failure and seal failure, before the experimental methods of rubber materials are studied. Following this, the time-variant material properties through experiments and load distribution by monitoring the processing can be obtained. Thirdly, compressive stress and contact stress have been calculated, which was combined with the reliability model to acquire the time-variant reliability for the O-ring. Finally, the life prediction and effect of oil pressure were discussed, then compared with the actual situation. The results show a lifetime of 12 months for the O-ring calculated in this paper, and compared with the replacement records from the maintenance workshop, the result is credible. PMID:29053597
Rapid quenching effects in glassy polymers
NASA Technical Reports Server (NTRS)
Mcgarry, F. J.
1982-01-01
Using a specially constructed microbalance for hydrostatic weighing, density changes in PVC thin film due to rapid quenching through the glass transition temperature were observed. The more severe the quench, the greater the free volume content. Isobaric volume recovery of PVC was also studied by volume dilatometry. Both show aging of relaxing molecular rearrangements taking place as a linear function of logarithmic aging time at room temperature. Distribution of retardation times and Primak's distributed activation energy spectra were applied to the volume recovery data. The concomitant changes in mechanical properties of PVC after quenching were monitored by tensile creep stress-stran to failure rates. All reflect the presence of excess free volume content due to rapid quenching.
The HAL 9000 Space Operating System Real-Time Planning Engine Design and Operations Requirements
NASA Technical Reports Server (NTRS)
Stetson, Howard; Watson, Michael D.; Shaughnessy, Ray
2012-01-01
In support of future deep space manned missions, an autonomous/automated vehicle, providing crew autonomy and an autonomous response planning system, will be required due to the light time delays in communication. Vehicle capabilities as a whole must provide for tactical response to vehicle system failures and space environmental effects induced failures, for risk mitigation of permanent loss of communication with Earth, and for assured crew return capabilities. The complexity of human rated space systems and the limited crew sizes and crew skills mix drive the need for a robust autonomous capability on-board the vehicle. The HAL 9000 Space Operating System[2] designed for such missions and space craft includes the first distributed real-time planning / re-planning system. This paper will detail the software architecture of the multiple planning engine system, and the interface design for plan changes, approval and implementation that is performed autonomously. Operations scenarios will be defined for analysis of the planning engines operations and its requirements for nominal / off nominal activities. An assessment of the distributed realtime re-planning system, in the defined operations environment, will be provided as well as findings as it pertains to the vehicle, crew, and mission control requirements needed for implementation.
Analysis of reliability for multi-ring interconnection of RPR networks
NASA Astrophysics Data System (ADS)
Liu, Jia; Jin, Depeng; Zeng, Lieguang; Li, Yong
2008-11-01
In this paper, the reliability and MTTF (Mean Time to Failure) for multi-ring RPR (Resilient Packet Ring) are calculated on the conditions of single-link failures, double-link failures and no failure, respectively. The parameters such as the total number of stations N, the number of the sub-rings R, and the distribution of Ni which represents the number of the stations in the i-th sub-ring (1<=i<=R) are contained in the formulas. The relationship between the value of the reliability/MTTF and the parameters N, R and Ni is analyzed. The result shows that reliability/MTTF of the RPR multi-rings is increasing while the variance of Ni is decreasing. It is also proved that the value of the reliability/MTTF is maximum when Ni=Nj ( i ≠j and 1<=i, j<=R) by using Lagrange multipliers method, i.e. the condition of the optimal reliability of multi-ring RPR is satisfied when var(Ni) =0.
NASA Astrophysics Data System (ADS)
Sun, Huarui; Bajo, Miguel Montes; Uren, Michael J.; Kuball, Martin
2015-01-01
Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage "hot spots" at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7-0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.
Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael
On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning ofmore » component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available. The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.« less
Evaluating the best time to intervene acute liver failure in rat models induced by d-galactosamine.
Éboli, Lígia Patrícia de Carvalho Batista; Netto, Alcides Augusto Salzedas; Azevedo, Ramiro Antero de; Lanzoni, Valéria Pereira; Paula, Tatiana Sugayama de; Goldenberg, Alberto; Gonzalez, Adriano Miziara
2016-12-01
To describe an animal model for acute liver failure by intraperitoneal d-galactosamine injections in rats and to define when is the best time to intervene through King's College and Clichy´s criteria evaluation. Sixty-one Wistar female rats were distributed into three groups: group 1 (11 rats received 1.4 g/kg of d-galactosamine intraperitoneally and were observed until they died); group 2 (44 rats received a dose of 1.4 g/kg of d-galactosamine and blood and histological samples were collected for analysis at 12 , 24, 48 , 72 and 120 hours after the injection); and the control group as well (6 rats) . Twelve hours after applying d-galactosamine, AST/ALT, bilirubin, factor V, PT and INR were already altered. The peak was reached at 48 hours. INR > 6.5 was found 12 hours after the injection and factor V < 30% after 24 hours. All the laboratory variables presented statistical differences, except urea (p = 0.758). There were statistical differences among all the histological variables analyzed. King's College and Clichy´s criteria were fulfilled 12 hours after the d-galactosamine injection and this time may represent the best time to intervene in this acute liver failure animal model.
Epinette, J-A; Brunschweiler, B; Mertl, P; Mole, D; Cazenave, A
2012-10-01
This study originated from a symposium held by the French Hip and Knee Society (Société française de la hanche et du genou [SFHG]) and was carried out to better assess the distribution of causes of unicompartmental knee arthroplasty (UKA) failures, as well as cause-specific delay to onset. Our working hypothesis was that most failures were traceable to wear occurring over a period of many years. A multicentre retrospective study (25 centres) was conducted in 418 failed UKAs performed between 1978 and 2009. We determined the prevalence and time to onset of the main reasons for revision surgery based upon available preoperative findings. Additional intraoperative findings were analysed. The results were compared to those of nation wide registries to evaluate the representativeness of our study population. Times to revision surgery were short: 19% of revisions occurred within the first year and 48.5% within the first 5 years. Loosening was the main reason for failure (45%), followed by osteoarthritis progression (15%) and, finally, by wear (12%). Other reasons were technical problems in 11.5% of cases, unexplained pain in 5.5%, and failure of the supporting bone in 3.6%. The infection rate was 1.9%. Our results were consistent with those of Swedish and Australian registries. Our hypothesis was not confirmed. The short time to failure in most cases suggests a major role for surgical technique issues. Morbidity related to the implant per se may be seen as moderate and not greater than with total knee prostheses. The good agreement between our data and those of nationwide registries indicates that our population was representative. A finer analysis is needed, indicating that the establishment of a French registry would be of interest. Copyright © 2012. Published by Elsevier Masson SAS.
NASA Astrophysics Data System (ADS)
Niu, Xiqun
Polybutylene (PB) is a semicrystalline thermoplastics. It has been widely used in potable water distribution piping system. However, field practice shows that failure occurs much earlier than the expected service lifetime. What are the causes and how to appropriately evaluate its lifetime motivate this study. In this thesis, three parts of work have been done. First is the understanding of PB, which includes material thermo and mechanical characterization, aging phenomena and notch sensitivity. The second part analyzes the applicability of the existing lifetime testing method for PB. It is shown that PB is an anomaly in terms of the temperature-lifetime relation because of the fracture mechanism transition across the testing temperature range. The third part is the development of the methodology of lifetime prediction for PB pipe. The fracture process of PB pipe consists of three stages, i.e., crack initiation, slow crack growth (SCG) and crack instability. The practical lifetime of PB pipe is primarily determined by the duration of the first two stages. The mechanism of crack initiation and the quantitative estimation of the time to crack initiation are studied by employing environment stress cracking technique. A fatigue slow crack growth testing method has been developed and applied in the study of SCG. By using Paris-Erdogan equation, a model is constructed to evaluate the time for SCG. As a result, the total lifetime is determined. Through this work, the failure mechanisms of PB pipe has been analyzed and the lifetime prediction methodology has been developed.
NASA Astrophysics Data System (ADS)
Paliwal, Bhasker
The constitutive behaviors and failure processes of brittle materials under far-field compressive loading are studied in this work. Several approaches are used: experiments to study the compressive failure behavior of ceramics, design of experimental techniques by means of finite element simulations, and the development of micro-mechanical damage models to analyze and predict mechanical response of brittle materials under far-field compression. Experiments have been conducted on various ceramics, (primarily on a transparent polycrystalline ceramic, aluminum oxynitride or AlON) under loading rates ranging from quasi-static (˜ 5X10-6) to dynamic (˜ 200 MPa/mus), using a servo-controlled hydraulic test machine and a modified compression Kolsky bar (MKB) technique respectively. High-speed photography has also been used with exposure times as low as 20 ns to observe the dynamic activation, growth and coalescence of cracks and resulting damage zones in the specimen. The photographs were correlated in time with measurements of the stresses in the specimen. Further, by means of 3D finite element simulations, an experimental technique has been developed to impose a controlled, homogeneous, planar confinement in the specimen. The technique can be used in conjunction with a high-speed camera to study the in situ dynamic failure behavior of materials under confinement. AlON specimens are used for the study. The statically pre-compressed specimen is subjected to axial dynamic compressive loading using the MKB. Results suggest that confinement not only increases the load carrying capacity, it also results in a non-linear stress evolution in the material. High-speed photographs also suggest an inelastic deformation mechanism in AlON under confinement which evolves more slowly than the typical brittle-cracking type of damage in the unconfined case. Next, an interacting micro-crack damage model is developed that explicitly accounts for the interaction among the micro-cracks in brittle materials. The model incorporates pre-existing defect distributions and a crack growth law. The damage is defined as a scalar parameter which is a function of the micro-crack density, the evolution of which is a function of the existing defect distribution and the crack growth dynamics. A specific case of a uniaxial compressive loading under constant strain-rate has been studied to predict the effects of the strain-rate, defect distribution and the crack growth dynamics on the constitutive response and failure behavior of brittle materials. Finally, the effects of crack growth dynamics on the strain-rate sensitivity of brittle materials are studied with the help of the micro-mechanical damage model. The results are compared with the experimentally observed damage evolution and the rate-sensitive behavior of the compressive strength of several engineering ceramics. The dynamic failure of armor-grade hot-pressed boron carbide (B 4C) under loading rates of ˜ 5X10-6 to 200 MPa/mus is also discussed.
Failure-probability driven dose painting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed.Methods: Patients treated at our center have five tumor subvolumes from the center of the tumor (PET positive volume) and out delineated. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). Themore » total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose–response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy.Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%–91%) as compared to the observed TCP of 70%.Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.« less
49 CFR 191.12 - Distribution Systems: Mechanical Fitting Failure Reports
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Distribution Systems: Mechanical Fitting Failure Reports 191.12 Section 191.12 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER...
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fails. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit test, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
Reliability of stiffened structural panels: Two examples
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Davis, D. Dale, Jr.; Maring, Lise D.; Krishnamurthy, Thiagaraja; Elishakoff, Isaac
1992-01-01
The reliability of two graphite-epoxy stiffened panels that contain uncertainties is examined. For one panel, the effect of an overall bow-type initial imperfection is studied. The size of the bow is assumed to be a random variable. The failure mode is buckling. The benefits of quality control are explored by using truncated distributions. For the other panel, the effect of uncertainties in a strain-based failure criterion is studied. The allowable strains are assumed to be random variables. A geometrically nonlinear analysis is used to calculate a detailed strain distribution near an elliptical access hole in a wing panel that was tested to failure. Calculated strains are used to predict failure. Results are compared with the experimental failure load of the panel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethi, Roshan V.; Giantsoudi, Drosoula; Raiford, Michael
2014-03-01
Purpose: The pattern of failure in medulloblastoma patients treated with proton radiation therapy is unknown. For this increasingly used modality, it is important to ensure that outcomes are comparable to those in modern photon series. It has been suggested this pattern may differ from photons because of variations in linear energy transfer (LET) and relative biological effectiveness (RBE). In addition, the use of matching fields for delivery of craniospinal irradiation (CSI) may influence patterns of relapse. Here we report the patterns of failure after the use of protons, compare it to that in the available photon literature, and determine themore » LET and RBE values in areas of recurrence. Methods and Materials: Retrospective review of patients with medulloblastoma treated with proton radiation therapy at Massachusetts General Hospital (MGH) between 2002 and 2011. We documented the locations of first relapse. Discrete failures were contoured on the original planning computed tomography scan. Monte Carlo calculation methods were used to estimate the proton LET distribution. Models were used to estimate RBE values based on the LET distributions. Results: A total of 109 patients were followed for a median of 38.8 months (range, 1.4-119.2 months). Of the patients, 16 experienced relapse. Relapse involved the supratentorial compartment (n=8), spinal compartment (n=11), and posterior fossa (n=5). Eleven failures were isolated to a single compartment; 6 failures in the spine, 4 failures in the supratentorium, and 1 failure in the posterior fossa. The remaining patients had multiple sites of disease. One isolated spinal failure occurred at the spinal junction of 2 fields. None of the 70 patients treated with an involved-field-only boost failed in the posterior fossa outside of the tumor bed. We found no correlation between Monte Carlo-calculated LET distribution and regions of recurrence. Conclusions: The most common site of failure in patients treated with protons for medulloblastoma was outside of the posterior fossa. The most common site for isolated local failure was the spine. We recommend consideration of spinal imaging in follow-up and careful attention to dose distribution in the spinal junction regions. Development of techniques that do not require field matching may be of benefit. We did not identify a direct correlation between lower LET values and recurrence in medulloblastoma patients treated with proton therapy. Patterns of failure do not appear to differ from those in patients treated with photon therapy.« less
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome-underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.
2015-12-01
Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Marvis, Dimitri N.
2014-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
43 CFR 30.262 - When may a tribe exercise its statutory option to purchase?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false When may a tribe exercise its statutory... a tribe exercise its statutory option to purchase? (a) A tribe may exercise its statutory option to... the tribe. (b) On failure to timely file a notice of purchase, the right to distribution of all...
The Revolution Starts Now: Why It Is Time to Reclaim Education's Connection to Our Democracy
ERIC Educational Resources Information Center
Coleman, Elizabeth
2011-01-01
This author contends that, during the past decade, people have witnessed escalating crises in the most vital areas of their public life, including: (1) a relentless acceleration in the awesome failure to effectively educate vast numbers of the young; (2) a no-less-relentless increase in the spectacular inequities in the distribution of wealth; (3)…
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliabilty growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
Distribution Tables and Private Tests: The Failure of Middle School Reform in Japan.
ERIC Educational Resources Information Center
LeTendre, Gerald K.
1994-01-01
In November 1992, Japanese Ministry of Education declared middle school teachers could no longer use distribution tables produced by private testing companies to predetermine high school students' curricula. Failure to implement reform stems from structural and cultural roots. By presorting students and molding their expectations, traditional…
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORU's (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORU's and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo Simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value, to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes, These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Reduction of water losses by rehabilitation of water distribution network.
Güngör, Mahmud; Yarar, Ufuk; Firat, Mahmut
2017-09-11
Physical or real losses may be indicated as the most important component of the water losses occurring in a water distribution network (WDN). The objective of this study is to examine the effects of piping material management and network rehabilitation on the physical water losses and water losses management in a WDN. For this aim, the Denizli WDN consisting of very old pipes that have exhausted their economic life is selected as the study area. The fact that the current network is old results in the decrease of pressure strength, increase of failure intensity, and inefficient use of water resources thus leading to the application of the rehabilitation program. In Denizli, network renewal works have been carried out since the year 2009 under the rehabilitation program. It was determined that the failure rate at regions where network renewal constructions have been completed decreased down to zero level. Renewal of piping material enables the minimization of leakage losses as well as the failure rate. On the other hand, the system rehabilitation has the potential to amortize itself in a very short amount of time if the initial investment cost of network renewal is considered along with the operating costs of the old and new systems, as well as water loss costs. As a result, it can be stated that renewal of piping material in water distribution systems, enhancement of the physical properties of the system, provide significant contributions such as increase of water and energy efficiency and more effective use of resources.
Dual permeability FEM models for distributed fiber optic sensors development
NASA Astrophysics Data System (ADS)
Aguilar-López, Juan Pablo; Bogaard, Thom
2017-04-01
Fiber optic cables are commonly known for being robust and reliable mediums for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connection and real time information sharing. Yet, fiber optic cable is not only a mean for information transfer but also a way to sense and measure physical properties of the medium in which is installed. For dike monitoring, it has been used in the past for detecting inner core and foundation temperature changes which allow to estimate water infiltration during high water events. The DOMINO research project, aims to develop a fiber optic based dike monitoring system which allows to directly sense and measure any pore pressure change inside the dike structure. For this purpose, questions like which location, how many sensors, which measuring frequency and which accuracy are required for the sensor development. All these questions may be initially answered with a finite element model which allows to estimate the effects of pore pressure change in different locations along the cross section while having a time dependent estimation of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time; The piping erosion failure mechanism and the macro-stability failure mechanism. Both mechanisms are going to be modeled and assessed in detail with a finite element based dual permeability Darcy-Richards numerical solution. In that manner, it is possible to assess different sensing configurations with different loading scenarios (e.g. High water levels, rainfall events and initial soil moisture and permeability conditions). The results obtained for the different configurations are later evaluated based on an entropy based performance evaluation. The added value of this kind of modelling approach for the sensor development is that it allows to simultaneously model the piping erosion and macro-stability failure mechanisms in a time dependent manner. In that way, the estimated pore pressures may be related to the monitored one and to both failure mechanisms. Furthermore, the approach is intended to be used in a later stage for the real time monitoring of the failure.
Sensitivity of goodness-of-fit statistics to rainfall data rounding off
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Puliga, Michelangelo
An analysis based on the L-moments theory suggests of adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fitting of rainfall data with the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as it was proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounding off values among sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and numerical simulations (PFC(3D)). Model tests revealed that grain sizes distribution had profound effects on failure mode, and the failure in slope of medium sand started with cracks at crest and took the form of retrogressive toe sliding failure. With the increase of fine particles in soil, the failure mode of the slopes changed to fluidized flow. The discrete element method PFC(3D) can overcome the hypothesis of the traditional continuous medium mechanic and consider the simple characteristics of particle. Thus, a numerical simulations model considering liquid-solid coupled method has been developed to simulate the debris flow. Comparing the experimental results, the numerical simulation result indicated that the failure mode of the failure of medium sand slope was retrogressive toe sliding, and the failure of fine sand slope was fluidized sliding. The simulation result is consistent with the model test and theoretical analysis, and grain sizes distribution caused different failure behavior of granular debris flows. This research should be a guide to explore the theory of debris flow and to improve the prevention and reduction of debris flow.
Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B
2013-01-01
We apply an innovative and novel analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow for improved prediction of outcome after peripheral endovascular intervention, specifically with regard to identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2, P < 0.001), respectively. The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots, with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R(2) = 0.98084), whereas interventions for critical limb ischemia demonstrated a decreasing failure rate, suggesting the predominance of early failures (β = 0.7395, θ = 6.8, mean time to failure = 8.2, index of fit = 0.99391, R(2) = 0.98786). By 3.1 months, 10% of interventions failed. This point (90% reliability) was identified as the basic rating life. Using multivariate analysis of failure data, independent predictors of early failure (before 3.1 months) included tissue loss, long lesion length, chronic total occlusions, heart failure, and end-stage renal disease. Application of a RE framework to the assessment of clinical outcomes after peripheral interventions is feasible, and potentially more informative than traditional techniques. Conceptualization of interventions as "products" permits application of product life-cycle models that allow for empiric definition of "early failure" may facilitate comparative effectiveness analysis and enable the development of individualized surveillance programs after endovascular interventions. Copyright © 2013 Annals of Vascular Surgery Inc. Published by Elsevier Inc. All rights reserved.
Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel
NASA Astrophysics Data System (ADS)
Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung
2017-04-01
The purposes of this study includes: (1) design an automatic monitoring system for river dike; and (2) develop a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-years return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike which located on the middle stream of Dajia River, Taiwan. The system includes: (1) vertical distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of dike; (3) wireless floating device to measure the scouring depth at the toe of dike; and (4) water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level insides the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties from return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to Chiu-She, Feng-Chou, and Ke-Chuang Dikes on Dajia River. The results indicate that the toe sliding or overturning has the highest probability than other failure modes. Furthermore, the overall failure probability (integrate different failure modes) reaches 50% under 10-years return period flood which agrees with the historical failure data for the study reaches.
NASA Technical Reports Server (NTRS)
Friedlander, A. L.; Wells, W. C.
1980-01-01
A study of long term risks is presented that treats an additional pathway that could result in earth reentry, namely, small radioactive particles released in solar orbit due to payload fragmentation by accidental explosion or meteoroid impact. A characterization of such an event and of the initial mass size distribution of particles is given for two extremes of waste form strength. Attention is given to numerical results showing the mass-time distribution of material and the fraction of initial mass intercepted by earth. It is concluded that it appears that program planners need not be to concerned about the risks of this particular failure mechanism and return pathway.
NASA Astrophysics Data System (ADS)
Meade, Brendan J.; DeVries, Phoebe M. R.; Faller, Jeremy; Viegas, Fernanda; Wattenberg, Martin
2017-11-01
Aftershocks may be triggered by the stresses generated by preceding mainshocks. The temporal frequency and maximum size of aftershocks are well described by the empirical Omori and Bath laws, but spatial patterns are more difficult to forecast. Coulomb failure stress is perhaps the most common criterion invoked to explain spatial distributions of aftershocks. Here we consider the spatial relationship between patterns of aftershocks and a comprehensive list of 38 static elastic scalar metrics of stress (including stress tensor invariants, maximum shear stress, and Coulomb failure stress) from 213 coseismic slip distributions worldwide. The rates of true-positive and false-positive classification of regions with and without aftershocks are assessed with receiver operating characteristic analysis. We infer that the stress metrics that are most consistent with observed aftershock locations are maximum shear stress and the magnitude of the second and third invariants of the stress tensor. These metrics are significantly better than random assignment at a significance level of 0.005 in over 80% of the slip distributions. In contrast, the widely used Coulomb failure stress criterion is distinguishable from random assignment in only 51-64% of the slip distributions. These results suggest that a number of alternative scalar metrics are better predictors of aftershock locations than classic Coulomb failure stress change.
Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario
2018-02-01
Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Huarui, E-mail: huarui.sun@bristol.ac.uk; Bajo, Miguel Montes; Uren, Michael J.
2015-01-26
Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage “hot spots” at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7–0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which ismore » consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.« less
Leone, Marina; Musa, Gentian; Engel, Felix Benedikt
2018-03-07
After birth mammalian cardiomyocytes initiate a last cell cycle which results in binucleation due to cytokinesis failure. Despite its importance for cardiac regenerative therapies, this process is poorly understood. Here, we aimed at a better understanding of the difference between cardiomyocyte proliferation and binucleation, and providing a new tool to distinguish these two processes. Monitoring of cell division by time-lapse imaging revealed that rat cardiomyocyte binucleation stems from a failure to properly ingress the cleavage furrow. Astral microtubule required for actomyosin ring anchorage and thus furrow ingression were not symmetrically distributed at the periphery of the equatorial region during anaphase in binucleating cardiomyocytes. Consequently, RhoA, the master regulator of actomyosin ring formation and constriction, non-muscle myosin IIB, a central component of the actomyosin ring, as well as IQGAP3 were abnormally localized during cytokinesis. In agreement with improper furrow ingression, binucleation in vitro as well as in vivo was associated with a failure of RhoA as well as IQGAP3 to localize to the stembody of the midbody. Taken together, these results indicate that naturally occurring cytokinesis failure in primary cardiomyocytes is due to an aberrant mitotic microtubule apparatus resulting in inefficient anchorage of the actomyosin ring to the plasma cell membrane. Thus, cardiomyocyte binucleation and division can be discriminated by the analysis of RhoA as well as IQGAP3 localization.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for designs failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflights systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
A new test apparatus for studying the failure process during loading experiments of snow
NASA Astrophysics Data System (ADS)
Capelli, Achille; Reiweger, Ingrid; Schweizer, Jürg
2016-04-01
We developed a new apparatus for fully load-controlled snow failure experiments. The deformation and applied load are measured with two displacement and two force sensors, respectively. The loading experiments are recorded with a high speed camera, and the local strain is derived by a particle image velocimetry (PIV) algorithm. To monitor the progressive failure process within the snow sample, our apparatus includes six piezoelectric transducers that record the acoustic emissions in the ultrasonic range. The six sensors allow localizing the sources of the acoustic emissions, i.e. where the failure process starts and how it develops with time towards catastrophic failure. The quadratic snow samples have a side length of 50 cm and a height of 10 to 20 cm. With an area of 0.25 m2 they are clearly larger than samples used in previous experiments. The size of the samples, which is comparable to the critical size for the onset of crack propagation leading to dry-snow slab avalanche release, allows studying the failure nucleation process and its relation to the spatial distribution of the recorded acoustic emissions. Furthermore the occurrence of features in the acoustic emissions typical for imminent failure of the samples can be analysed. We present preliminary results of the acoustic emissions recorded during tests with homogeneous as well as layered snow samples, including a weak layer, for varying loading rates and loading angles.
Low cost management of replicated data in fault-tolerant distributed systems
NASA Technical Reports Server (NTRS)
Joseph, Thomas A.; Birman, Kenneth P.
1990-01-01
Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.
Non-inferiority tests for anti-infective drugs using control group quantiles.
Fay, Michael P; Follmann, Dean A
2016-12-01
In testing for non-inferiority of anti-infective drugs, the primary endpoint is often the difference in the proportion of failures between the test and control group at a landmark time. The landmark time is chosen to approximately correspond to the qth historic quantile of the control group, and the non-inferiority margin is selected to be reasonable for the target level q. For designing these studies, a troubling issue is that the landmark time must be pre-specified, but there is no guarantee that the proportion of control failures at the landmark time will be close to the target level q. If the landmark time is far from the target control quantile, then the pre-specified non-inferiority margin may not longer be reasonable. Exact variable margin tests have been developed by Röhmel and Kieser to address this problem, but these tests can have poor power if the observed control failure rate at the landmark time is far from its historic value. We develop a new variable margin non-inferiority test where we continue sampling until a pre-specified proportion of failures, q, have occurred in the control group, where q is the target quantile level. The test does not require any assumptions on the failure time distributions, and hence, no knowledge of the true [Formula: see text] control quantile for the study is needed. Our new test is exact and has power comparable to (or greater than) its competitors when the true control quantile from the study equals (or differs moderately from) its historic value. Our nivm R package performs the test and gives confidence intervals on the difference in failure rates at the true target control quantile. The tests can be applied to time to cure or other numeric variables as well. A substantial proportion of new anti-infective drugs being developed use non-inferiority tests in their development, and typically, a pre-specified landmark time and its associated difference margin are set at the design stage to match a specific target control quantile. If through changing standard of care or selection of a different population the target quantile for the control group changes from its historic value, then the appropriateness of the pre-specified margin at the landmark time may be questionable. Our proposed test avoids this problem by sampling until a pre-specified proportion of the controls have failed. © The Author(s) 2016.
Casella, Donato; Calabrese, Claudio; Orzalesi, Lorenzo; Gaggelli, Ilaria; Cecconi, Lorenzo; Santi, Caterina; Murgo, Roberto; Rinaldi, Stefano; Regolo, Lea; Amanti, Claudio; Roncella, Manuela; Serra, Margherita; Meneghini, Graziano; Bortolini, Massimiliano; Altomare, Vittorio; Cabula, Carlo; Catalano, Francesca; Cirilli, Alfredo; Caruso, Francesco; Lazzaretti, Maria Grazia; Meattini, Icro; Livi, Lorenzo; Cataliotti, Luigi; Bernini, Marco
2017-05-01
Reconstruction options following nipple-sparing mastectomy (NSM) are diverse and not yet investigated with level IA evidence. The analysis of surgical and oncological outcomes of NSM from the Italian National Registry shows its safety and wide acceptance both for prophylactic and therapeutic cases. A further in-depth analysis of the reconstructive approaches with their trend over time and their failures is the aim of this study. Data extraction from the National Database was performed restricting cases to the 2009-2014 period. Different reconstruction procedures were analyzed in terms of their distribution over time and with respect to specific indications. A 1-year minimum follow-up was conducted to assess reconstructive unsuccessful events. Univariate and multivariate analyses were performed to investigate the causes of both prosthetic and autologous failures. 913 patients, for a total of 1006 procedures, are included in the analysis. A prosthetic only reconstruction is accomplished in 92.2 % of cases, while pure autologous tissues are employed in 4.2 % and a hybrid (prosthetic plus autologous) in 3.6 %. Direct-to-implant (DTI) reaches 48.7 % of all reconstructions in the year 2014. Prophylactic NSMs have a DTI reconstruction in 35.6 % of cases and an autologous tissue flap in 12.9 % of cases. Failures are 2.7 % overall: 0 % in pure autologous flaps and 9.1 % in hybrid cases. Significant risk factors for failures are diabetes and the previous radiation therapy on the operated breast. Reconstruction following NSM is mostly prosthetic in Italy, with DTI gaining large acceptance over time. Failures are low and occurring in diabetic and irradiated patients at the multivariate analysis.
Bao, Yi; Hoehler, Matthew S; Smith, Christopher M; Bundy, Matthew; Chen, Genda
2017-10-01
In this study, distributed fiber optic sensors based on pulse pre-pump Brillouin optical time domain analysis (PPP-BODTA) are characterized and deployed to measure spatially-distributed temperatures in reinforced concrete specimens exposed to fire. Four beams were tested to failure in a natural gas fueled compartment fire, each instrumented with one fused silica, single-mode optical fiber as a distributed sensor and four thermocouples. Prior to concrete cracking, the distributed temperature was validated at locations of the thermocouples by a relative difference of less than 9 %. The cracks in concrete can be identified as sharp peaks in the temperature distribution since the cracks are locally filled with hot air. Concrete cracking did not affect the sensitivity of the distributed sensor but concrete spalling broke the optical fiber loop required for PPP-BOTDA measurements.
Subcritical crack growth in SiNx thin-film barriers studied by electro-mechanical two-point bending
NASA Astrophysics Data System (ADS)
Guan, Qingling; Laven, Jozua; Bouten, Piet C. P.; de With, Gijsbertus
2013-06-01
Mechanical failure resulting from subcritical crack growth in the SiNx inorganic barrier layer applied on a flexible multilayer structure was studied by an electro-mechanical two-point bending method. A 10 nm conducting tin-doped indium oxide layer was sputtered as an electrical probe to monitor the subcritical crack growth in the 150 nm dielectric SiNx layer carried by a polyethylene naphthalate substrate. In the electro-mechanical two-point bending test, dynamic and static loads were applied to investigate the crack propagation in the barrier layer. As consequence of using two loading modes, the characteristic failure strain and failure time could be determined. The failure probability distribution of strain and lifetime under each loading condition was described by Weibull statistics. In this study, results from the tests in dynamic and static loading modes were linked by a power law description to determine the critical failure over a range of conditions. The fatigue parameter n from the power law reduces greatly from 70 to 31 upon correcting for internal strain. The testing method and analysis tool as described in the paper can be used to understand the limit of thin-film barriers in terms of their mechanical properties.
NASA Astrophysics Data System (ADS)
Liu, P. F.; Li, X. K.
2018-06-01
The purpose of this paper is to study micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model and interface failure is simulated using Xu and Needleman's cohesive model. Temperature dependent thermal expansion coefficients for epoxy matrix are used. FEA by developing numerical codes using ANSYS finite element software is divided into two steps: 1. Thermal residual stresses due to mismatch between fiber and matrix are calculated; 2. Longitudinal tensile load is further exerted on the RVE to perform progressive failure analysis of carbon fiber/epoxy composites. Numerical convergence is solved by introducing the viscous damping effect properly. The extended Mori-Tanaka method that considers interface debonding is used to get homogenized mechanical responses of composites. Three main results by FEA are obtained: 1. the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain is simulated. 2. the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted, compared with the results using the global and local load-sharing models based on the shear-lag theory. 3. the tensile strength of composite by FEA is compared with those by the shear-lag theory and experiments. Finally, the tensile stress-strain curve of composites by FEA is applied to the progressive failure analysis of composite pressure vessel.
NASA Astrophysics Data System (ADS)
Liu, P. F.; Li, X. K.
2017-09-01
The purpose of this paper is to study micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model and interface failure is simulated using Xu and Needleman's cohesive model. Temperature dependent thermal expansion coefficients for epoxy matrix are used. FEA by developing numerical codes using ANSYS finite element software is divided into two steps: 1. Thermal residual stresses due to mismatch between fiber and matrix are calculated; 2. Longitudinal tensile load is further exerted on the RVE to perform progressive failure analysis of carbon fiber/epoxy composites. Numerical convergence is solved by introducing the viscous damping effect properly. The extended Mori-Tanaka method that considers interface debonding is used to get homogenized mechanical responses of composites. Three main results by FEA are obtained: 1. the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain is simulated. 2. the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted, compared with the results using the global and local load-sharing models based on the shear-lag theory. 3. the tensile strength of composite by FEA is compared with those by the shear-lag theory and experiments. Finally, the tensile stress-strain curve of composites by FEA is applied to the progressive failure analysis of composite pressure vessel.
NASA Astrophysics Data System (ADS)
Li, Hai-bo; Liu, Ming-chang; Xing, Wan-bo; Shao, Shuai; Zhou, Jia-wen
2017-07-01
The Jinping I underground powerhouse is deeply buried and is one of the largest underground powerhouses in China. As a result of high levels of in situ stress, complex geological conditions and the effects of excavation in adjacent caverns, the surrounding rock mass has been severely deformed and broken, and excavation damaged zones (EDZs) have become major obstacles to the design of cavern excavation and support. Field investigations and monitoring data indicated that there are two main modes of failure: high tangential stress induced failure and progressive failure, which have occurred on the mountain side and the river valley side of the Jinping I underground powerhouse. These two main modes of failure were due to strong secondary principal stress forces in the sub-parallel directions and sub-vertical directions, acting on the axes of the main powerhouse on the mountain side and on the river valley side, respectively. Deformations and EDZs on the river valley side were generally larger than those found along the mountain side, and the distribution of deformations was consistent with the distribution of EDZs. The evolution of the EDZ on the river valley side has clearly been time dependent, especially along the downstream arch abutment, and the EDZ was considerably enlarged with further excavation. Additionally, the deformation of the surrounding rock mass was first initiated from the edge of the excavation area and gradually extended to deeper areas away from the opening. However, the EDZ on the mountain side was enlarged only during the first two phases of excavation. The extension of pre-existing cracks and the creation of new fractures has mainly occurred in the oldest EDZ section, and the HDZ has been visibly enlarged, whereas the EDZ has shown little change in other excavation phases.
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The time independent failure response of these materials is focussed on and a reliability analysis is presented associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and which serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of the size of the component are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
Distributed Health Monitoring System for Reusable Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Lin, C. F.; Figueroa, F.; Politopoulos, T.; Oonk, S.
2009-01-01
The ability to correctly detect and identify any possible failure in the systems, subsystems, or sensors within a reusable liquid rocket engine is a major goal at NASA John C. Stennis Space Center (SSC). A health management (HM) system is required to provide an on-ground operation crew with an integrated awareness of the condition of every element of interest by determining anomalies, examining their causes, and making predictive statements. However, the complexity associated with relevant systems, and the large amount of data typically necessary for proper interpretation and analysis, presents difficulties in implementing complete failure detection, identification, and prognostics (FDI&P). As such, this paper presents a Distributed Health Monitoring System for Reusable Liquid Rocket Engines as a solution to these problems through the use of highly intelligent algorithms for real-time FDI&P, and efficient and embedded processing at multiple levels. The end result is the ability to successfully incorporate a comprehensive HM platform despite the complexity of the systems under consideration.
Spatiotemporal earthquake clusters along the North Anatolian fault zone offshore Istanbul
Bulut, Fatih; Ellsworth, William L.; Bohnhoff, Marco; Aktar, Mustafa; Dresen, Georg
2011-01-01
We investigate earthquakes with similar waveforms in order to characterize spatiotemporal microseismicity clusters within the North Anatolian fault zone (NAFZ) in northwest Turkey along the transition between the 1999 ??zmit rupture zone and the Marmara Sea seismic gap. Earthquakes within distinct activity clusters are relocated with cross-correlation derived relative travel times using the double difference method. The spatiotemporal distribution of micro earthquakes within individual clusters is resolved with relative location accuracy comparable to or better than the source size. High-precision relative hypocenters define the geometry of individual fault patches, permitting a better understanding of fault kinematics and their role in local-scale seismotectonics along the region of interest. Temporal seismic sequences observed in the eastern Sea of Marmara region suggest progressive failure of mostly nonoverlapping areas on adjacent fault patches and systematic migration of microearthquakes within clusters during the progressive failure of neighboring fault patches. The temporal distributions of magnitudes as well as the number of events follow swarmlike behavior rather than a mainshock/aftershock pattern.
Seasonal and Local Characteristics of Lightning Outages of Power Distribution Lines in Hokuriku Area
NASA Astrophysics Data System (ADS)
Sugimoto, Hitoshi; Shimasaki, Katsuhiko
The proportion of the lightning outages in all outages on Japanese 6.6kV distribution lines is high with approximately 20 percent, and then lightning protections are very important for supply reliability of 6.6kV lines. It is effective for the lightning performance to apply countermeasures in order of the area where a large number of the lightning outages occur. Winter lightning occurs in Hokuriku area, therefore it is also important to understand the seasonal characteristics of the lightning outages. In summer 70 percent of the lightning outages on distribution lines in Hokuriku area were due to sparkover, such as power wire breakings and failures of pole-mounted transformers. However, in winter almost half of lightning-damaged equipments were surge arrester failures. The number of the lightning outages per lightning strokes detected by the lightning location system (LLS) in winter was 4.4 times larger than that in summer. The authors have presumed the occurrence of lightning outages from lightning stroke density, 50% value of lightning current and installation rate of lightning protection equipments and overhead ground wire by multiple regression analysis. The presumed results suggest the local difference in the lightning outages.
Distributed analysis functional testing using GangaRobot in the ATLAS experiment
NASA Astrophysics Data System (ADS)
Legger, Federica; ATLAS Collaboration
2011-12-01
Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these tests jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the work load on the available resources.
Allen, Craig R.; Angeler, David G.; Moulton, Michael P.; Holling, Crawford S.
2015-01-01
Community saturation can help to explain why biological invasions fail. However, previous research has documented inconsistent relationships between failed invasions (i.e., an invasive species colonizes but goes extinct) and the number of species present in the invaded community. We use data from bird communities of the Hawaiian island of Oahu, which supports a community of 38 successfully established introduced birds and where 37 species were introduced but went extinct (failed invasions). We develop a modified approach to evaluate the effects of community saturation on invasion failure. Our method accounts (1) for the number of species present (NSP) when the species goes extinct rather than during its introduction; and (2) scaling patterns in bird body mass distributions that accounts for the hierarchical organization of ecosystems and the fact that interaction strength amongst species varies with scale. We found that when using NSP at the time of extinction, NSP was higher for failed introductions as compared to successful introductions, supporting the idea that increasing species richness and putative community saturation mediate invasion resistance. Accounting for scale-specific patterns in body size distributions further improved the relationship between NSP and introduction failure. Results show that a better understanding of invasion outcomes can be obtained when scale-specific community structure is accounted for in the analysis.
The effect of adhesive failure and defects on the stress distribution in all-ceramic crowns.
Liu, Yonggang; Xu, Yuanzhi; Su, Bo; Arola, Dwayne; Zhang, Dongsheng
2018-05-29
To explore the effect of adhesive failure and defects between the crown and cement on the stress distribution within all-ceramic crowns and the corresponding risk of failure. An IPS e.max crown of lithium disilicate produced by CAD/CAM for a first mandibular molar was modeled using finite element analysis based on X-ray micro-CT scanned images. Predefined debonding states and interfacial defects between the crown and cement were simulated using the model. The first principal stress distribution of the crown and cement was analyzed under a vertical occlusal load of 600 N. A concept of failure risk was proposed to evaluate the crown. Stress concentrations in the crown were identified on the occlusal surface surrounding the region of loading, beneath the area of loading and at the margin of the interior surface. Stress concentrations in the cement were also evident at the boundary of the debonded areas. The lower surface of the crown is safe to sustain the 600 N vertical load, but the top surface of the cement would undergo cohesive failure. According to the evaluation of failure risk of the crown, the conditions of highest risk corresponded to the conditions with highest percentage of cement damage. The risk of failure is not only associated with debonding between the crown and cement, but also associated with its distribution. Debonding related defects and cementing defects are more deleterious to the interfacial stress than debonding itself. The axial wall plays a critical role in maintaining the principal tensile stress of the crown at an acceptable level. Copyright © 2018 Elsevier Ltd. All rights reserved.
Moving Aerospace Structural Design Practice to a Load and Resistance Factor Approach
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Raju, Ivatury S.
2016-01-01
Aerospace structures are traditionally designed using the factor of safety (FOS) approach. The limit load on the structure is determined and the structure is then designed for FOS times the limit load - the ultimate load. Probabilistic approaches utilize distributions for loads and strengths. Failures are predicted to occur in the region of intersection of the two distributions. The load and resistance factor design (LRFD) approach judiciously combines these two approaches by intensive calibration studies on loads and strength to result in structures that are efficient and reliable. This paper discusses these three approaches.
Wireless cellular networks with Pareto-distributed call holding times
NASA Astrophysics Data System (ADS)
Rodriguez-Dagnino, Ramon M.; Takagi, Hideaki
2001-07-01
Nowadays, there is a growing interest in providing internet to mobile users. For instance, NTT DoCoMo in Japan deploys an important mobile phone network with that offers the Internet service, named 'i-mode', to more than 17 million subscribers. Internet traffic measurements show that the session duration of Call Holding Time (CHT) has probability distributions with heavy-tails, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important to know the number of handovers during a call for a network designer to make an appropriate dimensioning of virtual circuits for a wireless cell. The handover traffic has a direct impact on the Quality of Service (QoS); e.g. the service disruption due to the handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHT are Pareto distributed (heavy-tail distribution), and the Cell Residence Times (CRT) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of the Whittaker's function. We compare the Pareto case with cases of $k(subscript Erlang and hyperexponential distributions for the CHT.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
Kantor, Rami; Smeaton, Laura; Vardhanabhuti, Saran; Hudelson, Sarah E; Wallis, Carol L; Tripathy, Srikanth; Morgado, Mariza G; Saravanan, Shanmugham; Balakrishnan, Pachamuthu; Reitsma, Marissa; Hart, Stephen; Mellors, John W; Halvas, Elias; Grinsztejn, Beatriz; Hosseinipour, Mina C; Kumwenda, Johnstone; La Rosa, Alberto; Lalloo, Umesh G; Lama, Javier R; Rassool, Mohammed; Santos, Breno R; Supparatpinyo, Khuanchai; Hakim, James; Flanigan, Timothy; Kumarasamy, Nagalingeswaran; Campbell, Thomas B; Eshleman, Susan H
2015-05-15
Evaluation of pretreatment HIV genotyping is needed globally to guide treatment programs. We examined the association of pretreatment (baseline) drug resistance and subtype with virologic failure in a multinational, randomized clinical trial that evaluated 3 antiretroviral treatment (ART) regimens and included resource-limited setting sites. Pol genotyping was performed in a nested case-cohort study including 270 randomly sampled participants (subcohort), and 218 additional participants failing ART (case group). Failure was defined as confirmed viral load (VL) >1000 copies/mL. Cox proportional hazards models estimated resistance-failure association. In the representative subcohort (261/270 participants with genotypes; 44% women; median age, 35 years; median CD4 cell count, 151 cells/µL; median VL, 5.0 log10 copies/mL; 58% non-B subtypes), baseline resistance occurred in 4.2%, evenly distributed among treatment arms and subtypes. In the subcohort and case groups combined (466/488 participants with genotypes), used to examine the association between resistance and treatment failure, baseline resistance occurred in 7.1% (9.4% with failure, 4.3% without). Baseline resistance was significantly associated with shorter time to virologic failure (hazard ratio [HR], 2.03; P = .035), and after adjusting for sex, treatment arm, sex-treatment arm interaction, pretreatment CD4 cell count, baseline VL, and subtype, was still independently associated (HR, 2.1; P = .05). Compared with subtype B, subtype C infection was associated with higher failure risk (HR, 1.57; 95% confidence interval [CI], 1.04-2.35), whereas non-B/C subtype infection was associated with longer time to failure (HR, 0.47; 95% CI, .22-.98). In this global clinical trial, pretreatment resistance and HIV-1 subtype were independently associated with virologic failure. Pretreatment genotyping should be considered whenever feasible. NCT00084136. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and fa ilures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations w here maintenance data is being collected at various intervals, as well as reliability statistics gathered from that da ta, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
Towards sensor array materials: can failure be delayed?
Mekid, Samir; Saheb, Nouari; Khan, Shafique M A; Qureshi, Khurram K
2015-01-01
Further to prior development in enhancing structural health using smart materials, an innovative class of materials characterized by the ability to feel senses like humans, i.e. ‘nervous materials’, is discussed. Designed at all scales, these materials will enhance personnel and public safety, and secure greater reliability of products. Materials may fail suddenly, but any system wishes that failure is known in good time and delayed until safe conditions are reached. Nervous materials are expected to be the solution to this statement. This new class of materials is based on the novel concept of materials capable of feeling multiple structural and external stimuli, e.g. stress, force, pressure and temperature, while feeding information back to a controller for appropriate real-time action. The strain–stress state is developed in real time with the identified and characterized source of stimulus, with optimized time response to retrieve initial specified conditions, e.g. shape and strength. Sensors are volumetrically embedded and distributed, emulating the human nervous system. Immediate applications are in aircraft, cars, nuclear energy and robotics. Such materials will reduce maintenance costs, detect initial failures and delay them with self-healing. This article reviews the common aspects and challenges surrounding this new class of materials with types of sensors to be embedded seamlessly or inherently, including appropriate embedding manufacturing techniques with modeling and simulation methods. PMID:27877794
Towards sensor array materials: can failure be delayed?
NASA Astrophysics Data System (ADS)
Mekid, Samir; Saheb, Nouari; Khan, Shafique M. A.; Qureshi, Khurram K.
2015-06-01
Further to prior development in enhancing structural health using smart materials, an innovative class of materials characterized by the ability to feel senses like humans, i.e. ‘nervous materials’, is discussed. Designed at all scales, these materials will enhance personnel and public safety, and secure greater reliability of products. Materials may fail suddenly, but any system wishes that failure is known in good time and delayed until safe conditions are reached. Nervous materials are expected to be the solution to this statement. This new class of materials is based on the novel concept of materials capable of feeling multiple structural and external stimuli, e.g. stress, force, pressure and temperature, while feeding information back to a controller for appropriate real-time action. The strain-stress state is developed in real time with the identified and characterized source of stimulus, with optimized time response to retrieve initial specified conditions, e.g. shape and strength. Sensors are volumetrically embedded and distributed, emulating the human nervous system. Immediate applications are in aircraft, cars, nuclear energy and robotics. Such materials will reduce maintenance costs, detect initial failures and delay them with self-healing. This article reviews the common aspects and challenges surrounding this new class of materials with types of sensors to be embedded seamlessly or inherently, including appropriate embedding manufacturing techniques with modeling and simulation methods.
5-year operation experience with the 1.8 K refrigeration units of the LHC cryogenic system
NASA Astrophysics Data System (ADS)
Ferlin, G.; Tavian, L.; Claudet, S.; Pezzetti, M.
2015-12-01
Since 2009, the Large Hadron Collider (LHC) is in operation at CERN. The LHC superconducting magnets distributed over eight sectors of 3.3-km long are cooled at 1.9 K in pressurized superfluid helium. The nominal operating temperature of 1.9 K is produced by eight 1.8-K refrigeration units based on centrifugal cold compressors (3 or 4 stages depending to the vendor) combined with warm volumetric screw compressors with sub-atmospheric suction. After about 5 years of continuous operation, we will present the results concerning the availability for the final user of these refrigeration units and the impact of the design choice on the recovery time after a system trip. We will also present the individual results for each rotating machinery in terms of failure origin and of Mean Time between Failure (MTBF), as well as the consolidations and upgrades applied to these refrigeration units.
Stochastic Model of Clogging in a Microfluidic Cell Sorter
NASA Astrophysics Data System (ADS)
Fai, Thomas; Rycroft, Chris
2016-11-01
Microfluidic devices for sorting cells by deformability show promise for various medical purposes, e.g. detecting sickle cell anemia and circulating tumor cells. One class of such devices consists of a two-dimensional array of narrow channels, each column containing several identical channels in parallel. Cells are driven through the device by an applied pressure or flow rate. Such devices allows for many cells to be sorted simultaneously, but cells eventually clog individual channels and change the device properties in an unpredictable manner. In this talk, we propose a stochastic model for the failure of such microfluidic devices by clogging and present preliminary theoretical and computational results. The model can be recast as an ODE that exhibits finite time blow-up under certain conditions. The failure time distribution is investigated analytically in certain limiting cases, and more realistic versions of the model are solved by computer simulation.
Möller, Thorsten; Schuldt, Heiko; Gerber, Andreas; Klusch, Matthias
2006-06-01
Healthcare digital libraries (DLs) increasingly make use of dedicated services to access functionality and/or data. Semantic (web) services enhance single services and facilitate compound services, thereby supporting advanced applications on top of a DL. The traditional process management approach tends to focus on process definition at build time rather than on actual service events in run time, and to anticipate failures in order to define appropriate strategies. This paper presents a novel approach where service coordination is distributed among a set of agents. A dedicated component plans compound semantic services on demand for a particular application. In failure, the planner is reinvoked to define contin- gency strategies. Finally, matchmaking is effected at runtime by choosing the appropriate service provider. These combined technologies will provide key support for highly flexible next-generation DL applications. Such technologies are under development within CASCOM.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
ISIS and META are two distributed systems projects at Cornell University. The ISIS project, has developed a new methodology, virtual synchrony, for writing robust distributed software. This approach is directly supported by the ISIS Toolkit, a programming system that is distributed to over 300 academic and industrial sites. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project, is about distributed control in a soft real time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are presented. This approach to distributed computing, a philosophy that is believed to significantly distinguish the work from that of others in the field, is explained.
Using volcanic tremor for eruption forecasting at White Island volcano (Whakaari), New Zealand
NASA Astrophysics Data System (ADS)
Chardot, Lauriane; Jolly, Arthur D.; Kennedy, Ben M.; Fournier, Nicolas; Sherburn, Steven
2015-09-01
Eruption forecasting is a challenging task because of the inherent complexity of volcanic systems. Despite remarkable efforts to develop complex models in order to explain volcanic processes prior to eruptions, the material Failure Forecast Method (FFM) is one of the very few techniques that can provide a forecast time for an eruption. However, the method requires testing and automation before being used as a real-time eruption forecasting tool at a volcano. We developed an automatic algorithm to issue forecasts from volcanic tremor increase episodes recorded by Real-time Seismic Amplitude Measurement (RSAM) at one station and optimised this algorithm for the period August 2011-January 2014 which comprises the recent unrest period at White Island volcano (Whakaari), New Zealand. A detailed residual analysis was paramount to select the most appropriate model explaining the RSAM time evolutions. In a hindsight simulation, four out of the five small eruptions reported during this period occurred within a failure window forecast by our optimised algorithm and the probability of an eruption on a day within a failure window was 0.21, which is 37 times higher than the probability of having an eruption on any day during the same period (0.0057). Moreover, the forecasts were issued prior to the eruptions by a few hours which is important from an emergency management point of view. Whereas the RSAM time evolutions preceding these four eruptions have a similar goodness-of-fit with the FFM, their spectral characteristics are different. The duration-amplitude distributions of the precursory tremor episodes support the hypothesis that several processes were likely occurring prior to these eruptions. We propose that slow rock failure and fluid flow processes are plausible candidates for the tremor source of these episodes. This hindsight exercise can be useful for future real-time implementation of the FFM at White Island. A similar methodology could also be tested at other volcanoes even if only a limited network is available.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool. This simulation provides guidance in hardware simulation. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic or soft failures are modeled as a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients. The phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
Moisture-Induced TBC Spallation on Turbine Blade Samples
NASA Technical Reports Server (NTRS)
Smialek, James
2011-01-01
Delayed failure of TBCs is a widely observed laboratory phenomenon, although many of the early observations went unreported. The weekend effect or DeskTop Spallation (DTS) is characterized by initial survival of a TBC after accelerated laboratory thermal cycling, then failure by exposure to ambient humidity or water. Once initiated, failure can occur quite dramatically in less than a second. To this end, the water drop test and digital video recordings have become useful techniques in studies at NASA (Smialek, Zhu, Cuy), DECHMA (Rudolphi, Renusch, Schuetze), and CNRS Toulouse/SNECMA (Deneux, Cadoret, Hervier, Monceau). In the present study the results for a commercial turbine blade, with a standard EB-PVD 7YSZ TBC top coat and Pt-aluminide diffusion bond coat are reported. Cut sections were intermittently oxidized at 1100, 1150, and 1200 C and monitored by weight change and visual appearance. Failures were distributed widely over a 5-100 hr time range, depending on temperature. At some opportune times, failure was captured by video recording, documenting the appearance and speed of the moisture-induced spallation process. Failure interfaces exhibited alumina scale grains, decorated with Ta-rich oxide particles, and alumina inclusions as islands and streamers. The phenomenon is thus rooted in moisture-induced delayed spallation (MIDS) of the alumina scale formed on the bond coat. In that regard, many studies show the susceptibility of alumina scales to moisture, as long as high strain energy and a partially exposed interface exist. The latter conditions result from severe cyclic oxidation conditions, which produce a highly stressed and partially damaged scale. In one model, it has been proposed that moisture reacts with aluminum in the bond coat to release hydrogen atoms that embrittle the interface. A negative synergistic effect with interfacial sulfur is also invoked.
Moisture-Induced TBC Spallation on Turbine Blade Samples
NASA Technical Reports Server (NTRS)
Smialek, James L.
2011-01-01
Delayed failure of TBCs is a widely observed laboratory phenomenon, although many of the early observations went unreported. "The weekend effect" or "DeskTop Spallation" (DTS) is characterized by initial survival of a TBC after accelerated laboratory thermal cycling, then failure by exposure to ambient humidity or water. Once initiated, failure can occur quite dramatically in less than a second. To this end, the water drop test and digital video recordings have become useful techniques in studies at NASA (Smialek, Zhu, Cuy), DECHMA (Rudolphi, Renusch, Schuetze), and CNRS Toulouse/SNECMA (Deneux, Cadoret, Hervier, Monceau). In the present study the results for a commercial turbine blade, with a standard EB-PVD 7YSZ TBC top coat and Pt-aluminide diffusion bond monitored by weight change and visual appearance. Failures were distributed widely over a 5-100 hr time range, depending on temperature. At some opportune times, failure was captured by video recording, documenting the appearance and speed of the moisture-induced spallation process. Failure interfaces exhibited alumina scale grains, decorated with Ta-rich oxide particles, and alumina inclusions as islands and streamers. The phenomenon is thus rooted in moisture-induced delayed spallation (MIDS) of the alumina scale formed on the bond coat. In that regard, many studies show the susceptibility of alumina scales to moisture, as long as high strain energy and a partially exposed interface exist. The latter conditions result from severe cyclic oxidation conditions, which produce a highly stressed and partially damaged scale. In one model, it has been proposed that moisture reacts with aluminum in the bond coat to release hydrogen atoms that 'embrittle' the interface. A negative synergistic effect with interfacial sulfur is also invoked.
Modelling river bank retreat by combining fluvial erosion, seepage and mass failure
NASA Astrophysics Data System (ADS)
Dapporto, S.; Rinaldi, M.
2003-04-01
Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been subject to extensive researches, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs fairly faithfully the observed changes, and is used to: a) test the potentiality and discuss advantages and limitations of such type of methodology for modelling bank retreat; c) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided in a series of time steps. Modelling of the riverbank retreat includes for each step the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress to the critical entrainment value for the different materials along the bank profile. Lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations
NASA Technical Reports Server (NTRS)
Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor
2014-01-01
One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth orbiting laboratory has been in orbit since 1998. Some elements that have been launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments on ISS, it is essential for the valuable research on ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises of many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight. In a hypothetical bathtub shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during infant-mortality period, stays nearly constant during the service life and increases at the end when the design service life ends and wear-out phase begins. However, the component failure rates do not remain constant over the entire cycle life. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to its failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore, specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. An innovative approach is required to evaluate the entire system, without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute force test program would be a cost prohibitive and time consuming endeavor. ISS propulsion system components were designed and built decades ago. There are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, by way of cherry picking candidate components based on failure mode effects analysis, system level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When specific failure mechanism being considered and the underlying relationship of that mode to the stresses provided in the test can be correlated by supporting analysis, time and effort required for conducting life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, lead to specific failure mechanisms in several components used in the propulsion system. Using Arrhenius model, which is tied to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life test. Arrhenius model reflects the proportional relationship between time to failure of a component and the exponential of the inverse of absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses that allow direct correlation between the times to failure at a high test temperature to the temperatures to be expected in actual use. As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
Aero-engine is a complex mechanical electronic system, based on analysis of reliability of mechanical electronic system, Weibull distribution model has an irreplaceable role. Till now, only two-parameter Weibull distribution model and three-parameter Weibull distribution are widely used. Due to diversity of engine failure modes, there is a big error with single Weibull distribution model. By contrast, a variety of engine failure modes can be taken into account with mixed Weibull distribution model, so it is a good statistical analysis model. Except the concept of dynamic weight coefficient, in order to make reliability estimation result more accurately, three-parameter correlation coefficient optimization method is applied to enhance Weibull distribution model, thus precision of mixed distribution reliability model is improved greatly. All of these are advantageous to popularize Weibull distribution model in engineering applications.
NASA Astrophysics Data System (ADS)
Choi, W. J.; Yeh, E. C. C.; Tu, K. N.
2003-11-01
Electromigration of eutectic SnPb flip chip solder joints and their mean-time-to-failure (MTTF) have been studied in the temperature range of 100 to 140 °C with current densities of 1.9 to 2.75×104 A/cm2. In these joints, the under-bump-metallization (UBM) on the chip side is a multilayer thin film of Al/Ni(V)/Cu, and the metallic bond-pad on the substrate side is a very thick, electroless Ni layer covered with 30 nm of Au. When stressed at the higher current densities, the MTTF was found to decrease much faster than what is expected from the published Black's equation. The failure occurred by interfacial void propagation at the cathode side, and it is due to current crowding near the contact interface between the solder bump and the thin-film UBM. The current crowding is confirmed by a simulation of current distribution in the solder joint. Besides the interfacial void formation, the intermetallic compounds formed on the UBM as well as the Ni(V) film in the UBM have been found to dissolve completely into the solder bump during electromigration. Therefore, the electromigation failure is a combination of the interfacial void formation and the loss of UBM. Similar findings in eutectic SnAgCu flip chip solder joints have also been obtained and compared.
Integrated Hardware and Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
When an algorithm is distributed across multiple threads executing on many distinct processors, a loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When implementation is massively hardware distributed, then the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints where the current state of some or part of the execution is saved. Then in the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from that point. A serious problem arises when one distributes a problem across multiple threads and physical processors is that one increases the likelihood of the algorithm failing due to no fault of the scientist but as a result of hardware faults coupled with operating system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into monolithic structure that can be monitored in real-time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in atmospheres of other worlds. In particular, with respect to NASA s manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program for the software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
Distributed environmental control
NASA Technical Reports Server (NTRS)
Cleveland, Gary A.
1992-01-01
We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, they find the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
In vitro and in vivo testing of a totally implantable left ventricular assist system.
Jassawalla, J S; Daniel, M A; Chen, H; Lee, J; LaForge, D; Billich, J; Ramasamy, N; Miller, P J; Oyer, P E; Portner, P M
1988-01-01
The totally implantable Novacor LVAS is being tested under NIH auspices to demonstrate safety and efficacy before clinical trials. Twelve complete systems (submerged in saline at 37 degrees C) are being tested, with an NIH goal of demonstrating 80% reliability for 2 year operation with a 60% confidence level. The systems, which are continuously monitored, are diurnally cycled between two output levels by automatically varying preload and afterload. Currently, 14.3 years of failure-free operation have been accumulated, with a mean duration of 14 months. Using an exponential failure distribution model, the mean time to failure (MTTF) is greater than 8.8 years, corresponding to a demonstrated reliability (for a 2 year mission time) of 80% (80% confidence level). Recent ovine experiments with VAS subsystems include a 767 day volume compensator implant, a 279 day pump/drive unit implant and a 1,448 day BST implant. The last 12 chronic pump/drive unit experiments had a mean duration of 153 days (excluding early postoperative complications). This compares favorably with the NIH goals for complete systems (5 month mean duration). Complete system experiments are currently underway.
NASA Technical Reports Server (NTRS)
Patton, Jeff A.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Low-cost failure sensor design and development for water pipeline distribution systems.
Khan, K; Widdop, P D; Day, A J; Wood, A S; Mounce, S R; Machell, J
2002-01-01
This paper describes the design and development of a new sensor which is low cost to manufacture and install and is reliable in operation with sufficient accuracy, resolution and repeatability for use in newly developed systems for pipeline monitoring and leakage detection. To provide an appropriate signal, the concept of a "failure" sensor is introduced, in which the output is not necessarily proportional to the input, but is unmistakably affected when an unusual event occurs. The design of this failure sensor is based on the water opacity which can be indicative of an unusual event in a water distribution network. The laboratory work and field trials necessary to design and prove out this type of failure sensor are described here. It is concluded that a low-cost failure sensor of this type has good potential for use in a comprehensive water monitoring and management system based on Artificial Neural Networks (ANN).
Rapid quenching effects in PVC films
NASA Technical Reports Server (NTRS)
Lee, H. D.; Mandell, J. F.; Mcgarry, F. J.
1981-01-01
Using a specially constructed microbalance for hydrostatic weighing, density changes in PVC thin films (with no additives, 30-100 micrometers thick), due to rapid quenching (approximately 300 C/sec) through the glass transition temperature, have been observed. The more severe the quench, the greater is the free volume content. Isobaric volume recovery of PVC has also been studied by volume dilatometry. Both show aging of relaxing molecular rearrangements takes place as a linear function of logarithmic aging time at room temperature. Distribution of retardation times and Primak's distributed activation energy spectra have been applied to the volume recovery data. The concomitant changes in mechanical properties of PVC after quenching have been monitored by tensile creep and stress-strain to failure. All reflect the presence of excess free volume content, due to rapid quenching.
Health Vulnerability Index and newborn hearing screening: urban inequality.
Januário, Gabriela Cintra; Alves, Claudia Regina Lindgren; Lemos, Stela Maris Aguiar; Almeida, Maria Cristina de Mattos; Cruz, Ramon Costa; Friche, Amélia Augusta de Lima
To analyze the intra-urban differentials related to the outcome of the Newborn Hearing Screening (NHS) of children living in Belo Horizonte tested in a reference service using the Health Vulnerability Index (HVI). cross-sectional study with children living in Belo Horizonte evaluated by a Newborn Hearing Screening Reference Service (NHSRS) between 2010 and 2011. The HVI of the census tract of each child was obtained by the georeferencing of their respective addresses. Multivariate analysis was conducted using the decision tree technique, considering a statistical model for each response. A thematic map of points representing the geographic distribution of the children evaluated by the NHS program was also developed. The NHS failure rate for children living in areas with very high HVI, or without HVI data, was 1.5 times higher than that for children living in other census tracts. For children living in areas of low, medium, and high HVI, who underwent NHS after 30 days of life, the NHS failure rate was 2.1 times higher in children that presented Risk Indicator for Hearing Loss (RIHL) (17.2%) than in those who did not (8.1%). Uneven distribution was observed between areas for children that underwent the NHS and those who failed it. Significant intra-urban differentials were found in Belo Horizonte, indicating correlation between health vulnerability and NHS outcomes.
ALLEN, LARRY A.; FELKER, G. MICHAEL; MEHRA, MANDEEP R.; CHIONG, JUN R.; DUNLAP, STEPHANIE H.; GHALI, JALAL K.; LENIHAN, DANIEL J.; OREN, RON M.; WAGONER, LYNNE E.; SCHWARTZ, TODD A.; ADAMS, KIRKWOOD F.
2014-01-01
Background: Adverse outcomes have recently been linked to elevated red cell distribution width (RDW) in heart failure. Our study sought to validate the prognostic value of RDW in heart failure and to explore the potential mechanisms underlying this association. Methods and Results: Data from the Study of Anemia in a Heart Failure Population (STAMINA-HFP) registry, a prospective, multicenter cohort of ambulatory patients with heart failure supported multivariable modeling to assess relationships between RDW and outcomes. The association between RDW and iron metabolism, inflammation, and neurohormonal activation was studied in a separate cohort of heart failure patients from the United Investigators to Evaluate Heart Failure (UNITE-HF) Biomarker registry. RDW was independently predictive of outcome (for each 1% increase in RDW, hazard ratio for mortality 1.06, 95% CI 1.01-1.12; hazard ratio for hospitalization or mortality 1.06; 95% CI 1.02-1.10) after adjustment for other covariates. Increasing RDW correlated with decreasing hemoglobin, increasing interleukin-6, and impaired iron mobilization. Conclusions: Our results confirm previous observations that RDW is a strong, independent predictor of adverse outcome in chronic heart failure and suggest elevated RDW may indicate inflammatory stress and impaired iron mobilization. These findings encourage further research into the relationship between heart failure and the hematologic system. PMID:20206898
Allen, Larry A; Felker, G Michael; Mehra, Mandeep R; Chiong, Jun R; Dunlap, Stephanie H; Ghali, Jalal K; Lenihan, Daniel J; Oren, Ron M; Wagoner, Lynne E; Schwartz, Todd A; Adams, Kirkwood F
2010-03-01
Adverse outcomes have recently been linked to elevated red cell distribution width (RDW) in heart failure. Our study sought to validate the prognostic value of RDW in heart failure and to explore the potential mechanisms underlying this association. Data from the Study of Anemia in a Heart Failure Population (STAMINA-HFP) registry, a prospective, multicenter cohort of ambulatory patients with heart failure supported multivariable modeling to assess relationships between RDW and outcomes. The association between RDW and iron metabolism, inflammation, and neurohormonal activation was studied in a separate cohort of heart failure patients from the United Investigators to Evaluate Heart Failure (UNITE-HF) Biomarker registry. RDW was independently predictive of outcome (for each 1% increase in RDW, hazard ratio for mortality 1.06, 95% CI 1.01-1.12; hazard ratio for hospitalization or mortality 1.06; 95% CI 1.02-1.10) after adjustment for other covariates. Increasing RDW correlated with decreasing hemoglobin, increasing interleukin-6, and impaired iron mobilization. Our results confirm previous observations that RDW is a strong, independent predictor of adverse outcome in chronic heart failure and suggest elevated RDW may indicate inflammatory stress and impaired iron mobilization. These findings encourage further research into the relationship between heart failure and the hematologic system. Copyright (c) 2010 Elsevier Inc. All rights reserved.
7 CFR 281.7 - Indian tribal organization failure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 4 2014-01-01 2014-01-01 false Indian tribal organization failure. 281.7 Section 281..., DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM ADMINISTRATION OF SNAP ON INDIAN RESERVATIONS § 281.7 Indian tribal organization failure. When Performance Reporting System reviews indicate...
Zand, Pouria; Dilo, Arta; Havinga, Paul
2013-06-27
Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead.
Zand, Pouria; Dilo, Arta; Havinga, Paul
2013-01-01
Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. According to our knowledge, this is the first distributed management scheme based on IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead. PMID:23807687
A stochastic model for soft tissue failure using acoustic emission data.
Sánchez-Molina, D; Martínez-González, E; Velázquez-Ameijide, J; Llumà, J; Rebollo Soria, M C; Arregui-Dalmases, C
2015-11-01
The strength of soft tissues is due mainly to collagen fibers. In most collagenous tissues, the arrangement of the fibers is random, but has preferred directions. The random arrangement makes it difficult to make deterministic predictions about the starting process of fiber breaking under tension. When subjected to tensile stress the fibers are progressively straighten out and then start to be stretched. At the beginning of fiber breaking, some of the fibers reach their maximum tensile strength and break down while some others remain unstressed (this latter fibers will assume then bigger stress until they eventually arrive to their failure point). In this study, a sample of human esophagi was subjected to a tensile breaking of fibers, up to the complete failure of the specimen. An experimental setup using Acoustic Emission to detect the elastic energy released is used during the test to detect the location of the emissions and the number of micro-failures per time unit. The data were statistically analyzed in order to be compared to a stochastic model which relates the level of stress in the tissue and the probability of breaking given the number of previously broken fibers (i.e. the deterioration in the tissue). The probability of a fiber breaking as the stretch increases in the tissue can be represented by a non-homogeneous Markov process which is the basis of the stochastic model proposed. This paper shows that a two-parameter model can account for the fiber breaking and the expected distribution for ultimate stress is a Fréchet distribution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scintillation Breakdowns in Chip Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2008-01-01
Scintillations in solid tantalum capacitors are momentarily local breakdowns terminated by a self-healing or conversion to a high-resistive state of the manganese oxide cathode. This conversion effectively caps the defective area of the tantalum pentoxide dielectric and prevents short-circuit failures. Typically, this type of breakdown has no immediate catastrophic consequences and is often considered as nuisance rather than a failure. Scintillation breakdowns likely do not affect failures of parts under surge current conditions, and so-called "proofing" of tantalum chip capacitors, which is a controllable exposure of the part after soldering to voltages slightly higher than the operating voltage to verify that possible scintillations are self-healed, has been shown to improve the quality of the parts. However, no in-depth studies of the effect of scintillations on reliability of tantalum capacitors have been performed so far. KEMET is using scintillation breakdown testing as a tool for assessing process improvements and to compare quality of different manufacturing lots. Nevertheless, the relationship between failures and scintillation breakdowns is not clear, and this test is not considered as suitable for lot acceptance testing. In this work, scintillation breakdowns in different military-graded and commercial tantalum capacitors were characterized and related to the rated voltages and to life test failures. A model for assessment of times to failure, based on distributions of breakdown voltages, and accelerating factors of life testing are discussed.
Hypnosis closed loop TCI systems in outpatient surgery.
Ramos-Luengo, A; Asensio-Merino, F
Determine the influence of general anaesthesia with closed-loop systems in the results of outpatient varicose vein surgery. Retrospective observational study including data from 270 outpatients between 2014 and 2015. The patients were divided into 2 groups according to the type of general anaesthesia used. The CL Group included patients who received propofol in closed-loop guided by BIS and remifentanil using TCI, and the C Group received non-closed-loop anaesthesia. Age, sex, surgical time, discharge time and failure of outpatient surgery were recorded. Quantitative data were checked for normal distribution by the method of Kolmogorov-Smirnov-Lilliefors. Differences between groups were analysed by a Student-t-test or Mann-Whitney-Wilcoxon test, depending on their distribution. Categorical data were analysed by a Chi-squared test. We used Kaplan-Meier estimator and the effect size (calculated by Cohen's d) to study the discharge time. Statistical analysis was performed using R 3.2.3 binary for Mac OS X 10.9. There were no significant differences in age, sex and surgical time and failure of outpatient surgery. Discharge time was different in both groups: 200 (100) vs. 180 (82.5) minutes, C Group and CL Group, respectively (data are median and interquartile rank); P=.005. The use of closed-loop devices for the hypnotic component of anaesthesia hastens discharge time. However, for this effect to be clinically significant, some improvements still need to be made in our outpatient surgery units. Copyright © 2016 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture peridocities, slip distributions, time of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that fature rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess to time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by Working Group on California Earthquake Probabilities.
Cycles till failure of silver-zinc cells with competing failure modes - Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
The data analysis of cycles to failure of silver-zinc electrochemical cells with competing failure modes is presented. The test ran 129 cells through charge-discharge cycles until failure; preliminary data analysis consisted of response surface estimate of life. Batteries fail through low voltage condition and an internal shorting condition; a competing failure modes analysis was made using maximum likelihood estimation for the extreme value life distribution. Extensive residual plotting and probability plotting were used to verify data quality and selection of model.
NASA Astrophysics Data System (ADS)
Sexton, E.; Thomas, A.; Delbridge, B. G.
2017-12-01
Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities on aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of these aftershock sequences. In strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers in strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults which have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El-Mayor Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence Model (ETAS). In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \\sum_{i:t_i
Hickin, Matthew Parker; Shariff, Jaffer A; Jennette, Philip J; Finkelstein, Joseph; Papapanou, Panos N
2017-10-01
The aim of this study was to use electronic health care records (EHRs) to examine retrospectively the incidence of and attributes associated with dental implant failures necessitating implant removal in a large cohort of patients treated in the student clinics of a U.S. dental school over three and a half years. EHRs were searched for all patients who received dental implants between July 1, 2011, and December 31, 2014. Characteristics of patients and implants that were actively removed due to irrevocable failure of any etiology ("failure cohort") during this period were compared to those of all other patients who received dental implants during the same time frame ("reference cohort"). Differences in the frequency distribution of various characteristics between the failure and reference cohorts were compared. Of a total 6,129 implants placed in 2,127 patients during the study period, 179 implants (2.9%) in 120 patients (5.6%) were removed. In the multivariate analysis, presence of a removable (OR=2.86) or fixed temporary prosthesis (OR=3.71) was statistically significantly associated with increased risk for implant failure. In contrast, antibiotic coverage (pre- and post-surgery OR=0.16; post-surgery only OR=0.38) and implants of certain manufacturers were associated with lower risk of implant failure. In this sizeable cohort of patients receiving care in dental student clinics, the review of EHRs facilitated identification of multiple variables associated with implant failure resulting in removal; however, these findings do not suggest causative relationships. The adopted analytical approach can enhance quality assurance measures and may contribute to the identification of true risk factors for dental implant failure.
Campos, Roberto E; Santos Filho, Paulo César F; de O Júnior, Osmir Batista; Ambrosano, Gláucia M B; Pereira, Cristina Alves
2018-01-01
Bond strength (BS) values from in vitro studies are useful when dentists are selecting an adhesive system, but there is no ideal measuring method. The purpose of this in vitro study was to investigate the influence of the evaluation method in the BS between dentin and composite resin. Molars with exposed superficial dentin (N=240) were divided into 3 groups according to the test: microtensile (μTBS), microshear (μSBS), and micropush-out (μPBS). Each one was subdivided into 4 groups according to the adhesive system: total etch, 3- and 2-step; and self-etch, 2- and 1-step). For the μPBS test, a conical cavity was prepared and restored with composite resin. An occlusal slice (1.5 mm in thickness) was obtained from each tooth. For the μSBS test, a composite resin cylinder (1 mm in diameter) was built on the dentin surface of each tooth. For the μTBS test, a 2-increment composite resin cylinder was built on the dentin surface, and beams with a sectional area of 0.5 mm 2 were obtained. Each subgroup was divided into 2 (n=10) as the specimens were tested after 7 days and 1 year of water storage. The specimens were submitted to load, and the failure recorded in units of megapascals. Original BS values from the μTBS and μSBS tests were normalized for the area from μPBS specimens. Original and normalized results were submitted to a 3-way ANOVA (α=.05). The correlation among mechanical results, stress distribution, and failure pattern was investigated. Significant differences (P<.05) were found among the adhesive systems and methods within both the original and normalized data but not between the storage times (P>.05). Within the 7 days of storage, the original BS values from μTBS were significantly higher (P<.001) than those from μPBS and μSBS. After 1 year, μSBS presented significantly lower results (P<.001). However, after the normalization for area, the BS values of the μTBS and μPBS tests were similar, and both were higher (P<.001) than that of μSBS in both storage times. In the μSBS and μTBS specimens, cohesive and adhesive failures were observed, whereas μPBS presented 100% of adhesive failures. The failure modes were compatible with the stress distribution. The storage time did not affect the results, but differences were found among the adhesives and methods. For comparisons of bond strength from tests with different bonding areas, the normalization for area seemed essential. The microshear bond test should not be used for bond strength evaluation, and the microtensile test needs improvement to enable reliable results regarding stress concentration and failure mode. The micropush-out test may be considered more reliable than the microtensile in the bond strength investigation, as demonstrated by the uniform stress concentration and adhesive failure pattern. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Earthquake Prediction in Large-scale Faulting Experiments
NASA Astrophysics Data System (ADS)
Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.
2004-12-01
We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogenous frictional properties. In this apparatus earthquakes follow a period of controlled, constant rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event to event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998) and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibel distribution but long term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can be detected typically after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and on distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near fault instruments and pattern recognition algorithms such as principle component analysis (Rundle et al., 2000).
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Duman, M. S.; Kaplan, E.; Cuvalcı, O.
2018-01-01
The present paper is based on experimental studies and numerical simulations on the surface fatigue failure of the PTFE-bronze layered journal bearings under real-time loading. ‘Permaglide Plain Bearings P10’ type journal bearings were experimentally tested under different real time dynamic loadings by using real time journal bearing test system in our laboratory. The journal bearing consists of a PTFE-bronze layer approximately 0.32 mm thick on the steel support layer with 2.18 mm thick. Two different approaches have been considered with in experiments: (i) under real- time constant loading with varying bearing widths, (ii) under different real-time loadings at constant bearing widths. Fatigue regions, micro-crack dispersion and stress distributions occurred at the journal bearing were experimentally and theoretically investigated. The relation between fatigue region and pressure distributions were investigated by determining the circumferential pressure distribution under real-time dynamic loadings for the position of every 10° crank angles. In the theoretical part; stress and deformation distributions at the surface of the journal bearing analysed by using finite element methods to determine the relationship between stress and fatigue behaviour. As a result of this study, the maximum oil pressure and fatigue cracks were observed in the most heavily loaded regions of the bearing surface. Experimental results show that PTFE-Bronze layered journal bearings fatigue behaviour is better than the bearings include white metal alloy.
Estimating a Service-Life Distribution Based on Production Counts and a Failure Database
Ryan, Kenneth J.; Hamada, Michael Scott; Vardeman, Stephen B.
2017-04-01
A manufacturer wanted to compare the service-life distributions of two similar products. These concern product lifetimes after installation (not manufacture). For each product, there were available production counts and an imperfect database providing information on failing units. In the real case, these units were expensive repairable units warrantied against repairs. Failure (of interest here) was relatively rare and driven by a different mode/mechanism than ordinary repair events (not of interest here). Approach: Data models for the service life based on a standard parametric lifetime distribution and a related limited failure population were developed. These models were used to develop expressionsmore » for the likelihood of the available data that properly accounts for information missing in the failure database. Results: A Bayesian approach was employed to obtain estimates of model parameters (with associated uncertainty) in order to investigate characteristics of the service-life distribution. Custom software was developed and is included as Supplemental Material to this case study. One part of a responsible approach to the original case was a simulation experiment used to validate the correctness of the software and the behavior of the statistical methodology before using its results in the application, and an example of such an experiment is included here. Because of confidentiality issues that prevent use of the original data, simulated data with characteristics like the manufacturer’s proprietary data are used to illustrate some aspects of our real analyses. Lastly, we also note that, although this case focuses on rare and complete product failure, the statistical methodology provided is directly applicable to more standard warranty data problems involving typically much larger warranty databases where entries are warranty claims (often for repairs) rather than reports of complete failures.« less
Estimating a Service-Life Distribution Based on Production Counts and a Failure Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Kenneth J.; Hamada, Michael Scott; Vardeman, Stephen B.
A manufacturer wanted to compare the service-life distributions of two similar products. These concern product lifetimes after installation (not manufacture). For each product, there were available production counts and an imperfect database providing information on failing units. In the real case, these units were expensive repairable units warrantied against repairs. Failure (of interest here) was relatively rare and driven by a different mode/mechanism than ordinary repair events (not of interest here). Approach: Data models for the service life based on a standard parametric lifetime distribution and a related limited failure population were developed. These models were used to develop expressionsmore » for the likelihood of the available data that properly accounts for information missing in the failure database. Results: A Bayesian approach was employed to obtain estimates of model parameters (with associated uncertainty) in order to investigate characteristics of the service-life distribution. Custom software was developed and is included as Supplemental Material to this case study. One part of a responsible approach to the original case was a simulation experiment used to validate the correctness of the software and the behavior of the statistical methodology before using its results in the application, and an example of such an experiment is included here. Because of confidentiality issues that prevent use of the original data, simulated data with characteristics like the manufacturer’s proprietary data are used to illustrate some aspects of our real analyses. Lastly, we also note that, although this case focuses on rare and complete product failure, the statistical methodology provided is directly applicable to more standard warranty data problems involving typically much larger warranty databases where entries are warranty claims (often for repairs) rather than reports of complete failures.« less
Borba, Márcia; Duan, Yuanyuan; Griggs, Jason A; Cesar, Paulo F; Della Bona, Álvaro
2015-04-01
The effect of the ceramic infrastructure (IS) on the failure behavior and stress distribution of fixed partial dentures (FPDs) was evaluated. Twenty FPDs with a connector cross-section of 16 mm(2) were produced for each IS and veneered with porcelain: (YZ) Vita In-Ceram YZ/Vita VM9 porcelain; (IZ) Vita In-Ceram Zirconia/Vita VM7 porcelain; (AL) Vita In-Ceram AL/Vita VM7 porcelain. Two experimental conditions were evaluated (n = 10). For control specimens, load was applied in the center of the pontic at 0.5 mm/min until failure, using a universal testing machine, in 37°C deionized water. For mechanical cycling (MC) specimens, FPDs were subjected to MC (2 Hz, 140 N, 10(6) cycles) and subsequently tested as described for the control group. For YZ, an extra group of 10 FPDs were built with a connector cross-section of 9 mm(2) and tested until failure. Fractography and FEA were performed. Data were analyzed by ANOVA and Tukey's test (α = 0.05). YZ16 showed the greatest fracture load mean value, followed by YZ16-MC. Specimens from groups YZ9, IZ16, IZ16-MC, AL16 and AL16-MC showed no significant difference for the fracture load. The failure behavior and stress distribution of FPDs was influenced by the type of IS. AL and IZ FPDs showed similar fracture load values but different failure modes and stress distribution. YZ showed the best mechanical behavior and may be considered the material of choice to produce posterior FPDs as it was possible to obtain a good mechanical performance even with a smaller connector dimension (9 mm(2)). Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Sisson, R. D., Jr.; Sone, Ichiro; Biederman, R. R.
1985-01-01
Partially Stabilized Zirconia (PSZ) may become widely used for Thermal Barrier Coatings (TBC). Failure of these coatings can occur due to thermal fatigue in oxidizing atmospheres. The failure is due to the strains that develop due to thermal gradients, differences in thermal expansion coefficients, and oxidation of the bond coating. The role of microstructure and the cubic, tetragonal, and monoclinic phase distribution in the strain development and subsequent failure will be discussed. An X-ray diffraction technique for accurate determination of the fraction of each phase in PSZ will be applied to understanding the phase transformations and strain development. These results will be discussed in terms of developing a model for life prediction in PSZ coatings during thermal cycling.
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpected high failure rates which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time and finding and eliminating the failure causes.Reducing the failure rate significantly requires a heroic reliability improvement effort.
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
NASA Astrophysics Data System (ADS)
Spinner, Neil S.; Hinnant, Katherine M.; Mazurick, Ryan; Brandon, Andrew; Rose-Pehrsson, Susan L.; Tuttle, Steven G.
2016-04-01
Cylindrical 18650-type surrogate cells were designed and fabricated to mimic the thermophysical properties and behavior of active lithium-ion batteries. An internal jelly roll geometry consisting of alternating stainless steel and mica layers was created, and numerous techniques were used to estimate thermophysical properties. Surrogate cell density was measured to be 1593 ± 30 kg/m3, and heat capacity was found to be 727 ± 18 J/kg-K. Axial thermal conductivity was determined to be 5.1 ± 0.6 W/m-K, which was over an order of magnitude higher than radial thermal conductivity due to jelly roll anisotropy. Radial heating experiments were combined with numerical and analytical solutions to the time-dependent, radial heat conduction equation, and from the numerical method an additional estimate for heat capacity of 805 ± 23 J/kg-K was found. Using both heat capacities and analysis techniques, values for radial thermal conductivity were between 0.120 and 0.197 W/m-K. Under normal operating conditions, relatively low radial temperature distributions were observed; however, during extreme battery failure with a hexagonal cell package, instantaneous radial temperature distributions as high as 43-71 °C were seen. For a vertical cell package, even during adjacent cell failure, similar homogeneity in internal temperatures were observed, demonstrating thermal anisotropy.
How and why of orthodontic bond failures: An in vivo study
Vijayakumar, R. K.; Jagadeep, Raju; Ahamed, Fayyaz; Kanna, Aprose; Suresh, K.
2014-01-01
Introduction: The bonding of orthodontic brackets and their failure rates by both direct and in-direct procedures are well-documented in orthodontic literature. Over the years different adhesive materials and various indirect bonding transfer procedures have been compared and evaluated for bond failure rates. The aim of our study is to highlight the use of a simple, inexpensive and ease of manipulation of a single thermo-plastic transfer tray and the use the of a single light cure adhesive to evaluate the bond failure rates in clinical situations. Materials and Methods: A total of 30 patients were randomly divided into two groups (Group A and Group B). A split-mouth study design was used, for, both the groups so that they were distributed equally with-out bias. After initial prophylaxis, both the procedures were done as per manufactures instructions. All patients were initially motivated and reviewed for bond failures rates for 6 months. Results: Bond failure rates were assessed for over-all direct and indirect procedures, anterior and posterior arches, and for individual tooth. Z-test was used for statistically analyzing, the normal distribution of the sample in a spilt mouth study. The results of the two groups were compared and P value was calculated using Z-proportion test to assess the significance of the bond failure. Conclusion: Over-all bond failure was more for direct bonding. Anterior bracket failure was more in-direct bonding than indirect procedure, which showed more posterior bracket failures. In individual tooth bond failure, mandibular incisor, and premolar brackets showed more failure, followed by maxillary premolars and canines. PMID:25210392
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into considerations simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability distribution function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.
Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J
2016-05-26
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.
Byzantine-fault tolerant self-stabilizing protocol for distributed clock synchronization systems
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R. (Inventor)
2010-01-01
A rapid Byzantine self-stabilizing clock synchronization protocol that self-stabilizes from any state, tolerates bursts of transient failures, and deterministically converges within a linear convergence time with respect to the self-stabilization period. Upon self-stabilization, all good clocks proceed synchronously. The Byzantine self-stabilizing clock synchronization protocol does not rely on any assumptions about the initial state of the clocks. Furthermore, there is neither a central clock nor an externally generated pulse system. The protocol converges deterministically, is scalable, and self-stabilizes in a short amount of time. The convergence time is linear with respect to the self-stabilization period.
Tang, Haibin; Chen, Zhangxing; Zhou, Guowei; ...
2018-02-06
To develop further understanding towards the role of a heterogeneous microstructure on tensile crack initiation and failure behavior in chopped carbon fiber chip-reinforced composites, uni-axial tensile tests are performed on coupons cut from compression molded plaque with varying directions. Our experimental results indicate that failure initiation is relevant to the strain localization, and a new criterion with the nominal modulus to predict the failure location is proposed based on the strain analysis. Furthermore, optical microscopic images show that the nominal modulus is determined by the chip orientation distribution. At the area with low nominal modulus, it is found that chipsmore » are mostly aligning along directions transverse to loading direction and/or less concentrated, while at the area with high nominal modulus, more chips are aligning to tensile direction. On the basis of failure mechanism analysis, it is concluded that transversely-oriented chips or resin-rich regions are easier for damage initiation, while longitudinally-oriented chips postpone the fracture. Good agreement is found among failure mechanism, strain localization and chip orientation distribution.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Haibin; Chen, Zhangxing; Zhou, Guowei
To develop further understanding towards the role of a heterogeneous microstructure on tensile crack initiation and failure behavior in chopped carbon fiber chip-reinforced composites, uni-axial tensile tests are performed on coupons cut from compression molded plaque with varying directions. Our experimental results indicate that failure initiation is relevant to the strain localization, and a new criterion with the nominal modulus to predict the failure location is proposed based on the strain analysis. Furthermore, optical microscopic images show that the nominal modulus is determined by the chip orientation distribution. At the area with low nominal modulus, it is found that chipsmore » are mostly aligning along directions transverse to loading direction and/or less concentrated, while at the area with high nominal modulus, more chips are aligning to tensile direction. On the basis of failure mechanism analysis, it is concluded that transversely-oriented chips or resin-rich regions are easier for damage initiation, while longitudinally-oriented chips postpone the fracture. Good agreement is found among failure mechanism, strain localization and chip orientation distribution.« less
NASA Astrophysics Data System (ADS)
Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.
2017-11-01
The protection is an essential part for power device, especially for those in power grid, as the failure may cost great losses to the society. A study on the voltage and current abnormality in the power electronic devices in Distribution Electronic Power Transformer (D-EPT) during the failures on switching components is presented, as well as the operational principles for 10 kV rectifier, 10 kV/400 V DC-DC converter and 400 V inverter in D-EPT. Derived from the discussion on the effects of voltage and current distortion, the fault characteristics as well as a fault diagnosis method for D-EPT are introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.; Simpson, R.
1997-02-01
The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiment is performed using 1/5-scale models of a typical PWR pressure vessel. Experiments are performed for various internal pressure and imposed heat flux distributions with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF programmore » investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments are uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900K and the vessel typically failed at approximately 1000K. The size of failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurs in the region of peak temperature. A brief discussion of the effect of penetration is also presented.« less
Development and Application of a Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane
2007-01-01
This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
Research on cascading failure in multilayer network with different coupling preference
NASA Astrophysics Data System (ADS)
Zhang, Yong; Jin, Lei; Wang, Xiao Juan
This paper is aimed at constructing robust multilayer networks against cascading failure. Considering link protection strategies in reality, we design a cascading failure model based on load distribution and extend it to multilayer. We use the cascading failure model to deduce the scale of the largest connected component after cascading failure, from which we can find that the performance of four kinds of load distribution strategies associates with the load ratio of the current edge to its adjacent edge. Coupling preference is a typical characteristic in multilayer networks which corresponds to the network robustness. The coupling preference of multilayer networks is divided into two forms: the coupling preference in layers and the coupling preference between layers. To analyze the relationship between the coupling preference and the multilayer network robustness, we design a construction algorithm to generate multilayer networks with different coupling preferences. Simulation results show that the load distribution based on the node betweenness performs the best. When the coupling coefficient in layers is zero, the scale-free network is the most robust. In the random network, the assortative coupling in layers is more robust than the disassortative coupling. For the coupling preference between layers, the assortative coupling between layers is more robust than the disassortative coupling both in the scale free network and the random network.
Size distributions and failure initiation of submarine and subaerial landslides
ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.
2009-01-01
Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size and with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit well the area distribution of landslide sources along the Atlantic continental margin, if we assume that the slope has been subjected to earthquakes of magnitude ??? 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area affected by subaerial landslides is comparable to that calculated by slope stability analysis for submarine landslides. The area distribution of subaerial landslides from a single event may be determined by the size distribution of the morphology of the affected area, not by the initiation process. ?? 2009 Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter
A new method has been developed for assessment of the onset of degradation in solid state luminaires to classify failure mechanisms by using metrics beyond lumen degradation that are currently used for identification of failure. Luminous Flux output, Correlated Color Temperature Data on Philips LED Lamps has been gathered under 85°C/85%RH till lamp failure. Failure modes of the test population of the lamps have been studied to understand the failure mechanisms in 85°C/85%RH accelerated test. Results indicate that the dominant failure mechanism is the discoloration of the LED encapsulant inside the lamps which is the likely cause for the luminousmore » flux degradation and the color shift. The acquired data has been used in conjunction with Bayesian Probabilistic Models to identify luminaires with onset of degradation much prior to failure through identification of decision boundaries between lamps with accrued damage and lamps beyond the failure threshold in the feature space. In addition luminaires with different failure modes have been classified separately from healthy pristine luminaires. The α-λ plots have been used to evaluate the robustness of the proposed methodology. Results show that the predicted degradation for the lamps tracks the true degradation observed during 85°C/85%RH during accelerated life test fairly closely within the ±20% confidence bounds. Correlation of model prediction with experimental results indicates that the presented methodology allows the early identification of the onset of failure much prior to development of complete failure distributions and can be used for assessing the damage state of SSLs in fairly large deployments. It is expected that, the new prediction technique will allow the development of failure distributions without testing till L70 life for the manifestation of failure.« less
NASA Astrophysics Data System (ADS)
Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.
2017-09-01
Textile composites are composed of 3D complex architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. Examinations of the degradation have been carried out thanks to tomography. The present work addresses a numerical damage model dedicated to the simulation of the crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure of broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure all over the integration points in the meshes of woven composites. A stochastic approach was applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations avoiding the idealised symmetry due to the deterministic modelling. In particular, the stochastic simulations performed have shown several variations of the stress as well as strain at failure and the failure modes of the yarn.
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by direct measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
Sounds of Failure: Passive Acoustic Measurements of Excited Vibrational Modes
NASA Astrophysics Data System (ADS)
Brzinski, Theodore A.; Daniels, Karen E.
2018-05-01
Granular materials can fail through spontaneous events like earthquakes or brittle fracture. However, measurements and analytic models which forecast failure in this class of materials, while of both fundamental and practical interest, remain elusive. Materials including numerical packings of spheres, colloidal glasses, and granular materials have been known to develop an excess of low-frequency vibrational modes as the confining pressure is reduced. Here, we report experiments on sheared granular materials in which we monitor the evolving density of excited modes via passive monitoring of acoustic emissions. We observe a broadening of the distribution of excited modes coincident with both bulk and local plasticity, and evolution in the shape of the distribution before and after bulk failure. These results provide a new interpretation of the changing state of the material on its approach to stick-slip failure.
Sounds of Failure: Passive Acoustic Measurements of Excited Vibrational Modes.
Brzinski, Theodore A; Daniels, Karen E
2018-05-25
Granular materials can fail through spontaneous events like earthquakes or brittle fracture. However, measurements and analytic models which forecast failure in this class of materials, while of both fundamental and practical interest, remain elusive. Materials including numerical packings of spheres, colloidal glasses, and granular materials have been known to develop an excess of low-frequency vibrational modes as the confining pressure is reduced. Here, we report experiments on sheared granular materials in which we monitor the evolving density of excited modes via passive monitoring of acoustic emissions. We observe a broadening of the distribution of excited modes coincident with both bulk and local plasticity, and evolution in the shape of the distribution before and after bulk failure. These results provide a new interpretation of the changing state of the material on its approach to stick-slip failure.
Line-edge roughness performance targets for EUV lithography
NASA Astrophysics Data System (ADS)
Brunner, Timothy A.; Chen, Xuemei; Gabor, Allen; Higgins, Craig; Sun, Lei; Mack, Chris A.
2017-03-01
Our paper will use stochastic simulations to explore how EUV pattern roughness can cause device failure through rare events, so-called "black swans". We examine the impact of stochastic noise on the yield of simple wiring patterns with 36nm pitch, corresponding to 7nm node logic, using a local Critical Dimension (CD)-based fail criteria Contact hole failures are examined in a similar way. For our nominal EUV process, local CD uniformity variation and local Pattern Placement Error variation was observed, but no pattern failures were seen in the modest (few thousand) number of features simulated. We degraded the image quality by incorporating Moving Standard Deviation (MSD) blurring to degrade the Image Log-Slope (ILS), and were able to find conditions where pattern failures were observed. We determined the Line Width Roughness (LWR) value as a function of the ILS. By use of an artificial "step function" image degraded by various MSD blur, we were able to extend the LWR vs ILS curve into regimes that might be available for future EUV imagery. As we decreased the image quality, we observed LWR grow and also began to see pattern failures. For high image quality, we saw CD distributions that were symmetrical and close to Gaussian in shape. Lower image quality caused CD distributions that were asymmetric, with "fat tails" on the low CD side (under-exposed) which were associated with pattern failures. Similar non-Gaussian CD distributions were associated with image conditions that caused missing contact holes, i.e. CD=0.
NASA Technical Reports Server (NTRS)
Shuart, M. J.; Williams, J. G.
1984-01-01
The response and failure of a + or - 45s class laminate was studied by transparent fiberglass epoxy composite birefringent material. The birefringency property allows the laminate stress distribution to be observed during the test and also after the test if permanent residual stresses occur. The location of initial laminate failure and of the subsequent failure propagation are observed through its transparency characteristics. Experimental results are presented.
NASA Astrophysics Data System (ADS)
Suhir, E.
2014-05-01
The well known and widely used experimental reliability "passport" of a mass manufactured electronic or a photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. When time progresses, the first process results in a decreasing failure rate, while the second process associated with the material aging and degradation leads to an increased failure rate. An attempt has been made in this analysis to assess the level of the reliability physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent the burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has been also assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the carried out numerical example, the Raleigh distribution for the statistical failure rate was used, for the sake of a relatively simple illustration. The developed methodology can be used in reliability physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. The future work should include investigations on how powerful and flexible methods and approaches of the statistical mechanics can be effectively employed, in addition to reliability physics techniques, to model the operational reliability of electronic and photonic products.
Modeling structure and resilience of the dark network.
De Domenico, Manlio; Arenas, Alex
2017-02-01
While the statistical and resilience properties of the Internet are no longer changing significantly across time, the Darknet, a network devoted to keep anonymous its traffic, still experiences rapid changes to improve the security of its users. Here we study the structure of the Darknet and find that its topology is rather peculiar, being characterized by a nonhomogeneous distribution of connections, typical of scale-free networks; very short path lengths and high clustering, typical of small-world networks; and lack of a core of highly connected nodes. We propose a model to reproduce such features, demonstrating that the mechanisms used to improve cybersecurity are responsible for the observed topology. Unexpectedly, we reveal that its peculiar structure makes the Darknet much more resilient than the Internet (used as a benchmark for comparison at a descriptive level) to random failures, targeted attacks, and cascade failures, as a result of adaptive changes in response to the attempts of dismantling the network across time.
Clinical and epidemiological aspects of cornea transplant patients of a reference hospital 1
Cruz, Giovanna Karinny Pereira; de Azevedo, Isabelle Campos; Carvalho, Diana Paula de Souza Rego Pinto; Vitor, Allyne Fortes; Santos, Viviane Euzébia Pereira; Ferreira, Marcos Antonio
2017-01-01
ABSTRACT Objective: clinically characterizing cornea transplant patients and their distribution according to indicated and post-operative conditions of cornea transplantation, as well as estimating the average waiting time. Method: a cross-sectional, descriptive and analytical study performed for all cornea transplants performed at a reference service (n=258). Data were analyzed using Statistical Package for the Social Sciences, version 20.0. Results: the main indicator for cornea transplant was keratoconus. The mean waiting time for the transplant was approximately 5 months and 3 weeks for elective transplants and 9 days for urgent cases. An association between the type of corneal disorder with gender, age, previous surgery, eye classification, glaucoma and anterior graft failure were found. Conclusion: keratoconus was the main indicator for cornea transplant. Factors such as age, previous corneal graft failure (retransplantation), glaucoma, cases of surgeries prior to cornea transplant (especially cataract surgery) may be related to the onset corneal endothelium disorders. PMID:28614429
Modeling structure and resilience of the dark network
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Arenas, Alex
2017-02-01
While the statistical and resilience properties of the Internet are no longer changing significantly across time, the Darknet, a network devoted to keep anonymous its traffic, still experiences rapid changes to improve the security of its users. Here we study the structure of the Darknet and find that its topology is rather peculiar, being characterized by a nonhomogeneous distribution of connections, typical of scale-free networks; very short path lengths and high clustering, typical of small-world networks; and lack of a core of highly connected nodes. We propose a model to reproduce such features, demonstrating that the mechanisms used to improve cybersecurity are responsible for the observed topology. Unexpectedly, we reveal that its peculiar structure makes the Darknet much more resilient than the Internet (used as a benchmark for comparison at a descriptive level) to random failures, targeted attacks, and cascade failures, as a result of adaptive changes in response to the attempts of dismantling the network across time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodin, N. Patrik, E-mail: nils.patrik.brodin@rh.dk; Niels Bohr Institute, University of Copenhagen, Copenhagen; Vogelius, Ivan R.
2013-10-01
Purpose: As pediatric medulloblastoma (MB) is a relatively rare disease, it is important to extract the maximum information from trials and cohort studies. Here, a framework was developed for modeling tumor control with multiple modes of failure and time-to-progression for standard-risk MB, using published pattern of failure data. Methods and Materials: Outcome data for standard-risk MB published after 1990 with pattern of relapse information were used to fit a tumor control dose-response model addressing failures in both the high-dose boost volume and the elective craniospinal volume. Estimates of 5-year event-free survival from 2 large randomized MB trials were used tomore » model the time-to-progression distribution. Uncertainty in freedom from progression (FFP) was estimated by Monte Carlo sampling over the statistical uncertainty in input data. Results: The estimated 5-year FFP (95% confidence intervals [CI]) for craniospinal doses of 15, 18, 24, and 36 Gy while maintaining 54 Gy to the posterior fossa was 77% (95% CI, 70%-81%), 78% (95% CI, 73%-81%), 79% (95% CI, 76%-82%), and 80% (95% CI, 77%-84%) respectively. The uncertainty in FFP was considerably larger for craniospinal doses below 18 Gy, reflecting the lack of data in the lower dose range. Conclusions: Estimates of tumor control and time-to-progression for standard-risk MB provides a data-driven setting for hypothesis generation or power calculations for prospective trials, taking the uncertainties into account. The presented methods can also be applied to incorporate further risk-stratification for example based on molecular biomarkers, when the necessary data become available.« less
Hilton, Michael F; Whiteford, Harvey A
2010-12-01
This study investigates associations between psychological distress and workplace accidents, workplace failures and workplace successes. The Health and Work Performance Questionnaire (HPQ) was distributed to employees of 58 large employers. A total of 60,556 full-time employees were eligible for analysis. The HPQ probed whether the respondent had, in the past 30-days, a workplace accident, success or failure ("yes" or "no"). Psychological distress was quantified using the Kessler 6 (K6) scale and categorised into low, moderate and high psychological distress. Three binomial logistic regressions were performed with the dependent variables being workplace accident, success or failure. Covariates in the models were K6 category, gender, age, marital status, education level, job category, physical health and employment sector. Accounting for all other variables, moderate and high psychological distress significantly (P < 0.0001) increased the odds ratio (OR) for a workplace accident to 1.4 for both levels of distress. Moderate and high psychological distress significantly (P < 0.0001) increased the OR (OR = 2.3 and 2.6, respectively) for a workplace failure and significantly (P < 0.0001) decreased the OR for a workplace success (OR = 0.8 and 0.7, respectively). Moderate and high psychological distress increase the OR's for workplace accidents work failures and decrease the OR of workplace successes at similar levels. As the prevalence of moderate psychological distress is approximately double that of high psychological distress moderate distress consequentially has a greater workplace impact.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
Army Battlefield Distribution Through the Lens of OIF: Logical Failures and the Way Ahead
2005-02-02
3 Historical Context of Logistics and Distribution Management Transformation...THEATER DISTRIBUTION UNITS ............................................... 66 iii TABLE OF FIGURES Figure 1. Distribution Management Center...consumer and a potential provider of logistics.8 Historical Context of Logistics and Distribution Management Transformation The critical role of
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2017-04-01
Large mountain slopes in alpine environments undergo a complex long-term evolution from glacial to postglacial environments, through a transient period of paraglacial readjustment. During and after this transition, the interplay among rock strength, topographic relief, and morpho-climatic drivers varying in space and time can lead to the development of different types of slope instability, from sudden catastrophic failures to large, slow, long-lasting yet potentially catastrophic rockslides. Understanding the long-term evolution of large rock slopes requires accounting for the time-dependence of deglaciation unloading, permeability and fluid pressure distribution, displacements and failure mechanisms. In turn, this is related to a convincing description of rock mass damage processes and to their transition from a sub-critical (progressive failure) to a critical (catastrophic failure) character. Although mechanisms of damage occurrence in rocks have been extensively studied in the laboratory, the description of time-dependent damage under gravitational load and variable external actions remains difficult. In this perspective, starting from a time-dependent model conceived for laboratory rock deformation, we developed Dadyn-RS, a tool to simulate the long-term evolution of real, large rock slopes. Dadyn-RS is a 2D, FEM model programmed in Matlab, which combines damage and time-to-failure laws to reproduce both diffused damage and strain localization meanwhile tracking long-term slope displacements from primary to tertiary creep stages. We implemented in the model the ability to account for rock mass heterogeneity and property upscaling, time-dependent deglaciation, as well as damage-dependent fluid pressure occurrence and stress corrosion. We first tested DaDyn-RS performance on synthetic case studies, to investigate the effect of the different model parameters on the mechanisms and timing of long-term slope behavior. The model reproduces complex interactions between topography, deglaciation rate, mechanical properties and fluid pressure occurrence, resulting in different kinematics, damage patterns and timing of slope instabilities. We assessed the role of groundwater on slope damage and deformation mechanisms by introducing time-dependent pressure cycling within simulations. Then, we applied DaDyn-RS to real slopes located in the Italian Central Alps, affected by an active rockslide and a Deep Seated Gravitational Slope Deformation, respectively. From Last Glacial Maximum to present conditions, our model allows reproducing in an explicitly time-dependent framework the progressive development of damage-induced permeability, strain localization and shear band differentiation at different times between the Lateglacial period and the Mid-Holocene climatic transition. Different mechanisms and timings characterize different styles of slope deformations, consistently with available dating constraints. DaDyn-RS is able to account for different long-term slope dynamics, from slow creep to the delayed transition to fast-moving rockslides.
Public Risk Assessment Program
NASA Technical Reports Server (NTRS)
Mendeck, Gavin
2010-01-01
The Public Entry Risk Assessment (PERA) program addresses risk to the public from shuttle or other spacecraft re-entry trajectories. Managing public risk to acceptable levels is a major component of safe spacecraft operation. PERA is given scenario inputs of vehicle trajectory, probability of failure along that trajectory, the resulting debris characteristics, and field size and distribution, and returns risk metrics that quantify the individual and collective risk posed by that scenario. Due to the large volume of data required to perform such a risk analysis, PERA was designed to streamline the analysis process by using innovative mathematical analysis of the risk assessment equations. Real-time analysis in the event of a shuttle contingency operation, such as damage to the Orbiter, is possible because PERA allows for a change to the probability of failure models, therefore providing a much quicker estimation of public risk. PERA also provides the ability to generate movie files showing how the entry risk changes as the entry develops. PERA was designed to streamline the computation of the enormous amounts of data needed for this type of risk assessment by using an average distribution of debris on the ground, rather than pinpointing the impact point of every piece of debris. This has reduced the amount of computational time significantly without reducing the accuracy of the results. PERA was written in MATLAB; a compiled version can run from a DOS or UNIX prompt.
Failure mechanisms of fibrin-based surgical tissue adhesives
NASA Astrophysics Data System (ADS)
Sierra, David Hugh
A series of studies was performed to investigate the potential impact of heterogeneity in the matrix of multiple-component fibrin-based tissue adhesives upon their mechanical and biomechanical properties both in vivo and in vitro. Investigations into the failure mechanisms by stereological techniques demonstrated that heterogeneity could be measured quantitatively and that the variation in heterogeneity could be altered both by the means of component mixing and delivery and by the formulation of the sealant. Ex vivo tensile adhesive strength was found to be inversely proportional to the amount of heterogeneity. In contrast, in vivo tensile wound-closure strength was found to be relatively unaffected by the degree of heterogeneity, while in vivo parenchymal organ hemostasis in rabbits was found to be affected: greater heterogeneity appeared to correlate with an increase in hemostasis time and amount of sealant necessary to effect hemostasis. Tensile testing of the bulk sealant showed that mechanical parameters were proportional to fibrin concentration and that the physical characteristics of the failure supported a ductile mechanism. Strain hardening as a function of percentage of strain, and strain rate was observed for both concentrations, and syneresis was observed at low strain rates for the lower fibrin concentration. Blister testing demonstrated that burst pressure and failure energy were proportional to fibrin concentration and decreased with increasing flow rate. Higher fibrin concentration demonstrated predominately compact morphology debonds with cohesive failure loci, demonstrating shear or viscous failure in a viscoelastic rubbery adhesive. The lower fibrin concentration sealant exhibited predominately fractal morphology debonds with cohesive failure loci, supporting an elastoviscous material condition. The failure mechanism for these was hypothesized and shown to be flow-induced ductile fracture. Based on these findings, the failure mechanism was stochastic in nature because the mean failure energy and burst pressure values were not predictive of locus and morphology. Instead, flow rate and fibrin concentration showed the most predictive value, with the outcome best described as a probability distribution rather than a specific deterministic outcome.
Tools for distributed application management
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.
1990-01-01
Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
Tools for distributed application management
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Wood, Mark; Cooper, Robert; Birman, Kenneth P.
1990-01-01
Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system is described: a collection of tools for constructing distributed application management software. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real time reactive program. The underlying application is instrumented with a variety of built-in and user defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when pre-existing, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
Stability Characteristics of a Combat Aircraft with Control Surface Failure
1989-11-01
I TI l ’ ’- ELECT 71 JAN 0 219903 ~OF S STABILITY CHARACTERISTICS OF A COMBAT AIRCRAFT WITH CONTROL SURFACE FAILURE Thesis Captain Stephen M. Zaiser...CONTROL SURFACE FAILURE Thesis Captain Stephen M. Zaiser AFIT/GAEIENY/89D-42 Approved for Public Release; Distribution unlimited DTIC ELECTE JAN0 2 19901...m m mm m m mm immmmm m D - STABILITY CHARACTERISTICS OF A COMBAT AIRCRAFT WITH CONTROL SURFACE FAILURE Thesis Presented to the
On the Performance Evaluation of Query-Based Wireless Sensor Networks
2012-01-01
is ∆ ≡ P(T > X) = π0 ∫ ∞ 0 [1−B(x)] dH(x). (2) Proposition 1 can be proved using a simple conditioning argument . The expression for the proportion of...node by α ≡ α1. Assuming the event lifetime distribution function G has an increasing failure rate ( IFR ), then 0 < α ≤ α2 ≤ α3 ≤ · · · . Proposition 3...Suppose G is an IFR distribution function so that 0 < α ≤ α2 ≤ α3 ≤ · · · . Then for a fixed time-to-live counter ℓ, λe ≤ λ [ 1− (1− α)ℓ α ] ≤ λℓ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or throughmore » a parametric class of prior distributions.« less
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on single-agent system, distributed algorithms rely on multi-agent systems with information exchanging among connected neighbors, which leads to great improvement on the system fault tolerance. Thus, a task within multi-agent system can be completed with presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence/parallel. Hence, the computational complexity is greatly reduced by distributed algorithm in multi-agent system. Moreover, distributed algorithm allows data collected and stored in a distributed fashion, which successfully overcomes the drawbacks of using multicast due to the bandwidth limitation. Distributed algorithm has been applied in solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first one, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves the congestion in corridor and the exit areas. At last, highway traffic flow is managed by adjusting speed limits to minimize the fuel consumption and travel time in the third project. Optimal control strategy is designed through both centralized and distributed algorithm based on convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with no controlled case or conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on test highway network.
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided in rainstorms (rain events), and each rainstorm conceptualized to a synthetic rainfall hyetograph by a Gaussian shape with the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used on the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Pack, G.
1994-01-01
The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraphs models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be saved as a library file which represents a generic digraph structure for a class of components. The Generate Model feature can then use library files to generate digraphs for every component listed in the modeling tables, and these individual digraph files can be used in a variety of ways to speed generation of complete digraph models. FEAT contains a preprocessor which performs transitive closure on the digraph. This multi-step algorithm builds a series of phantom bridges, or gates, that allow accurate bi-directional processing of digraphs. This preprocessing can be time-consuming, but once preprocessing is complete, queries can be answered and displayed within seconds. A UNIX X-Windows port of version 3.5 of FEAT, XFEAT, is also available to speed the processing of digraph models created on the Macintosh. FEAT v3.6, which is only available for the Macintosh, has some report generation capabilities which are not available in XFEAT. For very large integrated systems, FEAT can be a real cost saver in terms of design evaluation, training, and knowledge capture. The capability of loading multiple digraphs and schematics into FEAT allows modelers to build smaller, more focused digraphs. Typically, each digraph file will represent only a portion of a larger failure scenario. FEAT will combine these files and digraphs from other modelers to form a continuous mathematical model of the system's failure logic. Since multiple digraphs can be cumbersome to use, FEAT ties propagation results to schematic drawings produced using MacDraw II (v1.1v2 or later) or MacDraw Pro. This makes it easier to identify single and double point failures that may have to cross several system boundaries and multiple engineering disciplines before creating a hazardous condition. FEAT v3.6 for the Macintosh is written in C-language using Macintosh Programmer's Workshop C v3.2. It requires at least a Mac II series computer running System 7 or System 6.0.8 and 32 Bit QuickDraw. It also requires a math coprocessor or coprocessor emulator and a color monitor (or one with 256 gray scale capability). A minimum of 4Mb of free RAM is highly recommended. The UNIX version of FEAT includes both FEAT v3.6 for the Macintosh and XFEAT. XFEAT is written in C-language for Sun series workstations running SunOS, SGI workstations running IRIX, DECstations running ULTRIX, and Intergraph workstations running CLIX version 6. It requires the MIT X Window System, Version 11 Revision 4, with OSF/Motif 1.1.3, and 16Mb of RAM. The standard distribution medium for FEAT 3.6 (Macintosh version) is a set of three 3.5 inch Macintosh format diskettes. The standard distribution package for the UNIX version includes the three FEAT 3.6 Macintosh diskettes plus a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format which contains XFEAT. Alternate distribution media and formats for XFEAT are available upon request. FEAT has been under development since 1990. Both FEAT v3.6 for the Macintosh and XFEAT v3.5 were released in 1993.
NASA Technical Reports Server (NTRS)
Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.
2013-01-01
A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/ finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization both at the micro- and macroscales on the predicted ultimate tensile (UTS) strength and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed and each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of both the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and computational efficiency of the simulations.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
Naora, K; Ichikawa, N; Hirano, H; Iwamoto, K
1999-05-01
Pharmacokinetic changes of various drugs have been reported in renal or hepatic failure. The present study employed ciprofloxacin, a quinolone antibiotic having neurotoxic side effects, to assess the influence of these diseases on distribution of ciprofloxacin into the central nervous system (CNS). After intravenous dosing of ciprofloxacin (10-30 mg kg(-1)), ciprofloxacin levels in plasma and brain were measured in normal rats (Wistar, male, 10-week-old) and those with acute renal and hepatic injuries which were induced by uranyl nitrate and carbon tetrachloride (CCl4), respectively. In the uranyl nitrate-treated rats, the plasma elimination half-life of ciprofloxacin was prolonged and the total body clearance was reduced when compared with those in the normal rats. Similar but smaller changes were observed in the CCl4-treated group. Brain levels of ciprofloxacin were significantly increased by both uranyl nitrate and CCl4 treatments. A proportional correlation between serum unbound levels and brain levels of ciprofloxacin was observed in the normal group. However, brain-to-serum unbound concentration ratios of ciprofloxacin were reduced in the rats with renal or hepatic failure. These results suggest that renal failure as well as hepatic failure retards elimination of ciprofloxacin from the blood, leading to elevation of the CNS level, and also that ciprofloxacin distribution in the brain is reduced in these disease states.
Monitoring and Identifying in Real time Critical Patients Events.
Chavez Mora, Emma
2014-01-01
Nowadays pervasive health care monitoring environments, as well as business activity monitoring environments, gather information from a variety of data sources. However it includes new challenges because of the use of body and wireless sensors, nontraditional operational and transactional sources. This makes the health data more difficult to monitor. Decision making in this environment is typically complex and unstructured as clinical work is essentially interpretative, multitasking, collaborative, distributed and reactive. Thus, the health care arena requires real time data management in areas such as patient monitoring, detection of adverse events and adaptive responses to operational failures. This research presents a new architecture that enables real time patient data management through the use of intelligent data sources.
Mesoscopic description of random walks on combs
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Iomin, Alexander; Campos, Daniel; Horsthemke, Werner
2015-12-01
Combs are a simple caricature of various types of natural branched structures, which belong to the category of loopless graphs and consist of a backbone and branches. We study continuous time random walks on combs and present a generic method to obtain their transport properties. The random walk along the branches may be biased, and we account for the effect of the branches by renormalizing the waiting time probability distribution function for the motion along the backbone. We analyze the overall diffusion properties along the backbone and find normal diffusion, anomalous diffusion, and stochastic localization (diffusion failure), respectively, depending on the characteristics of the continuous time random walk along the branches, and compare our analytical results with stochastic simulations.
Risk analysis and detection of thrombosis by measurement of electrical resistivity of blood.
Sapkota, Achyut; Asakura, Yuta; Maruyama, Osamu; Kosaka, Ryo; Yamane, Takashi; Takei, Masahiro
2013-01-01
Monitoring of thrombogenic process is very important in ventricular assistance devices (VADs) used as temporary or permanent measures in patients with advanced heart failure. Currently, there is a lack of a system which can perform a real-time monitoring of thrombogenic activity. Electrical signals vary according to the change in concentration of coagulation factors as well as the distribution of blood cells, and thus have potential to detect the thrombogenic process in an early stage. In the present work, we have made an assessment of an instrumentation system exploiting the electrical properties of blood. The experiments were conducted using bovine blood. Electrical resistance tomography with eight-electrode sensor was used to monitor the spatio-temporal change in electrical resistivity of blood in thrombogenic and non-thrombogenic condition. Under non-thrombogenic condition, the resistivity was uniform across the cross-section and average resistivity monotonically decreased with time before remaining almost flat. In contrary, under thrombogenic condition, there was non-uniform distribution across the cross-section, and average resistivity fluctuated with time.
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
NASA Astrophysics Data System (ADS)
Xu, T.; Zhou, G. L.; Heap, Michael J.; Zhu, W. C.; Chen, C. F.; Baud, Patrick
2017-09-01
An understanding of the influence of temperature on brittle creep in granite is important for the management and optimization of granitic nuclear waste repositories and geothermal resources. We propose here a two-dimensional, thermo-mechanical numerical model that describes the time-dependent brittle deformation (brittle creep) of low-porosity granite under different constant temperatures and confining pressures. The mesoscale model accounts for material heterogeneity through a stochastic local failure stress field, and local material degradation using an exponential material softening law. Importantly, the model introduces the concept of a mesoscopic renormalization to capture the co-operative interaction between microcracks in the transition from distributed to localized damage. The mesoscale physico-mechanical parameters for the model were first determined using a trial-and-error method (until the modeled output accurately captured mechanical data from constant strain rate experiments on low-porosity granite at three different confining pressures). The thermo-physical parameters required for the model, such as specific heat capacity, coefficient of linear thermal expansion, and thermal conductivity, were then determined from brittle creep experiments performed on the same low-porosity granite at temperatures of 23, 50, and 90 °C. The good agreement between the modeled output and the experimental data, using a unique set of thermo-physico-mechanical parameters, lends confidence to our numerical approach. Using these parameters, we then explore the influence of temperature, differential stress, confining pressure, and sample homogeneity on brittle creep in low-porosity granite. Our simulations show that increases in temperature and differential stress increase the creep strain rate and therefore reduce time-to-failure, while increases in confining pressure and sample homogeneity decrease creep strain rate and increase time-to-failure. We anticipate that the modeling presented herein will assist in the management and optimization of geotechnical engineering projects within granite.
Detecting failure of climate predictions
Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve
2016-01-01
The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty1, 2. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies3. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
Independent Orbiter Assessment (IOA): Analysis of the instrumentation subsystem
NASA Technical Reports Server (NTRS)
Howard, B. S.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Instrumentation Subsystem are documented. The Instrumentation Subsystem (SS) consists of transducers, signal conditioning equipment, pulse code modulation (PCM) encoding equipment, tape recorders, frequency division multiplexers, and timing equipment. For this analysis, the SS is broken into two major groupings: Operational Instrumentation (OI) equipment and Modular Auxiliary Data System (MADS) equipment. The OI equipment is required to acquire, condition, scale, digitize, interleave/multiplex, format, and distribute operational Orbiter and payload data and voice for display, recording, telemetry, and checkout. It also must provide accurate timing for time critical functions for crew and payload specialist use. The MADS provides additional instrumentation to measure and record selected pressure, temperature, strain, vibration, and event data for post-flight playback and analysis. MADS data is used to assess vehicle responses to the flight environment and to permit correlation of such data from flight to flight. The IOA analysis utilized available SS hardware drawings and schematics for identifying hardware assemblies and components and their interfaces. Criticality for each item was assigned on the basis of the worst-case effect of the failure modes identified.
Epidemiology of urban water distribution systems
NASA Astrophysics Data System (ADS)
Bardet, Jean-Pierre; Little, Richard
2014-08-01
Urban water distribution systems worldwide contain numerous old and fragile pipes that inevitably break, flood streets and damage property, and disrupt economic and social activities. Such breaks often present dramatically in temporal clusters as occurred in Los Angeles during 2009. These clustered pipe breaks share many characteristics with human mortality observed during extreme climatological events such as heat waves or air pollution. Drawing from research and empirical studies in human epidemiology, a framework is introduced to analyze the time variations of disruptive pipe breaks that can help water agencies better understand clustered pipe failures and institute measures to minimize the disruptions caused by them. It is posited that at any time, a cohort of the pipes comprising the water distribution system will be in a weakened state due to fatigue and corrosion. This frail cohort becomes vulnerable during normal operations and ultimately breaks due to rapid increase in crack lengths induced by abnormal stressors. The epidemiological harvesting model developed in this paper simulates an observed time series of monthly pipe breaks and has both explanatory and predictive power. It also demonstrates that models from nonengineering disciplines such as medicine can provide improved insights into the performance of infrastructure systems.
Evolution of damage during deformation in porous granular materials (Louis Néel Medal Lecture)
NASA Astrophysics Data System (ADS)
Main, Ian
2014-05-01
'Crackling noise' occurs in a wide variety of systems that respond to external forcing in an intermittent way, leading to sudden bursts of energy release similar to those heard when crunching up a piece of paper or listening to a fire. In mineral magnetism ('Barkhausen') crackling noise occurs due to sudden changes in the size and orientation of microscopic ferromagnetic domains when the external magnetic field is changed. In rock physics sudden changes in internal stress associated with microscopically brittle failure events lead to acoustic emissions that can be recorded on the sample boundary, and used to infer the state of internal damage. Crackling noise is inherently stochastic, but the population of events often exhibits remarkably robust scaling properties, in terms of the source area, duration, energy, and in the waiting time between events. Here I describe how these scaling properties emerge and evolve spontaneously in a fully-dynamic discrete element model of sedimentary rocks subject to uniaxial compression at a constant strain rate. The discrete elements have structural disorder similar to that of a real rock, and this is the only source of heterogeneity. Despite the stationary loading and the lack of any time-dependent weakening processes, the results are all characterized by emergent power law distributions over a broad range of scales, in agreement with experimental observation. As deformation evolves, the scaling exponents change systematically in a way that is similar to the evolution of damage in experiments on real sedimentary rocks. The potential for real-time failure forecasting is examined by using synthetic and real data from laboratory tests and prior to volcanic eruptions. The combination of non-linearity and an irreducible stochastic component leads to significant variations in the precision and accuracy of the forecast failure time, leading to a significant proportion of 'false alarms' (forecast too early) and 'missed events' (forecast too late), as well as an over-optimistic assessments of forecasting power and quality when the failure time is known (the 'benefit of hindsight'). The evolution becomes progressively more complex, and the forecasting power diminishes, in going from ideal synthetics to controlled laboratory tests to open natural systems at larger scales in space and time.
A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution
NASA Astrophysics Data System (ADS)
Musani, Aatif
The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a spe-cialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled therefore all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy con-sumption error. This leads to one of the main focuses of the thesis which is to identify and study component failures of the designed control system. Moreover, H-infinity ro-bust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of robust control technique. The EMS was represented by a linear z-domain model. The transfer function be-tween the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection meth-odologies were implemented and a voting policy was used to declare bad data. The run-ning mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design discrete EMS controller for the FREEDM system.
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects of treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server and the simulation correctness is not affected by the failure of some worker nodes. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
Lovely, Lyndsay M; Chishti, Yasmin Z; Woodland, Jennifer L; Lalonde, Donald H
2018-05-01
Many surgeons and emergentologists use non-ultrasound-guided wrist nerve blocks. There is little evidence to guide the ideal volume of local anesthesia or how long we should wait after injection before performing pain-free procedures. This pilot study examined time to maximal anesthesia to painful needle stick in 14 volunteer participants receiving bilateral wrist blocks of 6 versus 11 mL of local. One surgeon performed all 14 bilateral wrist median nerve blocks in participants who remained blinded until after bandages were applied to their wrist. No one could see which wrist received the larger 11-mL volume injection versus the 6-mL block. Blinded sensory assessors then measured perceived maximal numbness time and numbness to needle stick pain in the fingertips of the median nerve distribution. Failure to get a complete median nerve block occurred in seven of fourteen 6-mL wrist blocks versus failure in only one of fourteen 11-mL blocks. Perceived maximal numbness occurred at roughly 40 minutes after injection, but actual numbness to painful needle stick took around 100 minutes. Incomplete median nerve numbness occurred with both 6- and 11-mL non-ultrasound-guided blocks at the wrist. In those with complete blocks, it took a surprisingly long time of 100 minutes for maximal anesthesia to occur to painful needle stick stimuli to the fingertips of the median nerve distribution. Non-ultrasound-guided median nerve blocks at the wrist as described in this article lack reliability and take too long to work.
One hundred years of return period: Strengths and limitations
NASA Astrophysics Data System (ADS)
Volpi, E.; Fiori, A.; Grimaldi, S.; Lombardo, F.; Koutsoyiannis, D.
2015-10-01
One hundred years from its original definition by Fuller, the probabilistic concept of return period is widely used in hydrology as well as in other disciplines of geosciences to give an indication on critical event rareness. This concept gains its popularity, especially in engineering practice for design and risk assessment, due to its ease of use and understanding; however, return period relies on some basic assumptions that should be satisfied for a correct application of this statistical tool. Indeed, conventional frequency analysis in hydrology is performed by assuming as necessary conditions that extreme events arise from a stationary distribution and are independent of one another. The main objective of this paper is to investigate the properties of return period when the independence condition is omitted; hence, we explore how the different definitions of return period available in literature affect results of frequency analysis for processes correlated in time. We demonstrate that, for stationary processes, the independence condition is not necessary in order to apply the classical equation of return period (i.e., the inverse of exceedance probability). On the other hand, we show that the time-correlation structure of hydrological processes modifies the shape of the distribution function of which the return period represents the first moment. This implies that, in the context of time-dependent processes, the return period might not represent an exhaustive measure of the probability of failure, and that its blind application could lead to misleading results. To overcome this problem, we introduce the concept of Equivalent Return Period, which controls the probability of failure still preserving the virtue of effectively communicating the event rareness.
NASA Technical Reports Server (NTRS)
Goldgof, Gregory M.
2005-01-01
Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1984-01-01
The use and implementation of Ada in distributed environments in which reliability is the primary concern is investigated. Emphasis is placed on the possibility that a distributed system may be programmed entirely in ADA so that the individual tasks of the system are unconcerned with which processors they are executing on, and that failures may occur in the software or underlying hardware. The primary activities are: (1) Continued development and testing of our fault-tolerant Ada testbed; (2) consideration of desirable language changes to allow Ada to provide useful semantics for failure; (3) analysis of the inadequacies of existing software fault tolerance strategies.
Goldstein, Benjamin A; Thomas, Laine; Zaroff, Jonathan G; Nguyen, John; Menza, Rebecca; Khush, Kiran K
2016-07-01
Over the past two decades, there have been increasingly long waiting times for heart transplantation. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine the risk that conservative donor heart acceptance practices confer in terms of increasing the risk of failure among patients awaiting transplantation. We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan-Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time. Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the greatest risk of failure is during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% (1.01, 1.20) increase in the odds of failure within 1 year after transplantation. Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant
Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.
2016-01-01
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant’s components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365
Altenburg, Wytske A; Duiverman, Marieke L; Ten Hacken, Nick H T; Kerstjens, Huib A M; de Greef, Mathieu H G; Wijkstra, Peter J; Wempe, Johan B
2015-02-19
Although the endurance shuttle walk test (ESWT) has proven to be responsive to change in exercise capacity after pulmonary rehabilitation (PR) for COPD, the minimally important difference (MID) has not yet been established. We aimed to establish the MID of the ESWT in patients with severe COPD and chronic hypercapnic respiratory failure following PR. Data were derived from a randomized controlled trial, investigating the value of noninvasive positive pressure ventilation added to PR. Fifty-five patients with stable COPD, GOLD stage IV, with chronic respiratory failure were included (mean (SD) FEV1 31.1 (12.0) % pred, age 62 (9) y). MID estimates of the ESWT in seconds, percentage and meters change were calculated with anchor based and distribution based methods. Six minute walking distance (6MWD), peak work rate on bicycle ergometry (Wpeak) and Chronic Respiratory Questionnaire (CRQ) were used as anchors and Cohen's effect size was used as distribution based method. The estimated MID of the ESWT with the different anchors ranged from 186-199 s, 76-82% and 154-164 m. Using the distribution based method the MID was 144 s, 61% and 137 m. Estimates of the MID for the ESWT after PR showed only small differences using different anchors in patients with COPD and chronic respiratory failure. Therefore we recommend using a range of 186-199 s, 76-82% or 154-164 m as MID of the ESWT in COPD patients with chronic respiratory failure. Further research in larger populations should elucidate whether this cut-off value is also valid in other COPD populations and with other interventions. ClinicalTrials.Gov (ID NCT00135538).
Hypervelocity impact testing of the Space Station utility distribution system carrier
NASA Technical Reports Server (NTRS)
Lazaroff, Scott
1993-01-01
A two-phase, joint JSC and McDonnell Douglas Aerospace-Huntington Beach hypervelocity impact (HVI) test program was initiated to develop an improved understanding of how meteoroid and orbital debris (M/OD) impacts affect the Space Station Freedom (SSF) avionic and fluid lines routed in the Utility Distribution System (UDS) carrier. This report documents the first phase of the test program which covers nonpowered avionic line segment and pressurized fluid line segment HVI testing. From these tests, a better estimation of avionic line failures is approximately 15 failures per year and could very well drop to around 1 or 2 avionic line failures per year (depending upon the results of the second phase testing of the powered avionic line at White Sands). For the fluid lines, the initial McDonnell Douglas analysis calculated 1 to 2 line failures over a 30 year period. The data obtained from these tests indicate the number of predicted fluid line failures increased slightly to as many as 3 in the first 10 years and up to 15 for the entire 30 year life of SSF.
Using Generic Data to Establish Dormancy Failure Rates
NASA Technical Reports Server (NTRS)
Reistle, Bruce
2014-01-01
Many hardware items are dormant prior to being operated. The dormant period might be especially long, for example during missions to the moon or Mars. In missions with long dormant periods the risk incurred during dormancy can exceed the active risk contribution. Probabilistic Risk Assessments (PRAs) need to account for the dormant risk contribution as well as the active contribution. A typical method for calculating a dormant failure rate is to multiply the active failure rate by a constant, the dormancy factor. For example, some practitioners use a heuristic and divide the active failure rate by 30 to obtain an estimate of the dormant failure rate. To obtain a more empirical estimate of the dormancy factor, this paper uses the recently updated database NPRD-2011 [1] to arrive at a set of distributions for the dormancy factor. The resulting dormancy factor distributions are significantly different depending on whether the item is electrical, mechanical, or electro-mechanical. Additionally, this paper will show that using a heuristic constant fails to capture the uncertainty of the possible dormancy factors.
On boundary-element models of elastic fault interaction
NASA Astrophysics Data System (ADS)
Becker, T. W.; Schott, B.
2002-12-01
We present the freely available, modular, and UNIX command-line based boundary-element program interact. It is yet another implementation of Crouch and Starfield's (1983) 2-D and Okada's (1992) half-space solutions for constant slip on planar fault segments in an elastic medium. Using unconstrained or non-negative, standard-package matrix routines, the code can solve for slip distributions on faults given stress boundary conditions, or vice versa, both in a local or global reference frame. Based on examples of complex fault geometries from structural geology, we discuss the effects of different stress boundary conditions on the predicted slip distributions of interacting fault systems. Such one-step calculations can be useful to estimate the moment-release efficiency of alternative fault geometries, and so to evaluate the likelihood which system may be realized in nature. A further application of the program is the simulation of cyclic fault rupture based on simple static-kinetic friction laws. We comment on two issues: First, that of the appropriate rupture algorithm. Cellular models of seismicity often employ an exhaustive rupture scheme: fault cells fail if some critical stress is reached, then cells slip once-only by a given amount, and subsequently the redistributed stress is used to check for triggered activations on other cells. We show that this procedure can lead to artificial complexity in seismicity if time-to-failure is not calculated carefully because of numerical noise. Second, we address the question if foreshocks can be viewed as direct expressions of a simple statistical distribution of frictional strength on individual faults. Repetitive failure models based on a random distribution of frictional coefficients initially show irregular seismicity. By repeatedly selecting weaker patches, the fault then evolves into a quasi-periodic cycle. Each time, the pre-mainshock events build up the cumulative moment release in a non-linear fashion. These temporal seismicity patterns roughly resemble the accelerated moment-release features which are sometimes observed in nature.
Savannah River Site generic data base development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanton, C.H.; Eide, S.A.
This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear powermore » plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.« less
Real-time diagnostics of the reusable rocket engine using on-line system identification
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1990-01-01
A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.
Using process groups to implement failure detection in asynchronous environments
NASA Technical Reports Server (NTRS)
Ricciardi, Aleta M.; Birman, Kenneth P.
1991-01-01
Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problems is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.
Analysis on IGBT and Diode Failures in Distribution Electronic Power Transformers
NASA Astrophysics Data System (ADS)
Wang, Si-cong; Sang, Zi-xia; Yan, Jiong; Du, Zhi; Huang, Jia-qi; Chen, Zhu
2018-02-01
Fault characteristics of power electronic components are of great importance for a power electronic device, and are of extraordinary importance for those applied in power system. The topology structures and control method of Distribution Electronic Power Transformer (D-EPT) are introduced, and an exploration on fault types and fault characteristics for the IGBT and diode failures is presented. The analysis and simulation of different fault types for the fault characteristics lead to the D-EPT fault location scheme.
Health Care and Distributive Justice.
1993-04-01
granting Medicare funding to all patients with kidney failure who need hemodialysis or renal transplantation. Essentially, the government granted...example, more people are allowed to die of chronic renal failure than in any other comparable European country.ŝ Prior to 1973 and government funding...continued failure of the federal government to develop comprehensive health care policy reform led Oregon to seek a solution. The two basic, and
Lee, Kuo-Hua; Li, Szu-Yuan; Liu, Jin-Sin; Huang, Chi-Ting; Chen, Ying-Ying; Lin, Yao-Ping; Hsu, Chih-Cheng; Tarng, Der-Cherng
2017-05-01
The effect of warfarin on the risk of cardiovascular (CV) disease is unknown among chronic hemodialysis patients with atrial fibrillation (HD-AF). Population-based propensity score and prescription time-distribution matched cohort study including 6719 HD-AF patients with CHA 2 DS 2 -VASc score ≥ 2 were divided into warfarin users and nonusers and followed-up for CV events and death. Warfarin treatment in HD-AF patients with AF preceding HD was associated with higher risks of developing congestive heart failure [hazard ratio (HR)=1.82, 95% confidence interval (CI)=1.29-2.58, p<0.01], peripheral artery occlusive disease (HR=3.42, 95% CI=1.86-6.31, p<0.01), and aortic valve stenosis (HR=3.20, 95% CI=1.02-9.98, p<0.05). Warfarin users were not associated with risks of ischemic or hemorrhagic stroke and all-cause mortality as compared to nonusers. Warfarin may be associated with vascular calcification, increasing the risks of congestive heart failure and peripheral artery occlusive disease among HD-AF patients. Copyright © 2017. Published by Elsevier Taiwan LLC.
Failure mechanics in low-velocity impacts on thin composite plates
NASA Technical Reports Server (NTRS)
Elber, W.
1983-01-01
Eight-ply quasi-isotropic composite plates of Thornel 300 graphite in Narmco 5208 epoxy resin (T300/5208) were tested to establish the degree of equivalence between low-velocity impact and static testing. Both the deformation and failure mechanics under impact were representable by static indentation tests. Under low-velocity impacts such as tool drops, the dominant deformation mode of the plates was the first, or static, mode. Higher modes are excited on contact, but they decay significantly by the time the first-mode load reaches a maximum. The delamination patterns were observed by X-ray analysis. The areas of maximum delamination patterns were observed by X-ray analysis. The areas of maximum delamination coincided with the areas of highest peel stresses. The extent of delamination was similar for static and impact tests. Fiber failure damage was established by tensile tests on small fiber bundles obtained by deplying test specimens. The onset of fiber damage was in internal plies near the lower surface of the plates. The distribution and amount of fiber damage was similar fo impact and static tests.
Argonne National Laboratory Li-alloy/FeS cell testing and R and D programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, E.C.
1982-01-01
Groups of 12 or more identical Li-alloy/FeS cells fabricated by Eagle-Picher Industries, Inc. and Gould Inc. were operated at Argonne National Laboratory (ANL) in the status cell test program to obtain data for statistical analysis of cell cycle life and failure modes. The cells were full-size electric vehicle battery cells (150 to 350 Ah capacity) and they were cycled at the 4-h discharge rate and 8-h charge rate. The end of life was defined as a 20% loss of capacity or a decrease in the coulombic efficiency to less than 95%. Seventy-four cells (six groups of identical cells) were cycle-lifemore » tested and the results were analyzed statistically. The ultimate goal of this analysis was to predict cell and battery reliability. Testing of groups of identical cells also provided a means of identifying common failure modes which were eliminated by cell design changes. Mean time to failure (MTTF) for the cells based on the Weibull distribution is presented.« less
Degradation of Leakage Currents and Reliability Prediction for Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2016-01-01
Two types of failures in solid tantalum capacitors, catastrophic and parametric, and their mechanisms are described. Analysis of voltage and temperature reliability acceleration factors reported in literature shows a wide spread of results and requires more investigation. In this work, leakage currents in two types of chip tantalum capacitors were monitored during highly accelerated life testing (HALT) at different temperatures and voltages. Distributions of degradation rates were approximated using a general log-linear Weibull model and yielded voltage acceleration constants B = 9.8 +/- 0.5 and 5.5. The activation energies were Ea = 1.65 eV and 1.42 eV. The model allows for conservative estimations of times to failure and was validated by long-term life test data. Parametric degradation and failures are reversible and can be annealed at high temperatures. The process is attributed to migration of charged oxygen vacancies that reduce the barrier height at the MnO2/Ta2O5 interface and increase injection of electrons from the MnO2 cathode. Analysis showed that the activation energy of the vacancies' migration is 1.1 eV.
Evaluating the risk of water distribution system failure: A shared frailty model
NASA Astrophysics Data System (ADS)
Clark, Robert M.; Thurnau, Robert C.
2011-12-01
Condition assessment (CA) Modeling is drawing increasing interest as a technique that can assist in managing drinking water infrastructure. This paper develops a model based on the application of a Cox proportional hazard (PH)/shared frailty model and applies it to evaluating the risk of failure in drinking water networks using data from the Laramie Water Utility (located in Laramie, Wyoming, USA). Using the risk model a cost/ benefit analysis incorporating the inspection value method (IVM), is used to assist in making improved repair, replacement and rehabilitation decisions for selected drinking water distribution system pipes. A separate model is developed to predict failures in prestressed concrete cylinder pipe (PCCP). Various currently available inspection technologies are presented and discussed.
NASA Astrophysics Data System (ADS)
Bao, Yi; Cain, John; Chen, Yizheng; Huang, Ying; Chen, Genda; Palek, Leonard
2015-04-01
Thin concrete panels reinforced with alloy polymer macro-synthetic fibers have recently been introduced to rapidly and cost-effectively improve the driving condition of existing roadways by laying down a fabric sheet on the roadways, casting a thin layer of concrete, and then cutting the layer into panels. This study is aimed to understand the strain distribution and potential crack development of concrete panels under three-point loading. To this end, six full-size 6ft×6ft×3in concrete panels were tested to failure in the laboratory. They were instrumented with three types of single-mode optical fiber sensors whose performance and ability to measure the strain distribution and detect cracks were compared. Each optical fiber sensor was spliced and calibrated, and then attached to a fabric sheet using adhesive. A thin layer of mortar (0.25 ~ 0.5 in thick) was cast on the fabric sheet. The three types of distributed sensors were bare SM-28e+ fiber, SM-28e+ fiber with a tight buffer, and concrete crack cable, respectively. The concrete crack cable consisted of one SM-28e+ optical fiber with a tight buffer, one SM-28e+ optical fiber with a loose buffer for temperature compensation, and an outside protective tight sheath. Distributed strains were collected from the three optical fiber sensors with pre-pulse-pump Brillouin optical time domain analysis in room temperature. Among the three sensors, the bare fiber was observed to be most fragile during construction and operation, but most sensitive to strain change or micro-cracks. The concrete crack cable was most rugged, but not as sensitive to micro-cracks and robust in micro-crack measurement as the bare fiber. The ruggedness and sensitivity of the fiber with a tight buffer were in between the bare fiber and the concrete crack cable. The strain distribution resulted from the three optical sensors are in good agreement, and can be applied to successfully locate cracks in the concrete panels. It was observed that the three types of fibers were functional until the concrete panels have experienced inelastic deformation, making the distributed strain sensing technology promising for real applications in pavement engineering.
Project #153M: Guidance for Assessing the Remaining Strength of Corroded Pipelines
DOT National Transportation Integrated Search
2010-04-01
Incident statistics have consistently shown that corrosion is the primary cause of pipeline failures in liquid pipelines, and is the second largest cause of failures in natural gas transmission pipelines and distribution piping. Corrosion can cause m...
Global resilience analysis of water distribution systems.
Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David
2016-12-01
Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Failure rate of inferior alveolar nerve block among dental students and interns
AlHindi, Maryam; Rashed, Bayan; AlOtaibi, Noura
2016-01-01
Objectives: To report the failure rate of inferior alveolar nerve block (IANB) among dental students and interns, causes of failure, investigate awareness of different IANB techniques, and to report IANB-associated complications. Methods: A 3-page questionnaire containing 13 questions was distributed to a random sample of 350 third to fifth years students and interns at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia on January 2011. It included demographic questions (age, gender, and academic level) and questions on IANB failure frequency and reasons, actions taken to overcome the failure, and awareness of different anesthetic techniques, supplementary techniques, and complications. Results: Of the 250 distributed questionnaires, 238 were returned (68% response rate). Most (85.7%) of surveyed sample had experienced IANB failure once or twice. The participants attributed the failures most commonly (66.45%) to anatomical variations. The most common alternative technique used was intraligamentary injection (57.1%), although 42.8% of the sample never attempted any alternatives. Large portion of the samples stated that they either lacked both knowledge of and training for other techniques (44.9%), or that they had knowledge of them but not enough training to perform them (45.8%). Conclusion: To decrease IANB failure rates for dental students and interns, knowledge of landmarks, anatomical variation and their training in alternatives to IANB, such as the Gow-Gates and Akinosi techniques, both theoretically and clinically in the dental curriculum should be enhanced. PMID:26739980
Failure rate of inferior alveolar nerve block among dental students and interns.
AlHindi, Maryam; Rashed, Bayan; AlOtaibi, Noura
2016-01-01
To report the failure rate of inferior alveolar nerve block (IANB) among dental students and interns, causes of failure, investigate awareness of different IANB techniques, and to report IANB-associated complications. A 3-page questionnaire containing 13 questions was distributed to a random sample of 350 third to fifth years students and interns at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia on January 2011. It included demographic questions (age, gender, and academic level) and questions on IANB failure frequency and reasons, actions taken to overcome the failure, and awareness of different anesthetic techniques, supplementary techniques, and complications. Of the 250 distributed questionnaires, 238 were returned (68% response rate). Most (85.7%) of surveyed sample had experienced IANB failure once or twice. The participants attributed the failures most commonly (66.45%) to anatomical variations. The most common alternative technique used was intraligamentary injection (57.1%), although 42.8% of the sample never attempted any alternatives. Large portion of the samples stated that they either lacked both knowledge of and training for other techniques (44.9%), or that they had knowledge of them but not enough training to perform them (45.8%). To decrease IANB failure rates for dental students and interns, knowledge of landmarks, anatomical variation and their training in alternatives to IANB, such as the Gow-Gates and Akinosi techniques, both theoretically and clinically in the dental curriculum should be enhanced.
A Case Study on Engineering Failure Analysis of Link Chain
Lee, Seong-Beom; Lee, Hong-Chul
2010-01-01
Objectives The objective of this study was to investigate the effect of chain installation condition on stress distribution that could eventually cause disastrous failure from sudden deformation and geometric rupture. Methods Fractographic method used for the failed chain indicates that over-stress was considered as the root cause of failure. 3D modeling and finite element analysis for the chain, used in a crane hook, were performed with a three-dimensional interactive application program, CATIA, commercial finite element analysis and computational fluid dynamic software, ANSYS. Results The results showed that the state of stress was changed depending on the initial position of the chain that was installed in the hook. Especially, the magnitude of the stress was strongly affected by the bending forces, which are 2.5 times greater (under the simulation condition currently investigated) than that from the plain tensile load. Also, it was noted that the change of load state is strongly related to the failure of parts. The chain can hold an ultimate load of about 8 tons with only the tensile load acting on it. Conclusion The conclusions of this research clearly showed that a reduction of the loss from similar incidents can be achieved when an operator properly handles the installation of the chain. PMID:22953162
Fault Tolerant Homopolar Magnetic Bearings
NASA Technical Reports Server (NTRS)
Li, Ming-Hsiu; Palazzolo, Alan; Kenny, Andrew; Provenza, Andrew; Beach, Raymond; Kascak, Albert
2003-01-01
Magnetic suspensions (MS) satisfy the long life and low loss conditions demanded by satellite and ISS based flywheels used for Energy Storage and Attitude Control (ACESE) service. This paper summarizes the development of a novel MS that improves reliability via fault tolerant operation. Specifically, flux coupling between poles of a homopolar magnetic bearing is shown to deliver desired forces even after termination of coil currents to a subset of failed poles . Linear, coordinate decoupled force-voltage relations are also maintained before and after failure by bias linearization. Current distribution matrices (CDM) which adjust the currents and fluxes following a pole set failure are determined for many faulted pole combinations. The CDM s and the system responses are obtained utilizing 1D magnetic circuit models with fringe and leakage factors derived from detailed, 3D, finite element field models. Reliability results are presented vs. detection/correction delay time and individual power amplifier reliability for 4, 6, and 7 pole configurations. Reliability is shown for two success criteria, i.e. (a) no catcher bearing contact following pole failures and (b) re-levitation off of the catcher bearings following pole failures. An advantage of the method presented over other redundant operation approaches is a significantly reduced requirement for backup hardware such as additional actuators or power amplifiers.
Postdischarge growth assessment in very low birth weight infants.
Park, Joon-Sik; Han, Jungho; Shin, Jeong Eun; Lee, Soon Min; Eun, Ho Seon; Park, Min-Soo; Park, Kook-In; Namgung, Ran
2017-03-01
The goal of nutritional support for very-low-birth-weight (VLBW) infants from birth to term is to match the in utero growth rates; however, this is rarely achieved. We evaluated postdischarge growth patterns and growth failure in 81 Korean VLBW infants through a retrospective study. Weight and height were measured and calculated based on age percentile distribution every 3 months until age 24 months. Growth failure was defined as weight and height below the 10th percentile at 24 months. For the subgroup analysis, small-for-gestational age (SGA) and extremely low birth weight (ELBW) infants were evaluated. The growth patterns based on the Korean, World Health Organization (WHO), or Centers for Disease Control and Prevention (CDC) standard were serially compared over time. At postconception age (PCA) 40 weeks, 47 (58%) and 45 infants (55%) showed growth failure in terms of weight and height, respectively. At PCA 24 months, 20 infants (24%) showed growth failure for weight and 14 (18%) for height. Growth failure rates were higher for the SGA infants than for the appropriate-weight-for-gestational age infants at PCA 24 months ( P =0.045 for weight and P =0.038 for height). Growth failure rates were higher for the ELBW infants than for the non-ELBW infants at PCA 24 months ( P <0.001 for weight and P =0.003 for height). Significant differences were found among the WHO, CDC, and Korean standards ( P <0.001). Advancements in neonatal care have improved the catch-up growth of VLBW infants, but this is insufficient. Careful observation and aggressive interventions, especially in SGA and ELBW infants, are needed.
Baldassarri, Marta; Zhang, Yu; Thompson, Van P.; Rekow, Elizabeth D.; Stappert, Christian F. J.
2011-01-01
Summary Objectives To compare fatigue failure modes and reliability of hand-veneered and over-pressed implant-supported three-unit zirconium-oxide fixed-dental-prostheses(FDPs). Methods Sixty-four custom-made zirconium-oxide abutments (n=32/group) and thirty-two zirconium-oxide FDP-frameworks were CAD/CAM manufactured. Frameworks were veneered with hand-built up or over-pressed porcelain (n=16/group). Step-stress-accelerated-life-testing (SSALT) was performed in water applying a distributed contact load at the buccal cusp-pontic-area. Post failure examinations were carried out using optical (polarized-reflected-light) and scanning electron microscopy (SEM) to visualize crack propagation and failure modes. Reliability was compared using cumulative-damage step-stress analysis (Alta-7-Pro, Reliasoft). Results Crack propagation was observed in the veneering porcelain during fatigue. The majority of zirconium-oxide FDPs demonstrated porcelain chipping as the dominant failure mode. Nevertheless, fracture of the zirconium-oxide frameworks was also observed. Over-pressed FDPs failed earlier at a mean failure load of 696 ± 149 N relative to hand-veneered at 882 ± 61 N (profile I). Weibull-stress-number of cycles-unreliability-curves were generated. The reliability (2-sided at 90% confidence bounds) for a 400N load at 100K cycles indicated values of 0.84 (0.98-0.24) for the hand-veneered FDPs and 0.50 (0.82-0.09) for their over-pressed counterparts. Conclusions Both zirconium-oxide FDP systems were resistant under accelerated-life-time-testing. Over-pressed specimens were more susceptible to fatigue loading with earlier veneer chipping. PMID:21557985
A Unified Constitutive Model for Subglacial Till, Part I: The Disturbed State Concept
NASA Astrophysics Data System (ADS)
Jenson, J. W.; Desai, C. S.; Clark, P. U.; Contractor, D. N.; Sane, S. M.; Carlson, A. E.
2006-12-01
Classical plasticity models such as Mohr-Coulomb may not adequately represent the full range of possible motion and failure in tills underlying ice sheets. Such models assume that deformations are initially elastic, and that when a peak or failure stress level is reached the system experiences sudden failure, after which the stress remains constant and the deformations can tend to infinite magnitudes. However, theory suggests that the actual behavior of deforming materials, including granular materials such as glacial till, can involve plastic or irreversible strains almost from the beginning, in which localized zones of microcracking and "failure" can be distributed over the material element. As the loading increases, and with associated plastic and creep deformations, the distributed failure zones coalesce. When the extent of such coalesced zones reaches critical values of stresses and strains, the critical condition (failure) can occur in the till, which would cause associated movements of the ice sheet. Failure or collapse then may occur at much larger strain levels. Classical models (e.g., Mohr-Coulomb) may therefore not be able to fully and realistically characterize deformation behavior and the gradual developments of localized failures tending to the global failure and movements. We present and propose the application of the Disturbed State Concept (DSC), a unified model that incorporates the actual pre- and post-failure behavior, for characterizing the behavior of subglacial tills. In this presentation (Part I), we describe the DSC and propose its application to subglacial till. Part II (Desai et al.) describes our application of the DSC with laboratory testing, model calibration, and validations to evaluate the mechanical properties of two regionally significant Pleistocene tills.
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
Memory beyond memory in heart beating, a sign of a healthy physiological condition.
Allegrini, P; Grigolini, P; Hamilton, P; Palatella, L; Raffaelli, G
2002-04-01
We describe two types of memory and illustrate each using artificial and actual heartbeat data sets. The first type of memory, yielding anomalous diffusion, implies the inverse power-law nature of the waiting time distribution and the second the correlation among distinct times, and consequently also the occurrence of many pseudoevents, namely, not genuinely random events. Using the method of diffusion entropy analysis, we establish the scaling that would be determined by the real events alone. We prove that the heart beating of healthy patients reveals the existence of many more pseudoevents than in the patients with congestive heart failure.
Vidale, J.E.; Agnew, D.C.; Johnston, M.J.S.; Oppenheimer, D.H.
1998-01-01
Because the rate of stress change from the Earth tides exceeds that from tectonic stress accumulation, tidal triggering of earthquakes would be expected if the final hours of loading of the fault were at the tectonic rate and if rupture began soon after the achievement of a critical stress level. We analyze the tidal stresses and stress rates on the fault planes and at the times of 13,042 earthquakes which are so close to the San Andreas and Calaveras faults in California that we may take the fault plane to be known. We find that the stresses and stress rates from Earth tides at the times of earthquakes are distributed in the same way as tidal stresses and stress rates at random times. While the rate of earthquakes when the tidal stress promotes failure is 2% higher than when the stress does not, this difference in rate is not statistically significant. This lack of tidal triggering implies that preseismic stress rates in the nucleation zones of earthquakes are at least 0.15 bar/h just preceding seismic failure, much above the long-term tectonic stress rate of 10-4 bar/h.
Some limitations of frequency as a component of risk: an expository note.
Cox, Louis Anthony
2009-02-01
Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable-and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models can avoid these limitations.
Reliability Analysis of Uniaxially Ground Brittle Materials
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.
1995-01-01
The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
Using Utility Functions to Control a Distributed Storage System
2008-05-01
Pinheiro et al. [2007] suggest this is not an accurate assumption. Nicola and Goyal [1990] examined correlated failures across multiversion software...F. and Goyal, A. (1990). Modeling of correlated failures and community error recovery in multiversion software. IEEE Transactions on Software
DOES GARP REALLY FAIL MISERABLY? A RESPONSE TO STOCKMAN ET AL. (2006)
Stockman et al. (2006) found that ecological niche models built using DesktopGARP 'failed miserably' to predict trapdoor spider (genus Promyrmekiaphila) distributions in California. This apparent failure of GARP (Genetic Algorithm for Rule-Set Production) was actually a failure ...
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.
2016-12-01
Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests1(RF) to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle2. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information. 1Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). 2Rouet-Leduc, B. C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Treatment and technical intervention time analysis of a robotic stereotactic radiotherapy system.
Crop, F; Lacornerie, T; Szymczak, H; Felin, A; Bailleux, C; Mirabel, X; Lartigau, E
2014-02-01
The purpose of this study is to obtain a better operational knowledge of Stereotactic Body Radiotherapy (SBRT) treatments with CyberKnife(r). An analysis of both In-room Times (IRT) and technical interventions of 5 years of treatments was performed, during which more than 1600 patients were treated for various indications, including liver (21%), lung (29%), intracranial (13%), head and neck (11%) and prostate (7%). Technical interventions were recorded along with the time of the failure, time to the intervention, and the complexity and duration of the repair. Analyses of Time Between Failures (TBF) and Service Disrupting TBF(disr) were performed. Treatment time data and variability per indication and following different system upgrades were evaluated. Large variations of IRTs were found between indications, but also large variations for each indication. The combination of the time reduction Tool (using Iris(r)) and Improved Stop Handling was of major impact to shortening of treatment times. The first implementation of the Iris collimator alone did not lead to significantly shorter IRTs for us except during prostate treatments. This was mostly due to the addition at the same time of larger rotational compensation for prostate treatments (58 instead of 1.58). Significant differences of duration between the first fraction and following fractions of a treatment, representing the necessity of defining imaging parameters and explanation to patients, were found for liver (12 min) and lung treatments using Xsight(r) Spine (5 min). Liver and lung treatments represent the longest IRT's and involve the largest variability's in IRT. The malfunction rate of the system followed a Weibull distribution with the shape and scale parameters of 0.8 and 39.7. Mean TBF(disr) was 68 work hours. 60 to 80% of the service disrupting interventions were resolved within 30-60 min, 5% required external intervention and 30% occurred in the morning. The presented results can be applied in the evaluation of the required machine time in order to implement robotic radiosurgery for different indications. The analytical distributions of IRTs and technical interruptions can be used for simulations.
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included into the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models and inter-event time between two consecutive earthquakes are defined as the failure time. The paper demonstrates how seismicity and tectonics/physics parameters that can potentially influence the spatio-temporal variability of earthquakes and presents several advantages compared to more traditional approaches.
GaN HEMTs with p-GaN gate: field- and time-dependent degradation
NASA Astrophysics Data System (ADS)
Meneghesso, G.; Meneghini, M.; Rossetto, I.; Canato, E.; Bartholomeus, J.; De Santi, C.; Trivellin, N.; Zanoni, E.
2017-02-01
GaN-HEMTs with p-GaN gate have recently demonstrated to be excellent normally-off devices for application in power conversion systems, thanks to the high and robust threshold voltage (VTH>1 V), the high breakdown voltage, and the low dynamic Ron increase. For this reason, studying the stability and reliability of these devices under high stress conditions is of high importance. This paper reports on our most recent results on the field- and time-dependent degradation of GaN-HEMTs with p-GaN gate submitted to stress with positive gate bias. Based on combined step-stress experiments, constant voltage stress and electroluminescence testing we demonstrated that: (i) when submitted to high/positive gate stress, the transistors may show a negative threshold voltage shift, that is ascribed to the injection of holes from the gate metal towards the p-GaN/AlGaN interface; (ii) in a step-stress experiment, the analyzed commercial devices fail at gate voltages higher than 9-10 V, due to the extremely high electric field over the p-GaN/AlGaN stack; (iii) constant voltage stress tests indicate that the failure is also time-dependent and Weibull distributed. The several processes that can explain the time-dependent failure are discussed in the following.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribu- tion has been shown to fit the life data for the electronic equipment on...these systems . This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture
2017-11-22
High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture The views, opinions and/or findings contained in this report are those of...SECURITY CLASSIFICATION OF: 1. REPORT DATE (DD-MM-YYYY) 4. TITLE AND SUBTITLE 13. SUPPLEMENTARY NOTES 12. DISTRIBUTION AVAILIBILITY STATEMENT 6. AUTHORS...Report as of 05-Dec-2017 Agreement Number: W911NF-13-1-0238 Organization: Columbia University Title: High Strain Rate Failure Modeling Incorporating
Foo, Jonathan; Ilic, Dragan; Rivers, George; Evans, Darrell J R; Walsh, Kieran; Haines, Terry P; Paynter, Sophie; Morgan, Prue; Maloney, Stephen
2017-12-07
Student failure creates additional economic costs. Knowing the cost of failure helps to frame its economic burden relative to other educational issues, providing an evidence-base to guide priority setting and allocation of resources. The Ingredients Method is a cost-analysis approach which has been previously applied to health professions education research. In this study, the Ingredients Method is introduced, and applied to a case study, investigating the cost of pre-clinical student failure. The four step Ingredients Method was introduced and applied: (1) identify and specify resource items, (2) measure volume of resources in natural units, (3) assign monetary prices to resource items, and (4) analyze and report costs. Calculations were based on a physiotherapy program at an Australian university. The cost of failure was £5991 per failing student, distributed across students (70%), the government (21%), and the university (8%). If the cost of failure and attrition is distributed among the remaining continuing cohort, the cost per continuing student educated increases from £9923 to £11,391 per semester. The economics of health professions education is complex. Researchers should consider both accuracy and feasibility in their costing approach, toward the goal of better informing cost-conscious decision-making.
NASA Astrophysics Data System (ADS)
Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Naryana; Sharma, Sharad Chandra
2017-04-01
Recently, aerospace industries have shown increasing interest in forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selection of process parameters for successful fabrication of components. In the present work, stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, fracture forming limit diagram (FFLD) was evaluated, and bending correction was imposed due to the use of sub-size hemispherical punch. The failure strains of FFLD were converted into major-minor stress space ( σ-FFLD) and effective plastic strain-stress triaxiality space ( ηEPS-FFLD) as failure criteria to avoid the strain path dependence. Moreover, FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of sheet metal.
NASA Astrophysics Data System (ADS)
Liu, Lu; Kamm, Paul; García-Moreno, Francisco; Banhart, John; Pasini, Damiano
2017-10-01
This paper examines three-dimensional metallic lattices with regular octet and rhombicuboctahedron units fabricated with geometric imperfections via Selective Laser Sintering. We use X-ray computed tomography to capture morphology, location, and distribution of process-induced defects with the aim of studying their role in the elastic response, damage initiation, and failure evolution under quasi-static compression. Testing results from in-situ compression tomography show that each lattice exhibits a distinct failure mechanism that is governed not only by cell topology but also by geometric defects induced by additive manufacturing. Extracted from X-ray tomography images, the statistical distributions of three sets of defects, namely strut waviness, strut thickness variation, and strut oversizing, are used to develop numerical models of statistically representative lattices with imperfect geometry. Elastic and failure responses are predicted within 10% agreement from the experimental data. In addition, a computational study is presented to shed light into the relationship between the amplitude of selected defects and the reduction of elastic properties compared to their nominal values. The evolution of failure mechanisms is also explained with respect to strut oversizing, a parameter that can critically cause failure mode transitions that are not visible in defect-free lattices.
Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping
2018-07-01
The failure caused by seepage is the most common one in dike engineering. As to the characteristics of seepage in dike, such as longitudinal extension engineering, the randomness, strong concealment and small initial quantity order, by means of distributed fiber temperature sensor system (DTS), adopting an improved optical fiber layer layout scheme, the location of initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltrated surface of dike full-section is generated. Combined with linear optical fiber monitoring seepage method, BLICM is applied in an engineering case, which shows that a real-time seepage monitoring technique is presented in full-section of dike based on the combination method.
Cycles till failure of silver-zinc cells with completing failures modes: Preliminary data analysis
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Leibecki, H. F.; Bozek, J. M.
1980-01-01
One hundred and twenty nine cells were run through charge-discharge cycles until failure. The experiment design was a variant of a central composite factorial in five factors. Preliminary data analysis consisted of response surface estimation of life. Batteries fail under two basic modes; a low voltage condition and an internal shorting condition. A competing failure modes analysis using maximum likelihood estimation for the extreme value life distribution was performed. Extensive diagnostics such as residual plotting and probability plotting were employed to verify data quality and choice of model.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)
2002-01-01
To obtain a better understanding the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure for the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant load rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable for other isotropic materials.
NASA Astrophysics Data System (ADS)
Jezequel, T.; Auzoux, Q.; Le Boulch, D.; Bono, M.; Andrieu, E.; Blanc, C.; Chabretou, V.; Mozzani, N.; Rautenberg, M.
2018-02-01
During accidental power transient conditions with Pellet Cladding Interaction (PCI), the synergistic effect of the stress and strain imposed on the cladding by thermal expansion of the fuel, and corrosion by iodine released as a fission product, may lead to cladding failure by Stress Corrosion Cracking (SCC). In this study, internal pressure tests were conducted on unirradiated cold-worked stress-relieved Zircaloy-4 cladding tubes in an iodine vapor environment. The goal was to investigate the influence of loading type (constant pressure tests, constant circumferential strain rate tests, or constant circumferential strain tests) and test temperature (320, 350, or 380 °C) on iodine-induced stress corrosion cracking (I-SCC). The experimental results obtained with different loading types were consistent with each other. The apparent threshold hoop stress for I-SCC was found to be independent of the test temperature. SEM micrographs of the tested samples showed many pits distributed over the inner surface, which tended to coalesce into large pits in which a microcrack could initiate. A model for the time-to-failure of a cladding tube was developed using finite element simulations of the viscoplastic mechanical behavior of the material and a modified Kachanov's damage growth model. The times-to-failure predicted by this model are consistent with the experimental data.
Williams, Brent A; Agarwal, Shikhar
2018-02-23
Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit.Methods and Results:SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 and C<0.65 within 3 and beyond 12 months from baseline respectively. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.
NASA Astrophysics Data System (ADS)
Baccar, D.; Söffker, D.
2017-11-01
Acoustic Emission (AE) is a suitable method to monitor the health of composite structures in real-time. However, AE-based failure mode identification and classification are still complex to apply due to the fact that AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from laminated carbon fiber reinforced polymer (CFRP) subjected to indentation test are examined and analyzed. A new pattern recognition approach involving a number of processing steps able to be implemented in real-time is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. Firstly, Continuous Wavelet Transform (CWT) is applied to the AE signals. Furthermore, dimensionality reduction process using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE) allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, waveform and frequency content of AE signals are in depth examined and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves the damage classification and can be used as a non-destructive evaluation tool.
A public study of the lifetime distribution of soap films
NASA Astrophysics Data System (ADS)
Tobin, S. T.; Meagher, A. J.; Bulfin, B.; Möbius, M.; Hutzler, S.
2011-08-01
We present data for the lifetime distribution of soap films made from commercial dish-washing solution and contained in sealed cylinders. Data for over 2500 films were gathered during a 2-month exhibition on the science and art of bubbles and foams in Dublin's Science Gallery. Visitors to the gallery were invited to create 10-20 parallel soap films in acrylic tubes which were sealed with cork stoppers. Individual film bursts occurred at random and were uncorrelated. The total number of remaining films in the tubes was recorded every day. Visitors could monitor the status of their soap film tube and the daily updated histogram of the lifetime of all films. The histogram of the bubble lifetimes is well described by a Weibull distribution, which indicates that the failure rate is not constant and increases over time. Unsealed cylinders show drastically reduced film lifetimes. This experiment illustrates the difference between the unpredictability of the lifetime of individual films and the existence of a well-defined lifetime distribution for the ensemble.
NASA Astrophysics Data System (ADS)
Murayama, Hideaki; Kageyama, Kazuro; Kimpara, Isao; Akiyoshi, Shimada; Naruse, Hiroshi
2000-06-01
In this study, we developed a health monitoring system using a fiber optic distributed strain sensor for International America's Cup Class (IACC) yachts. Most structural components of an IACC yacht consist of an aluminum honeycomb core sandwiched between carbon fiber reinforced plastic (CFRP) laminates. In such structures, delamination, skin/core debonding and debonding between adhered members will be result in serious fracture of the structure. We equipped two IACC yachts with fiber optic strain sensors designed to measured the distributed strain using a Brillouin optical time domain reflectometer (BOTDR) and to detect any deterioration or damage to the yacht's structures caused by such failures. And based on laboratory test results, we proposed a structural health monitoring technique for IACC yachts that involves analyzing their strain distribution. Some important information about structural conditions of the IACC yachts could be obtained from this system through the periodical strain measurements in the field.
Locating inefficient links in a large-scale transportation network
NASA Astrophysics Data System (ADS)
Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu
2015-02-01
Based on data from geographical information system (GIS) and daily commuting origin destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time Δ T when a road segment is closed, and found that | Δ T | follows a power-law distribution if Δ T < 0 or Δ T > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of | Δ T | . Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
Extreme reaction times determine fluctuation scaling in human color vision
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-11-01
In modern mental chronometry, human reaction time defines the time elapsed from stimulus presentation until a response occurs and represents a reference paradigm for investigating stochastic latency mechanisms in color vision. Here we examine the statistical properties of extreme reaction times and whether they support fluctuation scaling in the skewness-kurtosis plane. Reaction times were measured for visual stimuli across the cardinal directions of the color space. For all subjects, the results show that very large reaction times deviate from the right tail of reaction time distributions suggesting the existence of dragon-kings events. The results also indicate that extreme reaction times are correlated and shape fluctuation scaling over a wide range of stimulus conditions. The scaling exponent was higher for achromatic than isoluminant stimuli, suggesting distinct generative mechanisms. Our findings open a new perspective for studying failure modes in sensory-motor communications and in complex networks.
NASA Astrophysics Data System (ADS)
Iwakoshi, Takehisa; Hirota, Osamu
2014-10-01
This study will test an interpretation in quantum key distribution (QKD) that trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both of the key uniformity in the context of universal composability and operational meaning of the failure probability of the key extraction. However, this proposal has not been verified concretely yet for many years while H. P. Yuen and O. Hirota have thrown doubt on this interpretation since 2009. To ascertain this interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. In this way, we calculated statistical distance which correspond to trace distance in quantum theory after a quantum measurement is done, then we compared it with the failure probability whether universal composability was obtained. As a result, the degree of statistical distance of the probability distribution of the physical random numbers and the ideal uniformity was very large. It is also explained why trace distance is not suitable to guarantee the security in QKD from the view point of quantum binary decision theory.
Algorithm of probabilistic assessment of fully-mechanized longwall downtime
NASA Astrophysics Data System (ADS)
Domrachev, A. N.; Rib, S. V.; Govorukhin, Yu M.; Krivopalov, V. G.
2017-09-01
The problem of increasing the load on a long fully-mechanized longwall has several aspects, one of which is the improvement of efficiency in using available stoping equipment due to the increase in coefficient of the machine operating time of a shearer and other mining machines that form an integral part of the longwall set of equipment. The task of predicting the reliability indicators of stoping equipment is solved by the statistical evaluation of parameters of downtime exponential distribution and failure recovery. It is more difficult to solve the problems of downtime accounting in case of accidents in the face workings and, despite the statistical data on accidents in mine workings, no solution has been found to date. The authors have proposed a variant of probability assessment of workings caving using Poisson distribution and the duration of their restoration using normal distribution. The above results confirm the possibility of implementing the approach proposed by the authors.
Crack identification and evolution law in the vibration failure process of loaded coal
NASA Astrophysics Data System (ADS)
Li, Chengwu; Ai, Dihao; Sun, Xiaoyuan; Xie, Beijing
2017-08-01
To study the characteristics of coal cracks produced in the vibration failure process, we set up a static load and static and dynamic combination load failure test simulation system, prepared with different particle size, formation pressure, and firmness coefficient coal samples. Through static load damage testing of coal samples and then dynamic load (vibration exciter) and static (jack) combination destructive testing, the crack images of coal samples under the load condition were obtained. Combined with digital image processing technology, an algorithm of crack identification with high precision and in real-time is proposed. With the crack features of the coal samples under different load conditions as the research object, we analyzed the distribution of cracks on the surface of the coal samples and the factors influencing crack evolution using the proposed algorithm and a high-resolution industrial camera. Experimental results showed that the major portion of the crack after excitation is located in the rear of the coal sample where the vibration exciter cannot act. Under the same disturbance conditions, crack size and particle size exhibit a positive correlation, while crack size and formation pressure exhibit a negative correlation. Soft coal is more likely to lead to crack evolution than hard coal, and more easily causes instability failure. The experimental results and crack identification algorithm provide a solid basis for the prevention and control of instability and failure of coal and rock mass, and they are helpful in improving the monitoring method of coal and rock dynamic disasters.
Making real-time reactive systems reliable
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Wood, Mark
1990-01-01
A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general. Not only are most embedded real time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. The research tries to understand the principal issues of fault tolerance in real time reactive systems and to build tools that allow a programmer to design reliable, real time reactive systems. In order to make real time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators. To achieve this, a methodology was developed for transforming a control program that references physical value into one that tolerates sensors that can fail and can return inaccurate values; (2) How can the real time reactive system be built to tolerate failures of the control program. Towards this goal, whether the techniques presented can be extended to real time reactive systems is investigated; and (3) How can the environment be specified in a way that is useful for writing a control program. Towards this goal, whether a system with real time constraints can be expressed as an equivalent system without such constraints is also investigated.
Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach
NASA Astrophysics Data System (ADS)
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.
2015-04-01
Many attempts for deterministic forecasting of eruptions and landslides have been performed using the material Failure Forecast Method (FFM). This method consists in adjusting an empirical power law on precursory patterns of seismicity or deformation. Until now, most of the studies have presented hindsight forecasts based on complete time series of precursors and do not evaluate the ability of the method for carrying out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach of the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
NASA Astrophysics Data System (ADS)
Oparin, Viktor; Tsoy, Pavel; Usoltseva, Olga; Semenov, Vladimir
2014-05-01
The aim of this study was to analyze distribution and development of stress-stress state in structured rock specimens subject to uniaxial loading to failure. Specific attention was paid to possible oscillating motion of structural elements of the rock specimens under constraints (pre-set stresses at the boundaries of the specimens) and the kinetic energy fractals. The detailed studies into the micro-level stress-strain state distribution and propagation over acting faces of rock specimens subject to uniaxial loading until failure, using automated digital speckle photography analyzer ALMEC-tv, have shown that: • under uniaxial stiff loading of prismatic sandstone, marble and sylvinite specimens on the Instron-8802 servohydraulic testing machine at the mobile grip displacement rate 0.02-0.2 mm/min, at a certain level of stressing, low-frequency micro-deformation processes originate in the specimens due to slow (quasi-static) force; • the amplitude of that deformation-wave processes greatly depends on the micro-loading stage: — at the elastic deformation stage, under the specimen stress lower than half ultimate strength of the specimen, there are no oscillations of microstrains; —at the nonlinearly elastic deformation stage, under stress varied from 0.5 to 1 ultimate strength of the specimens, the amplitudes of microstrains grow, including the descending stage 3; the oscillation frequency f=0.5-4 Hz; —at the residual strength stage, the amplitudes of the microstrains drop abruptly (3-5 times) as against stages 2 and 3; • in the elements of the scanned specimen surface in the region with the incipient crack, the microstrain rate amplitudes are a few times higher than in the undamged surface region of the same specimen. Sometimes, deformation rate greatly grows with increase in the load. The authors have used the energy scanning function of the deformation-wave processes in processing experimental speckle-photography data on the surface of the test specimen subject to loading until failure.
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10-11 due to the conservative nature of the factors of safety on the deterministic loads.
Li, Gang; Wang, Zhenhai; Mao, Xinyu; Zhang, Yinghuang; Huo, Xiaoye; Liu, Haixiao; Xu, Shengyong
2016-01-01
Dynamic mapping of an object’s local temperature distribution may offer valuable information for failure analysis, system control and improvement. In this letter we present a computerized measurement system which is equipped with a hybrid, low-noise mechanical-electrical multiplexer for real-time two-dimensional (2D) mapping of surface temperatures. We demonstrate the performance of the system on a device embedded with 32 pieces of built-in Cr-Pt thin-film thermocouples arranged in a 4 × 8 matrix. The system can display a continuous 2D mapping movie of relative temperatures with a time interval around 1 s. This technique may find applications in a variety of practical devices and systems. PMID:27347969
Goldstein, David S; Holmes, Courtney; Kaufmann, Horacio; Freeman, Roy
2004-12-01
Oral L-threo-3,4-dihydroxyphenylserine (L-DOPS), a synthetic catechol amino acid, increases standing blood pressure and improves standing ability in patients with neurogenic orthostatic hypotension, by conversion of L-DOPS to norepinephrine (NE) outside the brain. This study assessed the pharmacokinetics of L-DOPS, NE, and dihydroxyphenylglycol (DHPG), the main neuronal metabolite of NE, in patients with primary chronic autonomic failure from pure autonomic failure (PAF) or multiple system atrophy (MSA). In 5 MSA and 4 PAF patients, antecubital venous blood was drawn during supine rest and plasma levels of catechols measured at various times for 48 hours after a single oral dose of 400 mg of L-DOPS. Plasma L-DOPS peaked at 1.9 microg/ml (9 micromol/L) about 3 hours after drug administration, followed by a monoexponential decline with a half-time of 2-3 hours in both patient groups. Plasma NE and DHPG also peaked at about 3 hours, but at much lower concentrations (4 and 42 nmol/L). Compared to the MSA group, the PAF group had a smaller calculated volume of distribution of L-DOPS and up to 10-fold lower plasma NE levels at all time points. Plasma NE was above baseline in MSA even at 48 hours after L-DOPS. The relatively long half-time for disappearance of L-DOPS compared to that of NE explains their very different attained plasma concentrations. The similar NE and DHPG responses in PAF and MSA suggests production of NE from LDOPS mainly in non-neuronal cells. Persistent elevation of plasma NE in MSA suggests residual release of NE from sympathetic nerves.
Time-related patterns of ventricular shunt failure.
Kast, J; Duong, D; Nowzari, F; Chadduck, W M; Schiff, S J
1994-11-01
Proximal obstruction is reported to be the most common cause of ventriculoperitoneal (VP) shunt failure, suggesting that imperfect ventricular catheter placement and inadequate valve mechanisms are major causes. This study retrospectively examined patterns of shunt failure in 128 consecutive patients with symptoms of shunt malfunction over a 2-year period. Factors analyzed included site of failure, time from shunt placement or last revision to failure, age of patient at time of failure, infections, and primary etiology of the hydrocephalus. One hundred of these patients required revisions; 14 revisions were due to infections. In this series there was a higher incidence of distal (43%) than of proximal (35%) failure. The difference was not statistically significant when the overall series was considered; however, when factoring time to failure as a variable, marked differences were noted regardless of the underlying cause of hydrocephalus or the age of the patient. Of the 49 patients needing a shunt revision or replacement within 2 years of the previous operation, 50% had proximal malfunction, 14% distal, and 10% had malfunctions attributable directly to the valve itself. Also, 12 of the 14 infections occurred during this time interval. In sharp contrast, of the 51 patients having shunt failure from 2 to more than 12 years after the previous procedure, 72% had distal malfunction, 21% proximal, and only 6% had a faulty valve or infection. This difference between time to failure for proximal versus distal failures was statistically significant (P < 0.00001 for both Student's t-test and non-parametric Mann-Whitney U-test).(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements through out the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity that is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised by three elements to a parallel system comprised with up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
An operating system for future aerospace vehicle computer systems
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.
1984-01-01
The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.
An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions
NASA Astrophysics Data System (ADS)
Hao, Shengwang; Yang, Hang; Elsworth, Derek
2017-09-01
Real-time prediction by monitoring of the evolution of response variables is a central goal in predicting rock failure. A linear relation Ω˙Ω¨-1 = C(tf - t) has been developed to describe the time to failure, where Ω represents a response quantity, C is a constant and tf represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data deviating significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan
2012-02-01
Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of the treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system,more » based on the product of three parameters (severity, frequency of occurrence and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong Monitor Unit calculation and incorrect data entry at treatment console. Potential causes of failure included shield displacement, human errors, such as underestimation of CTV extension, mainly because of lack of adequate training and time pressures, failure in the communication between operators, and machine malfunctioning. The main effects of failure were represented by CTV underdose, wrong dose distribution and/or delivery, unintended normal tissue irradiation. As additional safety measures, the utilization of a dedicated staff for IORT, double-checking of MU calculation and data entry and finally implementation of in vivo dosimetry were suggested. Conclusions: FMEA appeared as a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT lead to identify three safety measures for risk mitigation.« less
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, has become a function of reliability. The primitive variables like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials
NASA Astrophysics Data System (ADS)
Luo, Wen; Bažant, Zdeněk P.
2017-12-01
Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10-6 grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10-6 failure probability is about 11% higher, while at fixed load the failure probability is about 25-times higher than it is for the non-imbricated case. This is a major safety advantage of the fishnet architecture over particulate or fiber reinforced materials. There is also a strong size effect, partly similar to that of Type 1 while the curves of log-strength versus log-size for different sizes could cross each other. The predicted behavior is verified by about a million Monte Carlo simulations for each of many fishnet geometries, sizes and CoVs of link strength. In addition to the weakest-link or fiber bundle, the fishnet becomes the third analytically tractable statistical model of structural strength, and has the former two as limit cases.
Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition.
Lippa, Katherine D; Feufel, Markus A; Robinson, F Eric; Shalin, Valerie L
2017-06-01
Despite increasing prominence, little is known about the cognitive processes underlying shared decision making. To investigate these processes, we conceptualize shared decision making as a form of distributed cognition. We introduce a Decision Space Model to identify physical and social influences on decision making. Using field observations and interviews, we demonstrate that patients and physicians in both acute and chronic care consider these influences when identifying the need for a decision, searching for decision parameters, making actionable decisions Based on the distribution of access to information and actions, we then identify four related patterns: physician dominated; physician-defined, patient-made; patient-defined, physician-made; and patient-dominated decisions. Results suggests that (a) decision making is necessarily distributed between physicians and patients, (b) differential access to information and action over time requires participants to transform a distributed task into a shared decision, and (c) adverse outcomes may result from failures to integrate physician and patient reasoning. Our analysis unifies disparate findings in the medical decision-making literature and has implications for improving care and medical training.
A decentralized approach to reducing the social costs of cascading failures
NASA Astrophysics Data System (ADS)
Hines, Paul
Large cascading failures in electrical power networks come with enormous social costs. These can be direct financial costs, such as the loss of refrigerated foods in grocery stores, or more indirect social costs, such as the traffic congestion that results from the failure of traffic signals. While engineers and policy makers have made numerous technical and organizational changes to reduce the frequency and impact of large cascading failures, the existing data, as described in Chapter 2 of this work, indicate that the overall frequency and impact of large electrical blackouts in the United States are not decreasing. Motivated by the cascading failure problem, this thesis describes a new method for Distributed Model Predictive Control and a power systems application. The central goal of the method, when applied to power systems, is to reduce the social costs of cascading failures by making small, targeted reductions in load and generation and changes to generator voltage set points. Unlike some existing schemes that operate from centrally located control centers, the method is operated by software agents located at substations distributed throughout the power network. The resulting multi-agent control system is a new approach to decentralized control, combining Distributed Model Predictive Control and Reciprocal Altruism. Experimental results indicate that this scheme can in fact decrease the average size, and thus social costs, of cascading failures. Over 100 randomly generated disturbances to a model of the IEEE 300 bus test network, the method resulted in nearly an order of magnitude decrease in average event size (measured in cost) relative to cascading failure simulations without remedial control actions. Additionally, the communication requirements for the method are measured, and found to be within the bandwidth capabilities of current communications technology (on the order of 100kB/second). Experiments on several resistor networks with varying structures, including a random graph, a scale-free network and a power grid indicate that the effectiveness of decentralized control schemes, like the method proposed here, is a function of the structure of the network that is to be controlled.
A Novel Reliable WDM-PON System
NASA Astrophysics Data System (ADS)
Chen, Benyang; Gan, Chaoqin; Qi, Yongqian; Xia, Lei
2011-12-01
In this paper, a reliable Wavelength-Division-Multiplexing Passive Optical Network (WDM-PON) system is proposed. It can provide the protection against both the feeder fiber failure and the distribution fiber failure. When the fiber failure occurs, the corresponding switches in the OLT and in the ONU can switch to the protection link without affecting the users in normal status. That is to say, the protection for one ONU is independent of the other ONUs.
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims to both identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving past methods based only on a fixed driving. Copyright © 2016 Elsevier Ltd. All rights reserved.
More About Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
Edmonds, Iarina
2007-01-01
A document presents some additional information on the subject matter of "Integrated Hardware and Software for No- Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.
Dedication of emergency diesel generators` control air subsystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, M.; Myers, G.; Palumbo, M.
1994-12-31
In the spring of 1993, the need to upgrade Seabrook Station`s emergency diesel generators` (EDGs`) control air system from nonsafety related to safety related was identified. This need was identified as a result of questions raised by the US Nuclear Regulatory Commission, which was conducting an Electrical Distribution Safety Functional Inspection at Seabrook at that time. The specific reason for the reassignment of safety classification was recognition that failure of the control air supply to the EDGs` jacket cooling water temperature control valves could cause overcooling of the EDGs, which potentially could result in EDG failure during long-term operation. Thismore » paper addresses how the installed control air system was upgraded to safety related using Seabrook`s Commercial Grade Dedication (CGD) Program and how, by using the dedication skills obtained over the past few years, it was done at minimal cost.« less
PEMFC MEA and System Design Considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knights, Shanna; Bashyam, Rajesh; He, Ping
2011-07-01
Proton exchange membrane fuel cells (PEMFCs) are being developed and sold commercially for multiple near term markets. Ballard Power Systems is focused on the near term markets of backup power, distributed generation, materials handling, and buses. Significant advances have been made in cost and durability of fuel cell products. Improved tolerance to a wide range of system operation and environmental noises will enable increased viability across a broad range of applications. In order to apply the most effective membrane electrode assembly (MEA) design for each market, the system requirements and associated MEA failures must be well understood. The failure modesmore » associated with the electrodes and membrane degradation are discussed with respect to associated system operation and mitigating approaches. A few key system considerations that influence MEA design include expected fuel quality, balance-of-plant materials, time under idle or open circuit operation, and start-up and shut-down conditions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riesen, Rolf E.; Bridges, Patrick G.; Stearley, Jon R.
Next-generation exascale systems, those capable of performing a quintillion (10{sup 18}) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systemsmore » due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoint) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches on a wide range of parameters. These results, which cover of number of high-performance computing capability workloads, different failure distributions, hardware mean time to failures, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.« less
NASA Astrophysics Data System (ADS)
Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.
2018-01-01
Large alpine rock slopes undergo long-term evolution in paraglacial to postglacial environments. Rock mass weakening and increased permeability associated with the progressive failure of deglaciated slopes promote the development of potentially catastrophic rockslides. We captured the entire life cycle of alpine slopes in one damage-based, time-dependent 2-D model of brittle creep, including deglaciation, damage-dependent fluid occurrence, and rock mass property upscaling. We applied the model to the Spriana rock slope (Central Alps), affected by long-term instability after Last Glacial Maximum and representing an active threat. We simulated the evolution of the slope from glaciated conditions to present day and calibrated the model using site investigation data and available temporal constraints. The model tracks the entire progressive failure path of the slope from deglaciation to rockslide development, without a priori assumptions on shear zone geometry and hydraulic conditions. Complete rockslide differentiation occurs through the transition from dilatant damage to a compacting basal shear zone, accounting for observed hydraulic barrier effects and perched aquifer formation. Our model investigates the mechanical role of deglaciation and damage-controlled fluid distribution in the development of alpine rockslides. The absolute simulated timing of rock slope instability development supports a very long "paraglacial" period of subcritical rock mass damage. After initial damage localization during the Lateglacial, rockslide nucleation initiates soon after the onset of Holocene, whereas full mechanical and hydraulic rockslide differentiation occurs during Mid-Holocene, supporting a key role of long-term damage in the reported occurrence of widespread rockslide clusters of these ages.
26 CFR 1.963-6 - Deficiency distribution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) INCOME TAXES Controlled Foreign Corporations § 1.963-6 Deficiency distribution. (a) In general... the election under section 963 applied, but only if the subsequent deficiency distribution meets the... to which such election relates does not apply for such taxable year because of the failure of such...
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methodsmore » have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.« less
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided.Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
Kelly, J Robert; Rungruanganunt, Patchnee
2016-01-01
Zirconia is being widely used, at times apparently by simply copying a metal design into ceramic. Structurally, ceramics are sensitive to both design and processing (fabrication) details. The aim of this work was to examine four computer-aided design/computer-assisted manufacture (CAD/CAM) abutments using a modified International Standards Organization (ISO) implant fatigue protocol to determine performance as a function of design and processing. Two full zirconia and two hybrid (Ti-based) abutments (n = 12 each) were tested wet at 15 Hz at a variety of loads to failure. Failure probability distributions were examined at each load, and when found to be the same, data from all loads were combined for lifetime analysis from accelerated to clinical conditions. Two distinctly different failure modes were found for both full zirconia and Ti-based abutments. One of these for zirconia has been reported clinically in the literature, and one for the Ti-based abutments has been reported anecdotally. The ISO protocol modification in this study forced failures in the abutments; no implant bodies failed. Extrapolated cycles for 10% failure at 70 N were: full zirconia, Atlantis 2 × 10(7) and Straumann 3 × 10(7); and Ti-based, Glidewell 1 × 10(6) and Nobel 1 × 10(21). Under accelerated conditions (200 N), performance differed significantly: Straumann clearly outperformed Astra (t test, P = .013), and the Glidewell Ti-base abutment also outperformed Atlantis zirconia at 200 N (Nobel ran-out; t test, P = .035). The modified ISO protocol in this study produced failures that were seen clinically. The manufacture matters; differences in design and fabrication that influence performance cannot be discerned clinically.
Podczeck, Fridrun; Newton, J Michael; Fromme, Paul
2014-12-30
Flat, round tablets may have a breaking ("score") line. Pharmacopoeial tablet breaking load tests are diametral in their design, and industrially used breaking load testers often have automatic tablet feeding systems, which position the tablets between the loading platens of the machine with the breaking lines in random orientation to the applied load. The aim of this work was to ascertain the influence of the position of the breaking line in a diametral compression test using finite element methodology (FEM) and to compare the theoretical results with practical findings using commercially produced bevel-edged, scored tablets. Breaking line test positions at an angle of 0°, 22.5°, 45°, 67.5° and 90° relative to the loading plane were studied. FEM results obtained for fully elastic and elasto-plastic tablets were fairly similar, but they highlighted large differences in stress distributions depending on the position of the breaking line. The stress values at failure were predicted to be similar for tablets tested at an angle of 45° or above, whereas at lower test angles the predicted breaking loads were up to three times larger. The stress distributions suggested that not all breaking line angles would result in clean tensile failure. Practical results, however, did not confirm the differences in the predicted breaking loads, but they confirmed differences in the way tablets broke. The results suggest that it is not advisable to convert breaking loads obtained on scored tablets into tablet tensile strength values, and comparisons between different tablets or batches should carefully consider the orientation of the breaking line with respect to the loading plane, as the failure mechanisms appear to vary. Copyright © 2014 Elsevier B.V. All rights reserved.
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
Reliability Growth in Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi- Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.
Time-dependent fiber bundles with local load sharing.
Newman, W I; Phoenix, S L
2001-02-01
Fiber bundle models, where fibers have random lifetimes depending on their load histories, are useful tools in explaining time-dependent failure in heterogeneous materials. Such models shed light on diverse phenomena such as fatigue in structural materials and earthquakes in geophysical settings. Various asymptotic and approximate theories have been developed for bundles with various geometries and fiber load-sharing mechanisms, but numerical verification has been hampered by severe computational demands in larger bundles. To gain insight at large size scales, interest has returned to idealized fiber bundle models in 1D. Such simplified models typically assume either equal load sharing (ELS) among survivors, or local load sharing (LLS) where a failed fiber redistributes its load onto its two nearest flanking survivors. Such models can often be solved exactly or asymptotically in increasing bundle size, N, yet still capture the essence of failure in real materials. The present work focuses on 1D bundles under LLS. As in previous works, a fiber has failure rate following a power law in its load level with breakdown exponent rho. Surviving fibers under fixed loads have remaining lifetimes that are independent and exponentially distributed. We develop both new asymptotic theories and new computational algorithms that greatly increase the bundle sizes that can be treated in large replications (e.g., one million fibers in thousands of realizations). In particular we develop an algorithm that adapts several concepts and methods that are well-known among computer scientists, but relatively unknown among physicists, to dramatically increase the computational speed with no attendant loss of accuracy. We consider various regimes of rho that yield drastically different behavior as N increases. For 1/2< or =rho< or =1, ELS and LLS have remarkably similar behavior (they have identical lifetime distributions at rho=1) with approximate Gaussian bundle lifetime statistics and a finite limiting mean. For rho>1 this Gaussian behavior also applies to ELS, whereas LLS behavior diverges sharply showing brittle, weakest volume behavior in terms of characteristic elements derived from critical cluster formation. For 0
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistical based distributions. Uniform distributions do not accurately represent the expected distributions since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of sample was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed for statistical assessment for steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remains insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
2016-12-01
December 2016 3. REPORT TYPE AND DATES COVERED Master’s thesis 4. TITLE AND SUBTITLE A STUDY ON THE FAILURE OF CONVENTIONAL DETERRENCE: THE CASE ...Approved for public release. Distribution is unlimited A STUDY ON THE FAILURE OF CONVENTIONAL DETERRENCE: THE CASE STUDY OF THE BOMBARDMENT OF...necessary to analyze these diverse elements comprehensively. E. RESEARCH DESIGN This thesis focuses on a single case study about the bombardment of
Cascading failures in complex networks with community structure
NASA Astrophysics Data System (ADS)
Lin, Guoqiang; di, Zengru; Fan, Ying
2014-12-01
Much empirical evidence shows that when attacked with cascading failures, scale-free or even random networks tend to collapse more extensively when the initially deleted node has higher betweenness. Meanwhile, in networks with strong community structure, high-betweenness nodes tend to be bridge nodes that link different communities, and the removal of such nodes will reduce only the connections among communities, leaving the networks fairly stable. Understanding what will affect cascading failures and how to protect or attack networks with strong community structure is therefore of interest. In this paper, we have constructed scale-free Community Networks (SFCN) and Random Community Networks (RCN). We applied these networks, along with the Lancichinett-Fortunato-Radicchi (LFR) benchmark, to the cascading-failure scenario to explore their vulnerability to attack and the relationship between cascading failures and the degree distribution and community structure of a network. The numerical results show that when the networks are of a power-law distribution, a stronger community structure will result in the failure of fewer nodes. In addition, the initial removal of the node with the highest betweenness will not lead to the worst cascading, i.e. the largest avalanche size. The Betweenness Overflow (BOF), an index that we developed, is an effective indicator of this tendency. The RCN, however, display a different result. In addition, the avalanche size of each node can be adopted as an index to evaluate the importance of the node.
A novel survivable architecture for hybrid WDM/TDM passive optical networks
NASA Astrophysics Data System (ADS)
Qiu, Yang; Chan, Chun-Kit
2014-02-01
A novel tree-ring survivable architecture, which consists of an organization of a wavelength-division-multiplexing (WDM) tree from optical line terminal (OLT) to remote nodes (RNs) and a time division multiplexing (TDM) ring in each RN, is proposed for hybrid WDM/TDM passive optical networks. By utilizing the cyclic property of arrayed waveguide gratings (AWGs) and the single-ring topology among a group of optical network units (ONUs) in the remote node, not only the feeder and distribution fibers, but also any fiber failures in the RN rings are protected simultaneously. Five-Gbit/s transmissions under both normal working and protection modes were experimentally demonstrated and a traffic restoration time was successfully measured.
NASA Astrophysics Data System (ADS)
Mahabadi, O. K.; Tatone, B. S. A.; Grasselli, G.
2014-07-01
This study investigates the influence of microscale heterogeneity and microcracks on the failure behavior and mechanical response of a crystalline rock. The thin section analysis for obtaining the microcrack density is presented. Using micro X-ray computed tomography (μCT) scanning of failed laboratory specimens, the influence of heterogeneity and, in particular, biotite grains on the brittle fracture of the specimens is discussed and various failure patterns are characterized. Three groups of numerical simulations are presented, which demonstrate the role of microcracks and the influence of μCT-based and stochastically generated phase distributions. The mechanical response, stress distribution, and fracturing process obtained by the numerical simulations are also discussed. The simulation results illustrate that heterogeneity and microcracks should be considered to accurately predict the tensile strength and failure behavior of the sample.
Chan, Kwun Chuen Gary; Wang, Mei-Cheng
2017-01-01
Recurrent event processes with marker measurements are mostly and largely studied with forward time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.
Time of non-invasive ventilation.
Nava, Stefano; Navalesi, Paolo; Conti, Giorgio
2006-03-01
Non-invasive ventilation (NIV) is a safe, versatile and effective technique that can avert side effects and complications associated with endotracheal intubation. The success of NIV relies on several factors, including the type and severity of acute respiratory failure, the underlying disease, the location of treatment, and the experience of the team. The time factor is also important. NIV is primarily used to avert the need for endotracheal intubation in patients with early-stage acute respiratory failure and post-extubation respiratory failure. It can also be used as an alternative to invasive ventilation at a more advanced stage of acute respiratory failure or to facilitate the process of weaning from mechanical ventilation. NIV has been used to prevent development of acute respiratory failure or post-extubation respiratory failure. The number of days of NIV and hours of daily use differ, depending on the severity and course of the acute respiratory failure and the timing of application. In this review article, we analyse, compare and discuss the results of studies in which NIV was applied at various times during the evolution of acute respiratory failure.
Selecting statistical model and optimum maintenance policy: a case study of hydraulic pump.
Ruhi, S; Karim, M R
2016-01-01
Proper maintenance policy can play a vital role for effective investigation of product reliability. Every engineered object such as product, plant or infrastructure needs preventive and corrective maintenance. In this paper we look at a real case study. It deals with the maintenance of hydraulic pumps used in excavators by a mining company. We obtain the data that the owner had collected and carry out an analysis and building models for pump failures. The data consist of both failure and censored lifetimes of the hydraulic pump. Different competitive mixture models are applied to analyze a set of maintenance data of a hydraulic pump. Various characteristics of the mixture models, such as the cumulative distribution function, reliability function, mean time to failure, etc. are estimated to assess the reliability of the pump. Akaike Information Criterion, adjusted Anderson-Darling test statistic, Kolmogrov-Smirnov test statistic and root mean square error are considered to select the suitable models among a set of competitive models. The maximum likelihood estimation method via the EM algorithm is applied mainly for estimating the parameters of the models and reliability related quantities. In this study, it is found that a threefold mixture model (Weibull-Normal-Exponential) fits well for the hydraulic pump failures data set. This paper also illustrates how a suitable statistical model can be applied to estimate the optimum maintenance period at a minimum cost of a hydraulic pump.
Flaw-induced plastic-flow dynamics in bulk metallic glasses under tension
Chen, S. H.; Yue, T. M.; Tsui, C. P.; Chan, K. C.
2016-01-01
Inheriting amorphous atomic structures without crystalline lattices, bulk metallic glasses (BMGs) are known to have superior mechanical properties, such as high strength approaching the ideal value, but are susceptible to catastrophic failures. Understanding the plastic-flow dynamics of BMGs is important for achieving stable plastic flow in order to avoid catastrophic failures, especially under tension, where almost all BMGs demonstrate limited plastic flow with catastrophic failure. Previous findings have shown that the plastic flow of BMGs displays critical dynamics under compression tests, however, the plastic-flow dynamics under tension are still unknown. Here we report that power-law critical dynamics can also be achieved in the plastic flow of tensile BMGs by introducing flaws. Differing from the plastic flow under compression, the flaw-induced plastic flow under tension shows an upward trend in the amplitudes of the load drops with time, resulting in a stable plastic-flow stage with a power-law distribution of the load drop. We found that the flaw-induced plastic flow resulted from the stress gradients around the notch roots, and the stable plastic-flow stage increased with the increase of the stress concentration factor ahead of the notch root. The findings are potentially useful for predicting and avoiding the catastrophic failures in tensile BMGs by tailoring the complex stress fields in practical structural-applications. PMID:27779221
Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?
NASA Astrophysics Data System (ADS)
Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.
2017-10-01
Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
Risk-based decision making to manage water quality failures caused by combined sewer overflows
NASA Astrophysics Data System (ADS)
Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.
2017-12-01
Regulatory authorities set certain environmental permit for water utilities such that the combined sewer overflows (CSO) managed by these companies conform to the regulations. These utility companies face the risk of paying penalty or negative publicity in case they breach the environmental permit. These risks can be addressed by designing appropriate solutions such as investing in additional infrastructure which improve the system capacity and reduce the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models. Hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives such as the construction of a storage tank at the CSO and the reduction in the flow contribution of catchment surfaces are selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results into a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability as defined in Rockafellar & Royset (2010), is estimated. Buffered failure probability (pbf) is a conservative estimate of failure probability (pf), however, unlike failure probability, it includes information about the upper tail of the distribution. A pareto-optimal set of solutions is obtained by performing mean- pbf optimization. The effectiveness of using buffered failure probability compared to the failure probability is tested by comparing the solutions obtained by using mean-pbf and mean-pf optimizations.
NASA Astrophysics Data System (ADS)
Taylor, Gabriel James
The failure of electrical cables exposed to severe thermal fire conditions are a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection which resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of the data. Circuit characteristics affecting fire-induced electrical cable failure modes have been evaluated. In addition, thermal failure temperatures corresponding to cable functional failures have been evaluated to develop realistic single point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data was used to evaluate the prediction capabilities of a one-dimension conductive heat transfer model used to predict cable failure.
NASA Astrophysics Data System (ADS)
Arnone, E.; Noto, L. V.; Dialynas, Y. G.; Caracciolo, D.; Bras, R. L.
2015-12-01
This work presents the capabilities of a model, i.e. the tRIBS-VEGGIE-Landslide, in two different versions, i.e. developed within a probabilistic framework and coupled with a root cohesion module. The probabilistic model treats geotechnical and soil retention curve parameters as random variables across the basin and estimates theoretical probability distributions of slope stability and the associated "factor of safety" commonly used to describe the occurrence of shallow landslides. The derived distributions are used to obtain the spatio-temporal dynamics of probability of failure, conditioned on soil moisture dynamics at each watershed location. The framework has been tested in the Luquillo Experimental Forest (Puerto Rico) where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of probability of failure. Another version of the model accounts for the additional cohesion exerted by vegetation roots. The approach is to use the Fiber Bundle Model (FBM) framework that allows for the evaluation of the root strength as a function of the stress-strain relationships of bundles of fibers. The model requires the knowledge of the root architecture to evaluate the additional reinforcement from each root diameter class. The root architecture is represented with a branching topology model based on Leonardo's rule. The methodology has been tested on a simple case study to explore the role of both hydrological and mechanical root effects. Results demonstrate that the effects of root water uptake can at times be more significant than the mechanical reinforcement; and that the additional resistance provided by roots depends heavily on the vegetation root structure and length.
Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik
2013-06-01
Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding to which markets a currently available product prototype should be targeted. Copyright © 2013. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Zhang, Zhong
In this work, motivated by the need to coordinate transmission maintenance scheduling among a multiplicity of self-interested entities in restructured power industry, a distributed decision support framework based on multiagent negotiation systems (MANS) is developed. An innovative risk-based transmission maintenance optimization procedure is introduced. Several models for linking condition monitoring information to the equipment's instantaneous failure probability are presented, which enable quantitative evaluation of the effectiveness of maintenance activities in terms of system cumulative risk reduction. Methodologies of statistical processing, equipment deterioration evaluation and time-dependent failure probability calculation are also described. A novel framework capable of facilitating distributed decision-making through multiagent negotiation is developed. A multiagent negotiation model is developed and illustrated that accounts for uncertainty and enables social rationality. Some issues of multiagent negotiation convergence and scalability are discussed. The relationships between agent-based negotiation and auction systems are also identified. A four-step MAS design methodology for constructing multiagent systems for power system applications is presented. A generic multiagent negotiation system, capable of inter-agent communication and distributed decision support through inter-agent negotiations, is implemented. A multiagent system framework for facilitating the automated integration of condition monitoring information and maintenance scheduling for power transformers is developed. Simulations of multiagent negotiation-based maintenance scheduling among several independent utilities are provided. It is shown to be a viable alternative solution paradigm to the traditional centralized optimization approach in today's deregulated environment. This multiagent system framework not only facilitates the decision-making among competing power system entities, but also provides a tool to use in studying competitive industry relative to monopolistic industry.
Moon, Young Joon; Kim, Hong-Wook; Kim, Jin Bum; Kim, Hyung Joon; Chang, Young-Seop
2015-10-01
To evaluate the distribution of ureteral stones and to determine their characteristics and expulsion rate based on their location. We retrospectively reviewed computed tomography (CT) findings of 246 patients who visited our Emergency Department (ED) for renal colic caused by unilateral ureteral stones between January 2013 and April 2014. Histograms were constructed to plot the distribution of stones based on initial CT findings. Data from 144 of the 246 patients who underwent medical expulsive therapy (MET) for 2 weeks were analyzed to evaluate the factors responsible for the stone distribution and expulsion. The upper ureter and ureterovesical junction (UVJ) were 2 peak locations at which stones initially lodged. Stones lodged at the upper ureter and ureteropelvic junction (group A) had a larger longitudinal diameter (4.21 mm vs. 3.56 mm, p=0.004) compared to those lodged at the lower ureter and UVJ (group B). The expulsion rate was 75.6% and 94.9% in groups A and B, respectively. There was no significant difference in the time interval from initiation of renal colic to arrival at the ED between groups A and B (p=0.422). Stone diameter was a significant predictor of MET failure (odds ratio [OR], 1.795; p=0.005) but the initial stone location was not (OR, 0.299; p=0.082). The upper ureter and UVJ are 2 peak sites at which stones lodge. For stone size 10 mm or less, initial stone lodge site is not a significant predictor of MET failure in patients who have no previous history of active stone treatment in the ureter.
Theater Logistics Management: A Case for a Joint Distribution Solution
2008-03-15
Multinational (JIIM) operations necessitate creating joint-multinational-based distribution management centers which effectively manage materiel...in the world. However, as the operation continued, the inherent weakness of the intra-theater logistical distribution management link became clear...compounded the distribution management problem. The common thread between each of the noted GAO failures is the lack of a defined joint, theater
A R, Subhashree
2014-06-01
Red cell distribution width (RDW) is a red cell measurement given by fully automated hematology analyzers. It is a measure of heterogeneity in the size of circulating erythrocytes. Studies have shown that it is a prognostic marker in non - anemic diabetic patients with symptomatic cardiovascular disease but its correlation with cardiac failure in diabetics has not been studied so far. Moreover, studies have also shown that a higher RDW may reflect an underlying inflammatory state. Since Diabetes is a pro inflammatory state there is a possibility that it might have an influence on the RDW values even when there is no cardiac failure, but research data on this aspect is lacking. B-type natriuretic peptide (BNP) is a proven marker for cardiac failure whose values are comparable with echo cardio graphic findings in assessing the left ventricular dysfunction. This study aimed to find out the correlation between RDW% and serum BNP levels in Diabetics with heart failure (cases) when compared to those without failure (controls). Further, we compared the RDW % values of the cases with controls. Settings and Design : The study was approved by institutional ethical and research committee. A cross-sectional study was conducted with patients attending the Diabetes clinic of a tertiary care hospital in Chennai, India, during the period of October to December 2013. Hundred known cases of type II Diabetes mellitus attending Diabetes centre of the Hospital, with clinical and Echo cardio graphic features of cardiac failure were included as cases. Hundred age and gender matched diabetics with negative history of cardiovascular disease and with normal Echo cardio graphic features were included as controls. Informed consent was obtained from all the cases and controls. Demographic data and clinical history were gathered from all the cases and controls by using a standardized self - administered questionnaire. Biochemical and hematological parameters which included Fasting and Postprandial blood sugar, Glycosylated hemoglobin, Complete Blood count including RDW and serum BNP were performed for all the cases and controls. RESULTS were tabulated and analysed using SPSS 20.0 version A statistically significant correlation (p<.001) was found between Red cell Distribution Width % and Serum B type Natriuretic Peptide values in the cases. Further, RDW% showed a statistically significant difference between cases and controls. RDW% can be used as a screening parameter to identify cardiac failure in Diabetic patients similar to non-diabetic cardiac failure. RDW% values are significantly higher in cases of Diabetes with failure in comparison to uncomplicated Diabetes.
Microstructure and Mechanical Performance of Friction Stir Spot-Welded Aluminum-5754 Sheets
NASA Astrophysics Data System (ADS)
Pathak, N.; Bandyopadhyay, K.; Sarangi, M.; Panda, Sushanta Kumar
2013-01-01
Friction stir spot welding (FSSW) is a recent trend of joining light-weight sheet metals while fabricating automotive and aerospace body components. For the successful application of this solid-state welding process, it is imperative to have a thorough understanding of the weld microstructure, mechanical performance, and failure mechanism. In the present study, FSSW of aluminum-5754 sheet metal was tried using tools with circular and tapered pin considering different tool rotational speeds, plunge depths, and dwell times. The effects of tool design and process parameters on temperature distribution near the sheet-tool interface, weld microstructure, weld strength, and failure modes were studied. It was found that the peak temperature was higher while welding with a tool having circular pin compared to tapered pin, leading to a bigger dynamic recrystallized stir zone (SZ) with a hook tip bending towards the upper sheet and away from the keyhole. Hence, higher lap shear separation load was observed in the welds made from circular pin compared to those made from tapered pin. Due to influence of size and hardness of SZ on crack propagation, three different failure modes of weld nugget were observed through optical cross-sectional micrograph and SEM fractographs.
A Framework for Debugging Geoscience Projects in a High Performance Computing Environment
NASA Astrophysics Data System (ADS)
Baxter, C.; Matott, L.
2012-12-01
High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time- consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
Modern methodology of designing target reliability into rotating mechanical components
NASA Technical Reports Server (NTRS)
Kececioglu, D. B.; Chester, L. B.
1973-01-01
Experimentally determined distributional cycles-to-failure versus maximum alternating nominal strength (S-N) diagrams, and distributional mean nominal strength versus maximum alternating nominal strength (Goodman) diagrams are presented. These distributional S-N and Goodman diagrams are for AISI 4340 steel, R sub c 35/40 hardness, round, cylindrical specimens 0.735 in. in diameter and 6 in. long with a circumferential groove 0.145 in. radius for a theoretical stress concentration = 1.42 and 0.034 in. radius for a stress concentration = 2.34. The specimens are subjected to reversed bending and steady torque in specially built, three complex-fatigue research machines. Based on these results, the effects on the distributional S-N and Goodman diagrams and on service life of superimposing steady torque on reversed bending are established, as well as the effect of various stress concentrations. In addition a computer program for determining the three-parameter Weibull distribution representing the cycles-to-failure data, and two methods for calculating the reliability of components subjected to cumulative fatigue loads are given.
Multiple comparisons permutation test for image based data mining in radiotherapy.
Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel
2013-12-23
: Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem. A test statistic Tmax was proposed that summarizes the differences between the images into a single value and a permutation procedure was employed to compute the adjusted p-value. We demonstrated the method in two retrospective studies: a prostate study that relates 3D dose distributions to failure, and an esophagus study that relates 2D surface dose distributions of the esophagus to acute esophagus toxicity. As a result, we were able to identify suspicious regions that are significantly associated with failure (prostate study) or toxicity (esophagus study). Permutation testing allows direct comparison of images from different patient categories and is a useful tool for data mining in radiotherapy.
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degrated VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight of neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
Nonparametric analysis of bivariate gap time with competing risks.
Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng
2016-09-01
This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrence disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall's tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring for the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. © 2016, The International Biometric Society.
Fluctuations of healthy and unhealthy heartbeat intervals
NASA Astrophysics Data System (ADS)
Lan, Boon Leong; Toda, Mikito
2013-04-01
We show that the RR-interval fluctuations, defined as the difference between successive natural-logarithm of the RR interval, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator —the scale parameter of the stable distribution— is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects —this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
Performance results of cooperating expert systems in a distributed real-time monitoring system
NASA Technical Reports Server (NTRS)
Schwuttke, U. M.; Veregge, J. R.; Quan, A. G.
1994-01-01
There are numerous definitions for real-time systems, the most stringent of which involve guaranteeing correct system response within a domain-dependent or situationally defined period of time. For applications such as diagnosis, in which the time required to produce a solution can be non-deterministic, this requirement poses a unique set of challenges in dynamic modification of solution strategy that conforms with maximum possible latencies. However, another definition of real time is relevant in the case of monitoring systems where failure to supply a response in the proper (and often infinitesimal) amount of time allowed does not make the solution less useful (or, in the extreme example of a monitoring system responsible for detecting and deflecting enemy missiles, completely irrelevant). This more casual definition involves responding to data at the same rate at which it is produced, and is more appropriate for monitoring applications with softer real-time constraints, such as interplanetary exploration, which results in massive quantities of data transmitted at the speed of light for a number of hours before it even reaches the monitoring system. The latter definition of real time has been applied to the MARVEL system for automated monitoring and diagnosis of spacecraft telemetry. An early version of this system has been in continuous operational use since it was first deployed in 1989 for the Voyager encounter with Neptune. This system remained under incremental development until 1991 and has been under routine maintenance in operations since then, while continuing to serve as an artificial intelligence (AI) testbed in the laboratory. The system architecture has been designed to facilitate concurrent and cooperative processing by multiple diagnostic expert systems in a hierarchical organization. The diagnostic modules adhere to concepts of data-driven reasoning, constrained but complete nonoverlapping domains, metaknowledge of global consequences of anomalous data, hierarchical reporting of problems that extend beyond a single domain, and shared responsibility for problems that overlap domains. The system enables efficient diagnosis of complex system failures in real-time environments with high data volumes and moderate failure rates, as indicated by extensive performance measurements.
Lovelock, D Michael; Zhang, Zhigang; Jackson, Andrew; Keam, Jennifer; Bekelman, Justin; Bilsky, Mark; Lis, Eric; Yamada, Yoshiya
2010-07-15
In the setting of high-dose single-fraction image-guided radiotherapy of spine metastases, the delivered dose is hypothesized to be a significant factor in local control. We investigated the dependence of local control on measures of dose insufficiency. The minimum doses received by the hottest 100%, 98%, and 95% (D(min), D(98), and D(95)) of the gross target volume (GTV) were computed for 91 consecutively treated lesions observed in 79 patients. Prescribed doses of 18-24 Gy were delivered in a single fraction. The spinal cord and cauda equina were constrained to a maximum dose of 12-14 Gy and 16 Gy, respectively. A rank-sum test was used to assess the differences between radiographic local failure and local control. With a median follow-up of 18 months, seven local failures have occurred. The distributions of GTV D(min), D(98), and D(95) for treatments resulting in local failure were found to be statistically different from the corresponding distributions of the patient group as a whole. Taking no account of histology, p values calculated for D(min), D(98), and D(95) were 0.004, 0.012, and 0.031, respectively. No correlations between local failure and target volume or between local failure and anatomic location were found. The results indicate that D(min), D(98), and D(95) may be important risk factors for local failure. No local failures in any histology were observed when D(min) was >15 Gy, suggesting that this metric may be an important predictor of local control. Copyright 2010 Elsevier Inc. All rights reserved.
Lovelock, D. Michael; Zhang, Zhigang; Jackson, Andrew; Keam, Jennifer; Bekelman, Justin; Bilsky, Mark; Lis, Eric; Yamada, Yoshiya
2011-01-01
Purpose In the setting of high-dose single-fraction image-guided radiotherapy of spine metastases, the delivered dose is hypothesized to be a significant factor in local control. We investigated the dependence of local control on measures of dose insufficiency. Methods and Materials The minimum doses received by the hottest 100%, 98%, and 95% (Dmin, D98, and D95) of the gross target volume (GTV) were computed for 91 consecutively treated lesions observed in 79 patients. Prescribed doses of 18–24 Gy were delivered in a single fraction. The spinal cord and cauda equina were constrained to a maximum dose of 12–14 Gy and 16 Gy, respectively. A rank-sum test was used to assess the differences between radiographic local failure and local control. Results With a median follow-up of 18 months, seven local failures have occurred. The distributions of GTV Dmin, D98, and D95 for treatments resulting in local failure were found to be statistically different from the corresponding distributions of the patient group as a whole. Taking no account of histology, p values calculated for Dmin, D98, and D95 were 0.004, 0.012, and 0.031, respectively. No correlations between local failure and target volume or between local failure and anatomic location were found. Conclusions The results indicate that Dmin, D98, and D95 may be important risk factors for local failure. No local failures in any histology were observed when Dmin was >15 Gy, suggesting that this metric may be an important predictor of local control. PMID:20350795
Paudyal, Anju; Dewan, Sukriti; Ikie, Cindy; Whalley, Benjamin J; de Tombe, Pieter P.
2016-01-01
Key points The present study investigated the mechanism associated with impaired cardiac mechanosensing that leads to heart failure by examining the factors regulating muscle LIM protein subcellular distribution in myocytes.In myocytes, muscle LIM protein subcellular distribution is regulated by cell contractility rather than passive stretch via heme oxygenase‐1 and histone deacetylase signalling. The result of the present study provide new insights into mechanotransduction in cardiac myocytes.Myocyte mechanosensitivity, as indicated by the muscle LIM protein ratio, is also correlated with cardiac function in the transition to failure in a guinea‐pig model of disease. This shows that the loss mechanosensitivity plays an important role during the transition to failure in the heart.The present study provides the first indication that mechanosensing could be modified pharmacologically during the transition to heart failure. Abstract Impaired mechanosensing leads to heart failure and a decreased ratio of cytoplasmic to nuclear CSRP3/muscle LIM protein (MLP ratio) is associated with a loss of mechanosensitivity. In the present study, we tested whether passive or active stress/strain was important in modulating the MLP ratio and determined whether this correlated with heart function during the transition to failure. We exposed cultured neonatal rat myocytes to a 10% cyclic mechanical stretch at 1 Hz, or electrically paced myocytes at 6.8 V (1 Hz) for 48 h. The MLP ratio decreased by 50% (P < 0.05, n = 4) only in response to electrical pacing, suggesting impaired mechanosensitivity. Inhibition of contractility with 10 μm blebbistatin resulted in an ∼3‐fold increase in the MLP ratio (n = 8, P < 0.05), indicating that myocyte contractility regulates nuclear MLP. Inhibition of histone deacetylase (HDAC) signalling with trichostatin A increased nuclear MLP following passive stretch, suggesting that HDACs block MLP nuclear accumulation. Inhibition of heme oxygenase1 (HO‐1) activity with protoporphyrin IX zinc(II) blocked MLP nuclear accumulation. To examine how mechanosensitivity changes during the transition to heart failure, we studied a guinea‐pig model of angiotensin II infusion (400 ng kg–1 min–1) over 12 weeks. Using subcellular fractionation, we showed that the MLP ratio increased by 88% (n = 4, P < 0.01) during compensated hypertrophy but decreased significantly during heart failure (P < 0.001, n = 4). The MLP ratio correlated significantly with the E/A ratio (r = 0.71, P < 0.01, n = 12), a clinical measure of diastolic function. These data indicate for the first time that myocyte mechanosensitivity as indicated by the MLP ratio is regulated primarily by myocyte contractility via HO‐1 and HDAC signalling. PMID:26847743
NASA Technical Reports Server (NTRS)
Shumka, A.; Sollock, S. G.
1981-01-01
This paper represents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis on the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space and very different failure modes. A life cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and it is shown to be in good agreement with the field data.
Point-of-care diagnostics: will the hurdles be overcome this time?
Huckle, David
2006-07-01
Point-of-care diagnostics have been proposed as the latest development in clinical diagnostics several times in the last 30 years; however, they have not yet fully developed into a business sector to match the projections. This perspective examines the reasons for past failures and the failure of technology to meet user needs. Advances have taken place in the last few years that effectively remove technology as a barrier to the development of point-of-care testing. Even regulatory issues regarding how products are developed and claims supported have been absorbed, understood and now accepted. The emphasis here is on the possible favorable aspects that are novel this time around. These changes have arisen as a result of the situation with global healthcare economics and the pressure from patients to be treated more like customers. The final hurdles relate to the conflict between diagnosis with the patient present and treated as soon as the point-of-care result is available and the entrenched positions of the central laboratory, the suppliers and their established distribution chains, and the way in which healthcare budgets are allocated. The ultimate hurdle that encapsulates all of these issues is reimbursement, which is the final barrier to a significant point-of-care diagnostics market--without reimbursement there will be no market.
Climate change and the global pattern of moraine-dammed glacial lake outburst floods
NASA Astrophysics Data System (ADS)
Harrison, Stephan; Kargel, Jeffrey S.; Huggel, Christian; Reynolds, John; Shugar, Dan H.; Betts, Richard A.; Emmer, Adam; Glasser, Neil; Haritashya, Umesh K.; Klimeš, Jan; Reinhardt, Liam; Schaub, Yvonne; Wiltshire, Andy; Regmi, Dhananjay; Vilímek, Vít
2018-04-01
Despite recent research identifying a clear anthropogenic impact on glacier recession, the effect of recent climate change on glacier-related hazards is at present unclear. Here we present the first global spatio-temporal assessment of glacial lake outburst floods (GLOFs) focusing explicitly on lake drainage following moraine dam failure. These floods occur as mountain glaciers recede and downwaste. GLOFs can have an enormous impact on downstream communities and infrastructure. Our assessment of GLOFs associated with the rapid drainage of moraine-dammed lakes provides insights into the historical trends of GLOFs and their distributions under current and future global climate change. We observe a clear global increase in GLOF frequency and their regularity around 1930, which likely represents a lagged response to post-Little Ice Age warming. Notably, we also show that GLOF frequency and regularity - rather unexpectedly - have declined in recent decades even during a time of rapid glacier recession. Although previous studies have suggested that GLOFs will increase in response to climate warming and glacier recession, our global results demonstrate that this has not yet clearly happened. From an assessment of the timing of climate forcing, lag times in glacier recession, lake formation and moraine-dam failure, we predict increased GLOF frequencies during the next decades and into the 22nd century.
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
ERIC Educational Resources Information Center
Colander, David
2003-01-01
Argues that microeconomics principles courses are structured around an approach to policy that avoids many controversial but central policy issues including: the interplay of moral issues and efficiency; questions of consumer sovereignty; and questions of the interrelation between measures of efficiency and income distribution. Suggests a dual…
Independent Orbiter Assessment (IOA): Analysis of the orbital maneuvering system
NASA Technical Reports Server (NTRS)
Prust, C. D.; Paul, D. J.; Burkemper, V. J.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbital Maneuvering System (OMS) hardware are documented. The OMS provides the thrust to perform orbit insertion, orbit circularization, orbit transfer, rendezvous, and deorbit. The OMS is housed in two independent pods located one on each side of the tail and consists of the following subsystems: Helium Pressurization; Propellant Storage and Distribution; Orbital Maneuvering Engine; and Electrical Power Distribution and Control. The IOA analysis process utilized available OMS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluted and analyzed for possible failure modes and effects. Criticality was asigned based upon the severity of the effect for each failure mode.
Introduction to Concurrent Engineering: Electronic Circuit Design and Production Applications
1992-09-01
STD-1629. Failure mode distribution data for many different types of parts may be found in RAC publication FMD -91. FMEA utilizes inductive logic in a...contrasts with a Fault Tree Analysis ( FTA ) which utilizes deductive logic in a "top down" approach. In FTA , a system failure is assumed and traced down...Analysis ( FTA ) is a graphical method of risk analysis used to identify critical failure modes within a system or equipment. Utilizing a pictorial approach
1989-12-01
Bose, Ohio Appni’-sd for puauc t&cw 189 12 29 023 I [ AFIT /GAE /ENY/ 89D-06 A STUDY OF FAILURE CHARACTERISTICS IN THERMOPLASTIC COMPOSITE LAMINATES DUE...distribution unlimited I ,I AFIT / GAE / ENY /89D-06 A STUDY OF FAILURE CHARACTERISTICS IN THERMOPLASTIC COMPOSITE LAMINATES DUE TO AN ECCENTRIC CIRCULAR...the Flight Dynamics Laboratory. Dr. Sandhu provided me with an insight into composite materials, and testing techniques, that will benefit me for a
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
Power System Information Delivering System Based on Distributed Object
NASA Astrophysics Data System (ADS)
Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji
In recent years, improvement in computer performance and development of computer network technology or the distributed information processing technology has a remarkable thing. Moreover, the deregulation is starting and will be spreading in the electric power industry in Japan. Consequently, power suppliers are required to supply low cost power with high quality services to customers. Corresponding to these movements the authors have been proposed SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) system based on distributed object technology, which offers the flexibility and expandability adapting those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power systems information such as electric power failures to the customers using Internet and distributed object technology. This system is new type of SCADA system which monitors failure of power transmission system and power distribution system with geographic information integrated way.
Analysis and experiments for composite laminates with holes and subjected to 4-point bending
NASA Technical Reports Server (NTRS)
Shuart, M. J.; Prasad, C. B.
1990-01-01
Analytical and experimental results are presented for composite laminates with a hole and subjected to four-point bending. A finite-plate analysis is used to predict moment and strain distributions for six-layer quasi-isotropic laminates and transverse-ply laminates. Experimental data are compared with the analytical results. Experimental and analytical strain results show good agreement for the quasi-isotropic laminates. Failure of the two types of composite laminates is described, and failure strain results are presented as a function of normalized hole diameter. The failure results suggest that the initial failure mechanism for laminates subjected to four-point bending are similar to the initial failure mechanisms for corresponding laminates subjected to uniaxial inplane loadings.
A Nonlinear Viscoelastic Model for Ceramics at High Temperatures
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Panoskaltsis, Vassilis P.; Gasparini, Dario A.; Choi, Sung R.
2002-01-01
High-temperature creep behavior of ceramics is characterized by nonlinear time-dependent responses, asymmetric behavior in tension and compression, and nucleation and coalescence of voids leading to creep rupture. Moreover, creep rupture experiments show considerable scatter or randomness in fatigue lives of nominally equal specimens. To capture the nonlinear, asymmetric time-dependent behavior, the standard linear viscoelastic solid model is modified. Nonlinearity and asymmetry are introduced in the volumetric components by using a nonlinear function similar to a hyperbolic sine function but modified to model asymmetry. The nonlinear viscoelastic model is implemented in an ABAQUS user material subroutine. To model the random formation and coalescence of voids, each element is assigned a failure strain sampled from a lognormal distribution. An element is deleted when its volumetric strain exceeds its failure strain. Element deletion has been implemented within ABAQUS. Temporal increases in strains produce a sequential loss of elements (a model for void nucleation and growth), which in turn leads to failure. Nonlinear viscoelastic model parameters are determined from uniaxial tensile and compressive creep experiments on silicon nitride. The model is then used to predict the deformation of four-point bending and ball-on-ring specimens. Simulation is used to predict statistical moments of creep rupture lives. Numerical simulation results compare well with results of experiments of four-point bending specimens. The analytical model is intended to be used to predict the creep rupture lives of ceramic parts in arbitrary stress conditions.
Identification of priorities for medication safety in neonatal intensive care.
Kunac, Desireé L; Reith, David M
2005-01-01
Although neonates are reported to be at greater risk of medication error than infants and older children, little is known about the causes and characteristics of error in this patient group. Failure mode and effects analysis (FMEA) is a technique used in industry to evaluate system safety and identify potential hazards in advance. The aim of this study was to identify and prioritize potential failures in the neonatal intensive care unit (NICU) medication use process through application of FMEA. Using the FMEA framework and a systems-based approach, an eight-member multidisciplinary panel worked as a team to create a flow diagram of the neonatal unit medication use process. Then by brainstorming, the panel identified all potential failures, their causes and their effects at each step in the process. Each panel member independently rated failures based on occurrence, severity and likelihood of detection to allow calculation of a risk priority score (RPS). The panel identified 72 failures, with 193 associated causes and effects. Vulnerabilities were found to be distributed across the entire process, but multiple failures and associated causes were possible when prescribing the medication and when preparing the drug for administration. The top ranking issue was a perceived lack of awareness of medication safety issues (RPS score 273), due to a lack of medication safety training. The next highest ranking issues were found to occur at the administration stage. Common potential failures related to errors in the dose, timing of administration, infusion pump settings and route of administration. Perceived causes were multiple, but were largely associated with unsafe systems for medication preparation and storage in the unit, variable staff skill level and lack of computerised technology. Interventions to decrease medication-related adverse events in the NICU should aim to increase staff awareness of medication safety issues and focus on medication administration processes.
45 CFR 1303.7 - Effect of failure to file or serve documents in a timely manner.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Effect of failure to file or serve documents in a... PROSPECTIVE DELEGATE AGENCIES General § 1303.7 Effect of failure to file or serve documents in a timely manner... requisite deadlines or time frames if it exceeds them by any amount. (d) The time to file an appeal...
High Risk of Graft Failure in Emerging Adult Heart Transplant Recipients.
Foster, B J; Dahhou, M; Zhang, X; Dharnidharka, V; Ng, V; Conway, J
2015-12-01
Emerging adulthood (17-24 years) is a period of high risk for graft failure in kidney transplant. Whether a similar association exists in heart transplant recipients is unknown. We sought to estimate the relative hazards of graft failure at different current ages, compared with patients between 20 and 24 years old. We evaluated 11 473 patients recorded in the Scientific Registry of Transplant Recipients who received a first transplant at <40 years old (1988-2013) and had at least 6 months of graft function. Time-dependent Cox models were used to estimate the association between current age (time-dependent) and failure risk, adjusted for time since transplant and other potential confounders. Failure was defined as death following graft failure or retransplant; observation was censored at death with graft function. There were 2567 failures. Crude age-specific graft failure rates were highest in 21-24 year olds (4.2 per 100 person-years). Compared to individuals with the same time since transplant, 21-24 year olds had significantly higher failure rates than all other age periods except 17-20 years (HR 0.92 [95%CI 0.77, 1.09]) and 25-29 years (0.86 [0.73, 1.03]). Among young first heart transplant recipients, graft failure risks are highest in the period from 17 to 29 years of age. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.
1997-01-01
Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance on protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
The size distributions of fragments ejected at a given velocity from impact craters
NASA Technical Reports Server (NTRS)
O'Keefe, John D.; Ahrens, Thomas J.
1987-01-01
The mass distribution of fragments that are ejected at a given velocity for impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad, e.g., 68 percent of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times a mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes, additional comminution of ejecta occurs after the upward initial shock has passed in the process of the ejecta velocity vector rotating from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.
Modern society relies on low-cost reliable electrical power, both to maintain industry, as well as provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as facilitating faster restoration after an outage, various aspects of improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and to have improved insight into the operations to detect failures or mis-operations before they become critical. Breaking themore » system into smaller sections of microgrid islands, power can be maintained in smaller areas where distribution generation and energy storage resources are still available, but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid connected a majority of the time and implementing and operating a microgrid is much different than when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using advanced metering infrastructure and other distribution-level measurements to create a three-phase, unbalanced distribution state estimation approach. With distribution-level state estimation, the grid can be operated more efficiently, and outages or equipment failures can be caught faster, improving the overall resilience and reliability of the grid.« less
Deformation Failure Characteristics of Coal Body and Mining Induced Stress Evolution Law
Wen, Zhijie; Wen, Jinhao; Shi, Yongkui; Jia, Chuanyang
2014-01-01
The results of the interaction between coal failure and mining pressure field evolution during mining are presented. Not only the mechanical model of stope and its relative structure division, but also the failure and behavior characteristic of coal body under different mining stages are built and demonstrated. Namely, the breaking arch and stress arch which influence the mining area are quantified calculated. A systematic method of stress field distribution is worked out. All this indicates that the pore distribution of coal body with different compressed volume has fractal character; it appears to be the linear relationship between propagation range of internal stress field and compressed volume of coal body and nonlinear relationship between the range of outburst coal mass and the number of pores which is influenced by mining pressure. The results provide theory reference for the research on the range of mining-induced stress and broken coal wall. PMID:24967438
Distributed optical fibre sensing for early detection of shallow landslides triggering.
Schenato, Luca; Palmieri, Luca; Camporese, Matteo; Bersan, Silvia; Cola, Simonetta; Pasuto, Alessandro; Galtarossa, Andrea; Salandin, Paolo; Simonini, Paolo
2017-10-31
A distributed optical fibre sensing system is used to measure landslide-induced strains on an optical fibre buried in a large scale physical model of a slope. The fibre sensing cable is deployed at the predefined failure surface and interrogated by means of optical frequency domain reflectometry. The strain evolution is measured with centimetre spatial resolution until the occurrence of the slope failure. Standard legacy sensors measuring soil moisture and pore water pressure are installed at different depths and positions along the slope for comparison and validation. The evolution of the strain field is related to landslide dynamics with unprecedented resolution and insight. In fact, the results of the experiment clearly identify several phases within the evolution of the landslide and show that optical fibres can detect precursory signs of failure well before the collapse, paving the way for the development of more effective early warning systems.
Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline
NASA Astrophysics Data System (ADS)
Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.
2017-05-01
In the oil and gas industry, the pipeline is a major component in the transmission and distribution process of oil and gas. Oil and gas distribution process sometimes performed past the pipeline across the various types of environmental conditions. Therefore, in the transmission and distribution process of oil and gas, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some components of the equipment in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline. Therefore it takes care and periodic inspections or checks on the pipeline system. Every production facility in an industry has a level of risk for damage which is a result of the opportunities and consequences of damage caused. The purpose of this research is to analyze the level of risk of 20-inch Natural Gas Transmission Pipeline using Risk-based inspection semi-quantitative based on API 581 associated with the likelihood of failure and the consequences of the failure of a component of the equipment. Then the result is used to determine the next inspection plans. Nine pipeline components were observed, such as a straight pipes inlet, connection tee, and straight pipes outlet. The risk assessment level of the nine pipeline’s components is presented in a risk matrix. The risk level of components is examined at medium risk levels. The failure mechanism that is used in this research is the mechanism of thinning. Based on the results of corrosion rate calculation, remaining pipeline components age can be obtained, so the remaining lifetime of pipeline components are known. The calculation of remaining lifetime obtained and the results vary for each component. Next step is planning the inspection of pipeline components by NDT external methods.
The quality of antimalarials available in Yemen
Abdo-Rabbo, Ahmed; Bassili, Amal; Atta, Hoda
2005-01-01
Background Malaria has always been a major public health problem in Yemen. Several studies in developing countries have demonstrated ineffective and poor quality drugs including antimalarials. Therefore, quality assessment of antimalarial drugs is of crucial importance. This study aimed to assess the quality of antimalarials (chloroquine and sulfadoxine/pyrimethamine) available in Yemen and to determine whether the quality of these products was related to the level of the distribution chain at which the samples were collected or related to the manufacturers. Methods Four samples from each antimalarial product were collected from each of the various levels of the distribution chain. One sample was kept with the research team. Two were tested at Sana'a and Aden Drug Quality Control Laboratories. The fourth was sent to the Centre for Quality Assurance of Medicines in Potchefstroom, South Africa, for analysis. Quality indicators measured were the content of the active ingredient and dissolution rate (for tablets only) in comparison to standard specifications for these products in the relevant pharmacopoeia. Results The results identified several problems of sub-standard products within the drug distribution chain. They included high and low failures in ingredient content for chloroquine tablets and chloroquine syrup. There was some dissolution failure for chloroquine tablets, and high sulfadoxine/pyrimethamine tablets dissolution failures. Failures with the dissolution of the pyrimethamine were found at most of the collection points. No clear relationship neither between the quality products and the level of the distribution chain, nor between locally manufactured and imported products was observed. Conclusion There are sub-standard antimalarial products circulating within the drug distribution chains in the country, which will have serious implications on the reduced therapeutic effectiveness and on the development of drug resistance. This appears to be due to non-compliance with Good Manufacturing Practice guidelines by manufacturers in the production of the antimalarials. PMID:15987508