Sample records for component failure rate

  1. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
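The abstract does not reproduce the formulas, but one standard approach (a Poisson failure model with a normal-approximation confidence interval, plus a simple rate ratio for comparing two groups) can be sketched as follows. The test data below are hypothetical, and the function names are illustrative, not taken from the report:

```python
import math

def failure_rate_ci(failures, exposure_hours, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for a constant failure rate, assuming failures follow a Poisson
    process over the accumulated exposure time."""
    lam = failures / exposure_hours
    half_width = z * math.sqrt(failures) / exposure_hours
    return lam, max(0.0, lam - half_width), lam + half_width

def rate_ratio(f1, t1, f2, t2):
    """Ratio of two group failure rates; values near 1 suggest the
    groups perform similarly."""
    return (f1 / t1) / (f2 / t2)

# Hypothetical thermal-vacuum test data: group A saw 12 failures in
# 50,000 component-hours, group B saw 4 failures in 40,000 hours.
lam_a, lo_a, hi_a = failure_rate_ci(12, 50_000)
lam_b, lo_b, hi_b = failure_rate_ci(4, 40_000)
print(lam_a, (lo_a, hi_a))
print(rate_ratio(12, 50_000, 4, 40_000))
```

The normal approximation is adequate only when the failure count is not very small; exact Poisson (chi-square based) intervals are preferred for sparse data.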

  2. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

    This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
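The report summarizes each recommended distribution by a mean and an error factor. Assuming the usual lognormal convention (error factor EF = 95th percentile / median), the distribution parameters can be recovered as sketched below; the pump numbers are hypothetical, not values from the SRS data base:

```python
import math

def lognormal_from_mean_ef(mean, error_factor):
    """Recover lognormal parameters (mu, sigma) from a mean and an
    error factor EF = 95th percentile / median, the form commonly
    used in generic failure rate data bases."""
    sigma = math.log(error_factor) / 1.645       # z_0.95 ~= 1.645
    mu = math.log(mean) - 0.5 * sigma ** 2       # mean = exp(mu + sigma^2/2)
    return mu, sigma

def percentiles(mu, sigma):
    """5th, 50th, and 95th percentiles of the lognormal."""
    median = math.exp(mu)
    spread = math.exp(1.645 * sigma)
    return median / spread, median, median * spread

# Hypothetical: pump fails-to-start, mean 3e-3 per demand, EF = 10.
mu, sigma = lognormal_from_mean_ef(3e-3, 10)
p05, p50, p95 = percentiles(mu, sigma)
print(p05, p50, p95)
```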

  3. Sensitivity analysis by approximation formulas - Illustrative examples. [reliability analysis of six-component architectures]

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
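The paper's six-component architectures are not reproduced in the abstract, but the style of analysis (an algebraic reliability formula in fault rate and operating time, plus a sensitivity with respect to the fault rate) can be illustrated with a generic 2-of-3 majority-voting arrangement. The fault rate and mission time below are hypothetical, and this is not one of the paper's architectures:

```python
import math

def tmr_reliability(lam, t):
    """Reliability of a 2-of-3 majority-voting arrangement of identical
    components with constant fault rate lam (per hour), ignoring voter
    failures and recovery delays."""
    r = math.exp(-lam * t)           # single-component reliability
    return 3 * r ** 2 - 2 * r ** 3

def sensitivity(f, x, h=1e-8):
    """Central-difference derivative: sensitivity of f to parameter x."""
    return (f(x + h) - f(x - h)) / (2 * h)

lam, t = 1e-4, 10.0                  # hypothetical fault rate, mission time
print(tmr_reliability(lam, t))
print(sensitivity(lambda l: tmr_reliability(l, t), lam))  # negative: higher rate, lower reliability
```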

  4. A Study to Compare the Failure Rates of Current Space Shuttle Ground Support Equipment with the New Pathfinder Equipment and Investigate the Effect that the Proposed GSE Infrastructure Upgrade Might Have to Reduce GSE Infrastructure Failures

    NASA Technical Reports Server (NTRS)

    Kennedy, Barbara J.

    2004-01-01

    The purpose of this study is to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder." This study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B Hypergolic Fuel facility and Hypergolic Oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staff for each infrastructure system is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.

  5. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency, and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  6. Use of a constrained tripolar acetabular liner to treat intraoperative instability and postoperative dislocation after total hip arthroplasty: a review of our experience.

    PubMed

    Callaghan, John J; O'Rourke, Michael R; Goetz, Devon D; Lewallen, David G; Johnston, Richard C; Capello, William N

    2004-12-01

    Constrained acetabular components have been used to treat certain cases of intraoperative instability and postoperative dislocation after total hip arthroplasty. We report our experience with a tripolar constrained component used in these situations since 1988. The outcomes of the cases where this component was used were analyzed for component failure, component loosening, and osteolysis. At an average 10-year followup, for cases treated for intraoperative instability (2 cases) or postoperative dislocation (4 cases), the component failure rate was 6% (6 of 101 hips in 5 patients). For cases where the constrained liner was cemented into a fixed cementless acetabular shell, the failure rate was 7% (2 of 31 hips in 2 patients) at 3.9-year average followup. Use of a constrained liner was not associated with an increased osteolysis or aseptic loosening rate. This tripolar constrained acetabular liner provided total hip arthroplasty construct stability in most cases in which it was used for intraoperative instability or postoperative dislocation.

  7. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences were performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  8. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
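The Brown-Proschan model the abstract describes can be sketched as a simulation, assuming an underlying Weibull lifetime with shape greater than 1 for the increasing-failure-rate (IFR) case. The parameters below are hypothetical, and this is a model illustration, not the paper's order-restricted estimator:

```python
import math
import random

def simulate_brown_proschan(beta, eta, p, horizon, rng):
    """Simulate failure ages under the Brown-Proschan model: after each
    failure the repair is perfect with probability p (virtual age resets
    to zero), otherwise minimal (virtual age is kept). The underlying
    lifetime is Weibull(shape=beta, scale=eta); beta > 1 gives an
    increasing failure rate."""
    ages, t, v = [], 0.0, 0.0        # t: calendar time, v: virtual age
    while t < horizon:
        e = rng.expovariate(1.0)     # unit-exponential hazard increment
        # Next failure age solves H(v_next) - H(v) = e, H(x) = (x/eta)^beta
        v_next = eta * ((v / eta) ** beta + e) ** (1 / beta)
        t += v_next - v
        if t >= horizon:
            break
        ages.append(t)
        v = 0.0 if rng.random() < p else v_next
    return ages

rng = random.Random(42)
failures = simulate_brown_proschan(beta=2.0, eta=1000.0, p=0.3,
                                   horizon=10_000.0, rng=rng)
print(len(failures))
```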

  9. Digital Systems Validation Handbook. Volume 2

    DTIC Science & Technology

    1989-02-01

    [OCR excerpt] Table 7.2-3, "Failure Rates for Major RDFCS Components" (unit failure rates): Pitch Angle Gyro, 303; Roll Angle Gyro, 303; Yaw Rate Gyro, 200; ... Flight-condition values: airplane weight, 314,500 lb; altitude, 35 ft; angle of attack, 10.91 deg; indicated air speed, 168 kts; flap deployment, 22 deg. ... Various pieces of information are converted into the form needed by the FCCs; for example, roll angle and pitch angle are converted to three-wire AC signals.

  10. Service Life Extension of the Propulsion System of Long-Term Manned Orbital Stations

    NASA Technical Reports Server (NTRS)

    Kamath, Ulhas; Kuznetsov, Sergei; Spencer, Victor

    2014-01-01

    One of the critical non-replaceable systems of a long-term manned orbital station is the propulsion system. Since the propulsion system operates beginning with the launch of station elements into orbit, its service life determines the service life of the station overall. Weighing almost a million pounds, the International Space Station (ISS) is about four times as large as the Russian space station Mir and about five times as large as the U.S. Skylab. Constructed over a span of more than a decade with the help of over 100 space flights, elements and modules of the ISS provide more research space than any spacecraft ever built. Originally envisaged for a service life of fifteen years, this Earth orbiting laboratory has been in orbit since 1998. Some elements that have been launched later in the assembly sequence were not yet built when the first elements were placed in orbit. Hence, some of the early modules that were launched at the inception of the program were already nearing the end of their design life when the ISS was finally ready and operational. To maximize the return on global investments on ISS, it is essential for the valuable research on ISS to continue as long as the station can be sustained safely in orbit. This paper describes the work performed to extend the service life of the ISS propulsion system. A system comprises many components with varying failure rates. Reliability of a system is the probability that it will perform its intended function under encountered operating conditions, for a specified period of time. As we are interested in finding out how reliable a system would be in the future, reliability expressed as a function of time provides valuable insight. 
    In a hypothetical bathtub-shaped failure rate curve, the failure rate, defined as the number of failures per unit time that a currently healthy component will suffer in a given future time interval, decreases during the infant-mortality period, stays nearly constant during the service life, and increases when the design service life ends and the wear-out phase begins. However, component failure rates do not remain constant over the entire life cycle. The failure rate depends on various factors such as design complexity, current age of the component, operating conditions, severity of environmental stress factors, etc. Development, qualification, and acceptance test processes provide rigorous screening of components to weed out imperfections that might otherwise cause infant mortality failures. If sufficient samples are tested to failure, the failure time versus failure quantity can be analyzed statistically to develop a failure probability distribution function (PDF), a statistical model of the probability of failure versus time. Driven by cost and schedule constraints, however, spacecraft components are generally not tested in large numbers. Uncertainties in failure rate and remaining life estimates increase when fewer units are tested. To account for this, spacecraft operators prefer to limit useful operations to a period shorter than the maximum demonstrated service life of the weakest component. Running each component to failure to determine the maximum possible service life of a system can become overly expensive and impractical. Spacecraft operators therefore specify the required service life and an acceptable factor of safety (FOS). The designers use these requirements to limit the life test duration. Midway through the design life, when benefits justify additional investments, a supplementary life test may be performed to demonstrate the capability to safely extend the service life of the system. 
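The bathtub shape described above can be sketched as a piecewise hazard function. The break points and rates below are illustrative placeholders, not values from the paper:

```python
def bathtub_hazard(t, infant_end=100.0, wearout_start=10_000.0,
                   lam_useful=1e-4):
    """Piecewise sketch of a bathtub-shaped failure rate (per hour):
    decreasing during infant mortality, roughly constant during useful
    life, increasing in wear-out. All break points are illustrative."""
    if t < infant_end:                       # infant mortality: decreasing
        return lam_useful * (1 + 9 * (1 - t / infant_end))
    if t < wearout_start:                    # useful life: constant
        return lam_useful
    # wear-out: linearly increasing beyond the design service life
    return lam_useful * (1 + (t - wearout_start) / 5000.0)

for t in (10, 1000, 20_000):
    print(t, bathtub_hazard(t))
```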
    An innovative approach is required to evaluate the entire system without having to go through an elaborate test program of propulsion system elements. Evaluating every component through a brute-force test program would be a cost-prohibitive and time-consuming endeavor. ISS propulsion system components were designed and built decades ago, and there are no representative ground test articles for some of the components. A 'test everything' approach would require manufacturing new test articles. The paper outlines some of the techniques used for selective testing, cherry-picking candidate components based on failure mode effects analysis, system level impacts, hazard analysis, etc. The type of testing required for extending the service life depends on the design and criticality of the component, failure modes and failure mechanisms, life cycle margin provided by the original certification, operational and environmental stresses encountered, etc. When the specific failure mechanism being considered, and the underlying relationship of that mechanism to the stresses applied in the test, can be correlated by supporting analysis, the time and effort required for life extension testing can be significantly reduced. Exposure to corrosive propellants over long periods of time, for instance, leads to specific failure mechanisms in several components used in the propulsion system. Using the Arrhenius model, which applies to chemically dependent failure mechanisms such as corrosion or chemical reactions, it is possible to subject carefully selected test articles to accelerated life testing. The Arrhenius model reflects the proportional relationship between the time to failure of a component and the exponential of the inverse of the absolute temperature acting on the component. The acceleration factor is used to perform tests at higher stresses, allowing direct correlation between times to failure at a high test temperature and those expected at the temperatures of actual use. 
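The Arrhenius relationship described above is commonly written as an acceleration factor AF = exp[(Ea/k)(1/T_use - 1/T_test)]. A sketch with a hypothetical activation energy and temperatures (not values from the ISS work):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Arrhenius acceleration factor: how much faster a chemically
    driven failure mechanism progresses at the test temperature than
    at the use temperature. ea_ev is the activation energy in eV;
    temperatures are in degrees Celsius."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use - 1 / t_test))

# Hypothetical: 0.7 eV mechanism, 25 C service vs. 85 C accelerated test.
af = arrhenius_af(0.7, 25.0, 85.0)
print(af)   # each test hour covers roughly `af` service hours
```

The method is valid only while the elevated temperature does not introduce new failure mechanisms, as the abstract notes.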
As long as the temperatures are such that new failure mechanisms are not introduced, this becomes a very useful method for testing to failure a relatively small sample of items for a much shorter amount of time. In this article, based on the example of the propulsion system of the first ISS module Zarya, theoretical approaches and practical activities of extending the service life of the propulsion system are reviewed with the goal of determining the maximum duration of its safe operation.

  11. Development of STS/Centaur failure probabilities liftoff to Centaur separation

    NASA Technical Reports Server (NTRS)

    Hudson, J. M.

    1982-01-01

    The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.

  12. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and three artificial neural network models (feed-forward back-propagation, radial basis, and multilayer perceptron) are utilized. For this purpose, the thesis is divided into five major parts. The first part uses the Weibull regression model to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. MATLAB is used to build a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures that required overhaul maintenance. In the third part, we predict the general failure rate of the turbine and the failures that require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part, we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation artificial neural network model is in closer agreement with the actual field data, and with the radial basis neural network model, than the failure rate predicted by the Weibull model. Finally, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures that required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. 
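The Weibull side of such a comparison can be sketched with a standard maximum-likelihood fit for complete (uncensored) data, solving the shape equation by bisection. The failure times below are hypothetical, not the C-130 field data:

```python
import math

def weibull_mle(times):
    """Maximum-likelihood fit of a two-parameter Weibull to complete
    (uncensored) failure times; the shape equation is solved by
    bisection, then the scale follows in closed form."""
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / len(logs)

    def g(beta):   # root of g(beta) = 0 gives the MLE shape
        s = sum(t ** beta for t in times)
        s_log = sum((t ** beta) * math.log(t) for t in times)
        return s_log / s - 1.0 / beta - mean_log

    lo, hi = 0.01, 50.0
    for _ in range(200):              # bisection on the monotone g
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / len(times)) ** (1 / beta)
    return beta, eta

def hazard(t, beta, eta):
    """Weibull hazard (instantaneous failure rate) at time t."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Hypothetical overhaul-driving failure times, in flight hours.
times = [1200, 1850, 2100, 2600, 3100, 3900, 4400]
beta, eta = weibull_mle(times)
print(beta, eta, hazard(2000, beta, eta))
```

A shape parameter above 1 indicates a rising failure rate, consistent with wear-driven overhaul demand.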
The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.

  13. Survivorship analysis of failure pattern after revision total hip arthroplasty.

    PubMed

    Retpen, J B; Varmarken, J E; Jensen, J S

    1989-12-01

    Failure, defined as established indication for or performed re-revision of one or both components, was analyzed using survivorship methods in 306 revision total hip arthroplasties. The longevity of revision total hip arthroplasties was inferior to that of previously reported primary total hip arthroplasties. The overall survival curve was two-phased, with a late failure period associated with aseptic loosening of one or both components and an early failure period associated with causes of failure other than loosening. Separate survival curves for aseptic loosening of femoral and acetabular components showed late and almost simultaneous decline, but with a tendency toward a higher rate of failure for the femoral component. No differences in survival could be found between the Stanmore, Lubinus standard, and Lubinus long-stemmed femoral components. A short interval between the index operation and the revision and intraoperative and postoperative complications were risk factors for early failure. Young age was a risk factor for aseptic loosening of the femoral component. Intraoperative fracture of the femoral shaft was not a risk factor for secondary loosening. No difference in survival was found between primary cemented total arthroplasty and primary noncemented hemiarthroplasty.

  14. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. Reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time spent finding and eliminating failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
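The doubling argument above compounds quickly: if each successive correctable cause occurs at half the rate of the previous one, each halving of the failure rate costs about as much test time as all previous halvings combined. A sketch with a hypothetical initial rate:

```python
def cumulative_test_time(initial_rate, n_halvings):
    """Expected test time to surface and fix successive failure causes
    when each remaining cause occurs at half the rate of the previous
    one: 1/lam + 2/lam + 4/lam + ... Doubling the time halves the rate."""
    lam = initial_rate
    total = 0.0
    for _ in range(n_halvings):
        total += 1.0 / lam           # MTBF of the next cause to surface
        lam /= 2.0
    return total

# Hypothetical: first correctable cause occurs at 0.01 failures/hour.
for k in (1, 2, 3, 4):
    print(k, cumulative_test_time(0.01, k))
```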

  15. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component in the transmission and distribution process of oil and gas. The distribution process sometimes routes pipelines across various types of environmental conditions. Therefore, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some components of the equipment in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline. The pipeline system therefore requires care and periodic inspections. Every production facility carries a level of risk of damage, determined by the likelihood and the consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, considering both the likelihood of failure and the consequences of failure of a component of the equipment. The result is then used to determine the next inspection plan. Nine pipeline components were observed, including straight inlet pipes, connection tees, and straight outlet pipes. The risk assessment of the nine pipeline components is presented in a risk matrix; the components were assessed at medium risk levels. The failure mechanism considered in this research is thinning. Based on the corrosion rate calculations, the remaining age of each pipeline component can be obtained, and thus the remaining lifetime of the pipeline components is known. The calculated remaining lifetimes vary for each component. The next step is planning the inspection of the pipeline components using external NDT methods.
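The thinning calculation underlying such a study can be sketched as follows. The wall thicknesses and service time are hypothetical, and a real API 581 assessment also involves damage factors, inspection effectiveness, and confidence categories not modeled here:

```python
def remaining_life_years(nominal_thk_mm, min_required_thk_mm,
                         measured_thk_mm, years_in_service):
    """Thinning-mechanism remaining life in the spirit of API 581:
    infer a corrosion rate from the measured wall loss, then divide
    the remaining corrosion allowance by that rate."""
    corrosion_rate = (nominal_thk_mm - measured_thk_mm) / years_in_service
    if corrosion_rate <= 0:
        return float("inf")          # no measurable wall loss
    return (measured_thk_mm - min_required_thk_mm) / corrosion_rate

# Hypothetical 20-inch line segment: 12.7 mm nominal wall, 11.9 mm
# measured after 8 years in service, 9.5 mm minimum required thickness.
print(remaining_life_years(12.7, 9.5, 11.9, 8))
```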

  16. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
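The combination of per-failure-mode time-to-failure distributions and mitigation factors into a loss-of-mission probability can be sketched as a Monte Carlo simulation. The failure modes, parameters, and mitigation probabilities below are hypothetical, and this is not the QRAS implementation:

```python
import random

def system_lom_probability(mode_samplers, mitigation_probs, mission_time,
                           trials, rng):
    """Monte Carlo combination of per-failure-mode time-to-failure
    samplers into a loss-of-mission (LOM) probability: a mode causes
    LOM if it occurs within the mission time and is not mitigated."""
    lom = 0
    for _ in range(trials):
        for sampler, p_mitigate in zip(mode_samplers, mitigation_probs):
            if sampler(rng) <= mission_time and rng.random() >= p_mitigate:
                lom += 1
                break                 # first unmitigated failure ends the mission
    return lom / trials

rng = random.Random(7)
modes = [lambda r: r.weibullvariate(9000, 2.5),   # hypothetical blade fatigue (Weibull)
         lambda r: r.expovariate(1 / 50_000)]     # hypothetical avionics (constant rate)
p = system_lom_probability(modes, [0.8, 0.2], 3000, 20_000, rng)
print(p)
```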

  17. RSA prediction of high failure rate for the uncoated Interax TKA confirmed by meta-analysis.

    PubMed

    Pijls, Bart G; Nieuwenhuijse, Marc J; Schoones, Jan W; Middeldorp, Saskia; Valstar, Edward R; Nelissen, Rob G H H

    2012-04-01

    In a previous radiostereometric (RSA) trial the uncoated, uncemented, Interax tibial components showed excessive migration within 2 years compared to HA-coated and cemented tibial components. It was predicted that this type of fixation would have a high failure rate. The purpose of this systematic review and meta-analysis was to investigate whether this RSA prediction was correct. We performed a systematic review and meta-analysis to determine the revision rate for aseptic loosening of the uncoated and cemented Interax tibial components. 3 studies were included, involving 349 Interax total knee arthroplasties (TKAs) for the comparison of uncoated and cemented fixation. There were 30 revisions: 27 uncoated and 3 cemented components. There was a 3-times higher revision rate for the uncoated Interax components than that for cemented Interax components (OR = 3; 95% CI: 1.4-7.2). This meta-analysis confirms the prediction of a previous RSA trial. The uncoated Interax components showed the highest migration and turned out to have the highest revision rate for aseptic loosening. RSA appears to enable efficient detection of an inferior design as early as 2 years postoperatively in a small group of patients.

  18. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
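One common heuristic for the applicability problem is to widen a lognormal distribution by adding log-variances, treating the source's stated uncertainty and the applicability uncertainty as independent. This is an illustration of that general heuristic, not the presentation's specific guidelines, and the error factors are hypothetical:

```python
import math

def widen_error_factor(ef_source, ef_applicability):
    """Combine a data source's stated uncertainty with applicability
    uncertainty by adding lognormal log-variances (EF = 95th
    percentile / median, so sigma = ln(EF)/1.645). A heuristic for
    when generic data come from a different plant or environment."""
    s1 = math.log(ef_source) / 1.645
    s2 = math.log(ef_applicability) / 1.645
    return math.exp(1.645 * math.sqrt(s1 ** 2 + s2 ** 2))

# Hypothetical: data base reports EF = 3; judged applicability adds EF = 5.
ef_total = widen_error_factor(3.0, 5.0)
print(ef_total)
```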

  19. Accounting for Epistemic Uncertainty in Mission Supportability Assessment: A Necessary Step in Understanding Risk and Logistics Requirements

    NASA Technical Reports Server (NTRS)

    Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William

    2017-01-01

    Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
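The Bayesian update process mentioned for ISS failure rate estimates is commonly implemented with a conjugate gamma-Poisson pair. A sketch with hypothetical prior parameters and observed data, not actual ECLS values:

```python
def gamma_poisson_update(alpha, beta_hours, failures, exposure_hours):
    """Conjugate Bayesian update for a failure rate: a gamma prior
    (alpha, beta) combined with Poisson evidence (failures observed in
    exposure_hours) gives a gamma posterior. The posterior mean is
    alpha/beta; more exposure shrinks the epistemic spread."""
    return alpha + failures, beta_hours + exposure_hours

# Hypothetical component: prior mean 1e-5/h with alpha = 0.5 (diffuse),
# then 2 observed failures in 100,000 hours of on-orbit operation.
a0, b0 = 0.5, 0.5 / 1e-5
a1, b1 = gamma_poisson_update(a0, b0, 2, 100_000)
print(a1 / b1)          # posterior mean failure rate
```

The posterior variance alpha/beta^2 falls as exposure accumulates, which is exactly the epistemic-uncertainty reduction the abstract describes; a design change would restart part of this learning.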

  20. Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings; and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.

  1. Composite Interlaminar Shear Fracture Toughness, G(sub 2c): Shear Measurement or Sheer Myth?

    NASA Technical Reports Server (NTRS)

    OBrien, T. Kevin

    1997-01-01

    The concept of G2c as a measure of the interlaminar shear fracture toughness of a composite material is critically examined. In particular, it is argued that the apparent G2c as typically measured is inconsistent with the original definition of shear fracture. It is shown that interlaminar shear failure actually consists of tension failures in the resin-rich layers between plies, followed by the coalescence of ligaments created by these failures, and not the sliding of two planes relative to one another that is assumed in fracture mechanics theory. Several strain energy release rate solutions are reviewed for delamination in composite laminates and structural components where failures have been experimentally documented. Failures typically occur at a location where the mode I component accounts for at least one half of the total G at failure. Hence, it is the mode I and mixed-mode interlaminar fracture toughness data that will be most useful in predicting delamination failure in composite components in service. Although apparent G2c measurements may prove useful for completeness in generating mixed-mode criteria, the accuracy of these measurements may have very little influence on the prediction of mixed-mode failures in most structural components.

  2. Development of KSC program for investigating and generating field failure rates. Reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.

  3. New understandings of failure modes in SSL luminaires

    NASA Astrophysics Data System (ADS)

    Shepherd, Sarah D.; Mills, Karmann C.; Yaga, Robert; Johnson, Cortina; Davis, J. Lynn

    2014-09-01

    As SSL products are being rapidly introduced into the market, there is a need to develop standard screening and testing protocols that can be performed quickly and provide data surrounding product lifetime and performance. These protocols, derived from standard industry tests, are known as ALTs (accelerated life tests) and can be performed in a timeframe of weeks to months instead of years. Accelerated testing utilizes a combination of elevated temperature and humidity conditions as well as electrical power cycling to control aging of the luminaires. In this study, we report on the findings of failure modes for two different luminaire products exposed to temperature-humidity ALTs. LEDs are typically considered the determining component for the rate of lumen depreciation. However, this study has shown that each luminaire component can independently or jointly influence system performance and reliability. Material choices, luminaire designs, and driver designs all have significant impacts on the system reliability of a product. From recent data, it is evident that the most common failure modes are not within the LED, but instead occur within resistors, capacitors, and other electrical components of the driver. Insights into failure modes and rates as a result of ALTs are reported with emphasis on component influence on overall system reliability.

  4. Monitoring the quality of total hip replacement in a tertiary care department using a cumulative summation statistical method (CUSUM).

    PubMed

    Biau, D J; Meziane, M; Bhumbra, R S; Dumaine, V; Babinet, A; Anract, P

    2011-09-01

    The purpose of this study was to define immediate post-operative 'quality' in total hip replacements and to study prospectively the occurrence of failure based on these definitions of quality. The evaluation and assessment of failure were based on ten radiological and clinical criteria. The cumulative summation (CUSUM) test was used to study 200 procedures over a one-year period. Technical criteria defined failure in 17 cases (8.5%), those related to the femoral component in nine (4.5%), the acetabular component in 32 (16%) and those relating to discharge from hospital in five (2.5%). Overall, the procedure was considered to have failed in 57 of the 200 total hip replacements (28.5%). The use of a new design of acetabular component was associated with more failures. For the CUSUM test, the level of adequate performance was set at a rate of failure of 20% and the level of inadequate performance set at a failure rate of 40%; no alarm was raised by the test, indicating that there was no evidence of inadequate performance. The use of a continuous monitoring statistical method is useful to ensure that the quality of total hip replacement is maintained, especially as newer implants are introduced.
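
    The CUSUM scheme described (adequate performance at a 20% failure rate, inadequate at 40%) can be sketched as a Bernoulli log-likelihood-ratio CUSUM; the decision limit h below is an illustrative assumption, not the value used in the study.

    ```python
    import math

    def bernoulli_cusum(outcomes, p0=0.20, p1=0.40, h=2.3):
        """Sequential CUSUM test on binary outcomes (1 = failed procedure).

        p0: acceptable failure rate, p1: inadequate failure rate,
        h:  decision limit (illustrative). Returns (scores, alarm_index or None).
        """
        w_fail = math.log(p1 / p0)            # ~ +0.693 added per failure
        w_ok = math.log((1 - p1) / (1 - p0))  # ~ -0.288 added per success
        s, scores = 0.0, []
        for i, failed in enumerate(outcomes):
            s = max(0.0, s + (w_fail if failed else w_ok))
            scores.append(s)
            if s >= h:
                return scores, i  # alarm: evidence of inadequate performance
        return scores, None

    _, alarm = bernoulli_cusum([1, 1, 1, 1])  # four consecutive failures
    print(alarm)  # alarm raised on the fourth procedure (index 3)
    ```

    With no alarm ever raised, as in the study, the running score stays below h throughout monitoring; the score resets toward zero during runs of successful procedures.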

  5. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses the fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, in the literature, to analyze the fuzzy system reliability, it has been assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, in practical problems, such a situation rarely occurs. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
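
    Constructing the full membership functions requires the nonlinear programming step the abstract describes, but at any single alpha-cut the problem reduces to interval arithmetic on R = exp(-lambda*t). A sketch under that simplification, with illustrative rate intervals:

    ```python
    import math

    def series_reliability_interval(rate_intervals, t):
        """Reliability bounds for a series system, R = prod(exp(-lam * t)).
        R decreases in every rate, so the bounds come from interval endpoints."""
        r_lo = math.exp(-sum(hi for _, hi in rate_intervals) * t)
        r_hi = math.exp(-sum(lo for lo, _ in rate_intervals) * t)
        return r_lo, r_hi

    def parallel_reliability_interval(rate_intervals, t):
        """Reliability bounds for a parallel system, R = 1 - prod(1 - exp(-lam * t))."""
        u_hi = 1.0  # product of the largest component unreliabilities
        u_lo = 1.0  # product of the smallest component unreliabilities
        for lam_lo, lam_hi in rate_intervals:
            u_hi *= 1.0 - math.exp(-lam_hi * t)
            u_lo *= 1.0 - math.exp(-lam_lo * t)
        return 1.0 - u_hi, 1.0 - u_lo

    # Illustrative per-hour failure-rate intervals (e.g., one alpha-cut)
    rates = [(1.0e-4, 2.0e-4), (0.5e-4, 1.0e-4)]
    print(series_reliability_interval(rates, 1000.0))
    print(parallel_reliability_interval(rates, 1000.0))
    ```

    Sweeping the alpha level from 0 to 1 and repeating this computation traces out the membership function of the system reliability.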

  6. Development of KSC program for investigating and generating field failure rates. Volume 2: Recommended format for reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results presented in this handbook are also included.

  7. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.

  8. Failure mechanisms of fibrin-based surgical tissue adhesives

    NASA Astrophysics Data System (ADS)

    Sierra, David Hugh

    A series of studies was performed to investigate the potential impact of heterogeneity in the matrix of multiple-component fibrin-based tissue adhesives upon their mechanical and biomechanical properties both in vivo and in vitro. Investigations into the failure mechanisms by stereological techniques demonstrated that heterogeneity could be measured quantitatively and that the variation in heterogeneity could be altered both by the means of component mixing and delivery and by the formulation of the sealant. Ex vivo tensile adhesive strength was found to be inversely proportional to the amount of heterogeneity. In contrast, in vivo tensile wound-closure strength was found to be relatively unaffected by the degree of heterogeneity, while in vivo parenchymal organ hemostasis in rabbits was found to be affected: greater heterogeneity appeared to correlate with an increase in hemostasis time and amount of sealant necessary to effect hemostasis. Tensile testing of the bulk sealant showed that mechanical parameters were proportional to fibrin concentration and that the physical characteristics of the failure supported a ductile mechanism. Strain hardening as a function of percentage of strain, and strain rate was observed for both concentrations, and syneresis was observed at low strain rates for the lower fibrin concentration. Blister testing demonstrated that burst pressure and failure energy were proportional to fibrin concentration and decreased with increasing flow rate. Higher fibrin concentration demonstrated predominately compact morphology debonds with cohesive failure loci, demonstrating shear or viscous failure in a viscoelastic rubbery adhesive. The lower fibrin concentration sealant exhibited predominately fractal morphology debonds with cohesive failure loci, supporting an elastoviscous material condition. The failure mechanism for these was hypothesized and shown to be flow-induced ductile fracture. 
Based on these findings, the failure mechanism was stochastic in nature because the mean failure energy and burst pressure values were not predictive of locus and morphology. Instead, flow rate and fibrin concentration showed the most predictive value, with the outcome best described as a probability distribution rather than a specific deterministic outcome.

  9. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data which means that the true state of a specific plant is not reflected in a realistic manner on aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: The development of a methodology for the incorporation of aging modeling of passive SSC into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. 
In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the dependence of the transition rates on operational and maintenance history. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.

  10. Correlation study between vibrational environmental and failure rates of civil helicopter components

    NASA Technical Reports Server (NTRS)

    Alaniz, O.

    1979-01-01

    An investigation of two selected helicopter types, namely, the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components located in five different areas of the helicopter and consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.

  11. Acoustic emissions (AE) monitoring of large-scale composite bridge components

    NASA Astrophysics Data System (ADS)

    Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.

    2008-03-01

    Acoustic Emissions (AE) has been successfully used with composite structures to both locate and give a measure of damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite to metallic bonded/bolted joint. The bonded joints consist of double lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system is used to monitor the bridge component during failure loading to assess the failure progression and using time of arrival to give insight into the origins of the failures. Also, a feature in the AE data called Cumulative Acoustic Emission counts (CAE) is used to give an estimate of the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data is used to interpret the source and location of damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results. A summary of lessons learned for AE testing of large composite structures as well as insight into failure progression and location is presented.

  12. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems.
The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
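
    The dependence of spares count on the assumed failure rate can be made concrete with the standard Poisson sizing rule: carry the smallest number of spares n such that the probability of n or fewer failures over the mission meets the reliability goal. The rate and mission duration below are illustrative, not mission values.

    ```python
    import math

    def spares_needed(rate_per_hour, mission_hours, confidence=0.99):
        """Smallest spare count n such that P(failures <= n) >= confidence,
        assuming a Poisson failure process with a known (point) rate."""
        mu = rate_per_hour * mission_hours
        pmf = math.exp(-mu)  # P(0 failures)
        cdf = pmf
        n = 0
        while cdf < confidence:
            n += 1
            pmf *= mu / n    # Poisson recurrence relation
            cdf += pmf
        return n

    # Illustrative: 2e-5 failures/h over a ~100,000 h planning horizon
    print(spares_needed(2e-5, 100_000))  # 6 spares
    print(spares_needed(4e-5, 100_000))  # 9 spares if the true rate is 2x higher
    ```

    The second call illustrates the abstract's warning: if the failure rate is underestimated by a factor of two, a kit sized for 6 spares falls three short of what the same confidence level actually requires.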

  13. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee C. Cadwallader

    2010-06-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  14. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee C. Cadwallader

    2007-08-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  15. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468-CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both strong test-time reduction and long-term reliability prediction. Unfortunately, in the case of a mature technology, it is increasingly difficult to calculate average lifetimes and failure rates (FITs) from ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be on the order of 10^6 hours under typical operating conditions (Popt = 10 mW and T = 80°C). Ageing tests would have to be performed on more than 100 components aged for 10,000 hours, combining different temperature and drive-current conditions to reach acceleration factors above 300-400. Such conditions are high-cost and time-consuming and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, Distributed Feedback single-mode laser diodes (DFB-LDs) used for 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.
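
    The statistical-computation approach can be sketched as follows: assume a fitted degradation law (here a linear drift, drift(t) = rate * t, with failure when drift reaches a threshold), sample the law's parameters from their fitted distribution, and accumulate the implied times to failure. The Gaussian drift-rate statistics and the 20% threshold are illustrative assumptions, not the paper's fitted values.

    ```python
    import random
    import statistics

    def simulate_lifetimes(n=10_000, drift_mean=2e-7, drift_sd=5e-8,
                           threshold=0.20, seed=1):
        """Monte Carlo times-to-failure for a linear drift law drift(t) = rate * t,
        with failure declared when the fractional drift reaches `threshold`."""
        rng = random.Random(seed)
        lifetimes = []
        for _ in range(n):
            rate = max(rng.gauss(drift_mean, drift_sd), 1e-12)  # keep rate > 0
            lifetimes.append(threshold / rate)  # hours to reach the threshold
        return lifetimes

    lt = simulate_lifetimes()
    print(statistics.median(lt))  # ~1e6 hours, the order of magnitude cited above
    ```

    Sorting the sampled lifetimes yields the cumulative failure distribution directly, and the fraction failed by any mission time converts to a failure rate in FITs (failures per 10^9 device-hours) without waiting out a physical ageing test.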

  16. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
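
    The general shape of such FMEA-based allocation is proportional sharing: transform each subsystem's criticality score, normalize the transformed scores into weights, and split the system failure-rate budget accordingly. The sketch below uses a bare cubic as a stand-in transform and invented subsystem names and scores; the paper's actual transform function and coefficients differ.

    ```python
    def allocate_failure_rates(system_rate, weights):
        """Split a system failure-rate budget across subsystems in proportion
        to normalized weights derived from transformed FMEA scores."""
        total = sum(weights.values())
        return {name: system_rate * w / total for name, w in weights.items()}

    # Hypothetical subsystems with illustrative FMEA criticality scores (1-10)
    scores = {"spindle": 6, "turret": 4, "coolant": 2}
    weights = {name: s**3 for name, s in scores.items()}  # stand-in cubic transform
    alloc = allocate_failure_rates(1e-4, weights)
    print(alloc)
    ```

    The choice of transform controls how sharply the allocation separates moderate from high scores: a cubic spreads the shares much more than a linear weighting, which is the kind of tuning the abstract's "transform amplitudes" refer to.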

  17. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component level reliability or risk of failure probability. While consequences of failure is often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  18. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
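
    The traditional RPN that the fuzzy model replaces is simply the product of severity, occurrence, and detection scores, each rated on a 1-10 scale. A minimal sketch for contrast, using hypothetical failure modes with invented scores:

    ```python
    def rpn(severity, occurrence, detection):
        """Traditional FMECA risk priority number; each factor scored 1-10."""
        return severity * occurrence * detection

    # Hypothetical HDR brachytherapy failure modes with illustrative scores
    modes = {
        "wrong dwell time": (9, 3, 4),
        "source misposition": (8, 2, 3),
        "interlock failure": (10, 1, 2),
    }
    ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
    print(ranked)  # 'wrong dwell time' ranks first (RPN 108)
    ```

    The well-known weakness the abstract alludes to is visible here: very different (S, O, D) combinations can yield identical products, and the multiplication treats a catastrophic-but-rare mode the same as a frequent-but-minor one; the fuzzy rule base is introduced precisely to break those ties more sensibly.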

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.
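
    The closed forms that exponential models enable (and that the abstract argues are insufficient end-to-end) are easy to cross-check by simulation: a series system of independent exponential components fails at the first component failure, so its MTTF is exactly 1/sum(rates). A sketch with illustrative rates:

    ```python
    import random

    def simulate_series_mttf(rates, n=20_000, seed=7):
        """Estimate the MTTF of a series system (fails at its first component
        failure) with independent exponential component lifetimes."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            total += min(rng.expovariate(lam) for lam in rates)
        return total / n

    rates = [1e-5, 2e-5, 5e-6]   # illustrative per-hour failure rates
    analytic = 1.0 / sum(rates)  # closed form: ~28,571 hours
    estimate = simulate_series_mttf(rates)
    print(estimate, analytic)
    ```

    The value of a full simulation framework like the one proposed is everything this sketch leaves out: repair processes, dependency and propagation between components, and non-exponential failure patterns, none of which admit such a closed form.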

  20. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  1. Modelling the failure behaviour of wind turbines

    NASA Astrophysics Data System (ADS)

    Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.

    2016-09-01

    Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software, as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, the failure models of existing offshore O&M tools are first presented to show the state of the art, and the strengths and weaknesses of the respective models are briefly discussed. Then a conceptual framework for modelling different failure mechanisms of wind turbines is presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as references. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.

  2. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints, so reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Computer simulation with limited test-data verification is therefore a viable approach for assessing the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, and material behavior models; the interaction of different components; and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data and can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that rely on failure rates derived from similar equipment or on expert judgment alone.

  3. Failure and life cycle evaluation of watering valves.

    PubMed

    Gonzalez, David M; Graciano, Sandy J; Karlstad, John; Leblanc, Mathias; Clark, Tom; Holmes, Scott; Reuter, Jon D

    2011-09-01

    Automated watering systems provide a reliable source of ad libitum water to animal cages. Our facility uses an automated water delivery system to support approximately 95% of the housed population (approximately 14,000 mouse cages). Drinking valve failure rates from 2002 through 2006 never exceeded the manufacturer standard of 0.1% total failure, based on monthly cage census and the number of floods. In 2007, we noted an increase in both flooding and cases of clinical dehydration in our mouse population. Using manufacturer's specifications for a water flow rate of 25 to 50 mL/min, we initiated a wide-scale screening of all valves used. During a 4-mo period, approximately 17,000 valves were assessed, of which 2200 failed according to scoring criteria (12.9% overall; 7.2% low flow; 1.6% no flow; 4.1% leaky). Factors leading to valve failures included residual metal shavings, silicone flash, introduced debris or bedding, and (most common) distortion of the autoclave-rated internal diaphragm and O-ring. Further evaluation revealed that despite normal autoclave conditions of heat, pressure, and steam, an extreme negative vacuum pull caused the valves' internal silicone components (diaphragm and O-ring) to become distorted and water-permeable. Normal flow rate often returned after a 'drying out' period, but components then reabsorbed water while on the animal rack or during subsequent autoclave cycles to revert to a variable flow condition. On the basis of our findings, we recalibrated autoclaves and initiated a preventative maintenance program to mitigate the risk of future valve failure.

  4. Failure and Life Cycle Evaluation of Watering Valves

    PubMed Central

    Gonzalez, David M; Graciano, Sandy J; Karlstad, John; Leblanc, Mathias; Clark, Tom; Holmes, Scott; Reuter, Jon D

    2011-01-01

    Automated watering systems provide a reliable source of ad libitum water to animal cages. Our facility uses an automated water delivery system to support approximately 95% of the housed population (approximately 14,000 mouse cages). Drinking valve failure rates from 2002 through 2006 never exceeded the manufacturer standard of 0.1% total failure, based on monthly cage census and the number of floods. In 2007, we noted an increase in both flooding and cases of clinical dehydration in our mouse population. Using manufacturer's specifications for a water flow rate of 25 to 50 mL/min, we initiated a wide-scale screening of all valves used. During a 4-mo period, approximately 17,000 valves were assessed, of which 2200 failed according to scoring criteria (12.9% overall; 7.2% low flow; 1.6% no flow; 4.1% leaky). Factors leading to valve failures included residual metal shavings, silicone flash, introduced debris or bedding, and (most common) distortion of the autoclave-rated internal diaphragm and O-ring. Further evaluation revealed that despite normal autoclave conditions of heat, pressure, and steam, an extreme negative vacuum pull caused the valves’ internal silicone components (diaphragm and O-ring) to become distorted and water-permeable. Normal flow rate often returned after a ‘drying out’ period, but components then reabsorbed water while on the animal rack or during subsequent autoclave cycles to revert to a variable flow condition. On the basis of our findings, we recalibrated autoclaves and initiated a preventative maintenance program to mitigate the risk of future valve failure. PMID:22330720

  5. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual components of the damage parameters, namely spring reaction forces, relative displacements, and energy release rates, are calculated along the delamination front.

  6. Failure of aseptic revision total knee arthroplasties

    PubMed Central

    Leta, Tesfaye H; Lygre, Stein Håkon L; Skredderstuen, Arne; Hallan, Geir; Furnes, Ove

    2015-01-01

    Background and purpose In Norway, the proportion of revision knee arthroplasties increased from 6.9% in 1994 to 8.5% in 2011. However, there is limited information on the epidemiology and causes of subsequent failure of revision knee arthroplasty. We therefore studied survival rate and determined the modes of failure of aseptic revision total knee arthroplasties. Method This study was based on 1,016 aseptic revision total knee arthroplasties reported to the Norwegian Arthroplasty Register between 1994 and 2011. Revisions done for infections were not included. Kaplan-Meier and Cox regression analyses were used to assess the survival rate and the relative risk of re-revision with all causes of re-revision as endpoint. Results 145 knees failed after revision total knee arthroplasty. Deep infection was the most frequent cause of re-revision (28%), followed by instability (26%), loose tibial component (17%), and pain (10%). The cumulative survival rate for revision total knee arthroplasties was 85% at 5 years, 78% at 10 years, and 71% at 15 years. Revision total knee arthroplasties with exchange of the femoral or tibial component exclusively had a higher risk of re-revision (RR = 1.7) than those with exchange of the whole prosthesis. The risk of re-revision was higher for men (RR = 2.0) and for patients aged less than 60 years (RR = 1.6). Interpretation In terms of implant survival, revision of the whole implant was better than revision of 1 component only. Young age and male sex were risk factors for re-revision. Deep infection was the most frequent cause of failure of revision of aseptic total knee arthroplasties. PMID:25267502
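    The survival rates quoted above (85% at 5 years, 78% at 10 years, 71% at 15 years) come from the Kaplan-Meier product-limit estimator. A stdlib-only sketch of that estimator, using hypothetical follow-up data rather than registry data:

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier product-limit estimator.
        times: follow-up time per implant; events: 1 = re-revision, 0 = censored.
        Returns (time, survival) after each time at which an event occurs."""
        pairs = sorted(zip(times, events))
        s = 1.0
        curve = []
        for t in sorted(set(tt for tt, _ in pairs)):
            d = sum(e for tt, e in pairs if tt == t)   # events at time t
            n = sum(1 for tt, _ in pairs if tt >= t)   # number still at risk
            if d:
                s *= 1 - d / n
                curve.append((t, s))
        return curve

    # Hypothetical follow-up data (years, event flag); not from the study above
    times  = [1, 2, 2, 3, 4, 5, 5, 6]
    events = [1, 0, 1, 0, 1, 0, 0, 0]
    print(kaplan_meier(times, events))  # [(1, 0.875), (2, 0.75), (4, 0.5625)]
    ```

    Censored observations (event flag 0) leave the survival curve unchanged but shrink the at-risk count, which is what distinguishes this estimator from a naive failure fraction.
    
    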

  7. Reliability and availability analysis of a 10 kW@20 K helium refrigerator

    NASA Astrophysics Data System (ADS)

    Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.

    2017-02-01

    A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. A functional analysis is carried out according to the refrigerator's mission profile. Failure data for the refrigerator components are collected, and failure rate distributions are fitted with the software Weibull++ V10.0. A Failure Modes, Effects and Criticality Analysis (FMECA) is performed, and the critical components with higher risks are identified. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, the turbine, and the vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance, and operation are proposed to reduce the major and medium risks.
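    Fitting failure rate distributions in the style of Weibull++ can be approximated in a few lines with median-rank regression. A hedged stdlib sketch, using Benard's median-rank approximation and synthetic failure times rather than the refrigerator's data:

    ```python
    import math

    def fit_weibull(times):
        """Median-rank regression: fit ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta),
        with F estimated by Benard's median ranks (i - 0.3) / (n + 0.4)."""
        n = len(times)
        xs = [math.log(t) for t in sorted(times)]
        ys = [math.log(-math.log(1 - (i + 0.7) / (n + 0.4))) for i in range(n)]
        mx, my = sum(xs) / n, sum(ys) / n
        beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
        eta = math.exp(mx - my / beta)   # scale, from the fitted intercept
        return beta, eta

    # Synthetic check: failure times placed exactly at the median-rank quantiles
    # of a Weibull with shape 2 and scale 1000 h, so the fit recovers both.
    n, shape, scale = 10, 2.0, 1000.0
    data = [scale * (-math.log(1 - (i + 0.7) / (n + 0.4))) ** (1 / shape)
            for i in range(n)]
    beta, eta = fit_weibull(data)
    print(round(beta, 3), round(eta, 1))  # 2.0 1000.0
    ```

    Commercial tools normally use maximum-likelihood fitting and handle censoring; rank regression is the simplest complete-data variant.
    
    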

  8. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares; the Mars mission must take all the needed spares at launch. The Mars mission must also achieve ultra reliability, a very low failure rate per hour, since it lasts years rather than weeks and cannot be cut short if a failure occurs. The Mars mission also has a much higher launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability, and maintenance and failed-unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
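    Sizing the spares that must be carried at launch reduces, under a constant-failure-rate assumption, to a Poisson calculation: carry enough spares to cover the number of failures expected with the required confidence. A sketch with hypothetical numbers:

    ```python
    import math

    def spares_needed(lam_per_hour: float, mission_hours: float, target: float) -> int:
        """Smallest spare count k with P(failures <= k) >= target, where the
        number of failures is Poisson(lam * t). Requires target < 1."""
        mu = lam_per_hour * mission_hours
        term = math.exp(-mu)       # P(0 failures)
        cdf, k = term, 0
        while cdf < target:
            k += 1
            term *= mu / k         # Poisson recurrence: P(k) = P(k-1) * mu / k
            cdf += term
        return k

    # Hypothetical component: 1e-4 failures/h over a ~2.5-year (21,900 h)
    # Mars mission, sparing to 99.9% confidence of covering all failures
    print(spares_needed(1e-4, 21900.0, 0.999))  # 8
    ```

    Raising the confidence target or the mission duration drives the spare count, and hence the added mass, up quickly, which is the abstract's point about ultra reliability.
    
    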

  9. Uncemented glenoid component in total shoulder arthroplasty. Survivorship and outcomes.

    PubMed

    Martin, Scott David; Zurakowski, David; Thornhill, Thomas S

    2005-06-01

    Glenoid component loosening continues to be a major factor affecting the long-term survivorship of total shoulder replacements. Radiolucent lines, cement fracture, migration, and loosening requiring revision are common problems with cemented glenoid components. The purpose of this study was to evaluate the results of total shoulder arthroplasty with an uncemented glenoid component and to identify predictors of glenoid component failure. One hundred and forty-seven consecutive total shoulder arthroplasties were performed in 132 patients (mean age, 63.3 years) with use of an uncemented glenoid component fixed with screws between 1988 and 1996. One hundred and forty shoulders in 124 patients were available for follow-up at an average of 7.5 years. One shoulder in which the arthroplasty had failed at 2.4 years and for which the duration of follow-up was four years was also included for completeness. The preoperative diagnoses included osteoarthritis in seventy-two shoulders and rheumatoid arthritis in fifty-five. Radiolucency was noted around the glenoid component and/or screws in fifty-three of the 140 shoulders. The mean modified ASES (American Shoulder and Elbow Surgeons) score (and standard deviation) improved from 15.6 +/- 11.8 points preoperatively to 75.8 +/- 17.5 points at the time of follow-up. Eighty-five shoulders were not painful, forty-two were slightly or mildly painful, ten were moderately painful, and three were severely painful. Fifteen (11%) of the glenoid components failed clinically, and ten of them also had radiographic signs of failure. Eleven other shoulders had radiographic signs of failure but no symptoms at the time of writing. Three factors had a significant independent association with clinical failure: male gender (p = 0.02), pain (p < 0.01), and radiolucency adjacent to the flat tray (p < 0.001). In addition, the annual risk of implant revision was nearly seven times higher for patients with radiographic signs of failure. 
Clinical survivorship was 95% at five years and 85% at ten years. The failure rates of the total shoulder arthroplasties in this study were higher than those in previously reported studies of cemented polyethylene components with similar durations of follow-up. Screw breakage and excessive polyethylene wear were common problems that may lead to additional failures of these uncemented glenoid components in the future.

  10. Reliability and Maintainability Data for Lead Lithium Cooling Systems

    DOE PAGES

    Cadwallader, Lee

    2016-11-16

    This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.
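    Component failure rates and repair times of this kind combine into the steady-state availability used in such RAM analyses. A minimal sketch, with a hypothetical failure rate and repair time rather than the report's values:

    ```python
    def steady_state_availability(lam_per_hour: float, mttr_hours: float) -> float:
        """A = MTBF / (MTBF + MTTR) for a repairable component with a
        constant failure rate (MTBF = 1 / lambda)."""
        mtbf = 1.0 / lam_per_hour
        return mtbf / (mtbf + mttr_hours)

    # Hypothetical lead-lithium pump: 1e-5 failures/h, 72 h mean time to repair
    print(f"{steady_state_availability(1e-5, 72.0):.5f}")  # 0.99928
    ```
    
    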

  11. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike.
In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
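    The reliability arithmetic behind N+1 parity is the classic two-failure model: data are lost only if a second disk in the group fails during the repair window of the first. A sketch of the resulting mean time to data loss (MTTDL), with hypothetical disk parameters:

    ```python
    def mttdl_raid5(n_data: int, mttf_hours: float, mttr_hours: float) -> float:
        """Mean time to data loss for one N+1 parity group under the classic
        independent, constant-failure-rate model: loss requires a second disk
        failure during the repair window of the first."""
        g = n_data + 1  # total disks in the parity group
        return mttf_hours ** 2 / (g * (g - 1) * mttr_hours)

    # Hypothetical group: 10 data + 1 parity disks, 150,000 h MTTF per disk,
    # 8 h repair window
    hours = mttdl_raid5(10, 150_000.0, 8.0)
    print(f"{hours / 8760:.0f} years")  # 2919 years
    ```

    The quadratic dependence on disk MTTF is why modest per-disk reliability still yields a highly reliable array, provided repairs are fast.
    
    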

  12. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this holds only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed.
For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
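    The linear-combination result for mutually exclusive failure modes can be written directly in code. A sketch with hypothetical failure modes and losses:

    ```python
    def expected_loss_given_failure(modes):
        """modes: list of (p_mode, expected_loss_for_mode), where p_mode is the
        conditional probability that this mode initiated the failure.
        Mutually exclusive modes -> a probability-weighted sum."""
        total_p = sum(p for p, _ in modes)
        assert abs(total_p - 1.0) < 1e-9, "conditional probabilities must sum to 1"
        return sum(p * loss for p, loss in modes)

    # Hypothetical pump failure modes: (conditional probability, expected loss in $)
    modes = [(0.6, 12_000.0),    # seal leak
             (0.3, 40_000.0),    # bearing seizure
             (0.1, 150_000.0)]   # casing rupture
    print(expected_loss_given_failure(modes))  # 34200.0
    ```

    Weighting by loss rather than by failure probability alone is the paper's central point: the rare casing rupture contributes the largest share of the expected loss here.
    
    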

  13. The Influence of Various Factors on High School Football Helmet Face Mask Removal: A Retrospective, Cross-Sectional Analysis

    PubMed Central

    Swartz, Erik E; Decoster, Laura C; Norkus, Susan A; Cappaert, Thomas A

    2007-01-01

    Context: Most research on face mask removal has been performed on unused equipment. Objective: To identify and compare factors that influence the condition of helmet components and their relationship to face mask removal. Design: A cross-sectional, retrospective study. Setting: Five athletic equipment reconditioning/recertification facilities. Participants: 2584 helmets from 46 high school football teams representing 5 geographic regions. Intervention(s): Helmet characteristics (brand, model, hardware components) were recorded. Helmets were mounted and face mask removal was attempted using a cordless screwdriver. The 2004 season profiles and weather histories were obtained for each high school. Main Outcome Measure(s): Success and failure (including reason) for removal of 4 screws from the face mask were noted. Failure rates among regions, teams, reconditioning year, and screw color (type) were compared. Weather histories were compared. We conducted a discriminant analysis to determine if weather variables, region, helmet brand and model, reconditioning year, and screw color could predict successful face mask removal. Metallurgic analysis of screw samples was performed. Results: All screws were successfully removed from 2165 (84%) helmets. At least 1 screw could not be removed from 419 (16%) helmets. Significant differences were found for mean screw failure per helmet among the 5 regions, with the Midwest having the lowest failure rate (0.08 ± 0.38) and the Southern the highest (0.33 ± 0.72). Differences were found in screw failure rates among the 46 teams (F(1,45) = 9.4, P < .01). Helmets with the longest interval since last reconditioning (3 years) had the highest failure rate, 0.47 ± 0.93. Differences in success rates were found among 4 screw types (χ²(1,4) = 647, P < .01), with silver screws having the lowest percentage of failures (3.4%).
    A discriminant analysis (Λ = .932, χ²(14, n = 2584) = 175.34, P < .001) revealed screw type to be the strongest predictor of successful removal. Conclusions: Helmets with stainless steel or nickel-plated carbon steel screws reconditioned in the previous year had the most favorable combination of factors for successful screw removal. T-nut spinning at the side screw locations was the most common reason and location for failure. PMID:17597938

  14. Is There Still a Role for Irrigation and Debridement With Liner Exchange in Acute Periprosthetic Total Knee Infection?

    PubMed

    Duque, Andrés F; Post, Zachary D; Lutz, Rex W; Orozco, Fabio R; Pulido, Sergio H; Ong, Alvin C

    2017-04-01

    Periprosthetic joint infection (PJI) is an important cause of failure in total knee arthroplasty. Success rates of irrigation and debridement including liner exchange (I&D/L) have varied for acute PJI. The purpose of this study is to present the results of a specific protocol for I&D/L with retention of total knee arthroplasty components. Sixty-seven consecutive I&D/L patients were retrospectively evaluated. Inclusion criteria for I&D/L were as follows: fewer than 3 weeks of symptoms, no immunologic compromise, intact soft tissue sleeve, and well-fixed components. I&D/L consisted of extensive synovectomy; irrigation with 3 L each of betadine, Dakin's, bacitracin, and normal saline solutions; and exchange of the polyethylene component. Postoperatively, all patients were treated with intravenous antibiotics. Infection was considered eradicated if the wound healed without persistent drainage and there was no residual pain or evidence of infection. Forty-six patients (68.66%) had successful infection eradication regardless of bacterial strain. Those with methicillin-resistant Staphylococcus aureus (MRSA) had an 80% failure rate, and those with Pseudomonas aeruginosa had a 66.67% failure rate. The success rate for bacteria other than MRSA and Pseudomonas was 85.25%. Our protocol for I&D/L was successful in the majority of patients who met strict criteria. We recommend that PJI patients with MRSA or P. aeruginosa not undergo I&D/L and instead be treated with 2-stage revision. For nearly all other patients, our protocol avoids the cost and patient morbidity of a 2-stage revision. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Fabrication of MEMS components using ultrafine-grained aluminium alloys

    NASA Astrophysics Data System (ADS)

    Qiao, Xiao Guang; Gao, Nong; Moktadir, Zakaria; Kraft, Michael; Starink, Marco J.

    2010-04-01

    A novel process for the fabrication of a microelectromechanical systems (MEMS) metallic component with features smaller than 10 µm and high thermal conductivity was investigated. This may be applied to new or improved microscale components, such as (micro-)heat exchangers. In the first stage of processing, equal channel angular pressing (ECAP) was employed to refine the grain size of commercial-purity aluminium (Al-1050), producing an ultrafine-grained (UFG) material. Embossing was conducted using a micro silicon mould fabricated by deep reactive ion etching (DRIE). Both cold embossing and hot embossing were performed on the coarse-grained and UFG Al-1050. Cold embossing of UFG Al-1050 led to a partially transferred pattern from the micro silicon mould and a high failure rate of the mould. Hot embossing of UFG Al-1050 produced a smooth embossed surface with a fully transferred pattern and a low failure rate of the mould, while hot embossing of the coarse-grained Al-1050 resulted in a rougher surface with shear bands.

  16. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

    Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for the on-orbit operation of spacecraft subsystems, components, and piece parts, along with estimates of the failure probability of the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

  17. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology that incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks, many of which are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology for assessing system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
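    The standards-based prediction described here is typified by the parts-count method, in which the system failure rate is the sum of quantity × base failure rate × quality factor over the part types. A sketch with hypothetical part data (the rates and factors are illustrative, not taken from any handbook):

    ```python
    def parts_count_failure_rate(parts):
        """Parts-count style prediction: system failure rate is the sum of
        quantity * base failure rate * quality factor over the part types.
        Rates here are in failures per 10^6 hours."""
        return sum(qty * lam * pi_q for qty, lam, pi_q in parts)

    # Hypothetical board: (quantity, base rate per 10^6 h, quality factor)
    board = [(24, 0.0034, 1.0),   # ceramic capacitors
             (10, 0.0230, 3.0),   # film resistors, lower screening level
             (2,  0.0850, 1.0)]   # microcircuits
    lam_sys = parts_count_failure_rate(board)
    print(f"{lam_sys:.4f} failures per 10^6 h")  # 0.9416 failures per 10^6 h
    mtbf = 1e6 / lam_sys  # predicted mean time between failures, hours
    ```

    This is the kind of early-design estimate the paper recommends when hard failure data are not yet available; it assumes serial reliability logic and constant failure rates.
    
    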

  18. Dynamic Response and Failure Mechanism of Brittle Rocks Under Combined Compression-Shear Loading Experiments

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Dai, Feng

    2018-03-01

    A novel method is developed for characterizing the mechanical response and failure mechanism of brittle rocks under dynamic compression-shear loading: an inclined cylinder specimen tested in a modified split Hopkinson pressure bar (SHPB) system. With the specimen axis inclined to the loading direction of the SHPB, a shear component can be introduced into the specimen. Both static and dynamic experiments are conducted on sandstone specimens. Given careful pulse shaping, the dynamic equilibrium of the inclined specimens can be satisfied, and thus quasi-static data reduction is employed. The normal and shear stress-strain relationships of the specimens are subsequently established. The progressive failure process of the specimen, illustrated via high-speed photographs, manifests a mixed failure mode accommodating both shear-dominated failure and localized tensile damage. The elastic and shear moduli exhibit certain loading-path dependence under quasi-static loading but loading-path insensitivity under high loading rates. Loading-rate dependence is evident in the failure characteristics, including fragmentation, compressive and shear strength, and failure surfaces based on the Drucker-Prager criterion. Our proposed method is convenient and reliable for studying the dynamic response and failure mechanism of rocks under combined compression-shear loading.

  19. Rate-based structural health monitoring using permanently installed sensors

    PubMed Central

    2017-01-01

    Permanently installed sensors are becoming increasingly ubiquitous, facilitating very frequent in situ measurements and consequently improved monitoring of ‘trends’ in the observed system behaviour. It is proposed that this newly available data may be used to provide prior warning and forecasting of critical events, particularly system failure. Numerous damage mechanisms are examples of positive feedback; they are ‘self-accelerating’ with an increasing rate of damage towards failure. The positive feedback leads to a common time-response behaviour which may be described by an empirical relation allowing prediction of the time to criticality. This study focuses on Structural Health Monitoring of engineering components; failure times are projected well in advance of failure for fatigue, creep crack growth and volumetric creep damage experiments. The proposed methodology provides a widely applicable framework for using newly available near-continuous data from permanently installed sensors to predict time until failure in a range of application areas including engineering, geophysics and medicine. PMID:28989308
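    One common empirical relation of this self-accelerating kind is the inverse-rate method: the reciprocal of the measured damage rate often trends linearly to zero at failure, so a straight-line fit projects the time to criticality. A sketch on synthetic data (this illustrates the general approach, not necessarily the paper's specific relation):

    ```python
    def predict_failure_time(times, rates):
        """Fit 1/rate = a + b*t by least squares and extrapolate to 1/rate = 0.
        Applicable to self-accelerating damage where the rate diverges at failure."""
        inv = [1.0 / r for r in rates]
        n = len(times)
        mt, mi = sum(times) / n, sum(inv) / n
        b = (sum((t - mt) * (v - mi) for t, v in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
        a = mi - b * mt
        return -a / b  # time at which the fitted 1/rate reaches zero

    # Synthetic monitoring data: rate = 1/(100 - t), which diverges at t = 100
    times = [10, 30, 50, 70, 90]
    rates = [1.0 / (100 - t) for t in times]
    print(round(predict_failure_time(times, rates), 6))  # 100.0
    ```

    With frequent in situ measurements the fit can be refreshed continuously, so the projected failure time sharpens as the sensor data accumulate.
    
    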

  20. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks step by step through the process the interface uses by means of an example.

  2. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system-function-based technique for improving a preventive maintenance (PM) program, applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine whether MOV stroke time testing was useful as a condition-directed PM task; (b) to determine and compare the plant-specific MOV failure data from a broad RCM philosophy time period with a PM period and, also, with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement over the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.
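The plant-specific versus generic comparison above amounts to comparing two Poisson failure rates observed over different exposure times. A small sketch with invented counts and exposure hours (not the Peach Bottom data):

```python
import math

# Hedged sketch: two observed failure counts (e.g. a PM period versus an
# RCM period) treated as Poisson counts over exposure times, compared with
# a normal-approximation z statistic for H0: equal rates. All numbers are
# illustrative placeholders.

def compare_rates(x1, t1, x2, t2):
    r1, r2 = x1 / t1, x2 / t2
    p = (x1 + x2) / (t1 + t2)              # pooled rate under H0
    z = (r1 - r2) / math.sqrt(p / t1 + p / t2)
    return r1, r2, z

r_pm, r_rcm, z = compare_rates(12, 1.0e4, 4, 1.0e4)   # failures, hours
print(r_pm, r_rcm, round(z, 2))   # z ~ 2.0: suggestive, not decisive
```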

  3. Minimum ten-year results of primary bipolar hip arthroplasty for degenerative arthritis of the hip.

    PubMed

    Pellegrini, Vincent D; Heiges, Bradley A; Bixler, Brian; Lehman, Erik B; Davis, Charles M

    2006-08-01

    Bipolar hip arthroplasty has been advocated by some as an alternative to total hip arthroplasty for the treatment of degenerative arthritis of the hip. We sought to assess the results of this procedure at our institution after a minimum duration of follow-up of ten years. We retrospectively reviewed a consecutive series of 152 patients (173 hips) who underwent primary bipolar hemiarthroplasty for the treatment of symptomatic degenerative arthritis of the hip with a cementless femoral component between 1983 and 1987. Of the original cohort of 152 patients, ninety-two patients (104 hips) were available for clinical and radiographic review at a mean of 12.2 years postoperatively. At the time of the latest follow-up, self-administered Harris hip questionnaires were used to assess pain, mobility, activity level, and overall satisfaction with the procedure. Biplanar hip radiographs were made to evaluate bipolar shell migration, osteolysis, and femoral stem fixation. At the time of the latest follow-up, nineteen patients (nineteen hips) had undergone revision to total hip arthroplasty because of mechanical failure, and three patients (three hips) were awaiting revision because of symptomatic radiographic mechanical failure. Twelve acetabular revisions were performed or scheduled for the treatment of pelvic osteolysis or protrusio acetabuli secondary to component migration. Acetabular reconstruction required bone-grafting, an oversized shell, and/or a pelvic reconstruction ring. The overall rate of mechanical failure was 21.2% (twenty-two of 104 hips), with 91% (twenty) of the twenty-two failures involving the acetabular component. Reaming of the acetabulum at the time of the index arthroplasty was associated with a 6.4-fold greater risk of revision. The rate of implant survival, with revision because of mechanical failure as the end point, was 94.2% for femoral components and 80.8% for acetabular components at a mean of 12.2 years. 
Of the remaining sixty-nine patients (eighty-one hips) in whom the original prosthesis was retained, seventeen patients (24.6%) rated the pain as moderate to severe. Nearly 30% of patients with an intact prosthesis required analgesics on a regular basis. Radiographs were available for fifty-eight hips (including all of the hips with moderate to severe pain) after a minimum duration of follow-up of ten years; twenty-eight of these fifty-eight hips had radiographic evidence of acetabular component migration. This bipolar cup, when used for hemiarthroplasty in patients with symptomatic arthritis of the hip, was associated with unacceptably high rates of pain, migration, osteolysis, and the need for revision to total hip arthroplasty, especially when the acetabulum had been reamed. To the extent that these findings can be generalized to similar implant designs with conventional polyethylene, we do not recommend bipolar hemiarthroplasty as the primary operative treatment for degenerative arthritis of the hip.

  4. Levelized cost-benefit analysis of proposed diagnostics for the Ammunition Transfer Arm of the US Army's Future Armored Resupply Vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, V.K.; Young, J.M.

    1995-07-01

    The US Army's Project Manager, Advanced Field Artillery System/Future Armored Resupply Vehicle (PM-AFAS/FARV) is sponsoring the development of technologies that can be applied to the resupply vehicle for the Advanced Field Artillery System. The Engineering Technology Division of the Oak Ridge National Laboratory has proposed adding diagnostics/prognostics systems to four components of the Ammunition Transfer Arm of this vehicle, and a cost-benefit analysis was performed on the diagnostics/prognostics to show the potential savings that may be gained by incorporating these systems onto the vehicle. Possible savings could be in the form of reduced downtime, less unexpected or unnecessary maintenance, fewer regular maintenance checks, and/or lower collateral damage or loss. The diagnostics/prognostics systems are used to (1) help determine component problems, (2) determine the condition of the components, and (3) estimate the remaining life of the monitored components. The four components on the arm that are targeted for diagnostics/prognostics are (1) the electromechanical brakes, (2) the linear actuators, (3) the wheel/roller bearings, and (4) the conveyor drive system. These would be monitored using electrical signature analysis, vibration analysis, or a combination of both. Annual failure rates for the four components were obtained along with specifications for vehicle costs, crews, number of missions, etc. Accident scenarios based on component failures were postulated, and event trees for these scenarios were constructed to estimate the annual loss of the resupply vehicle, crew, or arm, or mission aborts. A levelized cost-benefit analysis was then performed to examine the costs of such failures, both with and without some level of failure reduction due to the diagnostics/prognostics systems. Any savings resulting from using diagnostics/prognostics were calculated.
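The event-tree bookkeeping behind such a cost-benefit comparison can be sketched as an expected annual loss with and without diagnostics. All frequencies, costs, and the averted-failure fraction below are invented placeholders, not the study's figures:

```python
# Hypothetical event-tree sketch: each monitored component has an annual
# failure frequency and a consequence cost; diagnostics are assumed to
# avert some fraction of those failures at an annualized cost.

failures = {                     # component: (failures/year, cost per failure, $)
    "brake":    (0.10, 50_000.0),
    "actuator": (0.05, 80_000.0),
    "bearing":  (0.20, 20_000.0),
}
detection_fraction = 0.6         # assumed fraction of failures averted
diagnostics_cost = 3_000.0       # assumed annualized cost of the system

loss_without = sum(f * c for f, c in failures.values())
loss_with = (1 - detection_fraction) * loss_without + diagnostics_cost
print(round(loss_without, 2), round(loss_with, 2))   # expected annual losses, $
```

A levelized analysis would further discount these annual figures over the vehicle's service life; the one-year comparison above is the core of each term.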

  5. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low when failure propagation is overlooked, as in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center and shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
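The failure-influenced-degree step can be sketched by running PageRank over a small directed cascading-failure graph. The 4-component adjacency matrix here is invented for illustration, and this is a generic PageRank power iteration, not the paper's exact formulation:

```python
import numpy as np

# Edge i -> j means a failure of component i can propagate to component j.
# PageRank on this graph scores how strongly each component is affected
# by cascades; running it on the transpose ranks the influencers instead.

def pagerank(adj, d=0.85, iters=200):
    """Power iteration; adj[i, j] = 1 if failure of i propagates to j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                 # dangling nodes: avoid divide-by-zero
    M = (adj / out).T                   # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)
    return r / r.sum()

adj = np.array([[0, 1, 1, 0],           # invented 4-component cascade graph
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], float)
scores = pagerank(adj)
print(scores.argmax())                  # most cascade-affected component (the sink)
```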

  6. Rock Testing Handbook (Test Standards 1993)

    DTIC Science & Technology

    1993-01-01


  7. A dual-center review of compressive osseointegration for fixation of massive endoprosthetics: 2- to 9-year followup.

    PubMed

    Calvert, George T; Cummings, Judd E; Bowles, Austin J; Jones, Kevin B; Wurtz, L Daniel; Randall, R Lor

    2014-03-01

    Aseptic failure of massive endoprostheses used in the reconstruction of major skeletal defects remains a major clinical problem. Fixation using compressive osseointegration was developed as an alternative to cemented and traditional press-fit fixation in an effort to decrease aseptic failure rates. The purpose of this study was to answer the following questions: (1) What is the survivorship of this technique at minimum 2-year followup? (2) Were patient demographic variables (age, sex) or anatomic location associated with implant failure? (3) Were there any prosthesis-related variables (eg, spindle size) associated with failure? (4) Was there a discernible learning curve associated with the use of the new device as defined by a difference in failure rate early in the series versus later on? The first 50 cases using compressive osseointegration fixation from two tertiary referral centers were retrospectively studied. Rates of component removal for any reason and for aseptic failure were calculated. Demographic, surgical, and oncologic factors were analyzed using regression analysis to assess for association with implant failure. Minimum followup was 2 years with a mean of 66 months. Median age at the time of surgery was 14.5 years. A total of 15 (30%) implants were removed for any reason. Of these revisions, seven (14%) were the result of aseptic failure. Five of the seven aseptic failures occurred at less than 1 year (average, 8.3 months), and none occurred beyond 17 months. With the limited numbers available, no demographic, surgical, or prosthesis-related factors correlated with failure. Most aseptic failures of compressive osseointegration occurred early. Longer followup is needed to determine if this technique is superior to other forms of fixation.

  8. Anger, hostility, and hospitalizations in patients with heart failure.

    PubMed

    Keith, Felicia; Krantz, David S; Chen, Rusan; Harris, Kristie M; Ware, Catherine M; Lee, Amy K; Bellini, Paula G; Gottlieb, Stephen S

    2017-09-01

    Heart failure patients have a high hospitalization rate, and anger and hostility are associated with coronary heart disease morbidity and mortality. Using structural equation modeling, this prospective study assessed the predictive validity of anger and hostility traits for cardiovascular and all-cause rehospitalizations in patients with heart failure. 146 heart failure patients were administered the STAXI and Cook-Medley Hostility Inventory to measure anger, hostility, and their component traits. Hospitalizations were recorded for up to 3 years following baseline. Causes of hospitalizations were categorized as heart failure, total cardiac, noncardiac, and all-cause (sum of cardiac and noncardiac). Measurement models were separately fit for Anger and Hostility, followed by a Confirmatory Factor Analysis to estimate the relationship between the Anger and Hostility constructs. An Anger model consisted of State Anger, Trait Anger, Anger Expression Out, and Anger Expression In, and a Hostility model included Cynicism, Hostile Affect, Aggressive Responding, and Hostile Attribution. The latent construct of Anger did not predict any of the hospitalization outcomes, but Hostility significantly predicted all-cause hospitalizations. Analyses of individual trait components of each of the 2 models indicated that Anger Expression Out predicted all-cause and noncardiac hospitalizations, and Trait Anger predicted noncardiac hospitalizations. None of the individual components of Hostility were related to rehospitalizations or death. The construct of Hostility and several components of Anger are predictive of hospitalizations that were not specific to cardiac causes. Mechanisms common to a variety of health problems, such as self-care and risky health behaviors, may be involved in these associations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2017-02-01

    This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration, and interruption duration per year at load points. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method: one component is selected for FOR allocation, and in the next iteration another component is selected based on the magnitude of CIP. The developed algorithm is implemented on a sample radial distribution system.
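The monovariable loop can be sketched as a greedy allocation. Here the CIP ranking is replaced by a trivial stand-in (pick the section with the largest current FOR), and all numbers are invented, in failures per 1000 h:

```python
# Hedged sketch of a monovariable allocation loop: one section is selected
# per iteration by an improvement-potential criterion and its forced outage
# rate is improved until the system-level budget is met. The real algorithm
# uses the CIP measure and load-point constraints; this stand-in just picks
# the section with the largest current FOR.

sections = {"S1": 8, "S2": 12, "S3": 5}   # FOR per section (invented units)
target = 20                               # required total budget
step = 1

while sum(sections.values()) > target:
    worst = max(sections, key=sections.get)  # stand-in for the CIP ranking
    sections[worst] -= step                  # improve the selected section

print(sections)   # {'S1': 7, 'S2': 8, 'S3': 5}
```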

  10. The Oxford unicompartmental knee fails at a high rate in a high-volume knee practice.

    PubMed

    Schroer, William C; Barnes, C Lowry; Diesfeld, Paul; LeMarr, Angela; Ingrassia, Rachel; Morton, Diane J; Reedy, Mary

    2013-11-01

    The Oxford knee is a unicompartmental implant featuring a mobile-bearing polyethylene component with excellent long-term survivorship results reported by the implant developers and early adopters. By contrast, other studies have reported higher revision rates in large academic practices and in national registries. Registry data have shown increased failure with this implant, especially by lower-volume surgeons and institutions. In the setting of a high-volume knee arthroplasty practice, we sought to determine (1) the failure rate of the Oxford unicompartmental knee implant using a failure definition for aseptic loosening that combined clinical features, plain radiographs, and scintigraphy, and (2) whether increased experience with this implant would decrease the failure rate, that is, whether there is a learning-curve effect. Eighty-three Oxford knee prostheses were implanted between September 2005 and July 2008 by the principal investigator. Radiographic and clinical data were available for review for all cases. A failed knee was defined as having recurrent pain after an earlier period of recovery from surgery, progressive radiolucent lines compared with initial postoperative radiographs, and a bone scan showing an isolated area of uptake limited to the area of the replaced compartment. Eleven knees in this series failed (13%); Kaplan-Meier survivorship was 86.5% (95% CI, 78.0%-95.0%) at 5 years. Failure occurrences were distributed evenly over the course of the study period. No learning curve effect was identified. Based on these findings, including a high failure rate of the Oxford knee implant and the absence of any discernible learning curve effect, the principal investigator no longer uses this implant.
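Survivorship figures in series like this one are typically computed with the Kaplan-Meier estimator, which can be sketched in a few lines; the follow-up times and censoring flags below are invented, not the 83-knee series:

```python
# Minimal Kaplan-Meier sketch: survival drops by (at_risk - d) / at_risk at
# each failure time; censored cases leave the risk set without a drop.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = failure, 0 = censored.
    Returns a list of (t, S(t)) at each distinct failure time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # failures at t
        m = sum(1 for tt, _ in data if tt == t)   # total leaving at t
        if d:
            s *= (at_risk - d) / at_risk
            curve.append((t, s))
        at_risk -= m
        i += m
    return curve

times = [1, 2, 2, 3, 4, 5, 5, 5]        # invented follow-up, years
events = [1, 1, 0, 1, 0, 0, 0, 0]       # 1 = revision, 0 = still intact
curve = kaplan_meier(times, events)
print([(t, round(s, 3)) for t, s in curve])   # [(1, 0.875), (2, 0.75), (3, 0.6)]
```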

  11. Failure after reverse total shoulder arthroplasty: what is the success of component revision?

    PubMed

    Black, Eric M; Roberts, Susanne M; Siegel, Elana; Yannopoulos, Paul; Higgins, Laurence D; Warner, Jon J P

    2015-12-01

    Complication rates remain high after reverse total shoulder arthroplasty (RTSA). Salvage options after implant failure have not been well defined. This study examines the role of reimplantation and revision RTSA after failed RTSA, reporting outcomes and complications of this salvage technique. Sixteen patients underwent component revision and reimplantation after a prior failed RTSA from 2004 to 2011. Indications included baseplate failure (7 patients, 43.8%), instability (6 patients, 37.5%), infection (2 patients, 12.5%), and humeral loosening (1 patient, 6.3%). The average age of the patient during revision surgery was 68.6 years. Outcomes information at follow-up was recorded, including visual analog scale score for pain, subjective shoulder value, American Shoulder and Elbow Surgeons score, and Simple Shoulder Test score, and these were compared with pre-revision values. Repeated surgeries and complications were noted. Average time to follow-up from revision was 58.9 months (minimum, 2 years; range, 24-103 months). The average postoperative visual analog scale score for pain was 1.7/10 (7.5/10 preoperatively; P < .0001), and the subjective shoulder value was 62% (17% preoperatively; P < .0001). The average postoperative American Shoulder and Elbow Surgeons score was 66.7, and the Simple Shoulder Test score was 52.6. Fourteen patients (88%) noted that they felt "better" postoperatively than before their original RTSA and would go through the procedure again if given the option. Nine patients suffered major complications (56%), and 6 of these ultimately underwent further procedures (38% of cohort). Salvage options after failure of RTSA remain limited. Component revision and reimplantation can effectively relieve pain and improve function compared with baseline values, and patient satisfaction levels are moderately high. However, complication rates and reoperation rates are significant.

  12. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented in the form of a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, represented in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of failure modes with highest potential risks on the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
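The decomposition step can be sketched with a small invented component-by-failure-mode count matrix and an SVD-based PCA; this is a generic PCA, not the paper's processed accident data:

```python
import numpy as np

# Sketch of the idea: rows are components, columns are failure-mode counts
# from reports. PCA (via SVD of the centered matrix) gives a low-dimensional
# coordinate system in which components with similar failure behaviour
# cluster together. The 5x4 matrix is invented for illustration.

X = np.array([[3, 0, 1, 0],
              [2, 0, 1, 0],
              [0, 4, 0, 1],
              [0, 3, 0, 2],
              [1, 0, 0, 0]], float)

Xc = X - X.mean(axis=0)                      # center each failure-mode column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # project onto first 2 components
explained = float((s[:2] ** 2).sum() / (s ** 2).sum())
print(scores.shape, round(explained, 3))     # variance captured by 2 components
```

Components 0 and 1, which share the same failure modes, land close together in the 2-D score space, while components 2 and 3 form a separate cluster.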

  13. Fatigue failure of metal components as a factor in civil aircraft accidents

    NASA Technical Reports Server (NTRS)

    Holshouser, W. L.; Mayner, R. D.

    1972-01-01

    A review of records maintained by the National Transportation Safety Board showed that 16,054 civil aviation accidents occurred in the United States during the 3-year period ending December 31, 1969. Material failure was an important factor in the cause of 942 of these accidents. Fatigue was identified as the mode of the material failures associated with the cause of 155 accidents, and in many other accidents the records indicated that fatigue failures might have been involved. There were 27 fatal accidents and 157 fatalities in accidents in which fatigue failures of metal components were definitely identified. Fatigue failures associated with accidents occurred most frequently in landing-gear components, followed in order by powerplant, propeller, and structural components in fixed-wing aircraft and tail-rotor and main-rotor components in rotorcraft. In a study of 230 laboratory reports on failed components associated with the cause of accidents, fatigue was identified as the mode of failure in more than 60 percent of the failed components. The most frequently identified cause of fatigue, as well as of most other types of material failures, was improper maintenance (including inadequate inspection). Fabrication defects, design deficiencies, defective material, and abnormal service damage also caused many fatigue failures. Four case histories of major accidents are included in the paper as illustrations of some of the factors involved in fatigue failures of aircraft components.

  14. Molecular Dynamics Modeling of PPTA Crystals in Aramid Fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercer, Brian Scott

    2016-05-19

    In this work, molecular dynamics modeling is used to study the mechanical properties of PPTA crystallites, which are the fundamental microstructural building blocks of polymer aramid fibers such as Kevlar. Particular focus is given to constant strain rate axial loading simulations of PPTA crystallites, motivated by the rate-dependent mechanical properties observed in some experiments with aramid fibers. In order to accommodate the covalent bond rupture that occurs when loading a crystallite to failure, the reactive bond-order force field ReaxFF is employed to conduct the simulations. Two major topics are addressed. The first is the general behavior of PPTA crystallites under strain rate loading. Constant strain rate loading simulations of crystalline PPTA reveal that the crystal failure strain increases with increasing strain rate, while the modulus is not affected by the strain rate. Increasing temperature lowers both the modulus and the failure strain. The simulations also identify the C-N bond connecting the aromatic rings as the weakest primary bond along the backbone of the PPTA chain. The effect of chain-end defects on PPTA micromechanics is explored, and it is found that the presence of a chain-end defect transfers load to the adjacent chains in the hydrogen-bonded sheet in which the defect resides, but does not influence the behavior of any other chains in the crystal. Chain-end defects are found to lower the strength of the crystal when clustered together, inducing bond failure via stress concentrations arising from the load transfer to bonds in adjacent chains near the defect site. The second topic addressed is the nature of primary and secondary bond failure in crystalline PPTA. Failure of both types of bonds is found to be stochastic in nature and driven by thermal fluctuations of the bonds within the crystal. A model is proposed which uses reliability theory to model bonds under constant strain rate loading as components with time-dependent failure rate functions. The model is shown to work well for predicting the onset of primary backbone bond failure, as well as the onset of secondary bond failure via chain slippage for the case of isolated non-interacting chain-end defects.
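The reliability-theory idea in the final paragraph, bonds treated as components with a time-dependent failure rate under constant strain rate loading, can be sketched with a Weibull hazard. The Weibull form and its parameters are used here purely as an illustrative increasing-rate assumption, not the thesis's fitted model:

```python
import math

# Hedged sketch: a component (here, a bond) with Weibull hazard
# h(t) = (b/a) * (t/a)^(b-1) has survival S(t) = exp(-(t/a)^b); for b > 1
# the failure rate accelerates with time, as expected under increasing
# strain. Parameters a, b are invented.

def survival(t, a=1.0, b=4.0):
    """Weibull survival probability S(t) = exp(-(t/a)^b)."""
    return math.exp(-((t / a) ** b))

def failure_rate(t, a=1.0, b=4.0):
    """Weibull hazard, increasing in t for b > 1."""
    return (b / a) * (t / a) ** (b - 1)

# Onset of bond failure taken as the time when survival drops below 99%
t, dt = 0.0, 1e-4
while survival(t) > 0.99:
    t += dt
print(round(t, 3))   # onset time under these assumed parameters
```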

  15. Design of Critical Components

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.

    2001-01-01

    Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures when components are removed for cause or after a set operating time in the system. Issues of liability and the cost of component removal then become of paramount importance. Deterministic design with factors of safety and probabilistic design each address, but lack, the essential characteristics needed for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost: many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific member of that class cannot be predicted.
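The life-insurance analogy can be made concrete: with a constant failure rate, the surviving fraction of a class of components is predictable even though each individual failure time is random. A sketch with an invented rate:

```python
import math
import random

# Class-level prediction vs. individual unpredictability: for constant
# failure rate lam, the fraction of a component class surviving to time t
# is exp(-lam * t), while any single component's failure time is an
# exponential random variable. The rate and horizon are invented.

lam, t = 0.002, 500.0                  # failures per hour; horizon, hours
predicted = math.exp(-lam * t)         # class-level survival fraction

random.seed(1)
n = 100_000
alive = sum(1 for _ in range(n) if random.expovariate(lam) > t)
print(round(predicted, 3), round(alive / n, 3))   # both near exp(-1) ~ 0.368
```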

  16. Evaluation of the split cantilever beam for Mode 3 delamination testing

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.

    1989-01-01

    A test rig for testing a thick split cantilever beam for scissoring delamination (mode 3) fracture toughness was developed. A 3-D finite element analysis was conducted on the test specimen to determine the distribution of the strain energy release rate, G, along the delamination front. The virtual crack closure technique was used to calculate the G components resulting from interlaminar tension, GI, interlaminar sliding shear, GII, and interlaminar tearing shear, GIII. The finite element analysis showed that at the delamination front no GI component existed, but a GII component was present in addition to the GIII component. Furthermore, near the free edges, the GII component was significantly higher than the GIII component. The GII/GIII ratio was found to increase with delamination length but was insensitive to the beam depth. The presence of GII at the delamination front was verified experimentally by examination of the failure surfaces. At the center of the beam, where the failure was in mode 3, there was significant fiber bridging. However, at the edges of the beam, where the failure was in mode 2, there was no fiber bridging and mode 2 shear hackles were observed. Therefore, it was concluded that the split cantilever beam configuration does not represent a pure mode 3 test. The experimental work showed that the mode 2 fracture toughness, GIIc, must be less than the mode 3 fracture toughness, GIIIc. Therefore, a conservative approach to characterizing mode 3 delamination is to equate GIIIc to GIIc.

  17. Schooling as a Lottery: Racial Differences in School Advancement in Urban South Africa†

    PubMed Central

    Lam, David; Ardington, Cally; Leibbrandt, Murray

    2010-01-01

    This paper analyzes the large racial differences in progress through secondary school in South Africa. Using recently collected longitudinal data we find that grade advancement is strongly associated with scores on a baseline literacy and numeracy test. In grades 8-11 the effect of these scores on grade progression is much stronger for white and coloured students than for African students, while there is no racial difference in the impact of the scores on passing the nationally standardized grade 12 matriculation exam. We develop a stochastic model of grade repetition that generates predictions consistent with these results. The model predicts that a larger stochastic component in the link between learning and measured performance will generate higher enrollment, higher failure rates, and a weaker link between ability and grade progression. The results suggest that grade progression in African schools is poorly linked to actual ability and learning. The results point to the importance of considering the stochastic component of grade repetition in analyzing school systems with high failure rates. PMID:21499515
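The stochastic-promotion mechanism can be illustrated with a toy model in which a student passes when true ability plus noise clears a cutoff; a larger noise component raises the failure rate even for able students and weakens the ability-progression link. All parameters are invented:

```python
import random

# Toy sketch of stochastic grade promotion: measured performance is true
# ability plus Gaussian noise, and a student passes when it clears a cutoff.
# With more noise, an able student fails far more often, mimicking the
# paper's weak link between ability and progression.

def pass_rate(ability, noise_sd, cutoff=0.0, trials=20_000, seed=7):
    rng = random.Random(seed)
    passes = sum(ability + rng.gauss(0.0, noise_sd) > cutoff
                 for _ in range(trials))
    return passes / trials

able = 0.5                               # invented ability above the cutoff
low_noise = pass_rate(able, 0.2)         # reliable measurement
high_noise = pass_rate(able, 2.0)        # noisy measurement
print(round(low_noise, 3), round(high_noise, 3))   # ~0.99 vs ~0.60
```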

  18. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on the operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology to combine various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information to assess component status. Then, different alarm analysis techniques are applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of whether faults in some components are followed by failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components, and between failures and adverse environmental conditions.
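The alarm-categorization step can be sketched as mapping raw SCADA alarm codes onto a component taxonomy and counting them in a window before a known failure, so alarm-heavy components stand out. The codes and taxonomy mapping are invented for this example:

```python
from collections import Counter

# Illustrative alarm-processing sketch: each raw SCADA alarm code is mapped
# to a taxonomy category (component), then counted over a pre-failure
# window. The taxonomy and log are invented placeholders.

taxonomy = {101: "gearbox", 102: "gearbox", 201: "pitch", 301: "converter"}
alarm_log = [101, 201, 101, 102, 301, 101, 201, 102]  # codes before a failure

counts = Counter(taxonomy[code] for code in alarm_log)
print(counts.most_common(1))   # [('gearbox', 5)]
```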

  19. Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Warner, D. H.

    2015-01-01

    Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, void and particle volume fraction, etc.). In this study, the stress state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D Finite Element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many loading orientations until failure. Material failure is assumed to occur in the weakest orientation, and so the orientation in which failure occurs first is considered as the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to overpredict ductility. Because the model excludes smaller nucleation events and load-path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure; incorporating such microstructural information into micromechanical models is critical to developing quantitatively accurate physics-based ductile failure models.
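    The two stress-state inputs named above can be computed from principal stresses; a minimal sketch using one common sign convention (the abstract does not state the paper's exact definitions):

```python
import math

def triaxiality_and_lode(s1, s2, s3):
    """Stress triaxiality (mean stress / von Mises stress) and Lode
    parameter for ordered principal stresses s1 >= s2 >= s3."""
    mean = (s1 + s2 + s3) / 3.0
    von_mises = math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))
    lode = (2.0 * s2 - s1 - s3) / (s1 - s3)
    return mean / von_mises, lode
```

    For uniaxial tension (s, 0, 0) this gives triaxiality 1/3 and Lode parameter -1, the usual reference values.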

  20. Diagnosing Recent Failures In Hodoscope Photomultiplier Tube Bases For FNAL E906

    NASA Astrophysics Data System (ADS)

    Stien, Haley; SeaQuest Collaboration

    2017-09-01

    The E906/SeaQuest experiment at Fermi National Accelerator Laboratory is researching the nucleon quark sea in order to provide an accurate determination of the quark and anti-quark distributions within the nucleon. By colliding a 120 GeV proton beam with a set of fixed targets and tracking the dimuons that hit the detectors, it is possible to study the quark/anti-quark interaction that produced the dimuon through the Drell-Yan process. However, E906 recently began to experience a number of failures in the Hodoscope Photomultiplier Tube bases in the first two detector stations, which are used in the trigger. The two most likely causes were known to be radiation damage and overheating. Radiation damage was ruled out when it was found that there was no increase in the number of base failures in high rate areas. It was clear that the heat generated on the custom high rate bases caused several components on the daughter cards to slowly overheat until failure. Using thermal imaging and two temperature probes, it was observed that the components on the daughter cards would reach temperatures over 100 degrees Celsius very quickly during our tests. This presentation will discuss the diagnostic process and summarize how this issue will be prevented in the future. Supported by U.S. D.O.E. Medium Energy Nuclear Physics under Grant DE-FG02-03ER41243.

  1. California State University, Northridge: Hybrid Lab Courses

    ERIC Educational Resources Information Center

    EDUCAUSE, 2014

    2014-01-01

    California State University, Northridge's Hybrid Lab course model targets high failure rate, multisection, gateway courses in which prerequisite knowledge is a key to success. The Hybrid Lab course model components incorporate interventions and practices that have proven successful at CSUN and other campuses in supporting students, particularly…

  2. Five year survival analysis of an oxidised zirconium total knee arthroplasty.

    PubMed

    Holland, Philip; Santini, Alasdair J A; Davidson, John S; Pope, Jill A

    2013-12-01

    Zirconium total knee arthroplasties theoretically have a low incidence of failure as they are low friction, hard wearing and hypoallergenic. We report the five year survival of 213 Profix zirconium total knee arthroplasties with a conforming all polyethylene tibial component. Data was collected prospectively and multiple strict end points were used. SF12 and WOMAC scores were recorded pre-operatively, at three months, at twelve months, at 3 years and at 5 years. Eight patients died and six were "lost to follow-up". The remaining 199 knees were followed up for five years. The mean WOMAC score improved from 56 to 35 and the mean SF12 physical component score improved from 28 to 34. The five year survival for failure due to implant related reasons was 99.5% (95% CI 97.4-100). This was due to one tibial component becoming loose aseptically in year zero. Our results demonstrate that the Profix zirconium total knee arthroplasty has a low medium term failure rate comparable to the best implants. Further research is needed to establish if the beneficial properties of zirconium improve long term implant survival. Copyright © 2012 Elsevier B.V. All rights reserved.
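    The reported 99.5% five-year survival is consistent with a Kaplan-Meier estimate over roughly 200 implants at risk with a single implant-related failure; a minimal sketch (the cohort below is an invented illustration, not the study's records):

```python
def kaplan_meier_survival(records):
    """records: (time, event) pairs, event=True for failure, False for
    censoring. Returns the Kaplan-Meier survival estimate after the last
    time, handling tied times one at a time (adequate for a sketch)."""
    surv, at_risk = 1.0, len(records)
    for _, failed in sorted(records):
        if failed:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv
```

    One failure among 200 implants, the rest censored at five years, yields a survival estimate of 1 - 1/200 = 0.995.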

  3. Higher failure rate and stem migration of an uncemented femoral component in patients with femoral head osteonecrosis than in patients with osteoarthrosis.

    PubMed

    Radl, Roman; Hungerford, Marc; Materna, Wilfried; Rehak, Peter; Windhager, Reinhard

    2005-02-01

    Several authors have found poorer outcome after hip replacement for osteonecrosis than after hip replacement for arthrosis. In a retrospective study we evaluated the performance of an uncemented femoral component in patients with osteonecrosis and arthrosis of the hip. 31 patients operated for osteonecrosis, and 49 patients operated for osteoarthrosis were included. The median follow-up time was 6.1 (2-11) years for the patients with osteonecrosis, and 5.9 (4-8) for the arthrosis patients. Migration analysis performed by the Einzel-Bild-Roentgen Analysis (EBRA) technique revealed a median stem migration of 1.5 (-8.8-0) mm in the patients with osteonecrosis, but only 0.6 (-2.8-0.7) mm in the patients with arthrosis (p < 0.001). Survivorship analysis with stem revision as endpoint for failure was 74% (95% CI: 55-94) in the osteonecrosis, and 98% (95% CI: 94-100) in the arthrosis group (p = 0.01). We suggest that the higher failure rate and stem migration of uncemented total hip replacement in the patients with osteonecrosis is a consequence of the disease. On the basis of these findings, we recommend close monitoring of the patients with osteonecrosis, which should include migration measurements.

  4. Poor short term outcome with a metal-on-metal total hip arthroplasty.

    PubMed

    Levy, Yadin D; Ezzet, Kace A

    2013-08-01

    Metal-on-metal (MoM) bearings for total hip arthroplasty (THA) have come under scrutiny with reports of high failure rates. Clinical outcome studies with several commercially available MoM THA bearings remain unreported. We evaluated 78 consecutive MoM THAs from a single manufacturer in 68 patients. Sixty-six received cobalt-chrome (CoCr) monoblock and 12 received modular titanium acetabular cups with internal CoCr liners. Femoral components were titanium with modular necks. At average 2.1 years postoperatively, 12 THAs (15.4%) demonstrated aseptic failure (10 revisions, 2 revision recommended). All revised hips demonstrated capsular necrosis with positive histology reaction for aseptic lymphocytic vasculitis-associated lesions/adverse local tissue reactions. Prosthetic instability following revision surgery was relatively common. Female gender was a strong risk factor for failure, though smaller cups were not. Both monoblock and modular components fared poorly. Corrosion was frequently observed around the proximal and distal end of the modular femoral necks. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Technological development of high energy density capacitors. [for spacecraft power supplies

    NASA Technical Reports Server (NTRS)

    Parker, R. D.

    1976-01-01

    A study was conducted to develop cylindrical wound metallized film capacitors rated 2 µF, 500 VDC, with energy densities greater than 0.1 J/g. Polysulfone (PS) and polyvinylidene fluoride (PVF2) were selected as dielectrics. Single film PS capacitors of 0.2 J/g (uncased) were made of 3.75 µm material. Single film PVF2 capacitors of 0.19 J/g (uncased) were made of 6.0 µm material. Corona measurements were made at room temperature, and capacitance and dissipation factor measurements were made over the ranges 25 C to 125 C and 120 Hz to 100 kHz. Nineteen of twenty PVF2 components survived a 2500 hour dc plus ac life test. Failure analyses revealed most failures occurred at wrinkles, but some edge failures were also seen. A 0.989 g case was designed. When the case was combined with the PVF2 component, a finished energy density of 0.11 J/g was achieved.

  6. Space tug propulsion system failure mode, effects and criticality analysis

    NASA Technical Reports Server (NTRS)

    Boyd, J. W.; Hardison, E. P.; Heard, C. B.; Orourke, J. C.; Osborne, F.; Wakefield, L. T.

    1972-01-01

    For purposes of the study, the propulsion system was considered as consisting of the following: (1) main engine system, (2) auxiliary propulsion system, (3) pneumatic system, (4) hydrogen feed, fill, drain and vent system, (5) oxygen feed, fill, drain and vent system, and (6) helium reentry purge system. Each component was critically examined to identify possible failure modes and the subsequent effect on mission success. Each space tug mission consists of three phases: launch to separation from shuttle, separation to redocking, and redocking to landing. The analysis considered the results of failure of a component during each phase of the mission. After the failure modes of each component were tabulated, those components whose failure would result in possible or certain loss of mission or inability to return the Tug to ground were identified as critical components and a criticality number determined for each. The criticality number of a component denotes the number of mission failures in one million missions due to the loss of that component. A total of 68 components were identified as critical with criticality numbers ranging from 1 to 2990.
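    The report defines the criticality number as mission failures per million missions due to loss of a component; the abstract does not give the exact formula, but one consistent reading can be sketched as the component's mission-loss probability scaled to a million missions (illustrative figures only):

```python
def criticality_number(phase_failure_probs, loss_fraction=1.0):
    """Mission failures per one million missions attributable to one
    component. phase_failure_probs: the component's failure probability
    in each of the mission phases (e.g. launch-to-separation,
    separation-to-redocking, redocking-to-landing); loss_fraction: the
    probability that a component failure actually loses the mission."""
    p_none = 1.0
    for p in phase_failure_probs:
        p_none *= 1.0 - p
    return (1.0 - p_none) * loss_fraction * 1e6
```

    With per-phase failure probabilities of 1e-6 and 2e-6 and certain mission loss, the criticality number is about 3, at the low end of the 1-2990 range quoted above.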

  7. A Novel Solution-Technique Applied to a Novel WAAS Architecture

    NASA Technical Reports Server (NTRS)

    Bavuso, J.

    1998-01-01

    The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time to failure distribution. Moreover, the cold spare satellite build-time is Weibull and upon launch is considered to be a good-as-new system with an increasing failure rate and a Weibull time to failure distribution as well. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure for the launched spare satellite. A powerful dynamic fault tree modeling notation and Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10 year mission.
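    A stripped-down version of such a non-Markovian simulation can be sketched with inverse-CDF Weibull sampling. All parameters below are hypothetical, and the actual study models several satellites and uses dynamic fault trees with importance sampling; this sketch covers only one primary satellite and one cold spare:

```python
import math
import random

def weibull_draw(rng, scale, shape):
    # Inverse-CDF sampling: T = scale * (-ln U)^(1/shape);
    # shape > 1 gives the increasing failure rate assumed for the satellites.
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def mission_reliability(mission=10.0, sat_scale=12.0, sat_shape=2.0,
                        build_scale=1.0, build_shape=2.0,
                        trials=20000, seed=7):
    """Crude Monte Carlo: the spare's fabrication starts when the primary
    fails (the three clocks named in the abstract are the primary's life,
    the build time, and the launched spare's life; all figures invented)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        t_fail = weibull_draw(rng, sat_scale, sat_shape)
        if t_fail >= mission:
            ok += 1
            continue
        ready = t_fail + weibull_draw(rng, build_scale, build_shape)
        if ready < mission and \
           weibull_draw(rng, sat_scale, sat_shape) >= mission - ready:
            ok += 1
    return ok / trials
```

    Plain Monte Carlo like this needs importance sampling, as in the paper, once the failure probabilities of interest become small.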

  8. A Case Study on the Relational Component of Professional Learning Teams

    ERIC Educational Resources Information Center

    Willis, Matthew R.

    2017-01-01

    This dissertation is focused on intentional intervention strategies adopted in a prior five-year period by Denver Area High School for the purpose of reducing suspensions, expulsions, referrals, other minor disciplinary infractions, and reduced failure rates. Those strategies included implementing a Culture of Care, which included (a) Restorative…

  9. Efficient Simulation and Abuse Modeling of Mechanical-Electrochemical-Thermal Phenomena in Lithium-Ion Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanagopalan, Shriram; Smith, Kandler A; Graf, Peter A

    NREL's Energy Storage team is exploring the effect of mechanical crush of lithium ion cells on their thermal and electrical safety. PHEV cells, fresh as well as ones aged over 8 months under different temperatures, voltage windows, and charging rates, were subjected to destructive physical analysis. Constitutive relationships and failure criteria were developed for the electrodes, the separator, and the packaging material. The mechanical models capture well the various modes of failure across different cell components. Cell-level validation is being conducted by Sandia National Laboratories.

  10. Stress Analysis of B-52B and B-52H Air-Launching Systems Failure-Critical Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    2005-01-01

    The operational life analysis of any airborne failure-critical structural component requires the stress-load equation, which relates the applied load to the maximum tangential tensile stress at the critical stress point. The failure-critical structural components identified are the B-52B Pegasus pylon adapter shackles, B-52B Pegasus pylon hooks, B-52H airplane pylon hooks, B-52H airplane front fittings, B-52H airplane rear pylon fitting, and the B-52H airplane pylon lower sway brace. Finite-element stress analysis was performed on the said structural components, and the critical stress point was located and the stress-load equation was established for each failure-critical structural component. The ultimate load, yield load, and proof load needed for operational life analysis were established for each failure-critical structural component.

  11. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    NASA Astrophysics Data System (ADS)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accident, with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability for BLEVE of 5.63 × 10^-19 and for jet fire of 9.57 × 10^-3. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization has been obtained as 4.22 × 10^-6.
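    The basic FTA combination rules behind such top-event numbers are straightforward; a sketch for independent basic events (not the paper's specific tree):

```python
def or_gate(event_probs):
    """Top-event probability when any independent basic event triggers it:
    1 - product of (1 - p_i)."""
    p_none = 1.0
    for p in event_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

def and_gate(event_probs):
    """Top-event probability when all independent basic events must occur:
    product of p_i."""
    p_all = 1.0
    for p in event_probs:
        p_all *= p
    return p_all
```

    AND gates drive probabilities down multiplicatively, which is how a top event like BLEVE can reach the order of 10^-19 from modest basic-event probabilities, while OR gates keep the top event near the sum of its inputs.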

  12. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. 
Several models were explored and preliminary models involving the bivariate Poisson distribution and the Consael Process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
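    The single-stage Bayesian idea underlying this work can be sketched with a standard gamma-Poisson conjugate update, which yields a point estimate even when no failures have been recorded; this is illustrative only, not the bivariate Poisson/Consael model developed here:

```python
def posterior_failure_rate(prior_shape, prior_inv_scale_hours,
                           failures, exposure_hours):
    """Conjugate gamma-Poisson update for a constant failure rate:
    a Gamma(a, b) prior (b is the inverse scale, in hours) combined with
    k failures in t hours of exposure gives a Gamma(a + k, b + t)
    posterior. Returns the posterior mean rate (failures/hour)."""
    return (prior_shape + failures) / (prior_inv_scale_hours + exposure_hours)
```

    With zero observed failures the estimate falls back on the prior rather than the undefined classical point estimate 0/t, which is exactly the property the abstract highlights.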

  13. Analysis of Emergency Diesel Generators Failure Incidents in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Hunt, Ronderio LaDavis

    In early years of operation, emergency diesel generators had a minimal rate of demand failures. Emergency diesel generators (EDGs) are designed to operate as a backup when the main source of electricity has been disrupted. Recently, EDGs have been failing at nuclear power plants (NPPs) around the United States, causing either station blackouts or loss of onsite and offsite power. These failures were of a specific type called demand failures. This thesis evaluated a problem of concern to the nuclear industry: the rate of EDG demand failures grew from an average of 1 per year in 1997 to an excessive event of 4 in a single year in 2011. To estimate when the next such excessive event might occur, and its possible cause, two analyses were conducted: a statistical analysis and a root cause analysis. In the statistical analysis, an extreme-event probability approach was applied to determine the next occurrence year of an excessive event as well as the probability of that event occurring. The root cause analysis examined potential causes of the excessive event by evaluating the EDG manufacturers, aging, policy changes and maintenance practices, and failure components, and it investigated the correlation between demand failure data and historical data. 
Conclusions showed that predicting, with an acceptable confidence level, the next excessive demand failure year and the probability of its occurrence was difficult, but that this type of failure is likely not a 100-year event. Notably, as of 2005 the majority of EDG demand failures occurred within the main components. The percentages obtained in the overall analysis support the statement that the excessive event was caused by the overall age (wear and tear) of the emergency diesel generators in nuclear power plants. Future work will aim to better determine the return period of the excessive event, once it has occurred a second time, by implementing the extreme-event probability approach.
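    A back-of-the-envelope check of the "not a 100-year event" conclusion: if demand failures arrived as a Poisson process at the 1997 average of about one per year (an independence assumption the thesis's extreme-event approach does not necessarily make), the chance of four or more in a year is about 1.9%, a return period of roughly 50 years:

```python
import math

def poisson_tail(mean, k):
    """P(X >= k) for X ~ Poisson(mean)."""
    p_below = sum(math.exp(-mean) * mean**i / math.factorial(i)
                  for i in range(k))
    return 1.0 - p_below
```

    poisson_tail(1.0, 4) is about 0.019, so under this simple model the 2011 cluster is rare but well short of a 100-year event.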

  14. Socket position determines hip resurfacing 10-year survivorship.

    PubMed

    Amstutz, Harlan C; Le Duff, Michel J; Johnson, Alicia J

    2012-11-01

    Modern metal-on-metal hip resurfacing arthroplasty designs have been used for over a decade. Risk factors for short-term failure include small component size, large femoral head defects, low body mass index, older age, high level of sporting activity, and component design, and it is established there is a surgeon learning curve. Owing to failures with early surgical techniques, we developed a second-generation technique to address those failures. However, it is unclear whether the techniques affected the long-term risk factors. We (1) determined survivorship for hips implanted with the second-generation cementing technique; (2) identified the risk factors for failure in these patients; and (3) determined the effect of the dominant risk factors on the observed modes of failure. We retrospectively reviewed the first 200 hips (178 patients) implanted using our second-generation surgical technique, which consisted of improvements in cleaning and drying the femoral head before and during cement application. There were 129 men and 49 women. Component orientation and contact patch to rim distance were measured. We recorded the following modes of failure: femoral neck fracture, femoral component loosening, acetabular component loosening, wear, dislocation, and sepsis. The minimum followup was 25 months (mean, 106.5 months; range, 25-138 months). Twelve hips were revised. Kaplan-Meier survivorship was 98.0% at 5 years and 94.3% at 10 years. The only variable associated with revision was acetabular component position. Contact patch to rim distance was lower in hips that dislocated, were revised for wear, or were revised for acetabular loosening. The dominant modes of failure were related to component wear or acetabular component loosening. Acetabular component orientation, a factor within the surgeon's control, determines the long-term success of our current hip resurfacing techniques. 
Current techniques have changed the modes of failure from aseptic femoral failure to wear or loosening of the acetabular component. Level III, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  15. Effect of an Interactive Component on Students' Conceptual Understanding of Hypothesis Testing

    ERIC Educational Resources Information Center

    Inkpen, Sarah Anne

    2016-01-01

    The Premier Technical College of Qatar (PTC-Q) has seen high failure rates among students taking a college statistics course. The students are English as a foreign language (EFL) learners in business studies and health sciences. Course delivery has involved conventional content/ curriculum-centered instruction with minimal to no interactive…

  16. Single Event Effects (SEE) for Power Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs)

    NASA Technical Reports Server (NTRS)

    Lauenstein, Jean-Marie

    2011-01-01

    Single-event gate rupture (SEGR) continues to be a key failure mode in power MOSFETs. SEGR is complex, making rate prediction difficult. The SEGR mechanism has two main components: (1) oxide damage, which reduces the field required for rupture, and (2) epilayer response, which creates a transient high field across the oxide.

  17. Heart-rate variability depression in porcine peritonitis-induced sepsis without organ failure.

    PubMed

    Jarkovska, Dagmar; Valesova, Lenka; Chvojka, Jiri; Benes, Jan; Danihel, Vojtech; Sviglerova, Jitka; Nalos, Lukas; Matejovic, Martin; Stengl, Milan

    2017-05-01

    Depression of heart-rate variability (HRV) in conditions of systemic inflammation has been shown in both patients and experimental animal models and HRV has been suggested as an early indicator of sepsis. The sensitivity of HRV-derived parameters to the severity of sepsis, however, remains unclear. In this study we modified the clinically relevant porcine model of peritonitis-induced sepsis in order to avoid the development of organ failure and to test the sensitivity of HRV to such non-severe conditions. In 11 anesthetized, mechanically ventilated and instrumented domestic pigs of both sexes, sepsis was induced by fecal peritonitis. The dose of feces was adjusted and antibiotic therapy was administered to avoid multiorgan failure. Experimental subjects were screened for 40 h from the induction of sepsis. In all septic animals, sepsis with hyperdynamic circulation and increased plasma levels of inflammatory mediators developed within 12 h from the induction of peritonitis. The sepsis did not progress to multiorgan failure and there was no spontaneous death during the experiment despite a modest requirement for vasopressor therapy in most animals (9/11). A pronounced reduction of HRV and elevation of heart rate developed quickly (within 5 h, time constant of 1.97 ± 0.80 h for HRV parameter TINN) upon the induction of sepsis and were maintained throughout the experiment. The frequency domain analysis revealed a decrease in the high-frequency component. The reduction of HRV parameters and elevation of heart rate preceded sepsis-associated hemodynamic changes by several hours (time constant of 11.28 ± 2.07 h for systemic vascular resistance decline). A pronounced and fast reduction of HRV occurred in the setting of a moderate experimental porcine sepsis without organ failure. Inhibition of parasympathetic cardiac signaling probably represents the main mechanism of HRV reduction in sepsis. 
The sensitivity of HRV to systemic inflammation may allow early detection of a moderate sepsis without organ failure. Impact statement: A pronounced and fast reduction of heart-rate variability occurred in the setting of a moderate experimental porcine sepsis without organ failure. Dominant reduction of heart-rate variability was found in the high-frequency band indicating inhibition of parasympathetic cardiac signaling as the main mechanism of heart-rate variability reduction. The sensitivity of heart-rate variability to systemic inflammation may contribute to an early detection of moderate sepsis without organ failure.
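    A minimal time-domain HRV measure can be sketched as follows; SDNN is used here rather than the TINN and spectral indices of the study, and the RR interval values in the comparison are invented for illustration:

```python
import math

def sdnn(rr_intervals_ms):
    """SDNN: sample standard deviation of beat-to-beat (RR) intervals
    in milliseconds, a basic time-domain HRV measure. Lower values
    indicate depressed variability, as reported in sepsis."""
    n = len(rr_intervals_ms)
    mean = sum(rr_intervals_ms) / n
    return math.sqrt(sum((x - mean) ** 2 for x in rr_intervals_ms) / (n - 1))
```

    A depressed-HRV recording shows both shorter intervals (higher heart rate) and far less spread between them than a baseline recording.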

  18. Low-thrust mission risk analysis, with application to a 1980 rendezvous with the comet Encke

    NASA Technical Reports Server (NTRS)

    Yen, C. L.; Smith, D. B.

    1973-01-01

    A computerized failure process simulation procedure is used to evaluate the risk in a solar electric space mission. The procedure uses currently available thrust-subsystem reliability data and performs approximate simulations of the thrust subsystem burn operation, the system failure processes, and the retargeting operations. The method is applied to assess the risks in carrying out a 1980 rendezvous mission to the comet Encke. Analysis of the results and evaluation of the effects of various risk factors on the mission show that system component failure rates are the limiting factors in attaining a high mission reliability. It is also shown that a well-designed trajectory and system operation mode can be used effectively to partially compensate for unreliable thruster performance.

  19. Antibody-Mediated Rejection of Human Orthotopic Liver Allografts

    PubMed Central

    Demetris, A. Jake; Jaffe, Ron; Tzakis, A.; Ramsey, Glenn; Todo, S.; Belle, Steven; Esquivel, Carlos; Shapiro, Ron; Markus, Bernd; Mroczek, Elizabeth; Van Thiel, D. H.; Sysyn, Greg; Gordon, Robert; Makowka, Leonard; Starzl, Tom

    1988-01-01

    A clinicopathologic analysis of liver transplantation across major ABO blood group barriers was carried out 1) to determine if antibody-mediated (humoral) rejection was a cause of graft failure and if humoral rejection can be identified, 2) to propose criteria for establishing the diagnosis, and 3) to describe the clinical and pathologic features of humoral rejection. A total of 51 (24 primary) ABO-incompatible (ABO-I) liver grafts were transplanted into 49 recipients. There was a 46% graft failure rate during the first 30 days for primary ABO-I grafts compared with an 11% graft failure rate for primary ABO compatible (ABO-C), crossmatch negative, age, sex and priority-matched control patients (P < 0.02). A similarly high early graft failure rate (60%) was seen for nonprimary ABO-I grafts during the first 30 days. Clinically, the patients experienced a relentless rise in serum transaminases, hepatic failure, and coagulopathy during the first weeks after transplant. Pathologic examination of ABO-I grafts that failed early demonstrated widespread areas of geographic hemorrhagic necrosis with diffuse intraorgan coagulation. Prominent arterial deposition of antibody and complement components was demonstrated by immunofluorescent staining. Elution studies confirmed the presence of tissue-bound, donor-specific isoagglutinins within the grafts. No such deposition was seen in control cases. These studies confirm that antibody mediated rejection of the liver occurs and allows for the development of criteria for establishing the diagnosis. PMID:3046369

  20. Wind Turbine Failures - Tackling current Problems in Failure Data Analysis

    NASA Astrophysics Data System (ADS)

    Reder, M. D.; Gonzalez, E.; Melero, J. J.

    2016-09-01

    The wind industry has been growing significantly over the past decades, resulting in a remarkable increase in installed wind power capacity. Turbine technologies are rapidly evolving in terms of complexity and size, and there is an urgent need for cost-effective operation and maintenance (O&M) strategies. Unplanned downtime in particular represents one of the main cost drivers of a modern wind farm. Here, reliability and failure prediction models can enable operators to apply preventive O&M strategies rather than corrective actions. In order to develop these models, the failure rates and downtimes of wind turbine (WT) components have to be understood profoundly. This paper is focused on tackling three of the main issues related to WT failure analyses: the non-uniform treatment of data, the scarcity of available failure analyses, and the lack of investigation of alternative data sources. For this, a modernised form of an existing WT taxonomy is introduced. Additionally, an extensive analysis of historical failure and downtime data of more than 4300 turbines is presented. Finally, the possibilities of countering the lack of available failure data by complementing historical databases with Supervisory Control and Data Acquisition (SCADA) alarms are evaluated.
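    The normalisations such fleet analyses rely on are simple but worth pinning down; a sketch with made-up figures (the 4300-turbine count above is real, the failure and downtime numbers below are not):

```python
HOURS_PER_YEAR = 8760.0

def failures_per_turbine_year(failures, turbines, years):
    """Standard normalisation of failure counts in WT reliability studies,
    so fleets of different sizes and observation windows are comparable."""
    return failures / (turbines * years)

def fleet_availability(total_downtime_hours, turbines, years):
    """Time-based availability over the observation period."""
    return 1.0 - total_downtime_hours / (turbines * years * HOURS_PER_YEAR)
```

    For example, 430 failures across 4300 turbines in one year is 0.1 failures per turbine-year, regardless of fleet size.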

  1. Microembossing of ultrafine grained Al: microstructural analysis and finite element modelling

    NASA Astrophysics Data System (ADS)

    Qiao, Xiao Guang; Bah, Mamadou T.; Zhang, Jiuwen; Gao, Nong; Moktadir, Zakaria; Kraft, Michael; Starink, Marco J.

    2010-10-01

    Ultra-fine-grained (UFG) Al-1050 processed by equal channel angular pressing and UFG Al-Mg-Cu-Mn processed by high-pressure torsion (HPT) were embossed at both room temperature and 300 °C, with the aim of producing micro-channels. The behaviour of the Al alloys during the embossing process was analysed using finite element modelling. The cold embossing of both Al alloys is characterized by a partial pattern transfer, a large embossing force, channels with oblique sidewalls and a large failure rate of the mould. The hot embossing is characterized by straight channel sidewalls, fully transferred patterns and reduced loads, which decrease the failure rate of the mould. Hot embossing of UFG Al-Mg-Cu-Mn produced by HPT shows potential for the fabrication of microelectromechanical system components with micro-channels.

  2. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. To evaluate the reliability and soft error susceptibility of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
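    The constant-failure-rate figures mentioned above (failure rate, MTTF, unavailability) follow from standard formulas for a series system. A hedged sketch with illustrative rates, not the measured Zynq-7010 values:

    ```python
    # Series-system reliability figures under constant failure rates, as computed
    # by fault-tree tools. The component names and rates below are assumptions
    # for illustration only.
    failure_rates = {"PS_core": 1e-6, "PL_fabric": 2e-6, "OCM": 5e-7}  # per hour

    lam_sys = sum(failure_rates.values())   # series system: component rates add
    mttf = 1.0 / lam_sys                    # mean time to failure, hours

    repair_rate = 1e-2  # repairs per hour, assumed
    # Steady-state unavailability for a single repairable item: lambda/(lambda+mu)
    unavailability = lam_sys / (lam_sys + repair_rate)

    print(f"system failure rate = {lam_sys:.2e}/h, MTTF = {mttf:.3e} h")
    print(f"steady-state unavailability = {unavailability:.2e}")
    ```

    Real fault-tree evaluation also handles redundancy (AND gates multiply unavailabilities rather than adding rates); this sketch covers only the pure series case.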

  3. Frequency characteristics of the heart rate variability produced by Cheyne-Stokes respiration during 24-hr ambulatory electrocardiographic monitoring.

    PubMed

    Ichimaru, Y; Yanaga, T

    1989-06-01

    Spectral analysis of heart rates during 24-hr ambulatory electrocardiographic monitoring was carried out to characterize the heart rate spectral components of Cheyne-Stokes respiration (CSR) using the fast Fourier transform (FFT). Eight patients with congestive heart failure were selected for the study. FFT analyses were performed over 614.4 sec. From the power spectrum, five parameters were extracted to characterize the CSR. The low peak frequencies in the eight subjects were between 0.0179 Hz (56 sec) and 0.0081 Hz (123 sec). The algorithm used to detect CSR is as follows: if (i) the LFPA/ULFA ratio is above the absolute value of 1.0, and (ii) the LFPP/MLFP ratio is above the absolute value of 4.0, then the power spectrum is suggestive of CSR. We conclude that automatic detection of CSR by heart rate spectral analysis during ambulatory ECG monitoring may afford a tool for the evaluation of patients with congestive heart failure.
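    The screening rule above compares spectral power in a very-low-frequency CSR band against a neighbouring band. A rough illustration with a synthetic heart-rate series and assumed band edges; the paper's LFPA/ULFA and LFPP/MLFP parameter definitions are more detailed than this:

    ```python
    import numpy as np

    # Sketch: look for a dominant very-low-frequency oscillation (~0.008-0.018 Hz,
    # the CSR range quoted above) in a heart-rate series. The band edges, the
    # 1 Hz sampling, and the synthetic signal are all illustrative assumptions.
    fs = 1.0                      # 1 sample/s heart-rate series
    t = np.arange(614)            # ~614 s analysis window, as in the study
    hr = 70 + 5 * np.sin(2 * np.pi * 0.012 * t)  # CSR-like 0.012 Hz modulation

    spectrum = np.abs(np.fft.rfft(hr - hr.mean())) ** 2
    freqs = np.fft.rfftfreq(len(hr), d=1 / fs)

    lf_band = (freqs >= 0.008) & (freqs <= 0.018)   # CSR band
    ulf_band = (freqs > 0) & (freqs < 0.008)        # ultra-low-frequency band

    lf_power = spectrum[lf_band].sum()
    ulf_power = spectrum[ulf_band].sum()
    csr_suspected = lf_power / max(ulf_power, 1e-12) > 1.0  # ratio rule (i)
    print("CSR suspected:", csr_suspected)
    ```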

  4. Survivorship of Total Hip Joint Replacements Following Isolated Liner Exchange for Wear.

    PubMed

    Vadei, Leone; Kieser, David C; Frampton, Chris; Hooper, Gary

    2017-11-01

    Liner exchange for articular component wear in total hip joint replacements (THJRs) is a common procedure, often thought to be benign with reliable outcomes. Recent studies, however, suggest high failure rates of liner exchange revisions with significant complications. The primary aim of this study was, therefore, to analyze the survivorship of isolated liner exchange for articular component wear, and secondarily to assess the influence of patient demographics (gender, age, and American Society of Anaesthesiologists [ASA] ratings) on rerevisions following isolated liner exchange for wear. A retrospective review of the 15-year New Zealand Joint Registry (1999-2014) was performed, analyzing the outcomes of isolated liner exchange for articular component wear. Survivorship, defined as rerevision with component exchange, was determined and 10-year Kaplan-Meier survivorship curves were constructed. These revision rates were compared across age, gender, and ASA rating groups using a log-rank test. The 10-year survivorship of THJR following liner exchange revision for liner wear was 75.3%. If a rerevision was required, the median time to rerevision was 1.33 years with a rerevision rate of 3.33 per 100 component years (95% confidence interval 2.68-4.08/100 component years). The principal reasons for rerevision were dislocation (48.4%) and acetabular component loosening (20.9%). There was no statistically significant difference in rerevision rates based on gender, age categories, or ASA scores. THJR isolated liner exchange for liner wear is not a benign procedure, with a survivorship of 75.3% at 10 years. Surgeons contemplating liner exchange revisions should be cognisant of this risk and should adequately assess component position and stability preoperatively. Copyright © 2017 Elsevier Inc. All rights reserved.
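    A rerevision rate quoted per 100 component-years with a 95% confidence interval can be reproduced from an event count and total exposure. A sketch using a log-normal approximation to the Poisson interval; the counts below are illustrative, not the registry's actual numbers:

    ```python
    import math

    # Revision rate per 100 component-years with an approximate 95% CI
    # (log-normal approximation for a Poisson count). Inputs are assumed.
    events = 90              # rerevisions observed (hypothetical)
    component_years = 2700.0 # total exposure (hypothetical)

    rate = 100.0 * events / component_years
    half_width = 1.96 / math.sqrt(events)       # on the log scale
    lo, hi = rate * math.exp(-half_width), rate * math.exp(half_width)
    print(f"{rate:.2f} per 100 component-years (95% CI {lo:.2f}-{hi:.2f})")
    ```

    Exact Poisson intervals (via chi-square quantiles) are preferred for small event counts; the log-normal form is adequate when events number in the dozens or more.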

  5. Failure and recovery in dynamical networks.

    PubMed

    Böttcher, L; Luković, M; Nagler, J; Havlin, S; Herrmann, H J

    2017-02-03

    Failure, damage spread and recovery crucially underlie many spatially embedded networked systems ranging from transportation structures to the human body. Here we study the interplay between spontaneous damage, induced failure and recovery in both embedded and non-embedded networks. In our model the network's components follow three realistic processes that capture these features: (i) spontaneous failure of a component independent of the neighborhood (internal failure), (ii) failure induced by failed neighboring nodes (external failure) and (iii) spontaneous recovery of a component. We identify a metastable domain in the global network phase diagram spanned by the model's control parameters where dramatic hysteresis effects and random switching between two coexisting states are observed. This dynamics depends on the characteristic link length of the embedded system. For the Euclidean lattice in particular, hysteresis and switching only occur in an extremely narrow region of the parameter space compared to random networks. We develop a unifying theory which links the dynamics of our model to contact processes. Our unifying framework may help to better understand controllability in spatially embedded and random networks where spontaneous recovery of components can mitigate spontaneous failure and damage spread in dynamical networks.
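    The model's three processes, internal failure, neighbour-induced failure, and spontaneous recovery, can be simulated directly. A minimal Monte Carlo sketch on a ring lattice with assumed parameter values and a one-failed-neighbour trigger (the paper's thresholds and topologies are more general):

    ```python
    import random

    # Three-process failure/recovery dynamics on a ring lattice:
    #   p: spontaneous (internal) failure, r: failure induced by a failed
    #   neighbour (external), q: spontaneous recovery. All values assumed.
    random.seed(1)
    n, p, r, q, steps = 200, 0.01, 0.3, 0.2, 500
    failed = [False] * n

    for _ in range(steps):
        nxt = failed[:]
        for i in range(n):
            if not failed[i]:
                neighbour_failed = failed[(i - 1) % n] or failed[(i + 1) % n]
                if random.random() < p:                        # internal failure
                    nxt[i] = True
                elif neighbour_failed and random.random() < r: # external failure
                    nxt[i] = True
            elif random.random() < q:                          # recovery
                nxt[i] = False
        failed = nxt

    print("fraction failed:", sum(failed) / n)
    ```

    Sweeping p and r while recording the failed fraction reproduces the kind of phase diagram, with hysteresis and switching between coexisting states, that the paper analyses.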

  6. Virtually-synchronous communication based on a weak failure suspector

    NASA Technical Reports Server (NTRS)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2a) on top of it, a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.

  7. Control system failure monitoring using generalized parity relations. M.S. Thesis Interim Technical Report

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan Mauritz

    1991-01-01

    Many applications require that a control system must be tolerant to the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of the control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.

  8. An unusual mode of failure of a tripolar constrained acetabular liner: a case report.

    PubMed

    Banks, Louisa N; McElwain, John P

    2010-04-01

    Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary total hip arthroplasty. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards the unusual mode of failure of the constrained acetabular liner was noted from radiographs, in that the inner liner had dissociated from the outer. The reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, caused excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as illustrated here, but still unfortunately have the capacity to fail in unusual ways.

  9. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Axelrod, T. S.

    2006-07-01

    The Large Synoptic Survey Telescope (LSST) is an 8.4 meter telescope with a 10 square degree field of view and a 3 Gigapixel imager, planned to be on-sky in 2012. It is a dedicated all-sky survey instrument, with several complementary science missions. These include understanding dark energy through weak lensing and supernovae; exploring transients and variable objects; creating and maintaining a solar system map, with particular emphasis on potentially hazardous objects; and increasing the precision with which we understand the structure of the Milky Way. The instrument operates continuously at a rapid cadence, repetitively scanning the visible sky every few nights. The data flow rates from LSST are larger than those from current surveys by roughly a factor of 1000: a few GB/night are typical today; LSST will deliver a few TB/night. From a computing hardware perspective, this factor of 1000 can be dealt with easily in 2012. The major issues in designing the LSST data management system arise from the fact that the number of people available to critically examine the data will not grow from current levels. This has a number of implications. For example, every large imaging survey today is resigned to the fact that their image reduction pipelines fail at some significant rate. Many of these failures are dealt with by rerunning the reduction pipeline under human supervision, with carefully "tweaked" parameters to deal with the original problem. For LSST, this will no longer be feasible. The problem is compounded by the fact that the processing must of necessity occur on clusters with large numbers of CPUs and disk drives, and with some components connected by long-haul networks. This inevitably results in a significant rate of hardware component failures, which can easily lead to further software failures. Both hardware and software failures must be seen as a routine fact of life rather than rare exceptions to normality.

  10. Modelling river bank retreat by combining fluvial erosion, seepage and mass failure

    NASA Astrophysics Data System (ADS)

    Dapporto, S.; Rinaldi, M.

    2003-04-01

    Streambank erosion processes contribute significantly to the sediment yielded from a river system and represent an important issue in the contexts of soil degradation and river management. Bank retreat is controlled by a complex interaction of hydrologic, geotechnical, and hydraulic processes. The capability of modelling these different components allows for a full reconstruction and comprehension of the causes and rates of bank erosion. River bank retreat during a single flow event has been modelled by combining simulation of fluvial erosion, seepage, and mass failures. The study site, along the Sieve River (Central Italy), has been the subject of extensive research, including monitoring of pore water pressures for a period of 4 years. The simulation reconstructs the observed changes fairly faithfully, and is used to: a) test the potential of, and discuss the advantages and limitations of, this type of methodology for modelling bank retreat; and b) quantify the contribution and mutual role of the different processes determining bank retreat. The hydrograph of the event is divided into a series of time steps. Modelling of the riverbank retreat includes for each step the following components: a) fluvial erosion and consequent changes in bank geometry; b) finite element seepage analysis; c) stability analysis by limit equilibrium method. Direct fluvial shear erosion is computed using empirically derived relationships expressing lateral erosion rate as a function of the excess of shear stress over the critical entrainment value for the different materials along the bank profile. Lateral erosion rate has been calibrated on the basis of the total bank retreat measured by digital terrestrial photogrammetry. Finite element seepage analysis is then conducted to reconstruct the saturated and unsaturated flow within the bank and the pore water pressure distribution for each time step. 
The safety factor for mass failures is then computed, using the pore water pressure distribution obtained by the seepage analysis, and the geometry of the upper bank is modified in case of failure.
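    Direct fluvial shear erosion of the kind described is commonly computed with an excess-shear-stress law, erosion rate = k_d * (tau - tau_c)^a for tau > tau_c. A sketch with illustrative coefficients, not the calibrated Sieve River values:

    ```python
    # Excess-shear-stress erosion law often used for fluvial bank erosion.
    # k_d (erodibility), tau_c (critical shear stress) and the exponent a are
    # material properties; the numbers below are assumptions for illustration.
    def lateral_erosion_rate(tau, tau_c, k_d, a=1.0):
        """Lateral erosion rate (m/s) from boundary shear stress tau (Pa)."""
        excess = tau - tau_c
        return k_d * excess ** a if excess > 0 else 0.0

    # Example: 12 Pa flow over a material with a 5 Pa critical shear stress.
    print(lateral_erosion_rate(tau=12.0, tau_c=5.0, k_d=1e-7))  # m/s
    ```

    Integrating this rate over each time step of the hydrograph, then updating the bank profile, is what couples the erosion component to the seepage and stability analyses.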

  11. Analysis of failed nuclear plant components

    NASA Astrophysics Data System (ADS)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  12. Corrosion of graphite composites in phosphoric acid fuel cells

    NASA Technical Reports Server (NTRS)

    Christner, L. G.; Dhar, H. P.; Farooque, M.; Kush, A. K.

    1986-01-01

    Polymers, polymer-graphite composites and different carbon materials are being considered for many of the fuel cell stack components. Exposure to concentrated phosphoric acid in the fuel cell environment and to high anodic potential results in corrosion. Relative corrosion rates of these materials, failure modes, plausible mechanisms of corrosion and methods for improvement of these materials are investigated.

  13. Curved-stem Hip Resurfacing

    PubMed Central

    2008-01-01

    Hip resurfacing is an attractive concept because it preserves rather than removes the femoral head and neck. Most early designs had high failure rates, but one unique design had a femoral stem. Because that particular device appeared to have better implant survival, this study assessed the clinical outcome and long-term survivorship of a hip resurfacing prosthesis. Four hundred forty-five patients (561 hips) were retrospectively reviewed after a minimum of 20 years’ followup or until death; 23 additional patients were lost to followup. Patients received a metal femoral prosthesis with a small curved stem. Three types of acetabular reconstructions were used: (1) cemented polyurethane; (2) metal-on-metal; and (3) polyethylene secured with cement or used as the liner of a two-piece porous-coated implant. Long-term results were favorable with the metal-on-metal combination only. The mean overall Harris hip score was 92 at 2 years of followup. None of the 121 patients (133 hips) who received metal-on-metal articulation experienced failure. The failure rate with polyurethane was 100%, and the failure rate with cemented polyethylene was 41%. Hip resurfacing with a curved-stem femoral component had a durable clinical outcome when a metal-on-metal articulation was used. Level of Evidence: Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:18338217

  14. Cognitive influences on self-care decision making in persons with heart failure.

    PubMed

    Dickson, Victoria V; Tkacs, Nancy; Riegel, Barbara

    2007-09-01

    Despite advances in management, heart failure is associated with high rates of hospitalization, poor quality of life, and early death. Education intended to improve patients' abilities to care for themselves is an integral component of disease management programs. True self-care requires that patients make decisions about symptoms, but the cognitive deficits documented in 30% to 50% of the heart failure population may make daily decision making challenging. After describing heart failure self-care as a naturalistic decision making process, we explore cognitive deficits known to exist in persons with heart failure. Problems in heart failure self-care are analyzed in relation to neural alterations associated with heart failure. As a neural process, decision making has been traced to regions of the prefrontal cortex, the same areas that are affected by ischemia, infarction, and hypoxemia in heart failure. Resulting deficits in memory, attention, and executive function may impair the perception and interpretation of early symptoms and reasoning and, thereby, delay early treatment implementation. There is compelling evidence that the neural processes critical to decision making are located in the same structures that are affected by heart failure. Because self-care requires the cognitive ability to learn, perceive, interpret, and respond, research is needed to discern how neural deficits affect these abilities, decision-making, and self-care behaviors.

  15. Component Repair Times Obtained from MSPI Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eide, Steven A.; Cadwallader, Lee

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train unavailability (UA) and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a “white” (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant’s insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs. 
Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of interest, only unplanned outages associated with failures of the component of interest, and only unplanned outages associated with PRA failures of the component of interest. The paper will describe how component repair times can be generated from each set and which approach is most applicable. Repair time information will be summarized for MSPI pumps and diesel generators using data over 2003 – 2007. Also, trend information over 2003 – 2012 will be presented to indicate whether the 2003 – 2007 repair time information is still considered applicable. For certain types of pumps, mean repair times are significantly higher than the typically assumed 24 h duration.
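    The restoration-time summaries described, median and mean repair durations plus a non-restoration cumulative probability curve, are straightforward to compute from a set of outage durations. A sketch on made-up durations, not actual MSPI data:

    ```python
    # Summarise repair/restoration times: mean, median, and an empirical
    # non-restoration curve P(T > t). The durations below are illustrative.
    durations = sorted([2.0, 4.0, 6.0, 8.0, 12.0, 20.0, 30.0, 48.0])  # hours

    mean = sum(durations) / len(durations)
    mid = len(durations) // 2
    median = (durations[mid - 1] + durations[mid]) / 2  # even-length list

    def p_not_restored(t):
        """Empirical probability that a repair is still ongoing at time t."""
        return sum(d > t for d in durations) / len(durations)

    print(f"mean = {mean:.2f} h, median = {median:.1f} h")
    print("P(not restored by 24 h) =", p_not_restored(24.0))
    ```

    Evaluating `p_not_restored` over a grid of times yields the non-restoration cumulative probability curve the paper refers to, and directly supports the PRA question of whether a component is likely back in service within a mission time.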

  16. Development of a pilot-scale kinetic extruder feeder system and test program. Phase II. Verification testing. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-01-12

    This report describes the work done under Phase II, the verification testing of the Kinetic Extruder. The main objective of the test program was to determine failure modes and wear rates. Only minor auxiliary equipment malfunctions were encountered. Wear rates indicate useful life expectancy of from 1 to 5 years for wear-exposed components. Recommendations are made for adapting the equipment for pilot plant and commercial applications. 3 references, 20 figures, 12 tables.

  17. Health monitoring display system for a complex plant

    DOEpatents

    Ridolfo, Charles F [Bloomfield, CT]; Harmon, Daryl L [Enfield, CT]; Colin, Dreyfuss [Enfield, CT]

    2006-08-08

    A single page enterprise wide level display provides a comprehensive readily understood representation of the overall health status of a complex plant. Color coded failure domains allow rapid intuitive recognition of component failure status. A three-tier hierarchy of displays provide details on the health status of the components and systems displayed on the enterprise wide level display in a manner that supports a logical drill down to the health status of sub-components on Tier 1 to expected faults of the sub-components on Tier 2 to specific information relative to expected sub-component failures on Tier 3.

  18. A Comparison of Online, Video Synchronous, and Traditional Learning Modes for an Introductory Undergraduate Physics Course

    NASA Astrophysics Data System (ADS)

    Faulconer, E. K.; Griffith, J.; Wood, B.; Acharyya, S.; Roberts, D.

    2018-05-01

    While the equivalence between online and traditional classrooms has been well-researched, very little of this includes college-level introductory Physics. Only one study explored Physics at the whole-class level rather than specific course components such as a single lab or a homework platform. In this work, we compared the failure rate, grade distribution, and withdrawal rates in an introductory undergraduate Physics course across several learning modes including traditional face-to-face instruction, synchronous video instruction, and online classes. Statistically significant differences were found for student failure rates, grade distribution, and withdrawal rates, but effect sizes were small. Post-hoc pair-wise tests were run to determine differences between learning modes. Online students had a significantly lower failure rate than students who took the class via synchronous video classroom. While statistically significant differences were found for grade distributions, the pair-wise comparison yielded no statistically significant differences between learning modes when using the more conservative Bonferroni correction in post-hoc testing. Finally, in this study, student withdrawal rates were lower for students who took the class in person (in-person classroom and synchronous video classroom) than for those online. Students who persist in an online introductory Physics class are more likely to achieve an A than in other modes. However, the withdrawal rate is higher for online Physics courses. Further research is warranted to better understand the reasons for higher withdrawal rates in online courses. Finding the root cause to help eliminate differences in student performance across learning modes should remain a high priority for education researchers and the education community as a whole.

  19. Estimated survival probability of the Spotorno total hip arthroplasty after a 15- to 21-year follow-up: one surgeon's results.

    PubMed

    Terré, Ricardo A

    2010-01-01

    We retrospectively assessed 171 consecutive total hip arthroplasties (THAs) with a Spotorno CLS uncemented prosthesis implanted through a Hardinge approach. The mean follow-up was 17.9 years. All consecutive operations were performed by 1 surgeon. Eight patients had been lost to follow-up, and 77 had died of unrelated causes. Overall, 4 stems and 19 cups underwent revision. The cumulative survival rate at 21 years was 79.02% (95% confidence interval [95% CI], 45.98-100.00%) for the acetabular component and 96.71% (95% CI, 60.71-100.00%) for the stem. We can conclude that failure of the Spotorno CLS THA is mainly due to its acetabular component (relative risk 4.5). Survival results for the Spotorno CLS stem exceed the patients' life expectancies in the 60- to 70-year-old population in our area. Loosening with or without fatigue fracture of the component and the learning curve for proper implantation have been the main causes for the expansion cup failure.

  20. Reliability and Maintainability Analysis for the Amine Swingbed Carbon Dioxide Removal System

    NASA Technical Reports Server (NTRS)

    Dunbar, Tyler

    2016-01-01

    I have performed a reliability & maintainability analysis for the Amine Swingbed payload system. The Amine Swingbed is a carbon dioxide removal technology that has gone through 2,400 hours of International Space Station on-orbit use between 2013 and 2016. While the Amine Swingbed is currently an experimental payload system, the Amine Swingbed may be converted to system hardware. If the Amine Swingbed becomes system hardware, it will supplement the Carbon Dioxide Removal Assembly (CDRA) as the primary CO2 removal technology on the International Space Station. NASA is also considering using the Amine Swingbed as the primary carbon dioxide removal technology for future extravehicular mobility units and for the Orion, which will be used for the Asteroid Redirect and Journey to Mars missions. The qualitative component of the reliability and maintainability analysis is a Failure Modes and Effects Analysis (FMEA). In the FMEA, I have investigated how individual components in the Amine Swingbed may fail, and what the worst case scenario is should a failure occur. The significant failure effects are the loss of ability to remove carbon dioxide, the formation of ammonia due to chemical degradation of the amine, and loss of atmosphere because the Amine Swingbed uses the vacuum of space to regenerate the Amine Swingbed. In the quantitative component of the reliability and maintainability analysis, I have assumed a constant failure rate for both electronic and nonelectronic parts. Using this data, I have created a Poisson distribution to predict the failure rate of the Amine Swingbed as a whole. I have determined a mean time to failure for the Amine Swingbed to be approximately 1,400 hours. The observed mean time to failure for the system is between 600 and 1,200 hours. This range includes initial testing of the Amine Swingbed, as well as software faults that are understood to be non-critical. 
If many of the commercial parts were switched to military-grade parts, the expected mean time to failure would be 2,300 hours. Both calculated mean times to failure for the Amine Swingbed use conservative failure rate models. The observed mean time to failure for CDRA is 2,500 hours. Working on this project and for NASA in general has helped me gain insight into current aeronautics missions, reliability engineering, circuit analysis, and different cultures. Prior to my internship, I did not have a lot of knowledge about the work being performed at NASA. As a chemical engineer, I had not really considered working for NASA as a career path. By engaging in interactions with civil servants, contractors, and other interns, I have learned a great deal about modern challenges that NASA is addressing. My work has helped me develop a knowledge base in safety and reliability that would be difficult to find elsewhere. Prior to this internship, I had not thought about reliability engineering. Now, I have gained a skillset in performing reliability analyses, and understanding the inner workings of a large mechanical system. I have also gained experience in understanding how electrical systems work while I was analyzing the electrical components of the Amine Swingbed. I did not expect to be exposed to as many different cultures as I have while working at NASA. I am referring to both within NASA and the Houston area. NASA employs individuals with a broad range of backgrounds. It has been great to learn from individuals who have highly diverse experiences and outlooks on the world. In the Houston area, I have come across individuals from different parts of the world. Interacting with such a high number of individuals with significantly different backgrounds has helped me to grow as a person in ways that I did not expect. My time at NASA has opened a window into the field of aeronautics. 
After earning a bachelor's degree in chemical engineering, I plan to go to graduate school for a PhD in engineering. Prior to coming to NASA, I was not aware of the graduate Pathways program. I intend to apply for the graduate Pathways program as positions are opened up. I would like to pursue future opportunities with NASA, especially as my engineering career progresses.
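    The constant-failure-rate analysis described in this record reduces to a few standard relations: part failure rates add for a series system, MTTF = 1/lambda, and the number of failures over a mission follows a Poisson distribution. A sketch with assumed part rates, chosen only so the result lands near the roughly 1,400 h figure quoted above:

    ```python
    import math

    # Series-system MTTF and mission reliability under constant failure rates.
    # The part rates are hypothetical, not the Amine Swingbed parts database.
    part_rates = [2e-4, 3e-4, 2.1e-4]       # failures per hour per part
    lam = sum(part_rates)                   # system failure rate (series system)
    mttf = 1.0 / lam                        # mean time to failure, hours

    mission_hours = 1000.0
    p_no_failure = math.exp(-lam * mission_hours)   # Poisson P(N = 0)
    print(f"MTTF = {mttf:.0f} h")
    print(f"P(no failure in {mission_hours:.0f} h) = {p_no_failure:.3f}")
    ```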

  1. Mass and Reliability Source (MaRS) Database

    NASA Technical Reports Server (NTRS)

    Valdenegro, Wladimir

    2017-01-01

The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly; MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured parts lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.

  2. Triennial Reproduction Symposium: influence of follicular characteristics at ovulation on early embryonic survival.

    PubMed

    Geary, T W; Smith, M F; MacNeil, M D; Day, M L; Bridges, G A; Perry, G A; Abreu, F M; Atkins, J A; Pohler, K G; Jinks, E M; Madsen, C A

    2013-07-01

    Reproductive failure in livestock can result from failure to fertilize the oocyte or embryonic loss during gestation. Although fertilization failure occurs, embryonic mortality represents a greater contribution to reproductive failure. Reproductive success varies among species and production goals but is measured as a binomial trait (i.e., pregnancy), derived by the success or failure of multiple biological steps. This review focuses primarily on follicular characteristics affecting oocyte quality, fertilization, and embryonic health that lead to pregnancy establishment in beef cattle. When estrous cycles are manipulated with assisted reproductive technologies and ovulation is induced, duration of proestrus (i.e., interval from induced luteolysis to induced ovulation), ovulatory follicle growth rate, and ovulatory follicle size are factors that affect the maturation of the follicle and oocyte at induced ovulation. The most critical maturational component of the ovulatory follicle is the production of sufficient estradiol to prepare follicular cells for luteinization and progesterone synthesis and prepare the uterus for pregnancy. The exact roles of estradiol in oocyte maturation remain unclear, but cows that have lesser serum concentrations of estradiol have decreased fertilization rates and decreased embryo survival on d 7 after induced ovulation. When length of proestrus is held constant, perhaps the most practical follicular measure of fertility is ovulatory follicle size because it is an easily measured attribute of the follicle that is highly associated with its ability to produce estradiol.

  3. Failure modes and conditions of a cohesive, spherical body due to YORP spin-up

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi

    2015-12-01

This paper presents the transition of the failure mode of a cohesive, spherical body due to Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) spin-up. On the assumption that the distribution of materials in the body is homogeneous, the failed regions that first appear in the body at different spin rates are predicted by evaluating the yield condition of the elastic stress state in the body. It is found that as the spin rate increases, the locations of the failed regions move from the equatorial surface to the central region. To avoid such failure modes, the body must have higher cohesive strength. The results of this model are consistent with those of a plastic finite element model. This model and the two-layered cohesive model first proposed by Hirabayashi et al. are then used to classify the possible evolution and disruption of a spherical body. There are three possible pathways to disruption. First, because of a strong structure, failure of the central region is dominant and eventually leads to a breakup into multiple components. Second, a weak surface and a weak interior make the body oblate. Third, a strong internal core prevents the body from failing and only allows surface shedding. This implies that observed failure modes may depend strongly on the internal structure of an asteroid, which could provide crucial information for constraining its physical properties.

  4. Development of KSC program for investigating and generating field failure rates. Volume 1: Summary and overview

    NASA Technical Reports Server (NTRS)

    Bean, E. E.; Bloomquist, C. E.

    1972-01-01

A summary of the KSC program for investigating the reliability aspects of ground support activities is presented. An analysis of unsatisfactory condition reports (UCRs) and the generation of reliability assessments of components based on the UCRs are discussed, along with design considerations for attaining reliable real-time hardware/software configurations.

  5. Differences in Characteristics of Aviation Accidents During 1993-2012 Based on Aircraft Type

    NASA Technical Reports Server (NTRS)

    Evans, Joni K.

    2015-01-01

    Civilian aircraft are available in a variety of sizes, engine types, construction materials and instrumentation complexity. For the analysis reported here, eleven aircraft categories were developed based mostly on aircraft size and engine type, and these categories were applied to twenty consecutive years of civil aviation accidents. Differences in various factors were examined among these aircraft types, including accident severity, pilot characteristics and accident occurrence categories. In general, regional jets and very light sport aircraft had the lowest rates of adverse outcomes (injuries, fatal accidents, aircraft destruction, major accidents), while aircraft with twin (piston) engines or with a single (piston) engine and retractable landing gear carried the highest incidence of adverse outcomes. The accident categories of abnormal runway contact, runway excursions and non-powerplant system/component failures occur frequently within all but two or three aircraft types. In contrast, ground collisions, loss of control - on ground/water and powerplant system/component failure occur frequently within only one or two aircraft types. Although accidents in larger aircraft tend to have less severe outcomes, adverse outcome rates also differ among accident categories. It may be that the type of accident has as much or more influence on the outcome as the type of aircraft.

  6. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
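The trade-off the abstract describes (lower voltage saves power but raises failure rates and execution time) can be sketched with a first-order model. This is our illustration, not the paper's methodology; it uses the standard CMOS dynamic-power approximation P ~ C * V^2 * f and a linear failure/recovery time penalty, with hypothetical numbers throughout:

```python
# First-order energy/resilience trade-off sketch for undervolting.
# All parameter values are hypothetical illustrations.

def dynamic_power(v, f, c=1.0):
    """CMOS dynamic power approximation: P ~ C * V^2 * f."""
    return c * v * v * f

def expected_time(base_time, failure_rate, recovery_time):
    """Each failure costs one recovery; expected failures = rate * base_time."""
    return base_time * (1.0 + failure_rate * recovery_time)

def energy(v, f, base_time, failure_rate, recovery_time):
    return dynamic_power(v, f) * expected_time(base_time, failure_rate, recovery_time)

# Nominal: 1.0 V with a low failure rate. Undervolted: 0.85 V at the same
# frequency, with a (hypothetical) 10x higher failure rate.
e_nominal = energy(1.00, 1.0, base_time=1000.0, failure_rate=1e-4, recovery_time=50.0)
e_undervolt = energy(0.85, 1.0, base_time=1000.0, failure_rate=1e-3, recovery_time=50.0)
```

With these numbers the quadratic power reduction outweighs the failure-induced slowdown, so undervolting yields a net energy saving; a sufficiently high failure rate or recovery cost would reverse that.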

  7. Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Song, Shuaiwen; Wu, Panruo

    2015-05-29

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  8. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  9. Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the. modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.
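The idea of folding design/manufacturing variation into a vibration-feature model via probabilistic methods can be illustrated with a small Monte Carlo sketch. This is our illustration, not the NASA study's model; the defect frequency, tolerance spread, and monitoring band are all hypothetical:

```python
import random
import statistics

# Monte Carlo sketch: a vibration defect frequency varies unit-to-unit with
# manufacturing tolerances, so a fixed deterministic detection band misses
# many units. All numbers are hypothetical illustrations.

rng = random.Random(0)          # fixed seed for reproducibility
nominal_hz = 120.0              # nominal defect frequency
tolerance_sigma = 3.0           # manufacturing spread (Hz), hypothetical

samples = [rng.gauss(nominal_hz, tolerance_sigma) for _ in range(10_000)]
band = (nominal_hz - 2.0, nominal_hz + 2.0)   # fixed +/-2 Hz monitoring band

# Fraction of units whose actual defect frequency falls inside the fixed band.
in_band = sum(band[0] <= s <= band[1] for s in samples) / len(samples)
```

With a 3 Hz spread, only about half of the simulated units land inside a +/-2 Hz band, which is the mechanism behind the false alarms and missed detections the abstract attributes to unmodeled variation.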

  10. Solder Reflow Failures in Electronic Components During Manual Soldering

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander; Greenwell, Chris; Felt, Frederick

    2008-01-01

This viewgraph presentation reviews the solder reflow failures in electronic components that occur during manual soldering. It discusses the specifics of manual-soldering-induced failures in plastic devices with internal solder joints. The failure analysis revealed that molten solder had squeezed up to the die surface along the die/molding-compound interface; the dice were not protected with glassivation, allowing solder to short the gate and source to the drain contact. The failure analysis concluded that the parts failed due to overheating during manual soldering.

  11. NASA Helps Keep the Light Burning for the Saturn Car Company

    NASA Technical Reports Server (NTRS)

    2003-01-01

The Saturn Electronics & Engineering, Inc. (Saturn) facility in Marks, Miss., that produces lamp assemblies was experiencing intermittent problems with its automotive under-the-hood lamps. Despite numerous testing and engineering efforts, technicians could not pin down the root of the problem, so Saturn contacted the NASA Technology Assistance Program (TAP) at Stennis Space Center. The Marks production facility had been experiencing intermittent problems with under-the-hood lamp assemblies for some time. The failure rate, at 2 percent, was unacceptable. Every effort was made to identify the problem so that corrective action could be put in place. The problem was investigated and researched by Saturn's engineering department. In addition, Saturn brought in several independent testing laboratories. Other measures included examining the switch component suppliers and auditing them for compliance to the design specifications and for surface contaminants. All attempts to identify the factors responsible for the failures were inconclusive. In an effort to get to the root of the problem, and at the recommendation of the Mississippi Department of Economic Development, Saturn contacted the NASA TAP at Stennis. The NASA Materials and Contamination Laboratory, with assistance from the Stennis Prototype Laboratory, conducted a materials evaluation study on the switch components. The laboratory findings showed the failures were caused by a build-up of carbon-based contaminants on the switch components. Saturn Electronics & Engineering, Inc., is a minority-owned provider of contract manufacturing services to a diverse global marketplace. Saturn operates manufacturing facilities globally serving the North American, European, and Asian markets. Saturn's production facility in Marks, Mississippi, produces more than 1,000,000 lamps and switches monthly.
"Since the NASA recommendations were implemented, our internal failure rate for intermittency has dropped to less than 0.02 percent. Most importantly, we restored our high level of customer satisfaction. Stennis provided an invaluable service to our business," Patrick said. Both NASA and Saturn were pleased with the results from this technical assistance project. The Technology Assistance Program at Stennis makes NASA technical expertise and access to lab facilities available to the public. This project provided both services with a positive outcome.

  12. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
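The two-parameter Weibull model with the principle of independent action (PIA) mentioned above can be sketched numerically. This is a conceptual illustration of those two ingredients, not the SCARE code itself, and the stress and Weibull parameter values are hypothetical:

```python
import math

# Sketch of the two-parameter Weibull fast-fracture model with the principle
# of independent action (PIA) for polyaxial stress states. Parameter values
# (characteristic strength sigma0, Weibull modulus m) are illustrative only.

def weibull_failure_prob(sigma, sigma0, m):
    """Uniaxial cumulative failure probability: P_f = 1 - exp(-(sigma/sigma0)^m)."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

def pia_failure_prob(principal_stresses, sigma0, m):
    """PIA: the component survives only if it survives each tensile principal
    stress independently, so the survival probabilities multiply."""
    survival = 1.0
    for s in principal_stresses:
        if s > 0.0:  # only tensile stresses contribute to fast fracture
            survival *= 1.0 - weibull_failure_prob(s, sigma0, m)
    return 1.0 - survival

p_uni = weibull_failure_prob(200.0, 400.0, m=10.0)
p_tri = pia_failure_prob([200.0, 150.0, -50.0], 400.0, m=10.0)
```

Adding a second tensile principal stress can only raise the PIA failure probability, while compressive stresses drop out, which matches the model's tensile-flaw-driven fracture assumption.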

  13. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in today's sophisticated launch and space vehicle avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper identifies reliability concerns and high-level guidelines for estimating FPGA total failure rates in a launch vehicle application. The paper discusses hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion discusses high-level FPGA programming languages and software/code reliability growth. The radiation portion discusses FPGA susceptibility to space environment radiation.

  14. Substantial harm associated with failure of chronic paediatric central venous access devices.

    PubMed

    Ullman, Amanda J; Kleidon, Tricia; Cooke, Marie; Rickard, Claire M

    2017-07-06

Central venous access devices (CVADs) form an important component of modern paediatric healthcare, especially for children with chronic health conditions such as cancer or gastrointestinal disorders. However, device failure and complication rates are high. Over 2½ years, a child requiring parenteral nutrition, with associated vascular access dependency due to 'short gut syndrome' (intestinal failure secondary to gastroschisis and resultant significant bowel resection), had ten CVADs inserted, with nine subsequently failing. This resulted in multiple anaesthetics, invasive procedures, injuries, vascular depletion, interrupted nutrition, delayed treatment and substantial healthcare costs. A conservative estimate of the institutional cost for each insertion, or rewiring, of her tunnelled CVAD was $A10 253 (2016 Australian dollars). These complications and device failures had a significant negative impact on the child and her family. Considering the commonality of conditions requiring prolonged vascular access, these failures also have a significant impact on international health service costs. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. Total elbow arthroplasty for primary osteoarthritis.

    PubMed

    Schoch, Bradley S; Werthel, Jean-David; Sánchez-Sotelo, Joaquín; Morrey, Bernard F; Morrey, Mark

    2017-08-01

    Primary osteoarthritis of the elbow is a less common indication for total elbow arthroplasty (TEA). Higher complication rates in younger, active patients may offset short-term improvements in pain and function. The purpose of this study was to determine pain relief, functional outcomes, complications, and survival of TEA in this population. Between 1984 and 2011, 20 consecutive TEAs were performed for primary elbow osteoarthritis. Two patients died before the 2-year follow-up. Mean age at surgery was 68 years (range, 51-85 years). Outcome measures included pain, motion, Mayo Elbow Performance Score, satisfaction, complications, and reoperations. Mean follow-up was 8.9 years (range, 2-20 years). Three elbows sustained mechanical failures. Complications included intraoperative fracture (n = 2), wound irrigation and débridement (n = 1), bony ankylosis (n = 1), humeral loosening (n = 1), humeral component fracture (n = 1), and mechanical failure of a radial head component (n = 1). Fifteen elbows without mechanical failure were examined clinically. Pain improved from 3.6 to 1.5 (P < .001). Range of motion remained clinically unchanged (P > .05), with preoperative flexion contractures not improving. Mayo Elbow Performance Scores were available for 13 elbows without mechanical failure, averaging 81.5 points (range, 60-100 points); these were graded as excellent (n = 5), good (n = 2), and fair (n = 6). Subjectively, all patients without mechanical failure were satisfied. TEA represents a reliable surgical option for pain relief in patients with primary osteoarthritis. However, restoration of extension is not always obtained, indicating that more aggressive soft tissue releases or bony resection should be considered. Complications occurred in a large number of elbows, but mechanical failure was low considering the nature of this population and the length of follow-up. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. 
All rights reserved.

  16. Application of Function-Failure Similarity Method to Rotorcraft Component Design

    NASA Technical Reports Server (NTRS)

    Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis (FMEA), to determine the potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried out within the components of the aircraft. Using this technique, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, this technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.

  17. Treatment of the cardiac hypertrophic response and heart failure with ginseng, ginsenosides, and ginseng-related products.

    PubMed

    Karmazyn, Morris; Gan, Xiaohong Tracey

    2017-10-01

    Heart failure is a major medical and economic burden throughout the world. Although various treatment options are available to treat heart failure, death rates in both men and women remain high. Potential adjunctive therapies may lie with use of herbal medications, many of which possess potent pharmacological properties. Among the most widely studied is ginseng, a member of the genus Panax that is grown in many parts of the world and that has been used as a medical treatment for a variety of conditions for thousands of years, particularly in Asian societies. There are a number of ginseng species, each possessing distinct pharmacological effects due primarily to differences in their bioactive components including saponin ginsenosides and polysaccharides. While experimental evidence for salutary effects of ginseng on heart failure is robust, clinical evidence is less so, primarily due to a paucity of large-scale well-controlled clinical trials. However, there is evidence from small trials that ginseng-containing Chinese medications such as Shenmai can offer benefit when administered as adjunctive therapy to heart failure patients. Substantial additional studies are required, particularly in the clinical arena, to provide evidence for a favourable effect of ginseng in heart failure patients.

  18. 16 CFR § 1207.5 - Design.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... installed swimming pool slide shall be such that no structural failures of any component part shall cause failures of any other component part of the slide as described in the performance tests in paragraphs (d)(4... number and placement of such fasteners shall not cause a failure of the tread under the ladder loading...

  19. Towards eradication of inappropriate therapies for ICD lead failure by combining comprehensive remote monitoring and lead noise alerts.

    PubMed

    Ploux, Sylvain; Swerdlow, Charles D; Strik, Marc; Welte, Nicolas; Klotz, Nicolas; Ritter, Philippe; Haïssaguerre, Michel; Bordachar, Pierre

    2018-06-02

Recognition of implantable cardioverter defibrillator (ICD) lead malfunction before the occurrence of life-threatening complications is crucial. We aimed to assess the effectiveness of remote monitoring, with or without a lead noise alert, for early detection of ICD lead failure. From October 2013 to April 2017, a median of 1,224 (578-1,958) ICD patients were remotely monitored with comprehensive analysis of all transmitted material. ICD lead failures and subsequent device interventions were prospectively collected in patients with (RMLN) and without (RM) a lead noise alert (Abbott Secure Sense™ or Medtronic Lead Integrity Alert™) in their remote monitoring system. During a follow-up of 4,457 patient-years, 64 lead failures were diagnosed. Sixty-one (95%) of the diagnoses were made before any clinical complication occurred. Inappropriate shocks were delivered in only one patient of each group (3%), with an annual rate of 0.04%. All high-voltage conductor failures were identified remotely by a dedicated impedance alert in 10 patients. Pace-sense component failures were correctly identified by a dedicated alert in 77% (17 of 22) of the RMLN group versus 25% (8 of 32) of the RM group (P = 0.002). The absence of a lead noise alert was associated with a 16-fold increase in the likelihood of initiating either a shock or ATP (OR: 16.0, 95% CI 1.8-143.3; P = 0.01). ICD remote monitoring with systematic review of all transmitted data is associated with a very low rate of inappropriate shocks related to lead failure. Dedicated noise alerts further reduce inappropriate detection of ventricular arrhythmias. © 2018 Wiley Periodicals, Inc.

  20. A computer program for cyclic plasticity and structural fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1980-01-01

    A computerized tool for the analysis of time independent cyclic plasticity structural response, life to crack initiation prediction, and crack growth rate prediction for metallic materials is described. Three analytical items are combined: the finite element method with its associated numerical techniques for idealization of the structural component, cyclic plasticity models for idealization of the material behavior, and damage accumulation criteria for the fatigue failure.

  1. Failure Rates for Fiber Optic Assemblies

    DTIC Science & Technology

    1980-10-01

RADC-TR-80-322 has been reviewed and is approved for release to the National Technical Information Service (NTIS); at NTIS it will be releasable to the general public, including foreign nations. Literature sources searched (in addition to the RAC automated library information retrieval system) include the National Technical Information Service (NTIS) and the Proceedings of the 1976 26th Electronic Components Conference (Price, S.J., et al., "For Reliable Service Environment Performance, Encapsulated LEDs with Clear...").

  2. Reliability enhancement through optimal burn-in

    NASA Astrophysics Data System (ADS)

    Kuo, W.

    1984-06-01

A numerical reliability and cost model is defined for production-line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or non-repairable components, while no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum total cost is calculated over the costs and durations of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
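The cost trade-off in the abstract (linear burn-in cost versus cheap shop repairs and expensive field repairs under a Weibull infant-mortality distribution) can be sketched as a small optimization. This is our illustration of the idea, not Kuo's model; all parameter values are hypothetical:

```python
import math

# Burn-in cost trade-off sketch: a decreasing-hazard Weibull (shape < 1)
# models infant mortality; burn-in catches early failures cheaply in the
# shop, while survivors that fail in the field cost far more to repair.
# All parameter values are hypothetical illustrations.

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

def total_cost(burn_in_time, shape=0.5, scale=500.0, mission=1000.0,
               burn_in_cost_per_hour=0.01, shop_repair=1.0, field_repair=25.0):
    p_caught = weibull_cdf(burn_in_time, shape, scale)
    # Conditional probability of field failure given survival of burn-in.
    p_field = (weibull_cdf(burn_in_time + mission, shape, scale) - p_caught) / (1.0 - p_caught)
    return (burn_in_cost_per_hour * burn_in_time
            + shop_repair * p_caught
            + field_repair * p_field)

# Grid search for the cost-minimizing burn-in duration (hours).
best_t = min(range(0, 501, 10), key=total_cost)
```

With a decreasing hazard (shape 0.5), the minimum-cost burn-in duration is strictly positive but finite: too little burn-in leaves expensive field failures, too much pays for burn-in hours that no longer weed out many weak units.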

  3. Failure analysis of aluminum alloy components

    NASA Technical Reports Server (NTRS)

    Johari, O.; Corvin, I.; Staschke, J.

    1973-01-01

Analysis of six service failures in aluminum alloy components which failed in aerospace applications is reported. Identification of fracture surface features from fatigue and overload modes was straightforward, though the specimens were not always in a clean, smear-free condition most suitable for failure analysis. The presence of corrosion products and of chemically attacked or mechanically rubbed areas hindered precise determination of the cause of crack initiation, which was then indirectly inferred from the scanning electron fractography results. In five failures the crack propagation was by fatigue, though in each case the fatigue crack initiated from a different cause. Some of these causes could be eliminated in future components by better process control. In one failure, the cause was determined to be impact during a crash; the features of impact fracture were distinguished from overload fractures by direct comparison of the received specimens with laboratory-generated failures.

  4. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system under either of two assumptions: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
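The core mechanism behind observer-based failure detection can be shown with a toy residual generator: an observer tracks the plant, and a component failure drives the observer residual away from zero. This scalar example is our illustration of the general idea, not the geometric filter construction in the report, and all numbers are hypothetical:

```python
# Toy residual generator for failure detection: a Luenberger-style observer
# tracks a scalar LTI plant; an additive actuator fault makes the residual
# (plant output minus observer estimate) depart from zero.
# Scalar dynamics and all numbers are illustrative.

def simulate(steps, fault_step, a=0.9, b=1.0, gain=0.5):
    x, xhat, residuals = 0.0, 0.0, []
    for k in range(steps):
        u = 1.0                                   # constant known input
        fault = 0.5 if k >= fault_step else 0.0   # additive actuator fault
        y = x                                     # full-state measurement
        r = y - xhat                              # residual = innovation
        residuals.append(r)
        xhat = a * xhat + b * u + gain * r        # observer update
        x = a * x + b * (u + fault)               # plant update (with fault)
    return residuals

res = simulate(steps=40, fault_step=20)
```

With identical initial conditions the residual is exactly zero until the fault is injected, then jumps; thresholding the residual gives a simple detector. The geometric theory in the abstract extends this idea so that different failures excite distinguishable residual directions.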

  5. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety, as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (a series of five sets of five parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
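
    The 5 × 5 series-parallel arrangement lends itself to a compact reliability calculation. The sketch below uses a simplified two-state, constant-failure-rate model (the paper's method also tracks partially failed stacks, which this omits), and the rate and mission time are hypothetical values for illustration:

```python
import math

def stack_reliability(failure_rate_per_h: float, hours: float) -> float:
    """Survival probability of a single stack under a constant failure rate."""
    return math.exp(-failure_rate_per_h * hours)

def system_reliability(r_stack: float, n_parallel: int = 5, n_series: int = 5) -> float:
    """Series chain of n_series groups, each a parallel set of n_parallel stacks.

    A parallel group survives if at least one of its stacks survives; the
    system survives only if every group in the series chain survives.
    """
    r_group = 1.0 - (1.0 - r_stack) ** n_parallel
    return r_group ** n_series

# Hypothetical stack failure rate of 1e-5 per hour over one year of operation:
r = stack_reliability(1e-5, 8760.0)
print(f"single stack: {r:.4f}, 5x5 system: {system_reliability(r):.6f}")
```

The parallel redundancy makes the system far more reliable than any single stack, which is why the stack-level requirement depends so strongly on the configuration and on how early critical failures can be predicted.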

  6. Deriving Function-failure Similarity Information for Failure-free Rotorcraft Component Design

    NASA Technical Reports Server (NTRS)

    Roberts, Rory A.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in the design that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis, to determine potential failure modes of aircraft. The aircraft design needs to be passed through a general technique to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to certain components, which are described by their functionality. In turn, the failure modes are then linked to the basic functions that are carried out within the components of the aircraft. Using the technique proposed in this paper, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. This method was previously applied to a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.

  7. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
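
    The probabilistic approach the abstract outlines, propagating uncertainty in primitive structural variables through to a failure probability, can be illustrated with a minimal stress-versus-strength Monte Carlo sketch. The normal distributions and their parameters below are assumptions chosen for illustration, not values from the Lewis Research Center program:

```python
import random

def failure_probability(n_samples: int = 100_000, seed: int = 42) -> float:
    """Estimate P(stress > strength) by Monte Carlo sampling.

    The load and strength distributions stand in for the cumulative
    distribution functions of structural response variables that the
    formal probabilistic methods would compute.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        stress = rng.gauss(300.0, 30.0)    # MPa, assumed load distribution
        strength = rng.gauss(450.0, 40.0)  # MPa, assumed strength distribution
        if stress > strength:
            failures += 1
    return failures / n_samples

print(f"estimated failure probability: {failure_probability():.5f}")
```

With these assumed distributions the stress-strength margin is three standard deviations, so the estimate should land near 0.0013; production tools replace this brute-force sampling with fast probability integration over many correlated variables.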

  8. Environmental testing to prevent on-orbit TDRS failures

    NASA Technical Reports Server (NTRS)

    Cutler, Robert M.

    1994-01-01

    Can improved environmental testing prevent on-orbit component failures such as those experienced in the Tracking and Data Relay Satellite (TDRS) constellation? TDRS communications have been available to user spacecraft continuously for over 11 years, during which the five TDRS's placed in orbit have demonstrated their redundancies and robustness by surviving 26 component failures. Nevertheless, additional environmental testing prior to launch could prevent the occurrence of some types of failures, and could help to maintain communication services. Specific testing challenges involve traveling wave tube assemblies (TWTA's), whose lives may decrease with on-off cycling, and heaters that are subject to thermal cycles. The development of test conditions and procedures should account for known thermal variations. Testing may also have the potential to prevent failures in which components such as diplexers have had their lives dramatically shortened because of particle migration in a weightless environment. Reliability modeling could be used to select additional components that could benefit from special testing, but experience shows that this approach has serious limitations. Through knowledge of on-orbit experience, and with advances in testing, communication satellite programs might avoid the occurrence of some types of failures, and extend future spacecraft longevity beyond the current TDRS design life of ten years. However, determining which components to test, and how much testing to do, remains problematic.

  9. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect on reducing the flight envelope of the aircraft system. 
    Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplicate removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These choices were shown to have an important effect on detection performance and are a critical aspect of designing the AIS configuration. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive, integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance was recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful for increasing pilot situational awareness and determining automated compensation.

  10. Reliability Effects of Surge Current Testing of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2007-01-01

    Solid tantalum capacitors are widely used in space applications to filter low-frequency ripple currents in power supply circuits and stabilize DC voltages in the system. Tantalum capacitors manufactured per military specifications (MIL-PRF-55365) are established-reliability components with less than 0.001% failures per 1000 hours (a failure rate below 10 FIT) for grades D or S, placing these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur the consequences for the system can be catastrophic. This is due to the short-circuit failure mode, which can damage a power supply, and also to the capability of tantalum capacitors with manganese cathodes to self-ignite when a failure occurs in low-impedance applications. During such a failure, a substantial amount of energy is released by the exothermic reaction of the tantalum pellet with oxygen generated by the overheated manganese oxide cathode, resulting not only in destruction of the part, but also in damage to the board and surrounding components. A specific feature of tantalum capacitors, compared to ceramic parts, is a relatively large capacitance, which in contemporary low-size chip capacitors reaches tens and hundreds of microfarads. This can result in so-called surge current or turn-on failures when the board is first powered up. Such a failure, which is considered the most prevalent type of failure in tantalum capacitors [1], is due to fast changes of the voltage in the circuit, dV/dt, producing high surge current spikes, I_sp = C×(dV/dt), when current in the circuit is unrestricted. These spikes can reach hundreds of amperes and cause catastrophic failures in the system. The mechanism of surge current failures is not yet completely understood, and different hypotheses have been discussed in the relevant literature. 
    These include a sustained scintillation breakdown model [1-3]; electrical oscillations in circuits with a relatively high inductance [4-6]; local overheating of the cathode [5,7,8]; mechanical damage to the tantalum pentoxide dielectric caused by the impact of MnO2 crystals [2,9,10]; and stress-induced generation of electron traps caused by electromagnetic forces developed during current spikes [11]. A commonly accepted explanation of surge current failures is that with an unlimited current supply during surge current conditions, the self-healing mechanism in tantalum capacitors does not work, and what would be a minor scintillation spike if the current were limited becomes a catastrophic failure of the part [1,12]. However, our data show that the scintillation breakdown voltages are significantly greater than the surge current breakdown voltages, so it is still not clear why a part that has no scintillations would fail at the same voltage during surge current testing (SCT).
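
    Two quantitative claims in the abstract, the 10 FIT equivalence of 0.001% per 1000 hours and the hundreds-of-ampere spikes implied by I_sp = C×(dV/dt), can be checked in a few lines; the capacitance, voltage, and rise time below are illustrative values, not taken from the paper:

```python
def surge_current_spike(capacitance_f: float, dv_dt_v_per_s: float) -> float:
    """Peak surge current I_sp = C * dV/dt when circuit current is unrestricted."""
    return capacitance_f * dv_dt_v_per_s

def percent_per_1000h_to_fit(pct_per_1000h: float) -> float:
    """Convert a failure rate in %/1000 h to FIT (failures per 1e9 device-hours)."""
    return pct_per_1000h / 100.0 / 1000.0 * 1e9

# A 100 uF capacitor seeing a 28 V bus applied over 10 microseconds:
i_sp = surge_current_spike(100e-6, 28.0 / 10e-6)
print(f"I_sp = {i_sp:.0f} A")                    # 280 A: hundreds of amperes
print(percent_per_1000h_to_fit(0.001))           # 10.0 FIT, matching the abstract
```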

  11. Trends and problems in development of the power plants electrical part

    NASA Astrophysics Data System (ADS)

    Gusev, Yu. P.

    2015-03-01

    The article discusses some problems relating to development of the electrical part of modern nuclear and thermal power plants, stemming from the use of new process and electrical equipment such as gas turbine units, power converters, and intelligent microprocessor devices in relay protection and automated control systems. It is pointed out that the failure rates of electrical equipment at Russian and foreign power plants tend to increase. The ongoing power plant technical refitting and innovative development processes generate the need to significantly widen the scope of research on the electrical part of power plants and to render scientific support to putting innovative equipment into use. It is indicated that one of the main factors causing the growth of electrical equipment failures is that some components of this equipment have insufficiently compatible dynamic characteristics. This, in turn, may be due to a lack or obsolescence of regulatory documents specifying the requirements for design solutions and operation of electric power equipment that incorporates electronic and microprocessor control and protection devices. It is proposed to restore the system, which existed in the 1970s, of developing new and updating existing departmental regulatory technical documents, one of the fundamental principles of which was placing long-term responsibility on higher schools and leading design institutions for rendering scientific-technical support to innovative development of the components and systems forming the electrical part of power plants. This will make it possible to achieve lower failure rates of electrical equipment and to steadily improve the competitiveness of the Russian electric power industry and the energy efficiency of generating companies.

  12. Risk and Vulnerability Analysis of Satellites Due to MM/SD with PIRAT

    NASA Astrophysics Data System (ADS)

    Kempf, Scott; Schäfer, Frank; Rudolph, Martin; Welty, Nathan; Donath, Therese; Destefanis, Roberto; Grassi, Lilith; Janovsky, Rolf; Evans, Leanne; Winterboer, Arne

    2013-08-01

    Until recently, the state-of-the-art assessment of the threat posed to spacecraft by micrometeoroids and space debris was limited to the application of ballistic limit equations to the outer hull of a spacecraft. The probability of no penetration (PNP) is acceptable for assessing the risk and vulnerability of manned space missions; however, for unmanned missions, where penetrations of the spacecraft exterior do not necessarily constitute satellite or mission failure, these values are overly conservative. The newly developed software tool PIRAT (Particle Impact Risk and Vulnerability Analysis Tool) is based on the Schäfer-Ryan-Lambert (SRL) triple-wall ballistic limit equation (BLE), which is applicable to various satellite components. As a result, it has become possible to assess the individual failure rates of satellite components. This paper demonstrates the modeling of an example satellite, the performance of a PIRAT analysis, and the potential for subsequent design optimizations with respect to micrometeoroid and space debris (MM/SD) impact risk.

  13. Clinical outcome of the metal-on-metal hybrid Corin Cormet 2000 hip resurfacing system: an up to 11-year follow-up study.

    PubMed

    Gross, Thomas P; Liu, Fei; Webb, Lee A

    2012-04-01

    This report extends the follow-up for the largest center of the first multicenter US Food and Drug Administration investigational device exemption study on metal-on-metal hip resurfacing arthroplasty up to 11 years. A single surgeon performed 373 hip resurfacing arthroplasties using the hybrid Corin Cormet 2000 system. The Kaplan-Meier survivorship at 11 years was 93% when revision for any reason was used as an end point and 91% if radiographic failures were included. The clinical results demonstrate an acceptable failure rate with use of this system. Loosening of the cemented femoral components was the most common source of failure and occurred at all follow-up intervals. A learning curve that persisted for at least 200 cases was confirmed. All femoral neck fractures occurred before 6 months postoperatively. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Parametric Studies Of Failure Mechanisms In Thermal Barrier Coatings During Thermal Cycling Using FEM

    NASA Astrophysics Data System (ADS)

    Srivathsa, B.; Das, D. K.

    2015-12-01

    Thermal barrier coatings (TBCs) are widely used on different hot components of gas turbine engines, such as blades and vanes. Although several mechanisms for the failure of TBCs have been suggested, it is largely accepted that the durability of these coatings is primarily determined by the residual stresses that develop during thermal cycling. In the present study, the residual stress build-up in an electron beam physical vapour deposition (EB-PVD) based TBC on a coupon during thermal cycling has been studied by varying three parameters: cooling rate, TBC thickness, and substrate thickness. Two-dimensional thermomechanical generalized plane strain finite element simulations have been performed for a thousand cycles. It was observed that these variations change the stress profile significantly and that the stress severity factor increases non-linearly. Overall, the predictions of the model agree with reported experimental results and help in predicting the failure mechanisms.

  15. Health effects of hawthorn.

    PubMed

    Dahmer, Stephen; Scott, Emilie

    2010-02-15

    Hawthorn medicinal extract has long been a favored herbal remedy in Europe. The active components of this slow-acting cardiotonic agent are thought to be flavonoids and oligomeric procyanidins. The most studied hawthorn extracts are WS 1442 and LI 132. Reviews of placebo-controlled trials have reported both subjective and objective improvement in patients with mild forms of heart failure (New York Heart Association classes I through III). Other studies of hawthorn in patients with heart failure have revealed improvement in clinical symptoms, pressure-heart rate product, left ventricular ejection fraction, and patients' subjective sense of well-being. However, there is no evidence of a notable reduction in mortality or sudden death. Hawthorn is well tolerated; the most common adverse effects are vertigo and dizziness. Theoretic interactions exist with antiarrhythmics, antihypertensives, digoxin, and antihyperlipidemic agents. Proven conventional therapies for heart failure are still recommended until the safety and effectiveness of hawthorn have been proven in long-term studies.

  16. Heart failure, saxagliptin, and diabetes mellitus: observations from the SAVOR-TIMI 53 randomized trial.

    PubMed

    Scirica, Benjamin M; Braunwald, Eugene; Raz, Itamar; Cavender, Matthew A; Morrow, David A; Jarolim, Petr; Udell, Jacob A; Mosenzon, Ofri; Im, KyungAh; Umez-Eronini, Amarachi A; Pollack, Pia S; Hirshberg, Boaz; Frederich, Robert; Lewis, Basil S; McGuire, Darren K; Davidson, Jaime; Steg, Ph Gabriel; Bhatt, Deepak L

    2014-10-28

    Diabetes mellitus and heart failure frequently coexist. However, few diabetes mellitus trials have prospectively evaluated and adjudicated heart failure as an end point. A total of 16 492 patients with type 2 diabetes mellitus and a history of, or at risk of, cardiovascular events were randomized to saxagliptin or placebo (mean follow-up, 2.1 years). The primary end point was the composite of cardiovascular death, myocardial infarction, or ischemic stroke. Hospitalization for heart failure was a predefined component of the secondary end point. Baseline N-terminal pro B-type natriuretic peptide was measured in 12 301 patients. More patients treated with saxagliptin (289, 3.5%) were hospitalized for heart failure compared with placebo (228, 2.8%; hazard ratio, 1.27; 95% confidence interval, 1.07-1.51; P=0.007). Corresponding rates at 12 months were 1.9% versus 1.3% (hazard ratio, 1.46; 95% confidence interval, 1.15-1.88; P=0.002), with no significant difference thereafter (time-varying interaction, P=0.017). Subjects at greatest risk of hospitalization for heart failure had previous heart failure, an estimated glomerular filtration rate ≤60 mL/min, or elevated baseline levels of N-terminal pro B-type natriuretic peptide. There was no evidence of heterogeneity between N-terminal pro B-type natriuretic peptide and saxagliptin (P for interaction=0.46), although the absolute risk excess for heart failure with saxagliptin was greatest in the highest N-terminal pro B-type natriuretic peptide quartile (2.1%). Even in patients at high risk of hospitalization for heart failure, the risks of the primary and secondary end points were similar between treatment groups. In the context of balanced primary and secondary end points, saxagliptin treatment was associated with an increased risk of hospitalization for heart failure. This increase in risk was highest among patients with elevated levels of natriuretic peptides, previous heart failure, or chronic kidney disease. 
http://www.clinicaltrials.gov. Unique identifier: NCT01107886. © 2014 American Heart Association, Inc.

  17. Review and Analysis of Existing Mobile Phone Apps to Support Heart Failure Symptom Monitoring and Self-Care Management Using the Mobile Application Rating Scale (MARS).

    PubMed

    Masterson Creber, Ruth M; Maurer, Mathew S; Reading, Meghan; Hiraldo, Grenny; Hickey, Kathleen T; Iribarren, Sarah

    2016-06-14

    Heart failure is the most common cause of hospital readmissions among Medicare beneficiaries and these hospitalizations are often driven by exacerbations in common heart failure symptoms. Patient collaboration with health care providers and decision making is a core component of increasing symptom monitoring and decreasing hospital use. Mobile phone apps offer a potentially cost-effective solution for symptom monitoring and self-care management at the point of need. The purpose of this review of commercially available apps was to identify and assess the functionalities of patient-facing mobile health apps targeted toward supporting heart failure symptom monitoring and self-care management. We searched 3 Web-based mobile app stores using multiple terms and combinations (eg, "heart failure," "cardiology," "heart failure and self-management"). Apps meeting inclusion criteria were evaluated using the Mobile Application Rating Scale (MARS), IMS Institute for Healthcare Informatics functionality scores, and Heart Failure Society of America (HFSA) guidelines for nonpharmacologic management. Apps were downloaded and assessed independently by 2-4 reviewers, intraclass correlations between reviewers were calculated, and consensus was reached by discussion. Of 3636 potentially relevant apps searched, 34 met inclusion criteria. Most apps were excluded because they were unrelated to heart failure, not in English or Spanish, or were games. Interrater reliability between reviewers was high. The AskMD app had the highest average MARS total (4.9/5). More than half of the apps (23/34, 68%) had acceptable MARS scores (>3.0). Heart Failure Health Storylines (4.6) and AskMD (4.5) had the highest scores for behavior change. Factoring MARS, functionality, and HFSA guideline scores, the highest performing apps included Heart Failure Health Storylines, Symple, ContinuousCare Health App, WebMD, and AskMD. Peer-reviewed publications were identified for only 3 of the 34 apps. 
This review suggests that few apps meet prespecified criteria for quality, content, or functionality, highlighting the need for further refinement and mapping to evidence-based guidelines and room for overall quality improvement in heart failure symptom monitoring and self-care related apps.

  18. Durability of implanted electrodes and leads in an upper-limb neuroprosthesis.

    PubMed

    Kilgore, Kevin L; Peckham, P Hunter; Keith, Michael W; Montague, Fred W; Hart, Ronald L; Gazdik, Martha M; Bryden, Anne M; Snyder, Scott A; Stage, Thomas G

    2003-01-01

    Implanted neuroprosthetic systems have been successfully used to provide upper-limb function for over 16 years. A critical aspect of these implanted systems is the safety, stability, and reliability of the stimulating electrodes and leads. These components are (1) the stimulating electrode itself, (2) the electrode lead, and (3) the lead-to-device connector. A failure in any of these components causes the direct loss of the capability to activate a muscle consistently, usually resulting in a decrement in the function provided by the neuroprosthesis. Our results indicate that the electrode, lead, and connector system is extremely durable. We analyzed 238 electrodes that have been implanted as part of an upper-limb neuroprosthesis. Each electrode had been implanted at least 3 years, with a maximum implantation time of over 16 years. Only three electrode-lead failures and one electrode infection occurred, for a survival rate of almost 99 percent. Electrode threshold measurements indicate that the electrode response is stable over time, with no evidence of electrode migration or continual encapsulation in any of the electrodes studied. These results have an impact on the design of implantable neuroprosthetic systems. The electrode-lead component of these systems should no longer be considered a weak technological link.

  19. TSTA Piping and Flame Arrestor Operating Experience Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, Lee C.; Willms, R. Scott

    The Tritium Systems Test Assembly (TSTA) was a facility dedicated to tritium handling technology and experiment research at the Los Alamos National Laboratory. The facility operated from 1984 to 2001, running a prototype fusion fuel processing loop with ~100 grams of tritium as well as small experiments. There have been several operating experience reports written on this facility’s operation and maintenance experience. This paper describes analysis of two additional components from TSTA: small-diameter gas piping that handled small amounts of tritium in a nitrogen carrier gas, and the flame arrestor used in this piping system. The operating experiences and the component failure rates for these components are discussed in this paper. Comparison data from other applications are also presented.

  20. Joint modelling of potentially avoidable hospitalisation for five diseases accounting for spatiotemporal effects: A case study in New South Wales, Australia.

    PubMed

    Baker, Jannah; White, Nicole; Mengersen, Kerrie; Rolfe, Margaret; Morgan, Geoffrey G

    2017-01-01

    Three variant formulations of a spatiotemporal shared component model are proposed that allow examination of changes in shared underlying factors over time. Models are evaluated within the context of a case study examining hospitalisation rates for five chronic diseases for residents of a regional area in New South Wales between 2001 and 2006: type II diabetes mellitus (DMII), chronic obstructive pulmonary disease (COPD), coronary arterial disease (CAD), hypertension (HT) and congestive heart failure (CHF). These represent ambulatory care sensitive (ACS) conditions, often used as a proxy for avoidable hospitalisations. Using a selected model, the effects of socio-economic status (SES) as a shared component are estimated and temporal patterns in the influence of the residual shared spatial component are examined. Choice of model depends upon the application. In the featured application, a model allowing for changing influence of the shared spatial component over time was found to have the best fit and was selected for further analyses. Hospitalisation rates were found to be increasing for COPD and DMII, decreasing for CHF and stable for CAD and HT. SES was substantively associated with hospitalisation rates, with differing degrees of influence for each disease. In general, most of the spatial variation in hospitalisation rates was explained by disease-specific spatial components, followed by the residual shared spatial component. Appropriate selection of a joint disease model allows for the examination of temporal patterns of disease outcomes and shared underlying spatial factors, and distinction between different shared spatial factors.

  1. Management of Microcircuit Obsolescence in a Pre-Production ACAT-ID Missile Program

    DTIC Science & Technology

    2002-12-01

    and Engineering Center ASIC Application Specific Integrated Circuit AVCOM Avionics Component Obsolescence Management BRU Battery Replaceable Unit...then just a paper qualification, e.g. Board or Battery Replaceable Unit (BRU) testing. 5 After-market Package The Die is Available and Can Be...Encapsulated Microcircuits (PEM), speed change, failure rate) 8 Emulation Manufacture or re-engineering of a FFF Replacement 9 CCA or BRU Redesign Board

  2. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available in the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
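
    The relations underlying a CARES/LIFE-style calculation can be sketched in simplified form: a two-parameter Weibull expression for fast fracture and a power-law slow-crack-growth lifetime. This omits the code's multiaxial and volume-integration treatment, and every parameter below is hypothetical, not a measured value:

```python
import math

def fast_fracture_pf(stress_mpa: float, sigma0_mpa: float, m: float) -> float:
    """Two-parameter Weibull probability of fast fracture at a given stress."""
    return 1.0 - math.exp(-((stress_mpa / sigma0_mpa) ** m))

def scg_time_to_failure(stress_mpa: float, inert_strength_mpa: float,
                        b_param: float, n_exp: float) -> float:
    """Power-law slow-crack-growth lifetime: t_f = B * S_i^(N-2) / sigma^N."""
    return b_param * inert_strength_mpa ** (n_exp - 2.0) / stress_mpa ** n_exp

# Hypothetical Weibull and SCG parameters for illustration:
pf = fast_fracture_pf(stress_mpa=250.0, sigma0_mpa=400.0, m=10.0)
tf = scg_time_to_failure(stress_mpa=100.0, inert_strength_mpa=400.0,
                         b_param=1.0e3, n_exp=20.0)
print(f"P_f = {pf:.4f}, t_f = {tf:.3g} s")
```

Sweeping the applied stress through both functions at a fixed failure probability is, in essence, how an SPT diagram is built up point by point.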

  3. Fatigue resistant carbon coatings for rolling/sliding contacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Harpal; Ramirez, Giovanni; Eryilmaz, Osman

    2016-06-01

    The growing demands for renewable energy production have recently resulted in a significant increase in wind plant installation. Field data from these plants show that wind turbines suffer from costly repair, maintenance and high failure rates. Often the reliability issues are linked with tribological components used in wind turbine drivetrains. The primary failure modes in bearings and gears are associated with micropitting, wear, brinelling, scuffing, smearing and macropitting, all of which occur at or near the surface. Accordingly, a variety of surface engineering approaches are currently being considered to alter the near-surface properties of such bearings and gears to prevent these tribological failures. In the present work, we have evaluated the tribological performance of a compliant highly hydrogenated diamond-like carbon coating, developed at Argonne National Laboratory, under mixed rolling/sliding contact conditions for wind turbine drivetrain components. The coating was deposited on AISI 52100 steel specimens using a magnetron sputter deposition system. The experiments were performed on a PCS Micro-Pitting Rig (MPR) with four material pairs at 1.79 GPa contact stress, 40% slide-to-roll ratio and in polyalphaolefin (PAO4) basestock oil (to ensure extreme boundary conditions). The post-test analysis was performed using optical microscopy, surface profilometry, and Raman spectroscopy. The results show potential for these coatings in sliding/rolling contact applications: no failures were observed with coated specimens even after 100 million cycles, compared to the uncoated pair, which failed after 32 million cycles under the given test conditions.

  4. Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2016-02-01

    This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems industry-wide frequency of start demands, and run hours per reactor year for runs of less than or equal to one hour.
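
    Studies of this kind report failure probabilities and rates estimated from simple counts of failures, demands, and run hours. A sketch of the kind of estimate involved, assuming the common Jeffreys-prior convention used in nuclear reliability work (an assumption here; the report's exact method is not stated in the abstract), with invented counts:

```python
def jeffreys_demand_prob(failures, demands):
    """Posterior mean of a failure-on-demand probability under a
    Jeffreys Beta(0.5, 0.5) prior: p = (f + 0.5) / (d + 1)."""
    return (failures + 0.5) / (demands + 1.0)

def jeffreys_failure_rate(failures, run_hours):
    """Posterior mean of a failure rate (per hour) under a Jeffreys
    prior for Poisson count data: lambda = (f + 0.5) / T."""
    return (failures + 0.5) / run_hours

p_fts = jeffreys_demand_prob(3, 2000)        # hypothetical failure-to-start counts
lam_ftr = jeffreys_failure_rate(2, 50000.0)  # hypothetical failure-to-run hours
```

    The half-failure offsets keep the estimate well-behaved even when zero failures are observed, which is common for highly reliable components.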

  5. Studies on Automobile Clutch Release Bearing Characteristics with Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Chen, Guoliang; Chen, Xiaoyang

    Automobile clutch release bearings are important automotive driveline components. For the clutch release bearing, early fatigue failure diagnosis is significant, but the early failure response signal is not obvious, because failure signals are susceptible to noise on the transmission path and to interference from working environment factors. As vehicle design improves, clutch release bearing fatigue life indicators have increasingly become an important requirement. Contact fatigue is the main failure mode of release rolling bearing components. Acoustic emission techniques offer unique advantages for contact fatigue failure detection, as they are highly sensitive nondestructive testing methods. When the acoustic emission technique is used to inspect a bearing, signals are collected from multiple sensors; each signal contains partial fault information, and the fault information in the signals overlaps. Fusing the source information received simultaneously by the sensors into a complete acoustic emission signature of the rolling bearing fault is therefore the key issue for accurate fault diagnosis. A release bearing comprises the following components: the outer ring, inner ring, rolling balls, and cage. When a failure such as cracking or pitting occurs, the other components strike the damaged point and produce an acoustic emission signal. The acoustic emission from release bearings propagates mainly as Rayleigh waves: elastic waves emitted from the source are scattered across the surfaces of the bearing parts. Dynamic simulation of rolling bearing failure contributes to a more in-depth understanding of failure characteristics and provides a theoretical basis for monitoring and fault diagnosis of rolling bearings.

  6. The Parameters Affecting the Success of Irrigation and Debridement with Component Retention in the Treatment of Acutely Infected Total Knee Arthroplasty

    PubMed Central

    Kim, Jae Gyoon; Bae, Ji Hoon; Lee, Seung Yup; Cho, Won Tae

    2015-01-01

    Background: The aims of our study were to evaluate the success rate of irrigation and debridement with component retention (IDCR) for acutely infected total knee arthroplasty (TKA) (< 4 weeks of symptom duration) and to analyze the factors affecting the prognosis of IDCR. Methods: We retrospectively reviewed 28 knees treated by IDCR for acutely infected TKA from 2003 to 2012. We evaluated the success rate of IDCR. All variables were compared between the success and failure groups. Multivariable logistic regression analysis was also used to examine the relative contribution of these parameters to the success of IDCR. Results: Seventeen knees (60.7%) were successfully treated. Between the success and failure groups, there were significant differences in the time from primary TKA to IDCR (p = 0.021), the preoperative erythrocyte sedimentation rate (ESR; p = 0.021), microorganism (p = 0.006), and polyethylene liner exchange (p = 0.017). Multivariable logistic regression analysis of parameters affecting the success of IDCR demonstrated that preoperative ESR (odds ratio [OR], 1.02; p = 0.041), microorganism (OR, 12.4; p = 0.006), and polyethylene liner exchange (OR, 0.07; p = 0.021) were significant parameters. Conclusions: The results show that 60.7% of the cases were successfully treated by IDCR for acutely infected TKA. The preoperative ESR, microorganism, and polyethylene liner exchange were factors that affected the success of IDCR in acutely infected TKA. PMID:25729521

  7. (n, N) type maintenance policy for multi-component systems with failure interactions

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul

    2015-04-01

    This paper studies maintenance policies for multi-component systems that involve failure interactions and opportunistic maintenance (OM). This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on the maintenance policy.
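
    The (n, N) structure can be illustrated with a toy single-component simulation: replace preventively when degradation reaches N, and take a cheaper opportunistic replacement when an opportunity arrives and degradation has reached n. Everything below (costs, deterioration increments, opportunity rate) is invented, and brute-force simulation stands in for the paper's MDP-based algorithm:

```python
import random

def avg_cost(n, N, periods=10000, p_opp=0.2, c_fail=100.0, c_pm=20.0,
             c_om=10.0, fail_state=12, seed=1):
    """Long-run average cost of an (n, N) rule for one deteriorating
    component, estimated by simulation.  All numbers are invented."""
    rng = random.Random(seed)
    state, cost = 0, 0.0
    for _ in range(periods):
        state += rng.choice((0, 1, 1, 2))          # random deterioration step
        if state >= fail_state:                    # failure: corrective replacement
            cost += c_fail
            state = 0
        elif state >= N:                           # planned preventive replacement
            cost += c_pm
            state = 0
        elif state >= n and rng.random() < p_opp:  # opportunistic maintenance
            cost += c_om
            state = 0
    return cost / periods

# Brute-force threshold search stands in for the optimal-policy algorithm
best = min(((n, N) for N in range(2, 12) for n in range(1, N + 1)),
           key=lambda t: avg_cost(*t))
```

    The simulation makes the trade-off concrete: a low n exploits cheap opportunities aggressively, while a low N spends more on planned replacements to avoid costly failures.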

  8. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
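
    The detection idea rests on sensitivity to initial conditions: healthy nodes iterating the same chaotic map from the same seed produce identical trajectories, while any arithmetic fault, however small, is exponentially amplified until the trajectories visibly diverge. A sketch with the logistic map (the patent does not specify this particular map; it is used here as a representative chaotic map):

```python
def logistic_trajectory(x0, steps, r=3.99, fault_at=None, eps=1e-12):
    """Iterate the logistic map x <- r*x*(1-x), optionally injecting a tiny
    arithmetic error at one step to mimic a faulty component."""
    x, traj = x0, []
    for i in range(steps):
        x = r * x * (1.0 - x)
        if i == fault_at:
            x += eps          # simulated hardware fault
        traj.append(x)
    return traj

healthy = logistic_trajectory(0.3, 200)
faulty = logistic_trajectory(0.3, 200, fault_at=50)
divergence = [abs(a - b) for a, b in zip(healthy, faulty)]
# trajectories agree exactly before the fault; afterwards the eps-size error
# is amplified until the trajectories decorrelate, flagging the faulty node
```

    Comparing trajectories across nodes thus turns a one-bit arithmetic error into a macroscopic, easily thresholded signal.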

  9. Identifying Clinical Factors Which Predict for Early Failure Patterns Following Resection for Pancreatic Adenocarcinoma in Patients Who Received Adjuvant Chemotherapy Without Chemoradiation.

    PubMed

    Walston, Steve; Salloum, Joseph; Grieco, Carmine; Wuthrick, Evan; Diaz, Dayssy A; Barney, Christian; Manilchuk, Andrei; Schmidt, Carl; Dillhoff, Mary; Pawlik, Timothy M; Williams, Terence M

    2018-05-04

    The role of radiation therapy (RT) in resected pancreatic cancer (PC) remains incompletely defined. We sought to determine clinical variables which predict for local-regional recurrence (LRR) to help select patients for adjuvant RT. We identified 73 patients with PC who underwent resection and adjuvant gemcitabine-based chemotherapy alone. We performed detailed radiologic analysis of first patterns of failure. LRR was defined as recurrence of PC within standard postoperative radiation volumes. Univariate analyses (UVA) were conducted using the Kaplan-Meier method and multivariate analyses (MVA) utilized the Cox proportional hazard ratio model. Factors significant on UVA were used for MVA. At median follow-up of 20 months, rates of local-regional recurrence only (LRRO) were 24.7%, LRR as a component of any failure 68.5%, metastatic recurrence (MR) as a component of any failure 65.8%, and overall disease recurrence (OR) 90.5%. On UVA, elevated postoperative CA 19-9 (>90 U/mL), pathologic lymph node positive (pLN+) disease, and higher tumor grade were associated with increased LRR, MR, and OR. On MVA, elevated postoperative CA 19-9 and pLN+ were associated with increased MR and OR. In addition, positive resection margin was associated with increased LRRO on both UVA and MVA. About 25% of patients with PC treated without adjuvant RT develop LRRO as initial failure. The only independent predictor of LRRO was positive margin, while elevated postoperative CA 19-9 and pLN+ were associated with predicting MR and overall survival. These data may help determine which patients benefit from intensification of local therapy with radiation.
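
    The Kaplan-Meier machinery used for the univariate analyses reduces to a simple product over event times. A minimal sketch on synthetic follow-up data (not the study's patients), assuming distinct event times:

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator (assumes distinct event times).
    times: follow-up in months; events: 1 = failure observed, 0 = censored."""
    pairs = sorted(zip(times, events))
    n, s, curve = len(pairs), 1.0, []
    for i, (t, observed) in enumerate(pairs):
        at_risk = n - i
        if observed:
            s *= (at_risk - 1) / at_risk   # survival drops only at event times
        curve.append((t, s))
    return curve

# Synthetic follow-up data (months, event indicator), illustrative only
curve = kaplan_meier([5, 8, 12, 20, 24, 30], [1, 0, 1, 1, 0, 0])
```

    Censored patients leave the risk set without reducing the survival estimate, which is what makes the method suitable for follow-up data of uneven length.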

  10. Proof that green tea tannin suppresses the increase in the blood methylguanidine level associated with renal failure.

    PubMed

    Yokozawa, T; Dong, E; Oura, H

    1997-02-01

    The effects of a green tea tannin mixture and its individual tannin components on methylguanidine were examined in rats with renal failure. The green tea tannin mixture caused a dose-dependent decrease in methylguanidine, a substance which accumulates in the blood with the progression of renal failure. Among individual tannin components, the effect was most conspicuous with (-)-epigallocatechin 3-O-gallate and (-)-epicatechin 3-O-gallate, while other components not linked to gallic acid showed only weak effects. Thus, the effect on methylguanidine was found to vary among different types of tannin.

  11. Contraceptive Failure in the United States: Estimates from the 2006-2010 National Survey of Family Growth.

    PubMed

    Sundaram, Aparna; Vaughan, Barbara; Kost, Kathryn; Bankole, Akinrinola; Finer, Lawrence; Singh, Susheela; Trussell, James

    2017-03-01

    Contraceptive failure rates measure a woman's probability of becoming pregnant while using a contraceptive. Information about these rates enables couples to make informed contraceptive choices. Failure rates were last estimated for 2002, and social and economic changes that have occurred since then necessitate a reestimation. To estimate failure rates for the most commonly used reversible methods in the United States, data from the 2006-2010 National Survey of Family Growth were used; some 15,728 contraceptive use intervals, contributed by 6,683 women, were analyzed. Data from the Guttmacher Institute's 2008 Abortion Patient Survey were used to adjust for abortion underreporting. Kaplan-Meier methods were used to estimate the associated single-decrement probability of failure by duration of use. Failure rates were compared with those from 1995 and 2002. Long-acting reversible contraceptives (the IUD and the implant) had the lowest failure rates of all methods (1%), while condoms and withdrawal carried the highest probabilities of failure (13% and 20%, respectively). However, the failure rate for the condom had declined significantly since 1995 (from 18%), as had the failure rate for all hormonal methods combined (from 8% to 6%). The failure rate for all reversible methods combined declined from 12% in 2002 to 10% in 2006-2010. These broad-based declines in failure rates reverse a long-term pattern of minimal change. Future research should explore what lies behind these trends, as well as possibilities for further improvements. © 2017 The Authors. Perspectives on Sexual and Reproductive Health published by Wiley Periodicals, Inc., on behalf of the Guttmacher Institute.

  12. SPS Energy Conversion Power Management Workshop

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Energy technology concerning photovoltaic conversion, solar thermal conversion systems, and electrical power distribution and processing is discussed. The manufacturing processes involved in solar cell and solar array production are summarized. Resource issues concerning gallium arsenide and silicon alternatives are reported. Collector structures for solar construction are described, and estimates of their service life, failure rates, and capabilities are presented. Theories of advanced thermal power cycles are summarized. Power distribution system configurations and processing components are presented.

  13. Failure rates of mini-implants placed in the infrazygomatic region.

    PubMed

    Uribe, Flavio; Mehr, Rana; Mathur, Ajay; Janakiraman, Nandakumar; Allareddy, Veerasathpurush

    2015-01-01

    The purpose of this pilot study was to evaluate the failure rates of mini-implants placed in the infrazygomatic region and to evaluate factors that affect their stability. A retrospective cohort study of 30 consecutive patients (55 mini-implants) who had infrazygomatic mini-implants placed at a university clinic was conducted to evaluate failure rates. Patient, mini-implant, orthodontic, surgical, and mini-implant maintenance factors were evaluated by univariate logistic regression models for association with failure rates. A 21.8% failure rate of mini-implants placed in the infrazygomatic region was observed. None of the predictor variables was significantly associated with higher or lower odds of implant failure. Failure rates for infrazygomatic mini-implants were slightly higher than those reported for other maxilla-mandibular osseous locations. No predictor variables were found to be associated with the failure rates.

  14. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    PubMed

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.

  15. CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.

    1987-01-01

    To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focusing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.

  16. Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report

    DOT National Transportation Integrated Search

    2017-09-22

    The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...

  17. Determining Component Probability using Problem Report Data for Ground Systems used in Manned Space Flight

    NASA Technical Reports Server (NTRS)

    Monaghan, Mark W.; Gillespie, Amanda M.

    2013-01-01

    During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database. The PRACA system became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does record all the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty is mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. Then, utilizing a heuristic developed for review of the PRACA data, we determine which reports identify a credible failure. These data are then used to determine inter-arrival times and estimate a reliability metric for repairable components or LRUs. This analysis is used to determine failure modes of the equipment, determine the probability of each component failure mode, and support various quantitative techniques for repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
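
    Once credible failures and their timestamps have been extracted, the inter-arrival-time step reduces to a simple estimate. A sketch assuming an exponential (homogeneous Poisson) failure model, with hypothetical timestamps rather than real PRACA data:

```python
def mtbf_and_rate(failure_times):
    """Estimate mean time between failures and failure rate from an
    ordered list of credible-failure timestamps (hours), assuming
    exponential inter-arrival times (homogeneous Poisson process)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    mtbf = sum(gaps) / len(gaps)   # MLE of the mean inter-arrival time
    return mtbf, 1.0 / mtbf

# Hypothetical credible-failure timestamps, not real PRACA records
mtbf, lam = mtbf_and_rate([120.0, 560.0, 910.0, 1480.0, 2100.0])
```

    A trend test on the gaps would be needed before trusting the constant-rate assumption; a repairable system whose gaps shrink over time calls for a non-homogeneous model instead.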

  18. Analytical Method to Evaluate Failure Potential During High-Risk Component Development

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
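
    The core of a function-failure method is a mapping from elemental functions to historically observed failure modes, which can then be aggregated for a new design. A toy sketch with invented function and failure-mode names (not the paper's actual taxonomy):

```python
# Toy function-failure matrix: rows are elemental functions, entries count
# historical failure-mode occurrences.  All names and counts are invented.
function_failure = {
    "transmit torque": {"fatigue": 7, "fretting": 2},
    "support load":    {"fatigue": 4, "wear": 5, "corrosion": 1},
    "seal fluid":      {"wear": 3, "corrosion": 6},
}

def likely_failure_modes(functions):
    """Aggregate failure-mode counts over the functions a new design uses,
    so designers can rank which modes to guard against."""
    totals = {}
    for fn in functions:
        for mode, count in function_failure.get(fn, {}).items():
            totals[mode] = totals.get(mode, 0) + count
    return sorted(totals.items(), key=lambda kv: -kv[1])

ranking = likely_failure_modes(["transmit torque", "support load"])
```

    Ranking modes by aggregated historical counts is what lets the method flag failure potential before a detailed design exists.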

  19. Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR≤1H), failure to run more than one hour (FTR>1H), and, for normally running systems, FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.

  20. Shuttle/ISS EMU Failure History and the Impact on Advanced EMU Portable Life Support System (PLSS) Design

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2015-01-01

    As the Shuttle/ISS EMU Program exceeds 35 years in duration and is still supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.

  1. Shuttle/ISS EMU Failure History and the Impact on Advanced EMU PLSS Design

    NASA Technical Reports Server (NTRS)

    Campbell, Colin

    2011-01-01

    As the Shuttle/ISS EMU Program exceeds 30 years in duration and is still successfully supporting the needs of the International Space Station (ISS), a critical benefit of such a long running program with thorough documentation of system and component failures is the ability to study and learn from those failures when considering the design of the next generation space suit. Study of the subject failure history leads to changes in the Advanced EMU Portable Life Support System (PLSS) schematic, selected component technologies, as well as the planned manner of ground testing. This paper reviews the Shuttle/ISS EMU failure history and discusses the implications to the AEMU PLSS.

  3. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
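
    The automatic failure-injection idea can be caricatured in a few lines: a simulated load balancer dispatches tasks to the least-loaded live node while a failure process randomly removes nodes. This is only a sketch in the spirit of DEPEND with invented parameters, not the simulator's actual model:

```python
import random

def simulate(n_nodes=8, n_tasks=1000, p_fail=0.001, seed=7):
    """Dispatch tasks to the least-loaded live node while a failure process
    randomly removes nodes; returns (tasks completed, nodes failed)."""
    rng = random.Random(seed)
    load = {i: 0 for i in range(n_nodes)}
    completed = 0
    for _ in range(n_tasks):
        for node in list(load):          # automatic failure injection
            if rng.random() < p_fail:
                del load[node]           # node fails; its capacity is lost
        if not load:                     # every node has failed
            break
        target = min(load, key=load.get) # dynamic load balancing
        load[target] += 1
        completed += 1
    return completed, n_nodes - len(load)

completed, failed = simulate()
```

    Sweeping `p_fail` upward is the high-stress experiment: it shows how throughput degrades and which failure combinations the balancing heuristic tolerates poorly.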

  4. 10 CFR 34.101 - Notifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... written report to the NRC's Office of Federal and State Materials and Environmental Management Programs... shielded position and secure it in this position; or (3) Failure of any component (critical to safe... overexposure submitted under 10 CFR 20.2203 which involves failure of safety components of radiography...

  5. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous-fiber polymeric composites are capable of sustaining substantial load-induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which capture the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path-dependent constitutive model which has previously been implemented in a finite element computer code for the analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model is utilized to demonstrate the ability to predict the load-path dependence of the damage and stresses in plates subjected to fatigue loading.
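
    A damage-evolution law of the kind described can be integrated cycle by cycle until an internal damage variable reaches a critical value. The generic power-law form and the constants below are illustrative stand-ins, not the authors' calibrated evolution laws:

```python
def cycles_to_failure(delta_s, A=1e-9, m=3.0, d_crit=1.0, max_cycles=10**7):
    """Integrate a generic power-law damage evolution law
    dD/dN = A * delta_s**m cycle by cycle until the internal damage
    variable D reaches d_crit.  A, m, d_crit are illustrative constants."""
    d, cycles = 0.0, 0
    rate = A * delta_s ** m
    while d < d_crit and cycles < max_cycles:
        d += rate
        cycles += 1
    return cycles

n_low = cycles_to_failure(100.0)   # lower stress amplitude, longer life
n_high = cycles_to_failure(200.0)  # with m = 3, doubling amplitude cuts life ~8x
```

    In the actual constitutive model the damage rate would be coupled to the evolving local stress state rather than fixed per cycle, which is what produces the load-path dependence the paper demonstrates.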

  6. The effects of heart rate control in chronic heart failure with reduced ejection fraction.

    PubMed

    Grande, Dario; Iacoviello, Massimo; Aspromonte, Nadia

    2018-07-01

    Elevated heart rate has been associated with worse prognosis both in the general population and in patients with heart failure. Heart rate is finely modulated by neurohormonal signals and it reflects the balance between the sympathetic and the parasympathetic limbs of the autonomic nervous system. For this reason, elevated heart rate in heart failure has been considered an epiphenomenon of the sympathetic hyperactivation during heart failure. However, experimental and clinical evidence suggests that high heart rate could have a direct pathogenetic role. Consequently, heart rate might act as a pathophysiological mediator of heart failure as well as a marker of adverse outcome. This hypothesis has been supported by the observation that the positive effect of beta-blockade could be linked to the degree of heart rate reduction. In addition, the selective heart rate control with ivabradine has recently been demonstrated to be beneficial in patients with heart failure and left ventricular systolic dysfunction. The objective of this review is to examine the pathophysiological implications of elevated heart rate in chronic heart failure and explore the mechanisms underlying the effects of pharmacological heart rate control.

  7. Failure detection and identification

    NASA Technical Reports Server (NTRS)

    Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.

    1989-01-01

    Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.

  8. Enhanced Component Performance Study: Air-Operated Valves 1998-2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure-to-open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. One statistically significant trend was observed in the AOV data: the frequency of demands per reactor year for valves recording the fail-to-open or fail-to-close failure modes, for high-demand valves (those with greater than twenty demands per year), was found to be decreasing. The decrease was about three percent over the ten-year period trended.

  9. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
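
    The k-out-of-n reliability model reduces to a binomial sum when components are identical and independent. A sketch with exponential (constant failure rate) components; the rate and the 18-of-20 requirement below are hypothetical, not taken from the STSA documents:

```python
import math

def k_out_of_n_reliability(n, k, r):
    """Probability that at least k of n identical, independent components
    survive, given per-component reliability r.  k = n gives a series
    system, k = 1 a parallel system."""
    return sum(math.comb(n, i) * r ** i * (1 - r) ** (n - i)
               for i in range(k, n + 1))

def component_reliability(lam, t):
    """Exponential (constant failure rate) component reliability."""
    return math.exp(-lam * t)

r = component_reliability(0.05, 1.0)       # hypothetical 0.05/year failure rate
sys_r = k_out_of_n_reliability(20, 18, r)  # hypothetical: 18 of 20 SPAs needed
```

    Time-varying failure rates, as in the aging case the abstract mentions, only change how r(t) is computed; the binomial combination step is the same.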

  10. Response Strength in Extreme Multiple Schedules

    PubMed Central

    McLean, Anthony P; Grace, Randolph C; Nevin, John A

    2012-01-01

    Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess resistance to change. Contrary to the generalized matching law, logarithms of response ratios in the two components were not a linear function of log reinforcer ratios, implying a failure of parameter invariance. Over a 2 log unit range, the function appeared linear and indicated undermatching, but in conditions with more extreme reinforcer ratios, approximate matching was observed. A model suggested by McLean (1991), originally for local contrast, predicts these changes in sensitivity to reinforcer ratios somewhat better than models by Herrnstein (1970) and by Williams and Wixted (1986). Prefeeding tests of resistance to change were conducted at each reinforcer ratio, and relative resistance to change was also a nonlinear function of log reinforcer ratios, again contrary to conclusions from previous work. Instead, the function suggests that resistance to change in a component may be determined partly by the rate of reinforcement and partly by the ratio of reinforcers to responses. PMID:22287804
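    For context, the generalized matching law tested in the abstract is log(B1/B2) = a·log(R1/R2) + log b, where a < 1 indicates undermatching. A minimal least-squares fit on synthetic data (illustrative numbers only, not the paper's data):

```python
from math import log10

def fit_generalized_matching(reinforcer_ratios, response_ratios):
    """Least-squares fit of log(B1/B2) = a*log(R1/R2) + log b.
    Returns (sensitivity a, bias log b)."""
    xs = [log10(r) for r in reinforcer_ratios]
    ys = [log10(b) for b in response_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Undermatching: response ratios change less than reinforcer ratios.
a, logb = fit_generalized_matching([0.1, 1.0, 10.0], [0.25, 1.0, 4.0])
print(round(a, 3))  # 0.602, i.e. a < 1
```

The paper's point is precisely that a single (a, log b) pair fails over a 4 log unit range; this fit is the linear baseline against which that nonlinearity is judged.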

  11. Extended Aging Theories for Predictions of Safe Operational Life of Critical Airborne Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Chen, Tony

    2006-01-01

    The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.

  12. Engineering Design Handbook. Development Guide for Reliability. Part Two. Design for Reliability

    DTIC Science & Technology

    1976-01-01

    Component failure rates, however, have been recorded by many sources as a function of use and environment. Some of these sources are listed in Refs. 13-17... other systems capable of creating an explosive reaction. The second category is fairly obvious and includes many variations on methods for providing... about them. 4. Ability to detect signals (including patterns) in high-noise environments. 5. Ability to store large amounts of information for long

  13. Experience of the posterior lip augmentation device in a regional hip arthroplasty unit as a treatment for recurrent dislocation.

    PubMed

    Hoggett, L; Cross, C; Helm, T

    2017-12-01

    Dislocation after total hip arthroplasty (THA) remains a significant complication of the procedure and is the third leading cause for revision THA. One technique for treatment of this complication is the use of the posterior lip augmentation device (PLAD). We describe our experience using the PLAD, including complication rates. A retrospective review of 55 PLADs (54 patients) was carried out following identification from electronic theatre records. Basic patient demographics, operative records and radiographs were collected and reviewed, and data were analysed using Microsoft Excel. Failure of the PLAD was defined as further operative intervention after PLAD insertion and included: dislocation, implant breakage, infection and revision of the THA for loosening of either component. 55 PLADs were implanted in 54 patients with an average age of 77 years. There was a significant preponderance of females, and a variety of surgical approaches had been used for the original hip replacement, including trochanteric osteotomy, posterior and antero-lateral. 9 (16%) patients had recurrent dislocations, 1 (2%) failed secondary to screw breakage, 3 (5%) had an infection requiring intervention and 2 (4%) underwent further revision for aseptic loosening of the femoral component. The overall failure rate was 25%, with 14 patients requiring intervention post PLAD. Our results are inferior to other published results and indicate that the PLAD should be used with caution for recurrent dislocations of the Charnley hip replacement.

  14. Reconstruction of failed acetabular component in the presence of severe acetabular bone loss: a systematic review.

    PubMed

    Volpin, A; Konan, S; Biz, C; Tansey, R J; Haddad, F S

    2018-04-13

    Acetabular revision, especially in the presence of severe bone loss, is challenging. There is a paucity of literature critiquing contemporary techniques of revision acetabular reconstruction and their outcomes. The purpose of this study was to systematically review the literature and to report clinical outcomes and survival of contemporary acetabular revision arthroplasty techniques (tantalum metal shells, uncemented revision jumbo shells, reinforced cages and rings, oblong shells and custom-made triflange constructs). Full-text papers and those with an abstract in English published from January 2001 to January 2016 were identified through international databases. A total of 50 papers of level IV scientific evidence, comprising 2811 hips in total, fulfilled the inclusion criteria and were included. Overall, patients had improved outcomes irrespective of the technique of reconstruction, as documented by postoperative hip scores. Our pooled analysis suggests that oblong cup components had a lower failure rate compared with the other techniques considered in this review. Custom-made triflange cups had one of the highest failure rates; however, this may reflect the complexity of the revisions and the severity of bone loss. The most common postoperative complication reported in all groups was dislocation. This review confirms successful acetabular reconstructions using diverse techniques depending on the type of bone loss, and highlights key features and outcomes of the different techniques. In particular, oblong cups and tantalum shells have shown successful survivorship.

  15. Organ failure and tight glycemic control in the SPRINT study.

    PubMed

    Chase, J Geoffrey; Pretty, Christopher G; Pfeifer, Leesa; Shaw, Geoffrey M; Preiser, Jean-Charles; Le Compte, Aaron J; Lin, Jessica; Hewett, Darren; Moorhead, Katherine T; Desaive, Thomas

    2010-01-01

    Intensive care unit mortality is strongly associated with organ failure rate and severity. The sequential organ failure assessment (SOFA) score is used to evaluate the impact of a successful tight glycemic control (TGC) intervention (SPRINT) on organ failure, morbidity, and thus mortality. A retrospective analysis of 371 patients (3,356 days) on SPRINT (August 2005 - April 2007) and 413 retrospective patients (3,211 days) from two years prior, matched by Acute Physiology and Chronic Health Evaluation (APACHE) III. SOFA is calculated daily for each patient. The effect of the SPRINT TGC intervention is assessed by comparing the percentage of patients with SOFA ≤5 each day and its trends over time and cohort/group. Organ-failure free days (all SOFA components ≤2) and number of organ failures (SOFA components >2) are also compared. Cumulative time in 4.0 to 7.0 mmol/L band (cTIB) was evaluated daily to link tightness and consistency of TGC (cTIB ≥0.5) to SOFA ≤5 using conditional and joint probabilities. Admission and maximum SOFA scores were similar (P = 0.20; P = 0.76), with similar time to maximum (median: one day; IQR: [1,3] days; P = 0.99). Median length of stay was similar (4.1 days SPRINT and 3.8 days Pre-SPRINT; P = 0.94). The percentage of patients with SOFA ≤5 is different over the first 14 days (P = 0.016), rising to approximately 75% for Pre-SPRINT and approximately 85% for SPRINT, with clear separation after two days. Organ-failure-free days were different (SPRINT = 41.6%; Pre-SPRINT = 36.5%; P < 0.0001) as were the percent of total possible organ failures (SPRINT = 16.0%; Pre-SPRINT = 19.0%; P < 0.0001). By Day 3 over 90% of SPRINT patients had cTIB ≥0.5 (37% Pre-SPRINT) reaching 100% by Day 7 (50% Pre-SPRINT). Conditional and joint probabilities indicate tighter, more consistent TGC under SPRINT (cTIB ≥0.5) increased the likelihood SOFA ≤5.
SPRINT TGC resolved organ failure faster, and for more patients, from similar admission and maximum SOFA scores, than conventional control. These reductions mirror the reduced mortality with SPRINT. The cTIB ≥0.5 metric provides a first benchmark linking TGC quality to organ failure. These results support other physiological and clinical results indicating the role tight, consistent TGC can play in reducing organ failure, morbidity and mortality, and should be validated on data from randomised trials.

  16. Organ failure and tight glycemic control in the SPRINT study

    PubMed Central

    2010-01-01

    Introduction Intensive care unit mortality is strongly associated with organ failure rate and severity. The sequential organ failure assessment (SOFA) score is used to evaluate the impact of a successful tight glycemic control (TGC) intervention (SPRINT) on organ failure, morbidity, and thus mortality. Methods A retrospective analysis of 371 patients (3,356 days) on SPRINT (August 2005 - April 2007) and 413 retrospective patients (3,211 days) from two years prior, matched by Acute Physiology and Chronic Health Evaluation (APACHE) III. SOFA is calculated daily for each patient. The effect of the SPRINT TGC intervention is assessed by comparing the percentage of patients with SOFA ≤5 each day and its trends over time and cohort/group. Organ-failure free days (all SOFA components ≤2) and number of organ failures (SOFA components >2) are also compared. Cumulative time in 4.0 to 7.0 mmol/L band (cTIB) was evaluated daily to link tightness and consistency of TGC (cTIB ≥0.5) to SOFA ≤5 using conditional and joint probabilities. Results Admission and maximum SOFA scores were similar (P = 0.20; P = 0.76), with similar time to maximum (median: one day; IQR: [1,3] days; P = 0.99). Median length of stay was similar (4.1 days SPRINT and 3.8 days Pre-SPRINT; P = 0.94). The percentage of patients with SOFA ≤5 is different over the first 14 days (P = 0.016), rising to approximately 75% for Pre-SPRINT and approximately 85% for SPRINT, with clear separation after two days. Organ-failure-free days were different (SPRINT = 41.6%; Pre-SPRINT = 36.5%; P < 0.0001) as were the percent of total possible organ failures (SPRINT = 16.0%; Pre-SPRINT = 19.0%; P < 0.0001). By Day 3 over 90% of SPRINT patients had cTIB ≥0.5 (37% Pre-SPRINT) reaching 100% by Day 7 (50% Pre-SPRINT). Conditional and joint probabilities indicate tighter, more consistent TGC under SPRINT (cTIB ≥0.5) increased the likelihood SOFA ≤5. 
Conclusions SPRINT TGC resolved organ failure faster, and for more patients, from similar admission and maximum SOFA scores, than conventional control. These reductions mirror the reduced mortality with SPRINT. The cTIB ≥0.5 metric provides a first benchmark linking TGC quality to organ failure. These results support other physiological and clinical results indicating the role tight, consistent TGC can play in reducing organ failure, morbidity and mortality, and should be validated on data from randomised trials. PMID:20704712

  17. Reliability systems for implantable cardiac defibrillator batteries

    NASA Astrophysics Data System (ADS)

    Takeuchi, Esther S.

    The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.
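    The abstract does not state how the 0.005% per-test-month figure was derived; a common way to bound a constant random failure rate from failure-free test exposure is the zero-failure chi-squared upper confidence limit, sketched below as an assumption rather than the manufacturer's actual method (function name and figures are illustrative).

```python
from math import log

def max_failure_rate_zero_failures(device_months, confidence=0.95):
    """One-sided upper confidence bound on a constant random failure
    rate when zero failures were observed over `device_months` of
    cumulative testing. For zero failures the chi-squared bound
    reduces to -ln(1 - confidence) / T."""
    return -log(1.0 - confidence) / device_months

# Roughly 60,000 failure-free cell-test-months would support a bound
# near 0.005% (5e-5) per month at 95% confidence.
print(max_failure_rate_zero_failures(60000))
```

More test exposure tightens the bound proportionally, which is why such claims hinge on large accelerated and real-time sampling programs.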

  18. The Omega-3 fatty acids (Fish Oils) and Aspirin in Vascular access OUtcomes in REnal Disease (FAVOURED) study: the updated final trial protocol and rationale of post-initiation trial modifications.

    PubMed

    Viecelli, Andrea K; Pascoe, Elaine; Polkinghorne, Kevan R; Hawley, Carmel; Paul-Brent, Peta-Anne; Badve, Sunil V; Cass, Alan; Heritier, Stephane; Kerr, Peter G; Mori, Trevor A; Robertson, Amanda; Seong, Hooi L; Irish, Ashley B

    2015-06-27

    The FAVOURED study is an international multicentre, double-blind, placebo-controlled trial which commenced recruitment in 2008 and examines whether omega-3 polyunsaturated fatty acids (omega-3 PUFAs) either alone or in combination with aspirin will effectively reduce primary access failure of de novo arteriovenous fistulae (AVF) in patients with stage 4 and 5 chronic kidney disease. Publication of new evidence derived from additional studies of clopidogrel and a high screen failure rate due to prevalent aspirin usage prompted an updated trial design. The original trial protocol published in 2009 has undergone two major amendments, which were implemented in 2011. Firstly, the primary outcome 'early thrombosis' at 3 months following AVF creation was broadened to a more clinically relevant outcome of 'AVF access failure'; a composite of thrombosis, AVF abandonment and cannulation failure at 12 months. Secondly, participants unable to cease using aspirin were allowed to be enrolled and randomised to omega-3 PUFAs or placebo. The revised primary aim of the FAVOURED study is to test the hypothesis that omega-3 PUFAs will reduce rates of AVF access failure within 12 months following AVF surgery. The secondary aims are to examine the effect of omega-3 PUFAs and aspirin on the individual components of the primary end-point, to examine the safety of study interventions and assess central venous catheter requirement as a result of access failure. This multicentre international clinical trial was amended to address the clinically relevant question of whether the usability of de novo AVF at 12 months can be improved by the early use of omega-3 PUFAs and to a lesser extent aspirin. This study protocol amendment was made in response to a large trial demonstrating that clopidogrel is effective in safely preventing primary AVF thrombosis, but ineffective at increasing functional patency. 
Secondly, including patients taking aspirin will allow enrolment of a more representative cohort of haemodialysis patients, who are significantly older with a higher prevalence of cardiovascular disease and diabetes, which may increase event rates and the power of the study. Australia & New Zealand Clinical Trial Register (ACTRN12607000569404).

  19. Reliability analysis of a phaser measurement unit using a generalized fuzzy lambda-tau(GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Power consumption is increasing day by day. To meet failure-free power requirements, planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems, and its reliable performance assures failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for controllability and observability of power systems, utilizing available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In GFLT, the system components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, ranking of the system's critical components using the RAM-Index and sensitivity analysis have also been performed. The developed technique may help improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
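    As an illustration of the alpha-cut fuzzy arithmetic that lambda-tau style techniques rely on (triangular membership functions only; the paper also uses normal, Cauchy, sharp gamma and trapezoidal shapes), here is a minimal sketch with made-up failure rates. The OR-gate rule shown (top-event rate as the sum of input rates) is the standard lambda-tau OR expression; everything else is illustrative.

```python
def tri_alpha_cut(l, m, u, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number
    (l, m, u): alpha=0 gives the full support, alpha=1 the peak."""
    return (l + alpha * (m - l), u - alpha * (u - m))

def or_gate_rate(cuts):
    """Lambda-tau OR gate: the top-event failure rate is the sum of
    the input rates, so alpha-cut intervals add componentwise."""
    return (sum(c[0] for c in cuts), sum(c[1] for c in cuts))

# Two components with triangular fuzzy failure rates (per hour,
# made-up values); the 0.5-cut of the subsystem top-event rate:
lam1 = tri_alpha_cut(1e-4, 2e-4, 3e-4, 0.5)   # (1.5e-4, 2.5e-4)
lam2 = tri_alpha_cut(2e-4, 4e-4, 6e-4, 0.5)   # (3.0e-4, 5.0e-4)
print(or_gate_rate([lam1, lam2]))             # (4.5e-4, 7.5e-4)
```

Sweeping alpha from 0 to 1 and collecting the resulting intervals reconstructs the full membership function of the top-event rate, from which defuzzified reliability indices can be computed.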

  20. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    PubMed Central

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure. Only a limited number of condition indicators over a limited period can then be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these dilemmas, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume the degradation trajectory follows a particular distribution. The other is that it can adapt to variation in degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873

  1. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…

  2. The independence of irradiation creep in austenitic alloys of displacement rate and helium to dpa ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garner, F.A.; Toloczko, M.B.; Grossbeck, M.L.

    1997-04-01

    The majority of high-fluence data on the void swelling and irradiation creep of austenitic steels were generated at relatively high displacement rates and relatively low helium/dpa levels that are not characteristic of the conditions anticipated in ITER and other fusion environments. After reanalyzing the available data, this paper shows that irradiation creep is not directly sensitive to either the helium/dpa ratio or the displacement rate, other than through their possible influence on void swelling, since one component of the irradiation creep rate varies with no correlation to the instantaneous swelling rate. Until recently, however, the non-swelling-related creep component was also thought to exhibit its own strong dependence on displacement rate, increasing at lower fluxes. This perception originally arose from the work of Lewthwaite and Mosedale at temperatures in the 270-350 °C range. More recently this perception was thought to extend to higher irradiation temperatures. It now appears, however, that this interpretation is incorrect, and in fact the steady-state value of the non-swelling component of irradiation creep is actually insensitive to displacement rate. The perceived flux dependence appears to arise from a failure to properly interpret the impact of the transient regime of irradiation creep.

  3. Independent Orbiter Assessment (IOA): Analysis of the guidance, navigation, and control subsystem

    NASA Technical Reports Server (NTRS)

    Trahan, W. H.; Odonnell, R. A.; Pietz, K. C.; Hiott, J. M.

    1986-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results corresponding to the Orbiter Guidance, Navigation, and Control (GNC) Subsystem hardware are documented. The function of the GNC hardware is to respond to guidance, navigation, and control software commands to effect vehicle control and to provide sensor and controller data to GNC software. Some of the GNC hardware for which failure modes analysis was performed includes: hand controllers; Rudder Pedal Transducer Assembly (RPTA); Speed Brake Thrust Controller (SBTC); Inertial Measurement Unit (IMU); Star Tracker (ST); Crew Optical Alignment Sight (COAS); Air Data Transducer Assembly (ADTA); Rate Gyro Assemblies; Accelerometer Assembly (AA); Aerosurface Servo Amplifier (ASA); and Ascent Thrust Vector Control (ATVC). The IOA analysis process utilized available GNC hardware drawings, workbooks, specifications, schematics, and systems briefs for defining hardware assemblies, components, and circuits. Each hardware item was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.

  4. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, network complexity and high churn lead to high message loss rates. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed, and on this basis an adaptive failure detector (B-AFD) is proposed, which can meet quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
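    Chen's adaptive model, referenced in the abstract, estimates the next heartbeat arrival time and adds a safety margin before suspecting a peer. B-AFD itself is not specified here, so the sketch below is a generic heartbeat detector in that spirit; the class name, window size, and margin are illustrative assumptions, not the paper's design.

```python
from collections import deque

class AdaptiveFailureDetector:
    """Heartbeat failure detector in the spirit of Chen's adaptive
    model: the suspicion deadline is the estimated next arrival
    (mean inter-arrival over a sliding window, added to the last
    arrival) plus a safety margin alpha."""

    def __init__(self, window=100, alpha=0.5):
        self.arrivals = deque(maxlen=window)  # recent heartbeat times
        self.alpha = alpha                    # safety margin (seconds)

    def heartbeat(self, t):
        self.arrivals.append(t)

    def deadline(self):
        """Time after which the peer is suspected, or None if there
        are too few samples to estimate the sending period."""
        if len(self.arrivals) < 2:
            return None
        a = list(self.arrivals)
        gaps = [b - c for b, c in zip(a[1:], a[:-1])]
        period = sum(gaps) / len(gaps)
        return a[-1] + period + self.alpha

    def suspect(self, now):
        d = self.deadline()
        return d is not None and now > d
```

Tuning alpha trades detection time against false suspicions; a retransmission-based baseline strategy changes this QoS trade-off, which is the gap the paper's evaluation model addresses.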

  5. Compression Strength of Composite Primary Structural Components

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.

    1998-01-01

    Research conducted under NASA Grant NAG-1-537 focussed on the response and failure of advanced composite material structures for application to aircraft. Both experimental and analytical methods were utilized to study the fundamental mechanics of the response and failure of selected structural components subjected to quasi-static loads. Most of the structural components studied were thin-walled elements subject to compression, such that they exhibited buckling and postbuckling responses prior to catastrophic failure. Consequently, the analyses were geometrically nonlinear. Structural components studied were dropped-ply laminated plates, stiffener crippling, pressure pillowing of orthogonally stiffened cylindrical shells, axisymmetric response of pressure domes, and the static crush of semi-circular frames. Failure of these components motivated analytical studies on an interlaminar stress postprocessor for plate and shell finite element computer codes, and global/local modeling strategies in finite element modeling. These activities are summarized in the following section. References to literature published under the grant are listed on pages 5 to 10 by a letter followed by a number under the categories of journal publications, conference publications, presentations, and reports. These references are indicated in the text by their letter and number as a superscript.

  6. Stress corrosion cracking properties of 15-5PH steel

    NASA Technical Reports Server (NTRS)

    Rosa, Ferdinand

    1993-01-01

    Unexpected occurrences of failures due to stress corrosion cracking (SCC) of structural components indicate a need for improved characterization of materials and more advanced analytical procedures for reliably predicting structural performance. Accordingly, the purpose of this study was to determine the stress corrosion susceptibility of 15-5PH steel over a wide range of applied strain rates in a highly corrosive environment. The selected environment for this investigation was a highly acidified sodium chloride (NaCl) aqueous solution. The alloy selected for the study was 15-5PH steel in the H900 condition. The slow strain rate technique was selected to test the metal specimens.

  7. Mechanical behavior of precipitation hardenable steels exposed to highly corrosive environment

    NASA Technical Reports Server (NTRS)

    Rosa, Ferdinand

    1994-01-01

    Unexpected occurrences of failures due to stress corrosion cracking (SCC) of structural components indicate a need for improved characterization of materials and more advanced analytical procedures for reliably predicting structural performance. Accordingly, the purpose of this study was to determine the stress corrosion susceptibility of 15-5PH steel over a wide range of applied strain rates in a highly corrosive environment. The selected environment for this investigation was a 3.5 percent NaCl aqueous solution. The material selected for the study was 15-5PH steel in the H900 condition. The slow strain rate technique was used to test the metallic specimens.

  8. Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana Kelly; Song-Hua Shen; Gary DeMoss

    2010-06-01

    Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components are assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.
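    The Basic Parameter Model generalizes simpler parametric CCF models; its single-parameter special case, the beta-factor model, is enough to show why leaving common-cause probabilities at baseline understates risk for redundant trains. This is a hedged sketch with illustrative numbers, not the paper's proposed framework:

```python
def beta_factor_split(q_total, beta):
    """Beta-factor common-cause model: a component's total failure
    probability q_total splits into an independent part and a
    common-cause part that fails the whole group at once."""
    q_ccf = beta * q_total
    q_ind = (1.0 - beta) * q_total
    return q_ind, q_ccf

def two_train_unavailability(q_total, beta):
    """P(both redundant trains fail) = both failing independently,
    plus a single common-cause event taking out both."""
    q_ind, q_ccf = beta_factor_split(q_total, beta)
    return q_ind * q_ind + q_ccf

# Illustrative values: q_total = 1e-3 per demand, beta = 0.05.
print(two_train_unavailability(1e-3, 0.05))
# The common-cause term (5e-5) dominates the independent
# term (~9e-7), so ignoring CCF conditioning after an observed
# failure can understate risk by orders of magnitude.
```

This is exactly the effect the abstract warns about: after one component in the group is observed failed, the conditional probability that its partners are also failed must be updated, not left at its baseline value.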

  9. 16 CFR 1207.5 - Design.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...

  10. 16 CFR 1207.5 - Design.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... pool slide shall be such that no structural failures of any component part shall cause failures of any... such fasteners shall not cause a failure of the tread under the ladder loading conditions specified in... without failure or permanent deformation. (d) Handrails. Swimming pool slide ladders shall be equipped...

  11. Semiparametric modeling and estimation of the terminal behavior of recurrent marker processes before failure events.

    PubMed

    Chan, Kwun Chuen Gary; Wang, Mei-Cheng

    2017-01-01

    Recurrent event processes with marker measurements are largely studied with forward-time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward-time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.

  12. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  13. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  14. A retrospective survey of the causes of bracket- and tube-bonding failures.

    PubMed

    Roelofs, Tom; Merkens, Nico; Roelofs, Jeroen; Bronkhorst, Ewald; Breuning, Hero

    2017-01-01

To investigate the causes of bonding failures of orthodontic brackets and tubes and the effect of premedicating for saliva reduction. Premedication with atropine sulfate was administered randomly. The failure rate of brackets and tubes placed in a group of 158 consecutive patients was evaluated over a mean period of 67 weeks after bonding. The failure rate in the group without atropine sulfate premedication was 2.4%. In the group with premedication, the failure rate was 2.7%. The Cox regression analysis of these groups showed that atropine application did not lead to a reduction in bond failures. Statistically significant differences in the hazard ratio were found for the bracket regions and for the dental assistants who prepared for the bonding procedure. Premedication did not lead to fewer bracket failures. The roles of the dental assistant and patient in preventing failures were relevant. A significantly higher failure rate for orthodontic appliances was found in the posterior regions.

  15. Leak Rate Performance of Silicone Elastomer O-Rings Contaminated with JSC-1A Lunar Regolith Simulant

    NASA Technical Reports Server (NTRS)

    Oravec, Heather Ann; Daniels, Christopher C.

    2014-01-01

Contamination of spacecraft components with planetary and foreign object debris is a growing concern. Face seals separating the spacecraft cabin from the debris filled environment are particularly susceptible; if the seal becomes contaminated there is potential for decreased performance, mission failure, or catastrophe. In this study, silicone elastomer O-rings were contaminated with JSC-1A lunar regolith and their leak rate performance was evaluated. The leak rate values of contaminated O-rings at four levels of seal compression were compared to those of as-received, uncontaminated, O-rings. The results showed a drastic increase in leak rate after contamination. JSC-1A contaminated O-rings led to immeasurably high leak rate values for all levels of compression except complete closure. Additionally, a mechanical method of simulant removal was examined. In general, this method returned the leak rate to as-received values.

  16. Advanced Self-Calibrating, Self-Repairing Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)

    2002-01-01

An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.
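
The MTBF-based spare sizing mentioned in this abstract can be sketched as a Poisson calculation (an illustrative assumption of a constant failure rate, not the patented method; the function name is hypothetical):

```python
import math

def spares_needed(mtbf_hours, mission_hours, assurance=0.95):
    """Smallest spare count n such that P(failures <= n) >= assurance,
    assuming failures follow a Poisson process with rate 1/MTBF."""
    lam = mission_hours / mtbf_hours   # expected failures over the mission
    term = math.exp(-lam)              # P(N = 0)
    n, cum = 0, term
    while cum < assurance:
        n += 1
        term *= lam / n                # P(N = n) from P(N = n - 1)
        cum += term
    return n
```

For example, a component with a 10,000-hour MTBF on a 1,000-hour mission (0.1 expected failures) needs a single spare at 95% assurance, while five expected failures call for nine spares.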

  17. An overview of fatigue failures at the Rocky Flats Wind System Test Center

    NASA Technical Reports Server (NTRS)

    Waldon, C. A.

    1981-01-01

Potential small wind energy conversion system (SWECS) design problems were identified to improve product quality and reliability. Mass produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.

  18. Effectiveness and predictors of failure of noninvasive mechanical ventilation in acute respiratory failure.

    PubMed

    Martín-González, F; González-Robledo, J; Sánchez-Hernández, F; Moreno-García, M N; Barreda-Mellado, I

    2016-01-01

To assess the effectiveness and identify predictors of failure of noninvasive ventilation. A retrospective, longitudinal descriptive study was made. Adult patients with acute respiratory failure. A total of 410 consecutive patients treated with noninvasive ventilation in an Intensive Care Unit of a tertiary university hospital from 2006 to 2011. Noninvasive ventilation. Demographic variables and clinical and laboratory test parameters at the start and two hours after the start of noninvasive ventilation. Evolution during admission to the Unit and until hospital discharge. The failure rate was 50%, with an overall mortality rate of 33%. A total of 156 patients had hypoxemic respiratory failure, 87 postextubation respiratory failure, 78 exacerbation of chronic obstructive pulmonary disease, 61 hypercapnic respiratory failure without chronic obstructive pulmonary disease, and 28 had acute pulmonary edema. The failure rates were 74%, 54%, 27%, 31% and 21%, respectively. The etiology of respiratory failure, serum bilirubin at the start, APACHE II score, radiological findings, the need for sedation to tolerate noninvasive ventilation, changes in level of consciousness, PaO2/FIO2 ratio, respiratory rate and heart rate from the start and two hours after the start of noninvasive ventilation were independently associated with failure. The effectiveness of noninvasive ventilation varies according to the etiology of respiratory failure. Its use in hypoxemic respiratory failure and postextubation respiratory failure should be assessed individually. Predictors of failure could be useful to prevent delayed intubation. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.

  19. Influence of Composition and Deformation Conditions on the Strength and Brittleness of Shale Rock

    NASA Astrophysics Data System (ADS)

    Rybacki, E.; Reinicke, A.; Meier, T.; Makasi, M.; Dresen, G. H.

    2015-12-01

    Stimulation of shale gas reservoirs by hydraulic fracturing operations aims to increase the production rate by increasing the rock surface connected to the borehole. Prospective shales are often believed to display high strength and brittleness to decrease the breakdown pressure required to (re-) initiate a fracture as well as slow healing of natural and hydraulically induced fractures to increase the lifetime of the fracture network. Laboratory deformation tests were performed on several, mainly European black shales with different mineralogical composition, porosity and maturity at ambient and elevated pressures and temperatures. Mechanical properties such as compressive strength and elastic moduli strongly depend on shale composition, porosity, water content, structural anisotropy, and on pressure (P) and temperature (T) conditions, but less on strain rate. We observed a transition from brittle to semibrittle deformation at high P-T conditions, in particular for high porosity shales. At given P-T conditions, the variation of compressive strength and Young's modulus with composition can be roughly estimated from the volumetric proportion of all components including organic matter and pores. We determined also brittleness index values based on pre-failure deformation behavior, Young's modulus and bulk composition. At low P-T conditions, where samples showed pronounced post-failure weakening, brittleness may be empirically estimated from bulk composition or Young's modulus. Similar to strength, at given P-T conditions, brittleness depends on the fraction of all components and not the amount of a specific component, e.g. clays, alone. Beside strength and brittleness, knowledge of the long term creep properties of shales is required to estimate in-situ stress anisotropy and the healing of (propped) hydraulic fractures.

  20. A Mixed-Mode (I-II) Fracture Criterion for AS4/8552 Carbon/Epoxy Composite Laminate

    NASA Astrophysics Data System (ADS)

    Karnati, Sidharth Reddy

A majority of aerospace structures are subjected to bending and stretching loads that introduce peel and shear stresses between the plies of a composite laminate. These two stress components cause a combination of mode I and II fracture modes in the matrix layer of the composite laminate. The most common failure mode in laminated composites is delamination, which affects the structural integrity of composite structures. Damage tolerant designs of structures require two types of materials data: mixed-mode (I-II) delamination fracture toughness that predicts failure and delamination growth rate that predicts the life of the structural component. This research focuses on determining mixed-mode (I-II) fracture toughness under a combination of mode I and mode II stress states and then establishing a fracture criterion for AS4/8552 composite laminate, which is widely used in general aviation. The AS4/8552 prepreg was supplied by Hexcel Corporation and autoclave fabricated into a 20-ply unidirectional laminate with an artificial delamination by a Fluorinated Ethylene Propylene (FEP) film at the mid-plane. Standard split beam specimens were prepared and tested in double cantilever beam (DCB) and end notched flexure modes to determine mode I (GIC) and II (GIIC) fracture toughnesses, respectively. The DCB specimens were also tested in a modified mixed-mode bending apparatus at GIIm /GT ratios of 0.18, 0.37, 0.57 and 0.78, where GT is total and GIIm is the mode II component of energy release rates. The measured fracture toughness, GC, was found to follow the locus of a power-law equation. The equation was validated for the present and literature experimental data.
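
The power-law locus referred to in this abstract is conventionally written in the following form (a common mixed-mode criterion; the exponents m and n are fitted to the mixed-mode data, and no values from this source are implied):

```latex
\left( \frac{G_I}{G_{IC}} \right)^{m}
  + \left( \frac{G_{II}}{G_{IIC}} \right)^{n} = 1,
\qquad G_T = G_I + G_{II}
```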

  1. Treatment Failure With Rhythm and Rate Control Strategies in Patients With Atrial Fibrillation and Congestive Heart Failure: An AF-CHF Substudy.

    PubMed

    Dyrda, Katia; Roy, Denis; Leduc, Hugues; Talajic, Mario; Stevenson, Lynne Warner; Guerra, Peter G; Andrade, Jason; Dubuc, Marc; Macle, Laurent; Thibault, Bernard; Rivard, Lena; Khairy, Paul

    2015-12-01

Rate and rhythm control strategies for atrial fibrillation (AF) are not always effective or well tolerated in patients with congestive heart failure (CHF). We assessed reasons for treatment failure, associated characteristics, and effects on survival. A total of 1,376 patients enrolled in the AF-CHF trial were followed for 37 ± 19 months, 206 (15.0%) of whom failed initial therapy leading to crossover. Rhythm control was abandoned more frequently than rate control (21.0% vs. 9.1%, P < 0.0001). Crossovers from rhythm to rate control were driven by inefficacy, whereas worsening heart failure was the most common reason to crossover from rate to rhythm control. In multivariate analyses, failure of rhythm control was associated with female sex, higher serum creatinine, functional class III or IV symptoms, lack of digoxin, and oral anticoagulation. Factors independently associated with failure of rate control were paroxysmal (vs. persistent) AF, statin therapy, and presence of an implantable cardioverter-defibrillator. Crossovers were not associated with cardiovascular mortality (hazard ratio [HR] 1.11 from rhythm to rate control; 95% confidence interval [95% CI, 0.73-1.73]; P = 0.6069; HR 1.29 from rate to rhythm control; 95% CI, 0.73-2.25; P = 0.3793) or all-cause mortality (HR 1.16 from rhythm to rate control, 95% CI [0.79-1.72], P = 0.4444; HR 1.15 from rate to rhythm control, 95% CI [0.69-1.91], P = 0.5873). Rhythm control is abandoned more frequently than rate control in patients with AF and CHF. The most common reasons for treatment failure are inefficacy for rhythm control and worsening heart failure for rate control. Changing strategies does not impact survival. © 2015 Wiley Periodicals, Inc.

  2. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
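
The kind of Bayesian updating DATMAN automates can be illustrated with the conjugate Gamma-Poisson case (a minimal sketch only; DATMAN supports several prior/posterior families, and the function name here is hypothetical):

```python
def update_failure_rate(alpha, beta, new_failures, new_hours):
    """Conjugate Bayesian update of a Gamma(alpha, beta) prior on a
    constant failure rate (failures per hour), given a Poisson-distributed
    failure count observed over new_hours of additional operation."""
    alpha_post = alpha + new_failures   # shape grows with observed failures
    beta_post = beta + new_hours        # rate grows with exposure time
    mean = alpha_post / beta_post       # posterior mean failure rate
    return alpha_post, beta_post, mean
```

For example, a Gamma(1, 1000 h) prior updated with 2 failures over 9,000 hours yields a posterior mean rate of 3 × 10⁻⁴ failures per hour.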

  3. New type of hip arthroplasty failure related to modular femoral components: breakage at the neck-stem junction.

    PubMed

    Wodecki, P; Sabbah, D; Kermarrec, G; Semaan, I

    2013-10-01

    Total hip replacements (THR) with modular femoral components (stem-neck interface) make it possible to adapt to extramedullary femoral parameters (anteversion, offset, and length) theoretically improving muscle function and stability. Nevertheless, adding a new interface has its disadvantages: reduced mechanical resistance, fretting corrosion and material fatigue fracture. We report the case of a femoral stem fracture of the female part of the component where the modular morse taper of the neck is inserted. An extended trochanteric osteotomy was necessary during revision surgery because the femoral stump could not be grasped for extraction, so that a long stem had to be used. In this case, the patient had the usual risk factors for modular neck failure: he was an active overweight male patient with a long varus neck. This report shows that the female part of the stem of a small femoral component may also be at increased failure risk and should be added to the list of risk factors. To our knowledge, this is the first reported case of this type of failure. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  4. Loose glenoid components in revision shoulder arthroplasty: is there an association with positive cultures?

    PubMed

    Lucas, Robert M; Hsu, Jason E; Whitney, Ian J; Wasserburger, Jory; Matsen, Frederick A

    2016-08-01

    Glenoid loosening is one of the most common causes of total shoulder failure. High rates of positive cultures of Propionibacterium and coagulase-negative staphylococcus have been found among shoulders having surgical revision for glenoid loosening. This study reviewed the culture results in a series of surgical revisions for failed total shoulder arthroplasty to determine the relationship between glenoid loosening and positive cultures. The medical records of 221 patients without obvious evidence of infection who underwent revision total shoulder arthroplasty were reviewed to examine the association between the security of fixation of the glenoid component and the results of cultures obtained at revision surgery. Of the revised shoulders, 53% had positive cultures; 153 of the shoulders (69%) had a loose glenoid component, whereas 68 (31%) had secure glenoid component fixation. Of the 153 loose glenoid components, 82 (54%) had at least 1 positive culture and 44 (29%) had 2 or more positive cultures of the same microorganism. Similarly, of the 68 secure glenoid components, 35 (51%) had at least 1 positive culture (P = .77) and 14 (21%) had 2 or more positive cultures of the same microorganism (P = .25). Explanted glenoid components that were loose had a higher rate of culture positivity (56% [24/43]) in comparison to explanted glenoid components that were well fixed (13% [1/8]) (P = .05). Propionibacterium and coagulase-negative staphylococcus are commonly recovered in revision shoulder arthroplasty, whether or not the glenoid components are loose. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  5. The ac propulsion system for an electric vehicle, phase 1

    NASA Astrophysics Data System (ADS)

    Geppert, S.

    1981-08-01

    A functional prototype of an electric vehicle ac propulsion system was built consisting of a 18.65 kW rated ac induction traction motor, pulse width modulated (PWM) transistorized inverter, two speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design developmental steps, and test results of individual components and the complex system on an instrumented test frame are described. Computer models were developed for the inverter, motor and a representative vehicle. A preliminary reliability model and failure modes effects analysis are given.

  6. The ac propulsion system for an electric vehicle, phase 1

    NASA Technical Reports Server (NTRS)

    Geppert, S.

    1981-01-01

    A functional prototype of an electric vehicle ac propulsion system was built consisting of a 18.65 kW rated ac induction traction motor, pulse width modulated (PWM) transistorized inverter, two speed mechanically shifted automatic transmission, and an overall drive/vehicle controller. Design developmental steps, and test results of individual components and the complex system on an instrumented test frame are described. Computer models were developed for the inverter, motor and a representative vehicle. A preliminary reliability model and failure modes effects analysis are given.

  7. Availability analysis of an HTGR fuel recycle facility. Summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharmahd, J.N.

    1979-11-01

    An availability analysis of reprocessing systems in a high-temperature gas-cooled reactor (HTGR) fuel recycle facility was completed. This report summarizes work done to date to define and determine reprocessing system availability for a previously planned HTGR recycle reference facility (HRRF). Schedules and procedures for further work during reprocessing development and for HRRF design and construction are proposed in this report. Probable failure rates, transfer times, and repair times are estimated for major system components. Unscheduled down times are summarized.

  8. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
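
The failure-and-repair simulation idea behind PV-RPM can be sketched as a small Monte Carlo loop (an illustrative simplification assuming exponential failure and repair times for a single component; PV-RPM itself supports multiple distribution families, and all names here are hypothetical):

```python
import random

def simulate_availability(mtbf, mttr, horizon, n_trials=2000, seed=0):
    """Monte Carlo estimate of the average availability of one component
    with exponentially distributed times to failure (mean mtbf) and
    repair durations (mean mttr), over a fixed simulation horizon."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_trials):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = rng.expovariate(1.0 / mtbf)   # time to next failure
            up += min(ttf, horizon - t)         # credit uptime within horizon
            t += ttf
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)    # repair outage
        total_up += up
    return total_up / (n_trials * horizon)
```

With mtbf = 1000 and mttr = 100, the estimate converges toward the steady-state value mtbf / (mtbf + mttr) ≈ 0.91.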

  9. Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.

    PubMed

    Fu, H; Darroch, J E; Haas, T; Ranjit, N

    1999-01-01

    Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. 
Levels of contraceptive failure vary widely by method, as well as by personal and background characteristics. Income's strong influence on contraceptive failure suggests that access barriers and the general disadvantage associated with poverty seriously impede effective contraceptive practice in the United States.
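
As a toy illustration of how a per-month risk compounds into the 12-month failure rates discussed above (a deliberate simplification; the survey itself used hazards models with corrections for abortion underreporting, and the function name is illustrative):

```python
def twelve_month_failure_rate(monthly_failure_prob):
    """Probability of at least one failure within 12 months, assuming a
    constant, independent per-month failure probability."""
    return 1.0 - (1.0 - monthly_failure_prob) ** 12
```

For instance, a 1% monthly failure probability compounds to roughly an 11.4% failure rate over the first year of use.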

  10. Autonomous Component Health Management with Failed Component Detection, Identification, and Avoidance

    NASA Technical Reports Server (NTRS)

    Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.

    2004-01-01

This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a third-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is: autonomous; adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.
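
The prefilter's core idea of gating out failed redundant sensors before state estimation can be sketched as a simple innovation test (a much-simplified illustration, not the actual ACHM prefilter or Kalman filter; all names are hypothetical):

```python
def detect_failed_sensor(predicted, measurements, threshold):
    """Flag redundant sensors whose innovation (measurement minus the
    model-based prediction) exceeds a threshold, and fuse only the
    surviving measurements into a preliminary state estimate."""
    good = [m for m in measurements if abs(m - predicted) <= threshold]
    failed = [i for i, m in enumerate(measurements)
              if abs(m - predicted) > threshold]
    # Fall back on the model prediction if every sensor is flagged.
    estimate = sum(good) / len(good) if good else predicted
    return estimate, failed
```

A hard-over failure on one of three redundant sensors, for example, is excluded from the fused estimate while its index is reported for avoidance.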

  11. Strain Rate Sensitivity of Epoxy Resin in Tensile and Shear Loading

    NASA Technical Reports Server (NTRS)

    Gilat, Amos; Goldberg, Robert K.; Roberts, Gary D.

    2005-01-01

    The mechanical response of E-862 and PR-520 resins is investigated in tensile and shear loadings. At both types of loading the resins are tested at strain rates of about 5x10(exp 5), 2, and 450 to 700 /s. In addition, dynamic shear modulus tests are carried out at various frequencies and temperatures, and tensile stress relaxation tests are conducted at room temperature. The results show that the toughened PR-520 resin can carry higher stresses than the untoughened E-862 resin. Strain rate has a significant effect on the response of both resins. In shear both resins show a ductile response with maximum stress that is increasing with strain rate. In tension a ductile response is observed at low strain rate (approx. 5x10(exp 5) /s), and brittle response is observed at the medium and high strain rates (2, and 700 /s). The hydrostatic component of the stress in the tensile tests causes premature failure in the E-862 resin. Localized deformation develops in the PR-520 resin when loaded in shear. An internal state variable constitutive model is proposed for modeling the response of the resins. The model includes a state variable that accounts for the effect of the hydrostatic component of the stress on the deformation.

  12. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  13. Auxiliary feedwater system risk-based inspection guide for the Salem Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Pugh, R.; Gore, B.F.; Vo, T.V.

In a study for the US Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory has developed and applied a methodology for deriving plant-specific risk-based inspection guidance for the auxiliary feedwater (AFW) system at pressurized water reactors that have not undergone probabilistic risk assessment (PRA). This methodology uses existing PRA results and plant operating experience information. Existing PRA-based inspection guidance information recently developed for the NRC for various plants was used to identify generic component failure modes. This information was then combined with plant-specific and industry-wide component information and failure data to identify failure modes and failure mechanisms for the AFW system at the selected plants. Salem was selected as the fifth plant for study. The product of this effort is a prioritized listing of AFW failures which have occurred at the plant and at other PWRs. This listing is intended for use by NRC inspectors in the preparation of inspection plans addressing AFW risk-important components at the Salem plant. 23 refs., 1 fig., 1 tab.

  14. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
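
The product-form utility described in this abstract can be sketched as follows (an illustrative stand-in only: the survival model `p_survival_fn` and the cost-discount form are assumptions for the example, not the paper's actual functions):

```python
def provider_utility(p_survival_fn, cost_per_reinforce, x, y):
    """Product-form provider utility: an infrastructure survival
    probability term times a cost term, where x components are
    reinforced by the provider and y are attacked."""
    survival = p_survival_fn(x, y)                # survival probability term
    cost = 1.0 / (1.0 + cost_per_reinforce * x)   # illustrative cost discount
    return survival * cost
```

With a toy linear survival model, reinforcing more components raises the survival term while the cost term shrinks, which is the trade-off the equilibrium conditions balance.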

  15. Comparison of Sprint Fidelis and Riata defibrillator lead failure rates.

    PubMed

    Fazal, Iftikhar A; Shepherd, Ewen J; Tynan, Margaret; Plummer, Christopher J; McComb, Janet M

    2013-09-30

    Sprint Fidelis and Riata defibrillator leads are prone to early failure. Few data exist on the comparative failure rates and mortality related to lead failure. The aims of this study were to determine the failure rate of Sprint Fidelis and Riata leads, and to compare failure rates and mortality rates in both groups. Patients implanted with Sprint Fidelis leads and Riata leads at a single centre were identified and in July 2012, records were reviewed to ascertain lead failures, deaths, and relationship to device/lead problems. 113 patients had Sprint Fidelis leads implanted between June 2005 and September 2007; Riata leads were implanted in 106 patients between January 2003 and February 2008. During 53.0 ± 22.3 months of follow-up there were 13 Sprint Fidelis lead failures (11.5%, 2.60% per year) and 25 deaths. Mean time to failure was 45.1 ± 15.5 months. In the Riata lead cohort there were 32 deaths, and 13 lead failures (11.3%, 2.71% per year) over 54.8 ± 26.3 months follow-up with a mean time to failure of 53.5 ± 24.5 months. There were no significant differences in the lead failure-free Kaplan-Meier survival curve (p=0.77), deaths overall (p=0.17), or deaths categorised as sudden/cause unknown (p=0.54). Sprint Fidelis and Riata leads have a significant but comparable failure rate at 2.60% per year and 2.71% per year of follow-up respectively. The number of deaths in both groups is similar and no deaths have been identified as being related to lead failure in either cohort. Copyright © 2012. Published by Elsevier Ireland Ltd.
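The annualized rates quoted above can be reproduced with a crude person-time calculation (cumulative proportion failed divided by mean follow-up), using the cohort numbers from the abstract. This is a sketch, not a Kaplan-Meier analysis, so small rounding differences from the published figures are expected.

```python
def annual_failure_rate(n_failures, n_patients, mean_followup_months):
    """Crude annualized failure rate (percent per year): cumulative
    proportion failed divided by mean follow-up in years."""
    proportion = n_failures / n_patients
    years = mean_followup_months / 12.0
    return 100.0 * proportion / years

fidelis = annual_failure_rate(13, 113, 53.0)   # Sprint Fidelis cohort
riata = annual_failure_rate(13, 106, 54.8)     # Riata cohort
```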

  16. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
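The isolation reports above rest on a standard testability idea: each failure mode has a "signature", the set of tests that detect it, and modes sharing an identical signature form an ambiguity group that no test can tell apart. A minimal sketch with a hypothetical detection matrix (the test and failure-mode names are invented, not TEAMS Designer output):

```python
from collections import defaultdict

# Hypothetical detection matrix: test name -> failure modes it detects.
detects = {
    "T1": {"FM_pump", "FM_valve"},
    "T2": {"FM_valve"},
    "T3": {"FM_sensor", "FM_leak"},
}
modes = {"FM_pump", "FM_valve", "FM_sensor", "FM_leak", "FM_seal"}

# Group failure modes by their test signature.
signatures = defaultdict(set)
for fm in modes:
    sig = frozenset(t for t, det in detects.items() if fm in det)
    signatures[sig].add(fm)

# Modes sharing a signature cannot be isolated from one another.
ambiguity_groups = [sorted(g) for g in signatures.values() if len(g) > 1]
# Modes detected by no test at all are undetectable.
undetected = sorted(signatures.get(frozenset(), set()))
```

Here `FM_sensor` and `FM_leak` share the signature {T3} and so are mutually ambiguous, while `FM_seal` is undetectable.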

  17. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
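A common mathematical reliability growth model of the kind mentioned above is the Crow-AMSAA (power-law NHPP) model, in which cumulative failures follow N(t) = λt^β and a growth parameter β < 1 indicates a decreasing failure intensity. A sketch of the time-truncated maximum-likelihood fit, using invented failure times rather than shuttle or ISS data:

```python
import math

def crow_amsaa_mle(failure_times, T):
    """Crow-AMSAA (power-law NHPP) MLE, time-truncated at T.
    Returns (lambda, beta); beta < 1 indicates reliability growth."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

# Illustrative failure times (hours) with widening gaps between failures
times = [50, 120, 300, 700, 1500]
lam, beta = crow_amsaa_mle(times, T=2000.0)
```

The widening inter-failure gaps yield β ≈ 0.51, i.e. a failure intensity that falls with operating time, which is the signature of reliability growth.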

  18. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua

    2014-11-01

    Passive systems, structures and components (SSCs) degrade over their operational life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, so the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP5 [4].
The overall methodology aims to:
• Address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7.
• Identify the risk-significant passive components, their failure modes, and anticipated rates of degradation.
• Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress.
• Assess aging effects in a dynamic simulation environment.
1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, “Incorporating Ageing Effects into Probabilistic Risk Assessment – A Feasibility Study Utilizing Reliability Physics Models,” NUREG/CR-5632, USNRC, (2001).
2. T. ALDEMIR, “A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants,” Annals of Nuclear Energy, 52, 113-124, (2013).
3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, “Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report,” INL/EXT-12-27351, (2012).
4. D. ANDERS et al., “RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7,” INL/EXT-12-25924, (2012).
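The interaction between aging and surveillance can be sketched with a highly simplified Monte Carlo stand-in for the framework described above, assuming Weibull wear-out (shape > 1) and idealized as-good-as-new renewal at each inspection. None of this is the RAVEN/RELAP-7 implementation; it only illustrates why maintenance activities belong in the aging model.

```python
import math
import random

def weibull_sample(eta, beta, rng):
    """Inverse-CDF draw from a Weibull(scale eta, shape beta) lifetime."""
    return eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)

def mission_failure_prob(eta, beta, mission, inspect_every, n=20000, seed=1):
    """Monte Carlo estimate of the probability that an aging (beta > 1)
    component fails during the mission, when each inspection renews it
    to as-good-as-new (an idealized maintenance model)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        clock = 0.0
        while clock < mission:
            life = weibull_sample(eta, beta, rng)
            if life < min(inspect_every, mission - clock):
                failures += 1
                break
            clock += inspect_every   # survived to the next inspection
    return failures / n

# Same component, with and without periodic inspection (illustrative units)
p_no_maint = mission_failure_prob(eta=10.0, beta=3.0, mission=8.0,
                                  inspect_every=1e9)   # never inspected
p_maint = mission_failure_prob(eta=10.0, beta=3.0, mission=8.0,
                               inspect_every=2.0)
```

With wear-out aging, frequent renewal sharply lowers the mission failure probability, which is exactly the effect a purely generic failure rate cannot capture.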

  19. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    NASA Astrophysics Data System (ADS)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support inhibits the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using comparison criteria of goodness-of-fit. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
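The kind of goodness-of-fit comparison mentioned above can be sketched by fitting two candidate inter-failure-time models, a constant-hazard (exponential) model and a Weibull model, and comparing them with AIC. AIC is an assumed stand-in here (the paper's own criteria are not reproduced), and the failure-time data are invented.

```python
import math

def exp_fit(data):
    """Constant-hazard (exponential) MLE log-likelihood; 1 parameter."""
    lam = len(data) / sum(data)
    ll = sum(math.log(lam) - lam * x for x in data)
    return ll, 1

def weibull_fit(data):
    """Weibull MLE by a coarse grid over the shape beta; for fixed beta
    the scale MLE is eta = (mean of x**beta) ** (1/beta); 2 parameters."""
    n = len(data)
    best = -float("inf")
    for i in range(2, 80):
        beta = i / 20.0
        eta = (sum(x ** beta for x in data) / n) ** (1.0 / beta)
        ll = sum(math.log(beta / eta) + (beta - 1.0) * math.log(x / eta)
                 - (x / eta) ** beta for x in data)
        best = max(best, ll)
    return best, 2

def aic(ll, k):
    """Akaike information criterion: lower is better."""
    return 2.0 * k - 2.0 * ll

# Hypothetical inter-failure times (days) with growing gaps
gaps = [0.2, 0.3, 0.5, 1.1, 2.4, 5.0]
ll_e, k_e = exp_fit(gaps)
ll_w, k_w = weibull_fit(gaps)
```

Because the Weibull family nests the exponential (shape = 1 is on the grid), its log-likelihood can never be worse; AIC then penalizes the extra parameter, which is the usual trade-off such comparison criteria formalize.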

  20. A probabilistic based failure model for components fabricated from anisotropic graphite

    NASA Astrophysics Data System (ADS)

    Xiao, Chengfeng

    The nuclear moderators for high temperature nuclear reactors are fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data. Aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants.
A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component fabricated from graphite. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method. Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can (and have been) characterized by multiaxial failure tests. 
Comparisons of model predictions with failure data is made and a brief comparison is made to reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
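Monte Carlo with importance sampling, the numerical method selected above, can be sketched on a scalar stand-in problem: estimating a small normal tail probability instead of the multi-dimensional graphite reliability integral. The proposal distribution, threshold, and sample size are all illustrative assumptions.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def failure_prob_importance(threshold=4.0, n=50000, seed=7):
    """Estimate P(Z > threshold) for standard normal Z by sampling from a
    proposal centred on the failure region and reweighting each hit by the
    likelihood ratio -- the trick that makes small failure probabilities
    tractable in reliability Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)            # proposal N(threshold, 1)
        if z > threshold:
            total += normal_pdf(z, 0.0, 1.0) / normal_pdf(z, threshold, 1.0)
    return total / n

p = failure_prob_importance()
```

Naive sampling would need on the order of 10^7 draws to see a handful of such failures (the true tail probability is about 3.2e-5); the shifted proposal reaches the same accuracy with a tiny fraction of the samples.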

  1. Spectral Characteristics of Continuous Acoustic Emission (AE) Data from Laboratory Rock Deformation Experiments

    NASA Astrophysics Data System (ADS)

    Flynn, J. William; Goodfellow, Sebastian; Reyes-Montes, Juan; Nasseri, Farzine; Young, R. Paul

    2016-04-01

    Continuous acoustic emission (AE) data recorded during rock deformation tests facilitates the monitoring of fracture initiation and propagation due to applied stress changes. Changes in the frequency and energy content of AE waveforms have been previously observed and were associated with microcrack coalescence and the induction or mobilisation of large fractures which are naturally associated with larger amplitude AE events and lower-frequency components. The shift from high to low dominant frequency components during the late stages of the deformation experiment, as the rate of AE events increases and the sample approaches failure, indicates a transition from the micro-cracking to macro-cracking regime, where large cracks generated result in material failure. The objective of this study is to extract information on the fracturing process from the acoustic records around sample failure, where the fast occurrence of AE events does not allow for identification of individual AE events and phase arrivals. Standard AE event processing techniques are not suitable for extracting this information at these stages. Instead the observed changes in the frequency content of the continuous record can be used to characterise and investigate the fracture process at the stage of microcrack coalescence and sample failure. To analyse and characterise these changes, a detailed non-linear and non-stationary time-frequency analysis of the continuous waveform data is required. Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) are two of the techniques used in this paper to analyse the acoustic records which provide a high-resolution temporal frequency distribution of the data. In this paper we present the results from our analysis of continuous AE data recorded during a laboratory triaxial deformation experiment using the combined EMD and HSA method.
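The Hilbert Spectral Analysis step of the workflow described above can be sketched with a NumPy-only analytic-signal computation yielding instantaneous frequency. This is illustrative: in the full EMD+HSA method each intrinsic mode function, not the raw trace, would be analyzed this way, and the 50 Hz test tone is an assumption.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the standard Hilbert-transform
    construction): zero negative frequencies, double positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)           # 50 Hz test tone

z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency, Hz
```

On a continuous AE record, a drop in this instantaneous frequency trace over time would mirror the high-to-low dominant-frequency shift the abstract associates with the approach to macroscopic failure.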

  2. The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Gunawan, Insan; Taufik, Ahmad

    2007-10-01

    There have been many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants, and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for the economic point of view given the prolonged economic crisis and the availability of local components for the LWP, and to sustain the economic productivity (agricultural product) of the society. In the study, fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is discussed thoroughly in order to improve the performance of the LWP applied in the dry-land watering system of the Mesuji district of Lampung province, Indonesia. In the early stage, all local components of the LWP were classified in terms of their function, yielding four groups of components. All of the sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes, and (3) common failure modes. In the data processing stage, an available software package, ITEM, was deployed. It was observed that the components obtained a relatively long operational life cycle of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components suffering from failure, and an overhaul priority were identified in quantitative terms. From the year-long pilot project, it can be concluded that the LWP is a reliable product for the societies, enhancing their economic productivity.

  3. Biomechanical comparison of component position and hardware failure in the reverse shoulder prosthesis.

    PubMed

    Gutiérrez, Sergio; Greiwe, R Michael; Frankle, Mark A; Siegal, Steven; Lee, William E

    2007-01-01

    There has been renewed interest in reverse shoulder arthroplasty for the treatment of glenohumeral arthritis with concomitant rotator cuff deficiency. Failure of the prosthesis at the glenoid attachment site remains a concern. The purpose of this study was to examine glenoid component stability with regard to the angle of implantation. This investigation entailed a biomechanical analysis to evaluate forces and micromotion in glenoid components attached to 12 polyurethane blocks at -15 degrees, 0 degrees, and +15 degrees of superior and inferior tilt. The 15 degrees inferior tilt had the most uniform compressive forces and the least amount of tensile forces and micromotion when compared with the 0 degrees and 15 degrees superiorly tilted baseplate. Our results suggest that implantation with an inferior tilt will reduce the incidence of mechanical failure of the glenoid component in a reverse shoulder prosthesis.

  4. Onboard Sensor Data Qualification in Human-Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Melcher, Kevin J.; Maul, William A.; Chicatelli, Amy K.; Sowers, Thomas S.; Fulton, Christopher; Bickford, Randall

    2012-01-01

    The avionics system software for human-rated launch vehicles requires an implementation approach that is robust to failures, especially the failure of sensors used to monitor vehicle conditions that might result in an abort determination. Sensor measurements provide the basis for operational decisions on human-rated launch vehicles. This data is often used to assess the health of system or subsystem components, to identify failures, and to take corrective action. An incorrect conclusion and/or response may result if the sensor itself provides faulty data, or if the data provided by the sensor has been corrupted. Operational decisions based on faulty sensor data have the potential to be catastrophic, resulting in loss of mission or loss of crew. To prevent these latter situations from occurring, a Modular Architecture and Generalized Methodology for Sensor Data Qualification in Human-rated Launch Vehicles has been developed. Sensor Data Qualification (SDQ) is a set of algorithms that can be implemented in onboard flight software, and can be used to qualify data obtained from flight-critical sensors prior to the data being used by other flight software algorithms. Qualified data has been analyzed by SDQ and is determined to be a true representation of the sensed system state; that is, the sensor data is determined not to be corrupted by sensor faults or signal transmission faults. Sensor data can become corrupted by faults at any point in the signal path between the sensor and the flight computer. Qualifying the sensor data has the benefit of ensuring that erroneous data is identified and flagged before otherwise being used for operational decisions, thus increasing confidence in the response of the other flight software processes using the qualified data, and decreasing the probability of false alarms or missed detections.
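The flavor of per-sample checks such a qualification layer performs can be sketched as follows. This is an illustrative range-plus-rate check only, not the actual SDQ algorithm suite, and the thresholds and readings are invented.

```python
def qualify(samples, lo, hi, max_step):
    """Flag a sensor time series sample-by-sample: a reading is qualified
    only if it lies inside the physical range [lo, hi] and does not jump
    more than max_step from the previous qualified reading."""
    flags = []
    last_good = None
    for v in samples:
        ok = lo <= v <= hi
        if ok and last_good is not None and abs(v - last_good) > max_step:
            ok = False                      # rate-of-change violation
        if ok:
            last_good = v
        flags.append(ok)
    return flags

# One spike from a transient transmission fault among plausible readings
readings = [10.2, 10.4, 99.9, 10.5, 10.6]
flags = qualify(readings, lo=0.0, hi=50.0, max_step=1.0)
```

Downstream logic would then consume only the flagged-good samples, which is how disqualification lowers both false alarms and missed detections in the consuming algorithms.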

  5. Seismic characteristics of tensile fracture growth induced by hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.; Van der Baan, M.; Boroumand, N.

    2014-12-01

    Hydraulic fracturing is a process of injecting high-pressure slurry into a rockmass to enhance its permeability. Variants of this process are used for unconventional oil and gas development, engineered geothermal systems and block-cave mining; similar processes occur within volcanic systems. Opening of hydraulic fractures is well documented by mineback trials and tiltmeter monitoring and is a physical requirement to accommodate the volume of injected fluid. Numerous microseismic monitoring investigations acquired in the audio-frequency band are interpreted to show a prevalence of shear-dominated failure mechanisms surrounding the tensile fracture. Moreover, the radiated seismic energy in the audio-frequency band appears to be a miniscule fraction (<< 1%) of the net injected energy, i.e., the integral of the product of fluid pressure and injection rate. We use a simple penny-shaped crack model as a predictive framework to describe seismic characteristics of tensile opening during hydraulic fracturing. This model provides a useful scaling relation that links seismic moment to effective fluid pressure within the crack. Based on downhole recordings corrected for attenuation, a significant fraction of observed microseismic events are characterized by S/P amplitude ratio < 5. Despite the relatively small aperture of the monitoring arrays, which precludes both full moment-tensor analysis and definitive identification of nodal planes or axes, this ratio provides a strong indication that observed microseismic source mechanisms have a component of tensile failure. In addition, we find some instances of periodic spectral notches that can be explained by an opening/closing failure mechanism, in which fracture propagation outpaces fluid velocity within the crack. Finally, aseismic growth of tensile fractures may be indicative of a scenario in which injected energy is consumed to create new fracture surfaces. 
Taken together, our observations and modeling provide evidence that failure mechanisms documented by passive monitoring of hydraulic fractures may contain a significant component of tensile failure, including fracture opening and closing, although creation of extensive new fracture surfaces may be a seismically inefficient process that radiates at sub-audio frequencies.

  6. Evaluation of possible prognostic factors for the success, survival, and failure of dental implants.

    PubMed

    Geckili, Onur; Bilhan, Hakan; Geckili, Esma; Cilingir, Altug; Mumcu, Emre; Bural, Canan

    2014-02-01

    To analyze the prognostic factors that are associated with the success, survival, and failure rates of dental implants. Data including implant sizes, insertion time, implant location, and prosthetic treatment of 1656 implants have been collected, and the association of these factors with success, survival, and failure of implants was analyzed. The success rate was lower for short and maxillary implants. The failure rate of maxillary implants exceeded that of mandibular implants, and the failure rate of implants that were placed in the maxillary anterior region was significantly higher than other regions. The failure rates of implants that were placed 5 years ago or more were higher than those that were placed later. Anterior maxilla is more critical for implant loss than other sites. Implants in the anterior mandible show better success compared with other locations, and longer implants show better success rates. The learning curve of the clinician influences survival and success rates of dental implants.

  7. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability analyses associated with preventive maintenance. The usual Weibull distribution is, however, not capable of modelling the complete lifecycle of a system with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel based on preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
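The limitation noted above, that a single Weibull hazard is monotone and never bathtub-shaped on its own, together with a standard workaround, can be shown directly. The parameter values below are illustrative assumptions, not fitted shovel data.

```python
def weibull_hazard(t, eta, beta):
    """Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta-1): decreasing for
    beta < 1 (infant mortality), constant for beta = 1, increasing for
    beta > 1 (wear-out) -- but always monotone."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def bathtub_hazard(t, eta1, beta1, eta2, beta2):
    """A simple competing-risks mixture: summing a decreasing (beta1 < 1)
    and an increasing (beta2 > 1) Weibull hazard yields a bathtub shape."""
    return weibull_hazard(t, eta1, beta1) + weibull_hazard(t, eta2, beta2)

# Early, mid, and late life (hours); hazard falls, bottoms out, then rises
h = [bathtub_hazard(t, eta1=100.0, beta1=0.5, eta2=1000.0, beta2=3.0)
     for t in (1.0, 200.0, 2000.0)]
```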

  8. Effect of acetabular cup abduction angle on wear of ultrahigh-molecular-weight polyethylene in hip simulator testing.

    PubMed

    Korduba, Laryssa A; Essner, Aaron; Pivec, Robert; Lancin, Perry; Mont, Michael A; Wang, Aiguo; Delanois, Ronald E

    2014-10-01

    The effect of acetabular component positioning on the wear rates of metal-on-polyethylene articulations has not been extensively studied. Placement of acetabular cups at abduction angles of more than 40° has been noted as a possible reason for early failure caused by increased wear. We conducted a study to evaluate the effects of different acetabular cup abduction angles on polyethylene wear rate, wear area, contact pressure, and contact area. Our in vitro study used a hip joint simulator and finite element analysis to assess the effects of cup orientation at 4 angles (0°, 40°, 50°, 70°) on wear and contact properties. Polyethylene bearings with 28-mm cobalt-chrome femoral heads were cycled in an environment mimicking in vivo joint fluid to determine the volumetric wear rate after 10 million cycles. Contact pressure and contact area for each cup abduction angle were assessed using finite element analysis. Results were correlated with cup abduction angles to determine if there were any differences among the 4 groups. The inverse relationship between volumetric wear rate and acetabular cup inclination angle demonstrated less wear with steeper cup angles. The largest abduction angle (70°) had the lowest contact area, largest contact pressure, and smallest head coverage. Conversely, the smallest abduction angle (0°) had the most wear and most head coverage. Polyethylene wear after total hip arthroplasty is a major cause of osteolysis and aseptic loosening, which may lead to premature implant failure. Several studies have found that high wear rates for cups oriented at steep angles contributed to their failure. Our data demonstrated that larger cup abduction angles were associated with lower, not higher, wear. However, this potentially "protective" effect is likely counteracted by other complications of steep cup angles, including impingement, instability, and edge loading. 
These factors may be more relevant in explaining why implants fail at a higher rate if cups are oriented at more than 40° of abduction.

  9. Endoscopic or arthroscopic iliopsoas tenotomy for iliopsoas impingement following total hip replacement. A prospective multicenter 64-case series.

    PubMed

    Guicherd, W; Bonin, N; Gicquel, T; Gedouin, J E; Flecher, X; Wettstein, M; Thaunat, M; Prevost, N; Ollier, E; May, O

    2017-12-01

    Impingement between the acetabular component and the iliopsoas tendon is a cause of anterior pain after total hip replacement (THR). Treatment can be non-operative, endoscopic or arthroscopic, or by open revision of the acetabular component. Few studies have assessed these options. The present study hypothesis was that endo/arthroscopic treatment provides rapid pain relief with a low rate of complications. A prospective multicenter study included 64 endoscopic or arthroscopic tenotomies for impingement between the acetabular component and the iliopsoas tendon, performed in 8 centers. Mean follow-up was 8 months, with a minimum of 6 months and no loss to follow-up. Oxford score, patient satisfaction, anterior pain and iliopsoas strength were assessed at last follow-up. Complications and revision procedures were collated. Forty-four percent of patients underwent rehabilitation. At last follow-up, 92% of patients reported pain alleviation. Oxford score, muscle strength and pain in hip flexion showed significant improvement. The complications rate was 3.2%, with complete resolution. Mean hospital stay was 0.8 nights. In 2 cases, arthroscopy revealed metallosis, indicating revision of the acetabular component. The only predictive factor was acetabular projection on oblique view. Rehabilitation significantly improved muscle strength. Endoscopic or arthroscopic tenotomy for impingement between the acetabular component and the iliopsoas tendon following THR significantly alleviated anterior pain in more than 92% of cases. The low complications rate makes this the treatment of choice in case of failure of non-operative management. Arthroscopy also reorients diagnosis in case of associated joint pathology. Projection of the acetabular component on preoperative oblique view is the most predictive criterion, guiding treatment. Copyright © 2017. Published by Elsevier Masson SAS.

  10. Total knee arthroplasty with an oxidised zirconium femoral component: ten-year survivorship analysis.

    PubMed

    Ahmed, I; Salmon, L J; Waller, A; Watanabe, H; Roe, J P; Pinczewski, L A

    2016-01-01

    Oxidised zirconium was introduced as a material for femoral components in total knee arthroplasty (TKA) in an attempt to reduce polyethylene wear. However, the long-term survival of this component is not known. We performed a retrospective review of a prospectively collected database to assess the ten-year survival and clinical and radiological outcomes of an oxidised zirconium total knee arthroplasty with the Genesis II prosthesis. The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), Knee Injury and Osteoarthritis Outcome Score (KOOS) and a patient satisfaction scale were used to assess outcome. A total of 303 consecutive TKAs were performed in 278 patients with a mean age of 68 years (45 to 89). The rate of survival ten years post-operatively as assessed using Kaplan-Meier analysis was 97% (95% confidence interval 94 to 99) with revision for any reason as the endpoint. There were no revisions for loosening, osteolysis or failure of the implant. There was a significant improvement in all components of the WOMAC score at final follow-up (p < 0.001). The mean individual components of the KOOS score for symptoms (82.4 points; 36 to 100), pain (87.5 points; 6 to 100), activities of daily life (84.9 points; 15 to 100) and quality of life (71.4 points; 6 to 100) were all at the higher end of the scale. This study provides further supportive evidence that the oxidised zirconium TKA gives comparable rates of survival with other implants and excellent functional outcomes ten years post-operatively. Total knee arthroplasty with an oxidised zirconium femoral component gives long-term rates of survival and functional outcomes comparable to conventional implants. ©2016 The British Editorial Society of Bone & Joint Surgery.
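The Kaplan-Meier survivorship analysis used above can be sketched with a minimal estimator. The event times below are a toy illustration, not the study's data; "censored" covers deaths, withdrawals, and end of follow-up, while an "event" is a revision.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. events[i] is True for a failure
    (revision) at times[i], False for censoring. Returns the stepped
    (time, survival) pairs at each failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e)   # failures at t
        m = sum(1 for tt, e in data if tt == t)         # all leaving risk set
        if d > 0:
            s *= 1.0 - d / n_at_risk
            steps.append((t, s))
        n_at_risk -= m
        i += m
    return steps

# Tiny illustration: revisions at 2 and 5 years, censoring at 3 and 6 years
steps = kaplan_meier([2, 3, 5, 6], [True, False, True, False])
```

Note how the censored patient at year 3 shrinks the risk set without stepping the curve down, which is why Kaplan-Meier survivorship (97% at ten years above) differs from a naive revision percentage.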

  11. Independent Orbiter Assessment (IOA): Weibull analysis report

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1987-01-01

    The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.
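The Weibull methodology the assessment set out to demonstrate can be sketched in a few lines. The shape and scale parameters below are illustrative assumptions for a hypothetical wearout case, not values from the Orbiter hardware, which, as the report notes, lacked the failure data needed to fit them:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability R(t) = exp(-(t / eta) ** beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta / eta) * (t / eta) ** (beta - 1).
    beta > 1 indicates wearout, beta < 1 infant mortality, beta = 1 a constant rate."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Assumed wearout case: shape 2.0, characteristic life 1000 hours.
print(weibull_reliability(500.0, 2.0, 1000.0))  # exp(-0.25)
print(weibull_hazard(500.0, 2.0, 1000.0))       # hazard rising linearly in t
```

With actual time-to-failure data, beta and eta would be estimated (e.g., by maximum likelihood) rather than assumed, which is exactly the step the missing component history prevented.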

  12. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1982-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor components that either directly or indirectly ignited the titanium. Several engine oil samples (taken before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (about 2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.

  13. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost is inversely related to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components can repair many different failures instead of just one. Replaceable components would, however, require more tools, space, and planning than full systems or replaceable subsystems. Moreover, identical system redundancy cannot be relied on in practice: common cause failures can disable all of the identical redundant systems, and typical levels of common cause failure will defeat redundancy greater than two. Diverse redundant systems are therefore required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
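The redundancy arithmetic in this abstract can be reproduced directly. The beta-factor treatment of common cause failure below is a standard approximation, and the 5% common-cause fraction is an illustrative assumption, not a figure from the paper:

```python
def independent_redundancy_pfail(unit_pfail, n_units):
    """Mission failure probability when all n identical, independent units must fail."""
    return unit_pfail ** n_units

def beta_factor_pfail(unit_pfail, n_units, beta):
    """Beta-factor approximation: a fraction beta of each unit's failure
    probability is common cause and disables every redundant copy at once."""
    return ((1.0 - beta) * unit_pfail) ** n_units + beta * unit_pfail

# Three units at 1-in-10 each meet the 1-in-1000 single-system target...
print(independent_redundancy_pfail(0.1, 3))
# ...but an assumed 5% common-cause fraction dominates the failure budget,
# illustrating why identical redundancy beyond two units buys little.
print(beta_factor_pfail(0.1, 3, 0.05))
```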

  14. Effect of discharge instructions on readmission of hospitalised patients with heart failure: do all of the Joint Commission on Accreditation of Healthcare Organizations heart failure core measures reflect better care?

    PubMed Central

    VanSuch, Monica; Naessens, James M; Stroebel, Robert J; Huddleston, Jeanne M; Williams, Arthur R

    2006-01-01

    Background Most nationally standardised quality measures use widely accepted evidence‐based processes as their foundation, but the discharge instruction component of the United States standards of Joint Commission on Accreditation of Healthcare Organizations heart failure core measure appears to be based on expert opinion alone. Objective To determine whether documentation of compliance with any or all of the six required discharge instructions is correlated with readmissions to hospital or mortality. Research design A retrospective study at a single tertiary care hospital was conducted on randomly sampled patients hospitalised for heart failure from July 2002 to September 2003. Participants Applying the Joint Commission on Accreditation of Healthcare Organizations criteria, 782 of 1121 patients were found eligible to receive discharge instructions. Eligibility was determined by age, principal diagnosis codes and discharge status codes. Measures The primary outcome measures are time to readmission for heart failure, time to readmission for any cause, and time to death. Results In all, 68% of patients received all instructions, whereas 6% received no instructions. Patients who received all instructions were significantly less likely to be readmitted for any cause (p = 0.003) and for heart failure (p = 0.035) than those who missed at least one type of instruction. Documentation of discharge instructions is correlated with reduced readmission rates. However, there was no association between documentation of discharge instructions and mortality (p = 0.521). Conclusions Including discharge instructions among other evidence‐based heart failure core measures appears justified. PMID:17142589

  15. Discrete component bonding and thick film materials study. [of capacitor chips bonded with solders and conductive epoxies

    NASA Technical Reports Server (NTRS)

    Kinser, D. L.

    1976-01-01

    The bonding reliability of discrete capacitor chips bonded with solders and conductive epoxies was examined, along with thick film resistor materials consisting of iron oxide phosphates and vanadium oxide phosphates. The bonding reliability studies led to the conclusion that none of the wide range of solders examined is capable of resisting failure during thermal cycling, while the conductive epoxy gives substantially lower failure rates. The thick film resistor studies proved the feasibility of iron oxide phosphate resistor systems, although some environmental sensitivity problems remain. One of these resistor compositions has inadvertently proven to be a candidate for thermistor applications because of the excellent control achieved over the temperature coefficient of resistance. One new and potentially damaging phenomenon observed was the degradation of thick film conductors during thermal cycling.

  16. Preservation of renal function in atypical hemolytic uremic syndrome by eculizumab: a case report.

    PubMed

    Giordano, Mario; Castellano, Giuseppe; Messina, Giovanni; Divella, Claretta; Bellantuono, Rosa; Puteo, Flora; Colella, Vincenzo; Depalo, Tommaso; Gesualdo, Loreto

    2012-11-01

    Genetic mutations in complement components are associated with the development of atypical hemolytic uremic syndrome (aHUS), a rare disease with a high morbidity rate triggered by infections or unidentified factors. The uncontrolled activation of the alternative pathway of complement results in systemic endothelial damage leading to progressive development of renal failure. A previously healthy 8-month-old boy was referred to our hospital because of onset of fever, vomiting, and a single episode of nonbloody diarrhea. Acute kidney injury with preserved diuresis, hemolytic anemia, and thrombocytopenia were detected, and common protocols for management of HUS were followed without considerable improvement. The persistently low levels of complement component C3 led us to hypothesize the occurrence of aHUS. In fact, the child carried a specific mutation in complement factor H (Cfh; nonsense mutation 3514G>T; serum Cfh 138 mg/L, normal range 350-750). Given the lack of response to therapy and the occurrence of kidney failure requiring dialysis, we used eculizumab, a monoclonal humanized antibody against the complement component C5, as rescue therapy. One week after the first administration, we observed a significant improvement of all clinical and laboratory parameters with complete withdrawal from hemodialysis, even in the presence of systemic infections. Our case report shows that complement-inhibiting treatment allows the preservation of renal function and avoids disease relapses during systemic infections.

  17. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach is intended to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
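The fault tree evaluation described here reduces to combining basic-event probabilities through AND and OR gates up to a top event. A minimal sketch, assuming independent basic events; the event names and probabilities below are invented for illustration, not the paper's actual risk factors:

```python
from functools import reduce

def or_gate(probs):
    """P(at least one input event occurs), assuming independent events."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def and_gate(probs):
    """P(all input events occur), assuming independent events."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

# Hypothetical two-level tree: ERP usage failure occurs if the configuration
# component fails OR if both a training risk and a support risk materialise.
config_failure = 0.02
training_risk = 0.10
support_risk = 0.15
top_event = or_gate([config_failure, and_gate([training_risk, support_risk])])
print(top_event)
```

Quantifying the tree this way is what lets critical components or risk events be ranked by their contribution to the top-event probability.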

  18. VLSI (Very Large Scale Integrated Circuits) Device Reliability Models.

    DTIC Science & Technology

    1984-12-01

    [Front-matter fragments only; the abstract did not survive extraction. The recoverable table-of-contents entries cover C1 and C2 circuit complexity failure rates for MOS SSI/MSI devices and for linear devices, in failures per 10^6 hours (Table 5.1.2.5-19), plus a list of surveyed semiconductor manufacturers including National Semiconductor, Nitron, Raytheon, Sprague, Synertek, Teledyne Crystalonics, TRW Semiconductor, and Zilog.]

  19. A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data

    DOE PAGES

    Yue, Meng; Toto, Tami; Jensen, Michael P.; ...

    2017-05-18

    Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.
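One simple way to blend a failure rate model's prediction with uncertain real-time outage reports, in the Bayesian spirit of this record, is a conjugate Gamma-Poisson update. This is an illustrative sketch with invented numbers, not the authors' BOP algorithm:

```python
def posterior_outage_rate(prior_mean, prior_strength, reported_outages, exposure_hours):
    """Gamma(alpha, beta) prior on the outage rate with Poisson-distributed
    reported counts; returns the posterior mean rate (outages per hour).
    prior_strength acts as pseudo-hours of evidence behind the FRM prediction,
    weighting the model against the uncertain real-time reports."""
    alpha = prior_mean * prior_strength + reported_outages
    beta = prior_strength + exposure_hours
    return alpha / beta

# Hypothetical numbers: the failure rate model predicts 4 outages/hour in a
# grid cell; field crews report 9 outages over the first 1.5 hours of the storm.
print(posterior_outage_rate(4.0, 2.0, 9, 1.5))
```

Raising `prior_strength` pulls the estimate toward the FRM prediction; shrinking it trusts the reported outages more, which is the strength-versus-uncertainty trade-off the abstract describes.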

  20. A Bayesian Approach Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Meng; Toto, Tami; Jensen, Michael P.

    Severe weather events such as strong thunderstorms are some of the most significant and frequent threats to the electrical grid infrastructure. Outages resulting from storms can be very costly. While some tools are available to utilities to predict storm occurrences and damage, they are typically very crude and provide little means of facilitating restoration efforts. This study developed a methodology to use historical high-resolution (both temporal and spatial) radar observations of storm characteristics and outage information to develop weather condition dependent failure rate models (FRMs) for different grid components. Such models can provide an estimation or prediction of the outage numbers in small areas of a utility’s service territory once the real-time measurement or forecasted data of weather conditions become available as the input to the models. Considering the potential value provided by real-time outages reported, a Bayesian outage prediction (BOP) algorithm is proposed to account for both strength and uncertainties of the reported outages and failure rate models. The potential benefit of this outage prediction scheme is illustrated in this study.

  1. CR TKA UHMWPE wear tested after artificial aging of the vitamin E treated gliding component by simulating daily patient activities.

    PubMed

    Schwiesau, Jens; Fritz, Bernhard; Kutzner, Ines; Bergmann, Georg; Grupp, Thomas M

    2014-01-01

    The wear behaviour of total knee arthroplasty (TKA) is dominated by two wear mechanisms: abrasive wear and delamination of the gliding components, the second of which is strongly linked to aging processes and stress concentration in the material. The addition of vitamin E to the bulk material is a potential way to reduce the aging processes. This study evaluates the wear behaviour and delamination susceptibility of the gliding components of a vitamin E blended, ultra-high molecular weight polyethylene (UHMWPE) cruciate retaining (CR) total knee arthroplasty. Daily activities such as level walking, ascending and descending stairs, bending of the knee, and sitting and rising from a chair were simulated with a data set received from an instrumented knee prosthesis. After 5 million test cycles no structural failure of the gliding components was observed. The wear rate of 5.62 ± 0.53 mg/million cycles fell within the range of previous reports for established wear test methods.

  2. CR TKA UHMWPE Wear Tested after Artificial Aging of the Vitamin E Treated Gliding Component by Simulating Daily Patient Activities

    PubMed Central

    Schwiesau, Jens; Fritz, Bernhard; Kutzner, Ines; Bergmann, Georg; Grupp, Thomas M.

    2014-01-01

    The wear behaviour of total knee arthroplasty (TKA) is dominated by two wear mechanisms: abrasive wear and delamination of the gliding components, the second of which is strongly linked to aging processes and stress concentration in the material. The addition of vitamin E to the bulk material is a potential way to reduce the aging processes. This study evaluates the wear behaviour and delamination susceptibility of the gliding components of a vitamin E blended, ultra-high molecular weight polyethylene (UHMWPE) cruciate retaining (CR) total knee arthroplasty. Daily activities such as level walking, ascending and descending stairs, bending of the knee, and sitting and rising from a chair were simulated with a data set received from an instrumented knee prosthesis. After 5 million test cycles no structural failure of the gliding components was observed. The wear rate of 5.62 ± 0.53 mg/million cycles fell within the range of previous reports for established wear test methods. PMID:25506594

  3. Advanced Gas Turbine (AGT) power-train system development

    NASA Technical Reports Server (NTRS)

    Helms, H. E.; Johnson, R. A.; Gibson, R. K.

    1982-01-01

    Technical work on the design and component testing of a 74.5 kW (100 hp) advanced automotive gas turbine is described. Selected component testing, ceramic component design, and procurement are reported. Compressor tests of a modified rotor showed high speed performance improvement over previous rotor designs: efficiency improved by 2.5%, corrected flow by 4.6%, and pressure ratio by 11.6% at 100% speed. The aerodynamic design is completed for both the gasifier and power turbines. Ceramic (silicon carbide) gasifier rotors were spin tested to failure. Improving strength is indicated by the burst speeds: the group of five rotors failed at speeds between 104% and 116% of engine rated speed. The emission results from combustor testing showed NOx levels nearly one order of magnitude lower than with previous designs. A one-piece ceramic exhaust duct/regenerator seal platform was designed with acceptably low stress levels.

  4. The relationship between fuel lubricity and diesel injection system wear

    NASA Astrophysics Data System (ADS)

    Lacy, Paul I.

    1992-01-01

    Use of low-lubricity fuel may have contributed to increased failure rates associated with critical fuel injection equipment during the 1991 Operation Desert Storm. However, accurate quantitative analysis of failed components from the field is almost impossible due to the unique service history of each pump. This report details the results of pump stand tests with fuels of equal viscosity, but widely different lubricity. Baseline tests were also performed using reference no. 2 diesel fuel. Use of poor lubricity fuel under these controlled conditions was found to greatly reduce both pump durability and engine performance. However, both improved metallurgy and fuel lubricity additives significantly reduced wear. Good correlation was obtained between standard bench tests and lightly loaded pump components. However, high contact loads on isolated components produced a more severe wear mechanism that is not well reflected by the Ball-on-Cylinder Lubricity Evaluator.

  5. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of different CCF levels on the reliability of a one-out-of-two system.
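The one-out-of-two response surface described above can be sketched with the standard beta-factor model. The failure probabilities and beta values below are illustrative grid points, not data from the presentation:

```python
def one_out_of_two_pfail(q, beta):
    """Failure probability of a 1-out-of-2 redundant system under the
    beta-factor CCF model: the system fails if both units fail independently,
    or if a single common cause takes out both units at once."""
    q_ind = (1.0 - beta) * q        # independent portion of each unit's failure prob
    return q_ind ** 2 + beta * q    # independent double failure + common cause term

# Coarse response surface: unit failure probability vs. common cause beta factor.
for q in (1e-4, 1e-3, 1e-2):
    for beta in (0.0, 0.05, 0.10):
        print(f"q={q:g}  beta={beta:.2f}  P(system fails)={one_out_of_two_pfail(q, beta):.3e}")
```

Even a small beta dominates the squared independent term, which is the quantitative reason CCFs defeat redundancy.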

  6. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

    Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by, for example, system environments, manufacturing, transportation, storage, maintenance, and assembly. Since many factors contribute to CCFs, their effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data are limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) group of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF, due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are highly redundant systems by design, it is important to make design decisions that account for a range of values for independent failures and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on the system's failure probability. This presentation defines a CCF and reviews estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the effects of different CCF levels on the reliability of a one-out-of-two system.

  7. Microtensile bond strength of etch and rinse versus self-etch adhesive systems.

    PubMed

    Hamouda, Ibrahim M; Samra, Nagia R; Badawi, Manal F

    2011-04-01

    The aim of this study was to compare the microtensile bond strength of the etch and rinse adhesive versus one-component or two-component self-etch adhesives. Twelve intact human molar teeth were cleaned and the occlusal enamel of the teeth was removed. The exposed dentin surfaces were polished and rinsed, and the adhesives were applied. A microhybrid composite resin was applied to form specimens of 4 mm height and 6 mm diameter. The specimens were sectioned perpendicular to the adhesive interface to produce dentin-resin composite sticks, with an adhesive area of approximately 1.4 mm². The sticks were subjected to tensile loading until failure occurred. The debonded areas were examined with a scanning electron microscope to determine the site of failure. The results showed that the microtensile bond strength of the etch and rinse adhesive was higher than that of one-component or two-component self-etch adhesives. The scanning electron microscope examination of the dentin surfaces revealed adhesive and mixed modes of failure. The adhesive mode of failure occurred at the adhesive/dentin interface, while the mixed mode of failure occurred partially in the composite and partially at the adhesive/dentin interface. It was concluded that the etch and rinse adhesive had higher microtensile bond strength when compared to that of the self-etch adhesives. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Application of millisecond pulsed laser for thermal fatigue property evaluation

    NASA Astrophysics Data System (ADS)

    Pan, Sining; Yu, Gang; Li, Shaoxia; He, Xiuli; Xia, Chunyang; Ning, Weijian; Zheng, Caiyun

    2018-02-01

    An approach based on a millisecond pulsed laser is proposed for thermal fatigue property evaluation in this paper. Cyclic thermal stresses and strains within millisecond intervals are induced by complex and transient temperature gradients under pulsed laser heating. The influence of laser parameters on surface temperature is studied. The combination of low pulse repetition rate and high pulse energy produces small temperature oscillation, while high pulse repetition rate and low pulse energy introduces large temperature shock. The feasibility of the approach is confirmed by two thermal fatigue tests of compacted graphite iron with different laser control modes. The developed approach is able to reproduce the preset temperature cycles and simulate thermal fatigue failure of engine components.

  9. Performance interface document for users of Tracking and Data Relay Satellite System (TDRSS) electromechanically steered antenna systems (EMSAS)

    NASA Technical Reports Server (NTRS)

    Hockensmith, R.; Devine, E.; Digiacomo, M.; Hager, F.; Moss, R.

    1983-01-01

    Satellites that use the NASA Tracking and Data Relay Satellite System (TDRSS) require antennas that are crucial for achieving reliable TDRSS link performance at the desired data rate. Technical guidelines are presented to assist the prospective TDRSS medium- and high-data-rate user in selecting and procuring a viable, steerable high-gain antenna system. Topics addressed include the antenna gain/transmitter power/data rate relationship; Earth power flux-density limitations; electromechanical requirements dictated by the small beam widths, desired angular coverage, and minimal torque disturbance to the spacecraft; weight and moment considerations; mechanical, electrical and thermal interfaces; design lifetime and failure modes; and handling and storage. Proven designs are cited and space-qualified assemblies and components are identified.

  10. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with nonnegative, right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms make it possible to find the reliability functions for the alternating, Poisson and Furry-Yule failure rate processes.
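The reliability function built from a stochastic failure rate, R(t) = E[exp(-∫₀ᵗ λ(s) ds)], can be checked numerically for the alternating case by Monte Carlo simulation. The rates and mean sojourn times below are arbitrary illustrative values, and this sketch estimates the expectation rather than solving the paper's Laplace-transform equations:

```python
import math
import random

def integrated_rate(t, lam0, lam1, mean0, mean1, rng):
    """Integral over [0, t] of an alternating failure-rate process: the rate
    holds lam0 for an exponential sojourn of mean mean0, then lam1 for a
    sojourn of mean mean1, and so on."""
    total, now, state = 0.0, 0.0, 0
    while now < t:
        mean = mean0 if state == 0 else mean1
        lam = lam0 if state == 0 else lam1
        sojourn = rng.expovariate(1.0 / mean)
        dt = min(sojourn, t - now)   # clip the last sojourn at the horizon t
        total += lam * dt
        now += dt
        state ^= 1
    return total

def reliability(t, n=20000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral_0^t lambda(s) ds)]."""
    rng = random.Random(seed)
    return sum(math.exp(-integrated_rate(t, 0.01, 0.05, 10.0, 5.0, rng))
               for _ in range(n)) / n

print(round(reliability(50.0), 3))
```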

  11. A review of typical thermal fatigue failure models for solder joints of electronic components

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in coefficients of thermal expansion between an electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually extend [1]. In this paper, typical thermal fatigue failure models for solder joints of electronic components are classified, and methods of obtaining the model parameters are summarized based on a review of the domestic and foreign literature.
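One of the typical plastic-strain-based models such a review covers is the Coffin-Manson low-cycle fatigue relation. The material constants below are rough, illustrative values of the kind used for a SnPb-type solder, not parameters taken from the paper:

```python
def coffin_manson_life(delta_eps_p, eps_f=0.325, c=-0.5):
    """Coffin-Manson relation: delta_eps_p / 2 = eps_f * (2 * N_f) ** c,
    solved for cycles to failure N_f. eps_f (fatigue ductility coefficient)
    and c (fatigue ductility exponent) are assumed illustrative values."""
    return 0.5 * (delta_eps_p / (2.0 * eps_f)) ** (1.0 / c)

# With c = -0.5, halving the plastic strain range per cycle
# quadruples the predicted thermal fatigue life.
print(coffin_manson_life(0.01))
print(coffin_manson_life(0.005))
```

Obtaining eps_f and c for a given solder alloy is exactly the parameter-extraction step the review summarizes.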

  12. Product component genealogy modeling and field-failure prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Caleb; Hong, Yili; Meeker, William Q.

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
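The competing-risks-with-generations idea can be illustrated with a small simulation: each unit fails at its first component failure, and one component's lifetime distribution changes with the design generation. All distributions and parameters here are invented for illustration and are not the paper's model:

```python
import random

def unit_lifetime(generation, rng):
    """Competing risks: the unit fails at the earliest component failure.
    Generation 2 of component B is assumed (illustratively) more reliable."""
    scale_b = 800.0 if generation == 1 else 1500.0
    t_a = rng.weibullvariate(1000.0, 1.5)   # component A, unchanged across generations
    t_b = rng.weibullvariate(scale_b, 1.2)  # component B, improved in generation 2
    return min(t_a, t_b)

rng = random.Random(7)
gen1 = [unit_lifetime(1, rng) for _ in range(5000)]
gen2 = [unit_lifetime(2, rng) for _ in range(5000)]
print(round(sum(gen1) / len(gen1), 1), round(sum(gen2) / len(gen2), 1))
```

Ignoring the generation label would pool these two populations and bias field-failure predictions, which is the effect the paper's generational model is designed to avoid.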

  13. Product component genealogy modeling and field-failure prediction

    DOE PAGES

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  14. Clinical assessment of pacemaker power sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilitch, M.; Parsonnet, V.; Furman, S.

    1980-01-01

    The development of power sources for cardiac pacemakers has progressed from a 15-year usage of mercury-zinc batteries to widely used and accepted lithium cells. At present, there are about 6 different types of lithium cells incorporated into commercially distributed pacemakers. The authors reviewed experience over a 5-year period with 1711 mercury-zinc, 130 nuclear (P238) and 1912 lithium powered pacemakers. The lithium units have included 698 lithium-iodide, 270 lithium-silver chromate, 135 lithium-thionyl chloride, 31 lithium-lead and 353 lithium-cupric sulfide batteries. 57 of the lithium units have failed (91.2% component failure and 5.3% battery failure). 459 mercury-zinc units failed (25% component failure and 68% battery depletion). The data show that lithium powered pacemaker failures are primarily component related, while mercury-zinc failures are primarily battery related. It is concluded that mercury-zinc powered pulse generators are obsolete and that lithium and nuclear (P238) power sources are highly reliable over the 5 years for which data are available. 3 refs.

  15. Packaging-induced failure of semiconductor lasers and optical telecommunications components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharps, J.A.

    1996-12-31

    Telecommunications equipment for field deployment generally has specified lifetimes of > 100,000 hr. To achieve this high reliability, it is common practice to package sensitive components in hermetic, inert gas environments. The intent is to protect components from particulate and organic contamination, oxidation, and moisture. However, for high power density 980 nm diode lasers used in optical amplifiers, the authors found that hermetic, inert gas packaging induced a failure mode not observed in similar, unpackaged lasers. They refer to this failure mode as packaging-induced failure, or PIF. PIF is caused by nanomole amounts of organic contamination which interact with high intensity 980 nm light to form solid deposits over the emitting regions of the lasers. These deposits absorb 980 nm light, causing heating of the laser, narrowing of the band gap, and eventual thermal runaway. The authors have found that PIF is averted by packaging with free O2 and/or a getter material that sequesters organics.

  16. Systematic Destruction of Electronic Parts for Aid in Electronic Failure Analysis

    NASA Technical Reports Server (NTRS)

    Decker, S. E.; Rolin, T. D.; McManus, P. D.

    2012-01-01

    NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand the failure modes of these components. Operational amplifiers and transistors are two examples of EEE parts critical to NASA missions that can fail due to electrical overstress (EOS). EOS is the result of voltage or current conditions over time that exceed a component's specification limits. The objective of this study was to provide known voltage pulses over well-defined time intervals to determine the type and extent of damage imparted to the device. The amount of current was not controlled but was measured so that pulse energy could be determined. The damage was ascertained electrically using curve trace plots and optically using various metallographic techniques. The resulting data can be used to build a database of physical evidence to compare against damaged components removed from flight avionics. The comparison will provide the avionics failure analyst with necessary information about the voltages and times that caused flight or test failures when no other electrical data are available.
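
    Determining pulse energy from a measured current, as described above, amounts to integrating instantaneous power over the pulse. A minimal sketch of that computation (the waveform values below are hypothetical, not from the study):

```python
import numpy as np

def pulse_energy(t, v, i):
    """Energy (J) of a pulse: trapezoidal integral of v(t) * i(t) dt."""
    p = np.asarray(v) * np.asarray(i)                  # instantaneous power (W)
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Hypothetical rectangular pulse: 50 V driving 2 A for 10 microseconds.
t = np.linspace(0.0, 10e-6, 1001)
e = pulse_energy(t, np.full_like(t, 50.0), np.full_like(t, 2.0))
# 50 V * 2 A * 10 us = 1.0e-3 J
```

    In practice the measured current waveform would replace the constant array, capturing the device's nonlinear response during the overstress event.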

  17. Wavelet-based information filtering for fault diagnosis of electric drive systems in electric ships.

    PubMed

    Silva, Andre A; Gupta, Shalabh; Bazzi, Ali M; Ulatowski, Arthur

    2017-09-22

    Electric machines and drives have enjoyed extensive applications in the field of electric vehicles (e.g., electric ships, boats, cars, and underwater vessels) due to their ease of scalability and wide range of operating conditions. This stems from their ability to generate the desired torque and power levels for propulsion under various external load conditions. However, as with most electrical systems, electric drives are prone to component failures that can degrade their performance, reduce their efficiency, and require expensive maintenance. Therefore, for safe and reliable operation of electric vehicles, there is a need for automated early diagnostics of critical failures such as broken rotor bars and electrical phase failures. In this regard, this paper presents a fault diagnosis methodology for electric drives in electric ships. This methodology utilizes the two-dimensional, i.e., scale-shift, wavelet transform of the sensor data to filter optimal information-rich regions, which can enhance the diagnosis accuracy as well as reduce the computational complexity of the classifier. The methodology was tested on sensor data generated from an experimentally validated simulation model of electric drives under various cruising speed conditions. The results, in comparison with other existing techniques, show a high correct classification rate with low false alarm and missed detection rates. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Modelling indirect interactions during failure spreading in a project activity network.

    PubMed

    Ellinas, Christos

    2018-03-12

    Spreading broadly refers to the notion of an entity propagating throughout a networked system via its interacting components. Evidence of its ubiquity and severity can be seen in a range of phenomena, from disease epidemics to financial systemic risk. In order to understand the dynamics of these critical phenomena, computational models map the probability of propagation as a function of direct exposure, typically in the form of pairwise interactions between components. By doing so, the important role of indirect interactions remains unexplored. In response, we develop a simple model that accounts for the effect of both direct and subsequent exposure, which we deploy in the novel context of failure propagation within a real-world engineering project. We show that subsequent exposure has a significant effect in key aspects, including (a) the final spreading-event size, (b) the propagation rate, and (c) the spreading-event structure. In addition, we demonstrate the existence of 'hidden influentials' in large-scale spreading events, and evaluate the role of direct and subsequent exposure in their emergence. Given the evidence of the importance of subsequent exposure, our findings offer new insight on particular aspects that need to be included when modelling network dynamics in general, and spreading processes specifically.
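
    The abstract does not specify the model's exact form, but the idea of distinguishing direct (first) exposure from subsequent exposures can be sketched as a simple probabilistic cascade on an activity network. The probabilities `p_first` and `p_repeat` and the toy chain network are illustrative assumptions, not the paper's parameters:

```python
import random

def spread(adj, seeds, p_first=0.3, p_repeat=0.15, steps=20, rng=None):
    """Failure spreading in which repeated (subsequent) exposures add risk.

    adj: dict mapping each activity to its downstream activities.
    A healthy node fails with probability p_first on its first exposure and
    p_repeat on each subsequent exposure from newly failed neighbours.
    """
    rng = rng or random.Random(0)
    failed = set(seeds)
    frontier = set(seeds)
    exposures = {}                       # node -> number of exposures so far
    for _ in range(steps):
        newly_failed = set()
        for u in frontier:
            for v in adj.get(u, []):
                if v in failed:
                    continue
                exposures[v] = exposures.get(v, 0) + 1
                p = p_first if exposures[v] == 1 else p_repeat
                if rng.random() < p:
                    newly_failed.add(v)
        if not newly_failed:
            break
        failed |= newly_failed
        frontier = newly_failed
    return failed

# Deterministic check on a 3-activity chain: certain first-exposure failure.
chain = {0: [1], 1: [2], 2: []}
print(spread(chain, [0], p_first=1.0))   # {0, 1, 2}
```

    In a real project network the adjacency would come from activity dependencies, and the repeated-exposure term is what lets indirectly connected activities accumulate risk over time.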

  19. Error and attack tolerance of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Jeong, Hawoong; Barabási, Albert-László

    2000-07-01

    Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
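
    The error-versus-attack contrast reported above is straightforward to reproduce on a synthetic scale-free network. A sketch using `networkx` (the graph size, removal fraction, and seeds are arbitrary choices for illustration, not the paper's):

```python
import random
import networkx as nx

def giant_component_fraction(g, frac, targeted):
    """Fraction of original nodes left in the giant component after removal."""
    g = g.copy()
    n = g.number_of_nodes()
    k = int(frac * n)
    if targeted:   # attack: remove the highest-degree nodes (hubs) first
        victims = [v for v, _ in sorted(g.degree, key=lambda kv: kv[1],
                                        reverse=True)[:k]]
    else:          # error: remove nodes uniformly at random
        victims = random.sample(list(g.nodes), k)
    g.remove_nodes_from(victims)
    if g.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(g), key=len)) / n

random.seed(1)
g = nx.barabasi_albert_graph(2000, 2, seed=1)   # scale-free test network
err = giant_component_fraction(g, 0.05, targeted=False)
atk = giant_component_fraction(g, 0.05, targeted=True)
# Random failures leave the giant component nearly intact (err close to 1),
# while removing the same number of hubs fragments it (atk well below err).
```

    The same two removal strategies applied to an Erdős–Rényi random graph of equal size and density give nearly identical curves, which is the paper's point: the asymmetry is specific to inhomogeneous (scale-free) wiring.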

  20. Space reliability technology - A historical perspective

    NASA Technical Reports Server (NTRS)

    Cohen, H.

    1984-01-01

    The progressive improvements in the reliability of launch vehicles are traced from the Vanguard rocket to the STS. The Vanguard, built with minimal redundancy and a high mass ratio, was used as an operational vehicle midway through its test program in an attempt to meet the perceived challenge represented by Sputnik. The fourth Vanguard failed due to inadequate contamination prevention and lack of inspection ports. Automatic firing sequences were adopted for the Titan rockets, which were an order of magnitude larger than the Vanguard and therefore had room for interior inspections. Qualification testing and reporting were introduced for components, along with X-ray inspection of fuel tank welds. Dual systems were added for flight critical components when the Titan became man-rated for the Gemini program. Designs incorporated full failure mode effects and criticality analyses for the Apollo program, which exposed the limits of applicability of numerical reliability models. Fault tree analyses and program milestone reviews were initiated. The worth of man-in-the-loop in space activities for reliability was demonstrated with the rescue of Skylab after solar panel and meteoroid shield failures. It is now the reliability of the payload, rather than the vehicle, that is questioned for Shuttle launches.

  1. Spacecraft Parachute Recovery System Testing from a Failure Rate Perspective

    NASA Technical Reports Server (NTRS)

    Stewart, Christine E.

    2013-01-01

    Spacecraft parachute recovery systems, especially those with a parachute cluster, require testing to identify and reduce failures. This is especially important when the spacecraft in question is human-rated. Due to the recent effort to make spaceflight affordable, the importance of determining a minimum requirement for testing has increased. The number of tests required to achieve a mature design, with a relatively constant failure rate, can be estimated from a review of previous complex spacecraft recovery systems. Examination of the Apollo parachute testing and the Shuttle Solid Rocket Booster recovery chute system operation will clarify at which point in those programs the system reached maturity. This examination will also clarify the risks inherent in not performing a sufficient number of tests prior to operation with humans on board. When looking at complex parachute systems used in spaceflight landing systems, a pattern emerges regarding the minimum amount of testing required to wring out the failure modes and reduce the failure rate of the parachute system to an acceptable level for human spaceflight. Not only is sufficient system-level testing required, but also the ability to update the design as failure modes are found, in order to drive the failure rate of the system down to an acceptable level. In addition, sufficient data and images are necessary to identify incipient failure modes or to identify failure causes when a system failure occurs. To demonstrate the need for sufficient system-level testing before an acceptable failure rate is reached, the Apollo Earth Landing System (ELS) test program and the Shuttle Solid Rocket Booster Recovery System failure history will be examined, and some experiences from the Orion Capsule Parachute Assembly System will be noted.

  2. Modelling Wind Turbine Failures based on Weather Conditions

    NASA Astrophysics Data System (ADS)

    Reder, Maik; Melero, Julio J.

    2017-11-01

    A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches, these costs can be decreased significantly. Wind turbine (WT) failure models, in particular, can help to understand the components’ degradation processes and enable operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, the latest research shows that on-site weather conditions also significantly affect turbine failure behaviour. This study presents a novel approach to model WT failures based on the environmental conditions to which they are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.

  3. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method covering three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.

  4. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the nuclear reactor accident at Fukushima Daiichi in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generator was inundated by the tsunami). Thus, research on passive cooling systems for nuclear power plants is performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurements in the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
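
    The detection scheme described — PCA dimension reduction with SPE and Hotelling's T² fault indices — can be sketched as follows. The simulated four-sensor data and the injected bias are illustrative assumptions, not the FASSIP data:

```python
import numpy as np

def fit_pca(x_train, n_comp):
    """Fit PCA on normal-operation training data (rows = samples)."""
    mu = x_train.mean(axis=0)
    sd = x_train.std(axis=0) + 1e-12
    z = (x_train - mu) / sd
    _, s, vt = np.linalg.svd(z, full_matrices=False)
    lam = s**2 / (len(z) - 1)                    # variances of the PC scores
    return {"mu": mu, "sd": sd, "P": vt[:n_comp].T, "lam": lam[:n_comp]}

def spe_t2(model, x):
    """Squared Prediction Error (residual space) and Hotelling's T2 (PC space)."""
    z = (x - model["mu"]) / model["sd"]
    t = z @ model["P"]                           # scores in the retained subspace
    spe = np.sum((z - t @ model["P"].T) ** 2, axis=-1)
    t2 = np.sum(t**2 / model["lam"], axis=-1)
    return spe, t2

# Four correlated "sensors" reading the same underlying process (simulated).
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
train = np.hstack([base + 0.05 * rng.normal(size=(500, 1)) for _ in range(4)])
model = fit_pca(train, n_comp=1)

healthy = train[0]
faulty = healthy.copy()
faulty[2] += 5.0                                 # bias fault injected on sensor 2
spe_ok, t2_ok = spe_t2(model, healthy)
spe_bad, t2_bad = spe_t2(model, faulty)
# The biased sensor breaks the learned correlation, so spe_bad >> spe_ok.
```

    In operation, the SPE and T² values would be compared against control limits derived from the training data (e.g., percentile-based thresholds), with an exceedance flagged as a sensor failure indication.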

  5. Resistance to reinforcement change in multiple and concurrent schedules assessed in transition and at steady state.

    PubMed

    McLean, A P; Blampied, N M

    1995-01-01

    Behavioral momentum theory relates resistance to change of responding in a multiple-schedule component to the total reinforcement obtained in that component, regardless of how the reinforcers are produced. Four pigeons responded in a series of multiple-schedule conditions in which a variable-interval 40-s schedule arranged reinforcers for pecking in one component and a variable-interval 360-s schedule arranged them in the other. In addition, responses on a second key were reinforced according to variable-interval schedules that were equal in the two components. In different parts of the experiment, responding was disrupted by changing the rate of reinforcement on the second key or by delivering response-independent food during a blackout separating the two components. Consistent with momentum theory, responding on the first key in Part 1 changed more in the component with the lower reinforcement total when it was disrupted by changes in the rate of reinforcement on the second key. However, responding on the second key changed more in the component with the higher reinforcement total. In Parts 2 and 3, responding was disrupted with free food presented during intercomponent blackouts, with extinction (Part 2) or variable-interval 80-s reinforcement (Part 3) arranged on the second key. Here, resistance to change was greater for the component with greater overall reinforcement. Failures of momentum theory to predict short-term differences in resistance to change occurred with disruptors that caused greater change between steady states for the richer component. Consistency of effects across disruptors may yet be found if short-term effects of disruptors are assessed relative to the extent of change observed after prolonged exposure.

  6. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Sheffler, K. D.; Demasi, J. T.

    1985-01-01

    A methodology was established to predict thermal barrier coating life in an environment simulative of that experienced by gas turbine airfoils. Specifically, work is being conducted to determine failure modes of thermal barrier coatings in the aircraft engine environment. Analytical studies coupled with appropriate physical and mechanical property determinations are being employed to derive coating life prediction model(s) on the important failure mode(s). An initial review of experimental and flight service components indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the metal-ceramic interface. Initial results from a laboratory test program designed to study the influence of various driving forces such as temperature, thermal cycle frequency, environment, and coating thickness, on ceramic coating spalling life suggest that bond coat oxidation damage at the metal-ceramic interface contributes significantly to thermomechanical cracking in the ceramic layer. Low cycle rate furnace testing in air and in argon clearly shows a dramatic increase of spalling life in the non-oxidizing environments.

  7. Compressed natural gas bus safety: a quantitative risk assessment.

    PubMed

    Chamberlain, Samuel; Modarres, Mohammad

    2005-04-01

    This study assesses the fire safety risks associated with compressed natural gas (CNG) vehicle systems, comprising primarily a typical school bus and supporting fuel infrastructure. The study determines the sensitivity of the results to variations in component failure rates and consequences of fire events. The components and subsystems that contribute most to fire safety risk are determined. Finally, the results are compared to fire risks of the present generation of diesel-fueled school buses. Direct computation of the safety risks associated with diesel-powered vehicles is possible because these are mature technologies for which historical performance data are available. Because of limited experience, fatal accident data for CNG bus fleets are minimal. Therefore, this study uses the probabilistic risk assessment (PRA) approach to model and predict fire safety risk of CNG buses. Generic failure data, engineering judgments, and assumptions are used in this study. This study predicts the mean fire fatality risk for typical CNG buses as approximately 0.23 fatalities per 100-million miles for all people involved, including bus passengers. The study estimates mean values of 0.16 fatalities per 100-million miles for bus passengers only. Based on historical data, diesel school bus mean fire fatality risk is 0.091 and 0.0007 per 100-million miles for all people and bus passengers, respectively. One can therefore conclude that CNG buses are more prone to fire fatality risk by 2.5 times that of diesel buses, with the bus passengers being more at risk by over two orders of magnitude. The study estimates a mean fire risk frequency of 2.2 × 10⁻⁵ fatalities/bus per year. The 5% and 95% uncertainty bounds are 9.1 × 10⁻⁶ and 4.0 × 10⁻⁵, respectively. The risk result was found to be affected most by failure rates of pressure relief valves, CNG cylinders, and fuel piping.
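
    The reported mean and 5%/95% bounds are consistent with the lognormal (mean and error factor) form commonly used in PRA for failure rate distributions. A quick check, assuming a lognormal fit to the stated bounds (the fit itself is an assumption; the abstract does not name the distribution):

```python
import math

def lognormal_from_bounds(p5, p95):
    """Fit a lognormal to 5%/95% bounds; return (median, error_factor, mean)."""
    z95 = 1.645                       # standard-normal 95th percentile
    median = math.sqrt(p5 * p95)      # geometric mean of symmetric percentiles
    ef = math.sqrt(p95 / p5)          # error factor = p95 / median
    sigma = math.log(ef) / z95
    mean = median * math.exp(sigma**2 / 2)
    return median, ef, mean

# Bounds reported in the abstract (fatalities per bus-year):
median, ef, mean = lognormal_from_bounds(9.1e-6, 4.0e-5)
# mean comes out near the reported 2.2e-5, consistent with a lognormal model
```

    The same mean/error-factor parameterization is what the generic failure data bases described elsewhere in this collection use to summarize component failure rate uncertainty.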

  8. Effects of comprehensive educational reforms on academic success in a diverse student body.

    PubMed

    Lieberman, Steven A; Ainsworth, Michael A; Asimakis, Gregory K; Thomas, Lauree; Cain, Lisa D; Mancuso, Melodee G; Rabek, Jeffrey P; Zhang, Ni; Frye, Ann W

    2010-12-01

    Calls for medical curriculum reform and increased student diversity in the USA have seen mixed success: performance outcomes following curriculum revisions have been inconsistent and national matriculation of under-represented minority (URM) students has not met aspirations. Published innovations in curricula, academic support and pipeline programmes usually describe isolated interventions that fail to affect curriculum-level outcomes. United States Medical Licensing Examination (USMLE) Step 1 performance and graduation rates were analysed for three classes of medical students before (matriculated 1995-1997, n=517) and after (matriculated 2003-2005, n=597) implementing broad-based reforms in our education system. The changes in pipeline recruitment and preparation programmes, instructional methods, assessment systems, academic support and board preparation were based on sound educational principles and best practices. Post-reform classes were diverse with respect to ethnicity (25.8% URM students), gender (51.8% female), and Medical College Admissions Test (MCAT) score (range 20-40; 24.1% scored ≤ 25). Mean±standard deviation MCAT scores were minimally changed (from 27.2±4.7 to 27.8±3.6). The Step 1 failure rate decreased by 69.3% and mean score increased by 14.0 points (effect size: d=0.67) overall. Improvements were greater among women (failure rate decreased by 78.9%, mean score increased by 15.6 points; d=0.76) and URM students (failure rate decreased by 76.5%, mean score increased by 14.6 points; d=0.74), especially African-American students (failure rate decreased by 93.6%, mean score increased by 20.8 points; d=1.12). Step 1 scores increased across the entire MCAT range. Four- and 5-year graduation rates increased by 7.1% and 5.8%, respectively. The effect sizes in these performance improvements surpassed those previously reported for isolated interventions in curriculum and student support. 
This success is likely to have resulted from the broad-based, mutually reinforcing nature of reforms in multiple components of the education system. The results suggest that a narrow reductionist view of educational programme reform is less likely to result in improved educational outcomes than a system perspective that addresses the coordinated functioning of multiple aspects of the academic enterprise. © Blackwell Publishing Ltd 2010.

  9. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE PAGES

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...

    2017-11-06

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  10. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    NASA Astrophysics Data System (ADS)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; Lim, Hojun; Littlewood, David J.

    2018-02-01

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
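
    The Schwarz alternating method described here — separate subdomain solves coupled by exchanging Dirichlet interface data — can be illustrated on a 1-D model problem. The sketch below uses a simple Poisson equation, not the paper's crystal-plasticity setting, to show the coupling pattern:

```python
import numpy as np

def solve_bvp(xl, xr, ul, ur, n=61, f=1.0):
    """Finite-difference solve of -u'' = f on [xl, xr], Dirichlet data ul, ur."""
    x = np.linspace(xl, xr, n)
    h = x[1] - x[0]
    a = np.zeros((n, n))
    b = np.full(n, f * h * h)
    a[0, 0] = a[-1, -1] = 1.0
    b[0], b[-1] = ul, ur
    for i in range(1, n - 1):
        a[i, i - 1], a[i, i], a[i, i + 1] = -1.0, 2.0, -1.0
    return x, np.linalg.solve(a, b)

def schwarz(n_iter=30):
    """Alternating Schwarz for -u'' = 1 on [0,1], u(0)=u(1)=0, overlap [0.4,0.6].

    Each subdomain is solved separately; solution traces are exchanged as
    Dirichlet boundary conditions, as in the coupling described above.
    """
    g1 = g2 = 0.0                               # interface values at x=0.6, x=0.4
    for _ in range(n_iter):
        x1, u1 = solve_bvp(0.0, 0.6, 0.0, g1)   # left subdomain solve
        g2 = np.interp(0.4, x1, u1)             # trace passed to the right
        x2, u2 = solve_bvp(0.4, 1.0, g2, 0.0)   # right subdomain solve
        g1 = np.interp(0.6, x2, u2)             # trace passed back to the left
    return x1, u1, x2, u2

x1, u1, x2, u2 = schwarz()
exact = lambda x: x * (1.0 - x) / 2.0           # analytic solution of -u'' = 1
```

    The iteration converges geometrically at a rate set by the subdomain overlap; in the paper's setting the two subdomains additionally carry different material models and mesh resolutions, but the information exchange is the same.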

  11. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  12. Academic performance of ethnic minority candidates and discrimination in the MRCGP examinations between 2010 and 2012: analysis of data.

    PubMed

    Esmail, Aneez; Roberts, Chris

    2013-09-26

    To determine the difference in failure rates in the postgraduate examination of the Royal College of General Practitioners (MRCGP) by ethnic or national background, and to identify factors associated with pass rates in the clinical skills assessment component of the examination. Analysis of data provided by the Royal College of General Practitioners and the General Medical Council. Cohort of 5095 candidates sitting the applied knowledge test and clinical skills assessment components of the MRCGP examination between November 2010 and November 2012. A further analysis was carried out on 1175 candidates not trained in the United Kingdom, who sat an English language capability test (IELTS) and the Professional and Linguistic Assessment Board (PLAB) examination (as required for full medical registration), controlling for scores on these examinations and relating them to pass rates of the clinical skills assessment. United Kingdom. After controlling for age, sex, and performance in the applied knowledge test, significant differences persisted between white UK graduates and other candidate groups. Black and minority ethnic graduates trained in the UK were more likely to fail the clinical skills assessment at their first attempt than their white UK colleagues (odds ratio 3.536 (95% confidence interval 2.701 to 4.629), P<0.001; failure rate 17% v 4.5%). Black and minority ethnic candidates who trained abroad were also more likely to fail the clinical skills assessment than white UK candidates (14.741 (11.397 to 19.065), P<0.001; 65% v 4.5%). For candidates not trained in the UK, black or minority ethnic candidates were more likely to fail than white candidates, but this difference was no longer significant after controlling for scores in the applied knowledge test, IELTS, and PLAB examinations (adjusted odds ratio 1.580 (95% confidence interval 0.878 to 2.845), P=0.127). 
Subjective bias due to racial discrimination in the clinical skills assessment may be a cause of failure for UK trained candidates and international medical graduates. The difference between British black and minority ethnic candidates and British white candidates in the pass rates of the clinical skills assessment, despite controlling for prior attainment, suggests that subjective bias could also be a factor. Changes to the clinical skills assessment could improve the perception of the examination as being biased against black and minority ethnic candidates. The difference in training experience and other cultural factors between candidates trained in the UK and abroad could affect outcomes. Consideration should be given to strengthening postgraduate training for international medical graduates.

  13. How and why of orthodontic bond failures: An in vivo study

    PubMed Central

    Vijayakumar, R. K.; Jagadeep, Raju; Ahamed, Fayyaz; Kanna, Aprose; Suresh, K.

    2014-01-01

    Introduction: The bonding of orthodontic brackets and their failure rates with both direct and indirect procedures are well-documented in the orthodontic literature. Over the years, different adhesive materials and various indirect bonding transfer procedures have been compared and evaluated for bond failure rates. The aim of our study is to highlight the use of a simple, inexpensive, and easily manipulated single thermoplastic transfer tray and a single light-cure adhesive to evaluate bond failure rates in clinical situations. Materials and Methods: A total of 30 patients were randomly divided into two groups (Group A and Group B). A split-mouth study design was used for both groups so that they were distributed equally without bias. After initial prophylaxis, both procedures were done as per the manufacturer's instructions. All patients were initially motivated and reviewed for bond failure rates for 6 months. Results: Bond failure rates were assessed overall for the direct and indirect procedures, for the anterior and posterior arches, and for individual teeth. The Z-test was used for statistically analyzing the normal distribution of the sample in a split-mouth study. The results of the two groups were compared and the P value was calculated using the Z-proportion test to assess the significance of the bond failures. Conclusion: Overall bond failure was greater for direct bonding. Anterior bracket failure was greater with direct bonding than with the indirect procedure, which showed more posterior bracket failures. For individual tooth bond failures, mandibular incisor and premolar brackets showed more failures, followed by maxillary premolars and canines. PMID:25210392
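
    The Z-proportion comparison used in the study can be sketched as a pooled two-proportion z-test on bond failure counts. The counts below are hypothetical, not the study's data:

```python
import math

def two_proportion_z(fail_a, n_a, fail_b, n_b):
    """Z statistic and two-sided p-value for comparing two failure proportions."""
    p1, p2 = fail_a / n_a, fail_b / n_b
    pooled = (fail_a + fail_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Hypothetical counts: 18/300 direct-bonded vs 9/300 indirect-bonded failures.
z, p = two_proportion_z(18, 300, 9, 300)
```

    With these made-up counts the difference falls just short of the conventional 0.05 significance level, which shows why the per-tooth comparisons in such studies need adequate sample sizes.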

  14. Does the United States economy affect heart failure readmissions? A single metropolitan center analysis.

    PubMed

    Thompson, Keith A; Morrissey, Ryan P; Phan, Anita; Schwarz, Ernst R

    2012-08-01

    To determine the effects of the US economy on heart failure hospitalization rates. The recession was hypothesized to be associated with worsening unemployment, loss of private insurance and prescription medication benefits, medication nonadherence, and ultimately increased rates of hospitalization for heart failure. We compared hospitalization rates at a large, single, academic medical center from July 1, 2006 to February 28, 2007, a time of economic stability, and July 1, 2008 to February 28, 2009, a time of economic recession in the United States. Significantly fewer patients had private medical insurance during the economic recession than during the control period (36.5% vs 46%; P = 0.04). Despite this, there were no differences in the heart failure hospitalization or readmission rates, length of hospitalization, need for admission to an intensive care unit, in-hospital mortality, or use of guideline-recommended heart failure medications between the 2 study periods. We conclude that despite significant effects on medical insurance coverage, rates of heart failure hospitalization at our institution were not significantly affected by the recession. Additional large-scale population-based research is needed to better understand the effects of fluctuations in the US economy on heart failure hospitalization rates. © 2012 Wiley Periodicals, Inc.

  15. Surveillance of in vivo resistance of Plasmodium falciparum to antimalarial drugs from 1992 to 1999 in Malabo (Equatorial Guinea).

    PubMed

    Roche, Jesús; Guerra-Neira, Ana; Raso, José; Benito, Agustín

    2003-05-01

    From 1992 to 1999, we assessed the therapeutic efficacy of three malaria treatment regimens (chloroquine 25 mg/kg over three days, pyrimethamine/sulfadoxine 1.25/25 mg/kg in one dose, and quinine 25-30 mg/kg daily in three oral doses over a four-, five-, or seven-day period) in 1,189 children under age 10 at Malabo Regional Hospital in Equatorial Guinea. Of those children, 958 were followed up clinically and parasitologically for 14 days. With chloroquine, the failure rate varied from 55% in 1996 to 40% in 1999; the early treatment failure rate increased progressively over the years, from 6% in 1992 to 30% in 1999. With pyrimethamine/sulfadoxine, the failure rate varied from 0% in 1996 to 16% in 1995. The short quinine treatment regimens used in 1992 and 1993 (4 and 5 days, respectively) resulted in significantly higher failure rates (19% and 22%, respectively) than the 7-day regimen (3-5.5%). We conclude that: a) failure rates for chloroquine are in the change period (> 25%), and urgent action is needed; b) pyrimethamine/sulfadoxine failure rates are in the alert period (6-15%), and surveillance must be continued; and c) quinine failure rates are in the grace period (< 6%), so quinine can be recommended.

  16. Reliability of hybrid microcircuit discrete components

    NASA Technical Reports Server (NTRS)

    Allen, R. V.

    1972-01-01

    Data accumulated during 4 years of research and evaluation of ceramic chip capacitors, ceramic carrier mounted active devices, beam-lead transistors, and chip resistors are presented. Life and temperature coefficient test data, and optical and scanning electron microscope photographs of device failures are presented, and the failure modes are described. Particular attention is given to discrete component qualification, power burn-in, and procedures for testing and screening discrete components. Burn-in requirements and test data are given in support of the 100 percent burn-in policy on all NASA flight programs.

  17. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1983-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433.

  18. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT's cathode performance is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and applying an adaptive neuro-fuzzy inference system (ANFIS) training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.

  19. Internuclear cascade-evaporation model for LET spectra of 200 MeV protons used for parts testing.

    PubMed

    O'Neill, P M; Badhwar, G D; Culpepper, W X

    1998-12-01

    The Linear Energy Transfer (LET) spectrum produced in microelectronic components during testing with 200 MeV protons is calculated with an internuclear cascade-evaporation code. This spectrum is compared to the natural space heavy ion environment for various Earth orbits. This comparison is used to evaluate the results of proton testing in terms of determining a firm upper bound to the on-orbit heavy ion upset rate and the risk of on-orbit heavy ion failures that would not be detected with protons.

  20. A study of Mariner 10 flight experiences and some flight piece part failure rate computations

    NASA Technical Reports Server (NTRS)

    Paul, F. A.

    1976-01-01

    The problems and failures encountered in the Mariner 10 flight are discussed, and the data available through a quantitative accounting of all electronic piece parts on the spacecraft are summarized. Computed failure rates for the electronic piece parts are also shown. It is intended that these computed data be used in the continued updating of the failure rate base used for trade-off studies and predictions for future JPL space missions.

  1. Angular-Rate Estimation Using Quaternion Measurements

    NASA Technical Reports Server (NTRS)

    Azor, Ruth; Bar-Itzhack, Y.; Deutschmann, Julie K.; Harman, Richard R.

    1998-01-01

    In most spacecraft (SC) there is a need to know the SC angular rate. A precise angular rate is required for attitude determination, and a coarse rate is needed for attitude-control damping. Classically, angular-rate information is obtained from gyro measurements. The present tendency, however, is to build smaller, lighter, and cheaper SC, and consequently to do away with gyros and determine the angular rate by other means. The latter is needed even in gyro-equipped satellites when performing high-rate maneuvers whose angular rate is out of the range of the onboard gyros, or in case of gyro failure. There are several ways to obtain the angular rate in a gyro-less SC. When the attitude is known, one can differentiate the attitude in whatever parameters it is given and use the kinematics equation that connects the derivative of the attitude with the satellite angular rate to compute the latter. Since SC usually utilize vector measurements for attitude determination, the differentiation of the attitude introduces a considerable noise component into the computed angular-rate vector.
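    The differentiation approach described above can be sketched as follows; this is a minimal illustration of the quaternion kinematics relation, not the authors' estimator, assuming a scalar-first unit quaternion convention:

    ```python
    import numpy as np

    def quat_mul(p, q):
        """Hamilton product of quaternions in scalar-first [w, x, y, z] form."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def quat_conj(q):
        # For a unit quaternion the conjugate equals the inverse
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def angular_rate(q, q_dot):
        """Body angular rate from a unit attitude quaternion and its derivative.

        Uses the kinematics equation q_dot = 0.5 * q (x) [0, omega], so
        omega = 2 * vec(conj(q) (x) q_dot). In practice q_dot comes from
        numerically differentiating the estimated attitude, which is where
        the noise amplification mentioned above enters.
        """
        return 2.0 * quat_mul(quat_conj(q), q_dot)[1:]
    ```

    With the identity attitude and q_dot = 0.5 * [0, omega], the function returns omega exactly; real attitude estimates make q_dot noisy, hence the noise in the computed rate.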

  2. Heart Rate Dynamics During A Treadmill Cardiopulmonary Exercise Test in Optimized Beta-Blocked Heart Failure Patients

    PubMed Central

    Carvalho, Vitor Oliveira; Guimarães, Guilherme Veiga; Ciolac, Emmanuel Gomes; Bocchi, Edimar Alcides

    2008-01-01

    BACKGROUND Calculating the maximum heart rate for age is one method to characterize the maximum effort of an individual. Although this method is commonly used, little is known about heart rate dynamics in optimized beta-blocked heart failure patients. AIM The aim of this study was to evaluate heart rate dynamics (basal, peak and % heart rate increase) in optimized beta-blocked heart failure patients compared to sedentary, normal individuals (controls) during a treadmill cardiopulmonary exercise test. METHODS Twenty-five heart failure patients (49±11 years, 76% male), with an average LVEF of 30±7%, and fourteen controls were included in the study. Patients with atrial fibrillation, a pacemaker or noncardiovascular functional limitations or whose drug therapy was not optimized were excluded. Optimization was considered to be 50 mg/day or more of carvedilol, with a basal heart rate between 50 to 60 bpm that was maintained for 3 months. RESULTS Basal heart rate was lower in heart failure patients (57±3 bpm) compared to controls (89±14 bpm; p<0.0001). Similarly, the peak heart rate (% maximum predicted for age) was lower in HF patients (65.4±11.1%) compared to controls (98.6±2.2; p<0.0001). Maximum respiratory exchange ratio did not differ between the groups (1.2±0.5 for controls and 1.15±1 for heart failure patients; p=0.42). All controls reached the maximum heart rate for their age, while no patients in the heart failure group reached the maximum. Moreover, the % increase of heart rate from rest to peak exercise between heart failure (48±9%) and control (53±8%) was not different (p=0.157). CONCLUSION No patient in the heart failure group reached the maximum heart rate for their age during a treadmill cardiopulmonary exercise test, despite the fact that the percentage increase of heart rate was similar to sedentary normal subjects. 
A heart rate increase in optimized beta-blocked heart failure patients during cardiopulmonary exercise test over 65% of the maximum age-adjusted value should be considered an effort near the maximum. This information may be useful in rehabilitation programs and ischemic tests, although further studies are required. PMID:18719758

  3. Fission Limit And Surface Disruption Criteria For Asteroids: The Case Of Kleopatra

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi; Scheeres, D. J.

    2012-05-01

    Asteroid structural failure due to rapid rotation may occur in two fundamentally different ways: by spinning so fast that surface particles are lofted off as centripetal accelerations overcome gravitational attraction, or through fission of the body. We generalize these failure modes for real asteroid shapes. How a rubble pile asteroid will fail depends on which of these failure criteria is met first as its spin rate is increased by the YORP effect, impacts, or planetary flybys. The spin rate at which the interior of an arbitrary uniformly rotating body will undergo tension (and conservatively be susceptible to fission) is computed by taking planar cuts through the shape model, computing the mutual gravitational attraction between the two segments, and determining the spin rate at which the centrifugal force between the two components equals the mutual gravitational attraction. The gravitational attraction computation uses an improved version of the algorithm presented in Werner et al. (2005). To determine the interior point that first undergoes tension, we consider this planar cut perpendicular to the axis of minimum moment of inertia at different cross-sections. On the other hand, we define surface disruption as follows. For an arbitrary body uniformly rotating at a constant spin rate there are at least four synchronous orbits, which represent circular orbits with the same period as the asteroid spin rate. Surface disruption occurs when the body spins fast enough that at least one of these synchronous orbits touches the asteroid surface. Kleopatra currently spins with a period of 5.38 hours. The spin period for surface disruption is computed to be 3.02 hours, while the spin period for the interior of the asteroid to go into tension is about 4.8 hours. Thus Kleopatra's internal fission could occur at spin periods longer than those at which surface disruption occurs.
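    Under a crude point-mass idealization of the two segments (an illustrative sketch, not the paper's shape-based computation; the lobe masses and separation below are assumptions, not measured Kleopatra values), the tension condition reduces to equating the centrifugal force on a lobe with the mutual gravitational attraction, giving omega^2 = G (m1 + m2) / d^3:

    ```python
    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def fission_spin_period(m1, m2, d):
        """Spin period at which two point-mass lobes separated by d go into tension.

        Each lobe moves on a circle about the common center of mass, so
        m1 * omega**2 * r1 = G * m1 * m2 / d**2 with r1 = m2 * d / (m1 + m2),
        which reduces to omega**2 = G * (m1 + m2) / d**3.
        """
        omega = math.sqrt(G * (m1 + m2) / d**3)
        return 2.0 * math.pi / omega

    # Illustrative two-lobe model: two 2.3e18 kg lobes, centers 100 km apart
    period_hours = fission_spin_period(2.3e18, 2.3e18, 1.0e5) / 3600.0
    ```

    With these assumed numbers the critical period comes out in the range of a few hours, the same order as the shape-based values quoted above, though the real calculation depends on the detailed mass distribution.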

  4. Characteristics of Pediatric Performance on a Test Battery Commonly Used in the Diagnosis of Central Auditory Processing Disorder.

    PubMed

    Weihing, Jeffrey; Guenette, Linda; Chermak, Gail; Brown, Mallory; Ceruti, Julianne; Fitzgerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Brenneman, Lauren; Musiek, Frank

    2015-01-01

    Although central auditory processing disorder (CAPD) test battery performance has been examined in adults with neurologic lesions of the central auditory nervous system (CANS), similar data on children being referred for CAPD evaluations are sparse. This study characterizes CAPD test battery performance in children using tests commonly administered to diagnose the disorder. Specifically, this study describes failure rates for various test combinations, relationships between CAPD tests used in the battery, and the influence of cognitive function on CAPD test performance and CAPD diagnosis. A comparison is also made between the performance of children with CAPD and data from patients with neurologic lesions of the CANS. A retrospective study. Fifty-six pediatric patients were referred for CAPD testing. Participants were administered four CAPD tests, including frequency patterns (FP), low-pass filtered speech (LPFS), dichotic digits (DD), and competing sentences (CS). In addition, they were given the Wechsler Intelligence Scale for Children (WISC). Descriptive analyses examined the failure rates of various test combinations, as well as how often children with CAPD failed certain combinations when compared with adults with CANS lesions. A principal components analysis was performed to examine interrelationships between tests. Correlations and regressions were conducted to determine the relationship between CAPD test performance and the WISC. Results showed that the FP and LPFS tests were most commonly failed by children with CAPD. Two-test combinations that included one or both of these two tests and excluded DD tended to be failed more often. Including the DD and CS test in a battery benefited specificity. Tests thought to measure interhemispheric transfer tended to be correlated. Compared with adult patients with neurologic lesions, children with CAPD tended to fail LPFS more frequently and DD less frequently. 
Both groups failed FP with relatively equal frequency. The two-test combination that showed the highest failure rate for children with CAPD was LPFS-FP. Comparison with adults with CANS lesions, however, suggests that the mechanisms underlying LPFS performance in children need to be better understood. The two-test combination that showed the next highest failure rates among children with CAPD and did not include LPFS was CS-FP. If it is desirable to use a dichotic measure that has a lower linguistic load than CS then DD can be substituted for CS despite the slightly lower failure rate of the DD-FP battery. American Academy of Audiology.

  5. What Reliability Engineers Should Know about Space Radiation Effects

    NASA Technical Reports Server (NTRS)

    DiBari, Rebecca

    2013-01-01

    Space radiation presents unique failure modes and considerations for reliability engineers of space systems. Radiation effects are not a one-size-fits-all field. The threat conditions that must be addressed for a given mission depend on the mission's orbital profile, on the technologies of the parts used in critical functions, and on application considerations such as supply voltages, temperature, duty cycle, and redundancy. In general, the threats are of two types: the cumulative degradation mechanisms of total ionizing dose (TID) and displacement damage (DD), and the prompt responses of components to ionizing particles (protons and heavy ions) falling under the heading of single-event effects. The degradation mechanisms generally behave like wear-out mechanisms on any active components in a system. Total Ionizing Dose (TID) and Displacement Damage: (1) TID affects all active devices over time. Devices can fail either because of parametric shifts that prevent the device from fulfilling its application or because of device failures where the device stops functioning altogether. Since this failure mode varies from part to part and lot to lot, lot qualification testing with sufficient statistics is vital. Displacement damage failures are caused by the displacement of semiconductor atoms from their lattice positions. As with TID, failures can be either parametric or catastrophic, although parametric degradation is more common for displacement damage. Lot testing is critical not just to assure proper device functionality throughout the mission; it can also suggest remediation strategies when a device fails. This paper will look at these effects on a variety of devices in a variety of applications. (2) On the NEAR mission, a functional failure was traced to a PIN diode failure caused by TID-induced high leakage currents. 
    NEAR was able to recover from the failure by reversing the current of a nearby thermoelectric cooler (turning the TEC into a heater). The elevated temperature caused the PIN diode to anneal and the device to recover. It was through lot qualification testing that NEAR knew the diode would recover when annealed. Single Event Effects (SEE): (1) In contrast to TID and displacement damage, single-event effects (SEE) resemble random failures. SEE modes can range from changes in device logic (single-event upset, or SEU) and temporary disturbances (single-event transients, or SET) to catastrophic effects such as the destructive SEE modes: single-event latchup (SEL), single-event gate rupture (SEGR), and single-event burnout (SEB). (2) The consequences of nondestructive SEE modes such as SEU and SET depend critically on their application and may range from trivial nuisance errors to catastrophic loss of mission. It is critical not just to ensure that potentially susceptible devices are well characterized for their susceptibility, but also to work with design engineers to understand the implications of each error mode. For destructive SEE, the predominant risk-mitigation strategy is to avoid susceptible parts or, if that is not possible, to avoid conditions under which the part may be susceptible. Destructive SEE mechanisms are often not well understood, and testing is slow and expensive, making rate prediction very challenging. (3) Because the consequences of radiation failure and degradation modes depend so critically on the application as well as the component technology, it is essential that radiation, component, design, and system engineers work together, preferably starting early in the program, to ensure critical applications are addressed in time to optimize the probability of mission success.

  6. The impact of vaccine failure rate on epidemic dynamics in responsive networks.

    PubMed

    Liang, Yu-Hao; Juang, Jonq

    2015-04-01

    An SIS model based on the microscopic Markov-chain approximation is considered in this paper. It is assumed that individual vaccination behavior depends on contact awareness and on local and global information about an epidemic. To better simulate real situations, the vaccine failure rate is also taken into consideration. Our main conclusions are as follows. First, we show that if the vaccine failure rate α is zero, then the epidemic eventually dies out regardless of the network structure or of how large the effective spreading rate and the immunization response rates of an epidemic are. Second, we show that for any positive α, there exists a positive epidemic threshold depending on an adjusted network structure, which is determined only by the structure of the original network, the positive vaccine failure rate, and the immunization response rate for contact awareness. Moreover, the epidemic threshold increases with the strength of the immunization response rate for contact awareness. Finally, if the vaccine failure rate and the immunization response rate for contact awareness are positive, then there exists a critical vaccine failure rate αc > 0 such that the disease-free equilibrium (DFE) is stable (resp., unstable) if α < αc (resp., α > αc). Numerical simulations demonstrating the effectiveness of our theoretical results are also provided.
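    A toy mean-field version of such a model (an illustrative sketch, not the paper's microscopic Markov-chain formulation on a network; all rate values here are assumptions) shows the qualitative role of the vaccine failure rate α: with α = 0 the infection dies out, while a sufficiently large α can sustain an endemic state.

    ```python
    def simulate_sis_vaccination(beta=0.5, mu=0.2, v=0.1, alpha=0.0,
                                 i0=0.01, dt=0.05, steps=40_000):
        """Euler integration of a mean-field SIS model with vaccination.

        s, i, vac: susceptible, infected, and vaccinated fractions.
        beta: infection rate, mu: recovery rate, v: vaccination rate,
        alpha: vaccine failure rate (vaccinated revert to susceptible).
        """
        s, i, vac = 1.0 - i0, i0, 0.0
        for _ in range(steps):
            ds = -beta * s * i + mu * i - v * s + alpha * vac
            di = beta * s * i - mu * i
            dv = v * s - alpha * vac
            s, i, vac = s + dt * ds, i + dt * di, vac + dt * dv
        return s, i, vac
    ```

    With α = 0 the vaccinated compartment absorbs the susceptibles and the infected fraction decays to zero; with α = 0.2 vaccinated individuals keep flowing back to the susceptible pool and the infection persists, mirroring the threshold behavior the paper proves for networks.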

  7. Defense strategies for asymmetric networked systems under composite utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attack and reinforce individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, of which the previously studied sum-form and product-form utility functions are special cases. At Nash equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.

  8. Solving Component Structural Dynamic Failures Due to Extremely High Frequency Structural Response on the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Frady, Greg; Nesman, Thomas; Zoladz, Thomas; Szabo, Roland

    2010-01-01

    For many years, the ability to determine the root cause of component failures has been limited by the available analytical tools and the state of the art in data acquisition systems. With this limited capability, many anomalies have been resolved by adding material to the design to increase robustness, without the ability to determine whether the design solution was satisfactory until after a series of expensive test programs was complete. The risk of failure and of multiple design, test, and redesign cycles was high. During the Space Shuttle Program, many crack investigations in high-energy-density turbomachines, such as the SSME turbopumps, and in high-energy flows in the main propulsion system led to the discovery of numerous root-cause failures and anomalies arising from the coexistence of acoustic forcing functions, structural natural modes, and a high-energy excitation such as an edge tone or shedding flow. These investigations led the technical community to understand many of the primary contributors to extremely high frequency, high-cycle-fatigue fluid-structure interaction anomalies. The contributors have been identified using advanced analysis tools and verified during component ground tests, systems tests, and flight. The structural dynamics and fluid dynamics communities have developed a special sensitivity to fluid-structure interaction problems and have been able to solve them in a time-effective manner, meeting the budget and schedule deadlines of operational vehicle programs such as the Space Shuttle Program over the years.

  9. Sterilization failures in Singapore: an examination of ligation techniques and failure rates.

    PubMed

    Cheng, M C; Wong, Y M; Rochat, R W; Ratnam, S S

    1977-04-01

    The University Department of Obstetrics and Gynecology, Kandang Kerbau Hospital in Singapore, initiated a study in early 1974 of failure rates for various methods of sterilization and the factors responsible for the failures. During the period January 1974 to March 1976, 51 cases of first pregnancy following ligation were discovered. Cumulative failure rates at 24 months were 0.34 per 100 women for abdominal sterilization, 1.67 for culdoscopic, 3.12 for vaginal, and 4.49 for laparoscopic procedures. Findings for 35 patients who underwent religation showed that recanalization and the establishment of a fistulous opening caused the majority of failures. Clearly, more effective methods of tubal occlusion in sterilization are needed.

  10. EEMD-based wind turbine bearing failure detection using the generator stator current homopolar component

    NASA Astrophysics Data System (ADS)

    Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed

    2013-12-01

    Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms depend heavily on reducing operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems. This allows early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current, and highlights the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators in both stationary and non-stationary cases.
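    For context, the homopolar (zero-sequence) component referred to above is the average of the three stator phase currents: it is near zero for a healthy balanced machine, so persistent homopolar content is a candidate fault signature. A minimal sketch (the 50 Hz supply and the 7 Hz fault modulation are assumptions for illustration; EEMD would then be applied to decompose this signal, and is not shown here):

    ```python
    import numpy as np

    def homopolar(ia, ib, ic):
        """Zero-sequence (homopolar) component of three phase currents."""
        return (ia + ib + ic) / 3.0

    t = np.linspace(0.0, 0.1, 1000)
    w = 2 * np.pi * 50                        # assumed 50 Hz supply
    ia = np.sin(w * t)
    ib = np.sin(w * t - 2 * np.pi / 3)
    ic = np.sin(w * t + 2 * np.pi / 3)
    i0_healthy = homopolar(ia, ib, ic)        # balanced phases: ~0 everywhere

    # Hypothetical low-frequency modulation appearing in all three phases
    fault = 0.05 * np.sin(2 * np.pi * 7 * t)
    i0_faulty = homopolar(ia + fault, ib + fault, ic + fault)
    ```

    The balanced fundamental cancels in the sum, so the homopolar signal isolates exactly the common-mode content, which is why it is attractive as an input to a decomposition method like EEMD.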

  11. Methods And Systems For Analyzing The Degradation And Failure Of Mechanical Systems

    DOEpatents

    Jarrell, Donald B.; Sisk, Daniel R.; Hatley, Darrel D.; Kirihara, Leslie J.; Peters, Timothy J.

    2005-02-08

    Methods and systems for identifying, understanding, and predicting the degradation and failure of mechanical systems are disclosed. The methods include measuring and quantifying stressors that are responsible for the activation of degradation mechanisms in the machine component of interest. The intensity of the stressor may be correlated with the rate of physical degradation according to some determinable function such that a derivative relationship exists between the machine performance, degradation, and the underlying stressor. The derivative relationship may be used to make diagnostic and prognostic calculations concerning the performance and projected life of the machine. These calculations may be performed in real time to allow the machine operator to quickly adjust the operational parameters of the machinery in order to help minimize or eliminate the effects of the degradation mechanism, thereby prolonging the life of the machine. Various systems implementing the methods are also disclosed.
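    The derivative relationship described above can be sketched numerically: integrate a stressor-driven damage rate over an operating profile and project the time at which accumulated damage reaches a failure limit. This is an illustrative sketch, not the patented method; the power-law rate function and the stress profile are assumptions.

    ```python
    import numpy as np

    def remaining_life(stress, dt, rate_fn, limit=1.0):
        """Project time to failure by integrating a stressor-driven damage rate.

        rate_fn maps stressor intensity to degradation rate (the 'determinable
        function' relating stressor and physical degradation); failure is
        projected when accumulated damage reaches the limit.
        """
        damage = 0.0
        for k, s in enumerate(stress):
            damage += rate_fn(s) * dt
            if damage >= limit:
                return (k + 1) * dt    # projected failure time
        return None                    # limit not reached in this horizon

    rate_fn = lambda s: 1e-4 * s**2              # assumed power-law damage rate
    hours = np.arange(10_000)
    stress = np.where(hours < 5000, 1.0, 2.0)    # operator doubles the load at 5000 h
    t_fail = remaining_life(stress, 1.0, rate_fn)
    ```

    Because the damage rate is a function of the measured stressor, the same integration run in real time lets an operator see how a proposed change in operating parameters moves the projected failure time, which is the prognostic use the patent describes.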

  12. Heart Failure Update: Chronic Disease Management Programs.

    PubMed

    Fountain, Lorna B

    2016-03-01

    With high mortality and readmission rates among patients with heart failure (HF), multiple disease management models have been and continue to be tested, with mixed results. Early postdischarge care improves outcomes for patients. Telemonitoring also can assist in reducing mortality and HF-related hospitalizations. Office-based team care improves patient outcomes, with important components including rapid access to physicians, partnerships with clinical pharmacists, education, monitoring, and support. Pay-for-performance measures developed for HF, primarily use of angiotensin-converting enzyme inhibitors and beta blockers, also improve patient outcomes, but the influence of adherence to other measures has been minimal. Evaluating comorbid conditions, including diabetes and hypertension, and making drug adjustments for patients with HF to include blood pressure control and use of metformin, when possible, can reduce mortality and morbidity.

  13. Remote control of the industry processes. POWERLINK protocol application

    NASA Astrophysics Data System (ADS)

    Wóbel, A.; Paruzel, D.; Paszkiewicz, B.

    2017-08-01

    Present technological development enables the use of solutions characterized by a lower failure rate and greater working precision, allowing efficient, high-speed, and reliable production of individual components. The main scope of this article is the application of the POWERLINK protocol for communication with a B&R controller over Ethernet for recording process parameters. This enables control of the production cycle using an internal network connected to an industrial PC. Knowledge of the most important production parameters in real time allows a failure to be detected immediately after it occurs. For this purpose, a diagnostic station based on a B&R X20CP1301 controller was built to record measurement data such as the pressure and temperature on both sides of the valve and the torque required to change the valve setting. The use of the POWERLINK protocol allows status information to be transmitted every 200 μs.

  14. Strain gage system evaluation program

    NASA Technical Reports Server (NTRS)

    Dolleris, G. W.; Mazur, H. J.; Kokoszka, E., Jr.

    1978-01-01

    A program was conducted to determine the reliability of various strain gage systems when applied to rotating compressor blades in an aircraft gas turbine engine. A survey of current-technology strain gage systems was conducted to provide a basis for selecting candidate systems for evaluation. Testing and evaluation were conducted in an F100 engine. Sixty strain gage systems of seven different designs were installed on the first and third stages of an F100 engine fan. Nineteen strain gage failures occurred during 62 hours of engine operation, for a survival rate of 68 percent. Of the failures, 16 occurred at blade-to-disk leadwire jumps (84 percent), two at a leadwire splice (11 percent), and one at a gage splice (5 percent). Effects of erosion, temperature, G-loading, and stress levels are discussed. Results of a post-test analysis of the individual components of each strain gage system are presented.

  15. Research and Improvement on Characteristics of Emergency Diesel Generating Set Mechanical Support System in Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Zhe, Yang

    2017-06-01

    Mechanical problems often occur in the emergency power generation units of nuclear power plants and pose a significant threat to nuclear safety. By analyzing the factors that contribute to mechanical failure, the existing defects in the design of the mechanical support system are identified; this design approach has also misled maintenance and modification work in the field. In this paper, the base support design of the diesel generator set, the main pipe support design, and the support design of important components such as the supercharger are analyzed. Specific design flaws and shortcomings are pointed out, and targeted improvement programs are proposed. Through the implementation of these improvements, the vibration level of the unit and the mechanical failure rate are effectively reduced. The work also provides guidance for the future design, maintenance, and renovation of the mechanical support systems of diesel generators in nuclear power plants.

  16. Payload maintenance cost model for the space telescope

    NASA Technical Reports Server (NTRS)

    White, W. L.

    1980-01-01

    An optimum maintenance cost model for the space telescope over a fifteen-year mission cycle was developed. Various documents were reviewed, and subsequent updates of failure rates and configurations were made. The reliability of the space telescope at one year, two and one-half years, and five years was determined using the failure rates and configurations. The failure rates and configurations were also used in the maintenance simulation computer model, which simulates the failure patterns over the fifteen-year mission life of the space telescope. Cost algorithms associated with the maintenance options indicated by the failure patterns were developed and integrated into the model.
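    Under a constant-failure-rate (exponential) model, a common assumption in reliability calculations of this kind, the one-year, 2.5-year, and five-year reliabilities follow directly from the component failure rates; the component names and rates below are placeholders, not actual Space Telescope values:

    ```python
    import math

    # Series-system reliability under the exponential model: a component with
    # constant failure rate lam (per year) survives to time t with probability
    # exp(-lam * t), and the system survives only if every component does.
    failure_rates = {"gyro": 0.05, "battery": 0.02, "computer": 0.01}  # assumed

    def system_reliability(rates, t_years):
        return math.prod(math.exp(-lam * t_years) for lam in rates.values())

    for t in (1.0, 2.5, 5.0):
        print(f"R({t} yr) = {system_reliability(failure_rates, t):.4f}")
    ```

    Equivalently, the rates of a series system add, so the product above equals exp(-sum(rates) * t); a simulation model like the one the abstract describes would instead draw random failure times from these distributions and tally maintenance events.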

  17. Application of Single Crystal Failure Criteria: Theory and Turbine Blade Case Study

    NASA Technical Reports Server (NTRS)

    Sayyah, Tarek; Swanson, Gregory R.; Schonberg, W. P.

    1999-01-01

The orientation of the single crystal material within a structural component is known to affect the strength and life of the part. The first stage blade of the High Pressure Fuel Turbopump (HPFTP)/Alternative Turbopump Development (ATD) of the Space Shuttle Main Engine (SSME) was used to study the effects of secondary axis orientation angles on the failure rate of the blade. A new failure criterion was developed based on normal and shear strains on the primary crystallographic planes. The criterion was verified using low cycle fatigue (LCF) specimen data and a finite element model of the test specimens. The criterion was then used to study ATD/HPFTP first stage blade failure events. A detailed ANSYS finite element model of the blade was used to calculate the failure parameter for the different crystallographic orientations. A total of 297 cases were run to cover a wide range of acceptable orientations within the blade. Those orientations are related to the base crystallographic coordinate system that was created in the ANSYS finite element model. Contour plots of the criterion as a function of orientation for the blade tip and attachment were obtained. Results of the analysis revealed a 40% increase in the failure parameter due to changes in the primary and secondary axes of material orientation. A comparison between failure criterion predictions and actual engine test data was then conducted. The engine test data come from two ATD/HPFTP builds (units F3-4B and F6-5D), which were ground tested on the SSME at the Stennis Space Center in Mississippi. Both units experienced cracking of the airfoil tips in multiple blades, but only a few cracks grew all the way across the wall of the hollow core airfoil.

  18. [Differences between German and Turkish-speaking participants in a chronic heart failure management program].

    PubMed

    Ernstmann, N; Karbach, U

    2017-02-01

German and Turkish-speaking patients were recruited for a chronic heart failure management program. So far little is known about the special needs and characteristics of Turkish-speaking patients with chronic heart failure; therefore, the aim of this study was to examine sociodemographic and illness-related differences between German and Turkish-speaking patients with chronic heart failure. German and Turkish-speaking patients suffering from chronic heart failure, insured with the AOK Rheinland/Hamburg or the BARMER GEK health insurance companies and living in Cologne, Germany, were enrolled. Recruitment took place in hospitals, private practices and at information events. Components of the program were coordination of guideline-oriented medical care, telemonitoring (e.g., blood pressure, electrocardiogram, and weight), a 24-h information hotline, attendance by German and Turkish-speaking nurses and a patient education program. Data were collected by standardized interviews in German or Turkish. Data were analyzed with descriptive measures and tested for significant differences using Pearson's χ²-test and the t-test. A total of 465 patients (average age 71 years, 55% male and 33% Turkish-speaking) were enrolled in the care program during the study period. Significant differences between German and Turkish-speaking patients were found for age, education, employment status, comorbidities, risk perception, knowledge of heart failure and fear of loss of independence. The response rate was achieved with the help of specific measures for patient enrollment carried out by Turkish-speaking integration nurses. The differences between German and Turkish-speaking patients should in future be taken into account in the care of people with chronic heart failure.

  19. Mechanical loading of bovine pericardium accelerates enzymatic degradation.

    PubMed

    Ellsmere, J C; Khanna, R A; Lee, J M

    1999-06-01

Bioprosthetic heart valves fail as the result of two simultaneous processes: structural deterioration and calcification. Leaflet deterioration and perforation have been correlated with regions of highest stress in the tissue. The failures have long been assumed to be due to simple mechanical fatigue of the collagen fibre architecture; however, we have hypothesized that local stresses, and particularly dynamic stresses, accelerate local proteolysis, leading to tissue failure. This study addresses that hypothesis. Using a novel, custom-built microtensile culture system, strips of bovine pericardium were subjected to static and dynamic loads while being exposed to solutions of microbial collagenase or trypsin (a non-specific proteolytic enzyme). The time to extend to 30% strain (defined here as time to failure) was recorded. After failure, the percentage of collagen solubilized was calculated based on the amount of hydroxyproline present in solution. All data were analyzed by analysis of variance (ANOVA). In collagenase, exposure to static load significantly decreased the time to failure (P < 0.002) due to an increased mean rate of collagen solubilization. Importantly, specimens exposed to collagenase and dynamic load failed faster than those exposed to collagenase under the same average static load (P = 0.02). In trypsin, by contrast, static load never led to failure and produced only minimal degradation. Under dynamic load, however, specimens exposed to collagenase, trypsin, and even Tris/CaCl2 buffer solution all failed. Only samples exposed to Hanks' physiological solution did not fail. Failure of the specimens exposed to trypsin and Tris/CaCl2 suggests that the non-collagenous components and the calcium-dependent proteolytic enzymes present in pericardial tissue may play roles in the pathogenesis of bioprosthetic heart valve degeneration.

  20. Rate of change of heart size before congestive heart failure in dogs with mitral regurgitation.

    PubMed

    Lord, P; Hansson, K; Kvart, C; Häggström, J

    2010-04-01

    The objective of the study was to examine the changes in vertebral heart scale, and left atrial and ventricular dimensions before and at onset of congestive heart failure in cavalier King Charles spaniels with mitral regurgitation. Records and radiographs from 24 cavalier King Charles spaniels with mitral regurgitation were used. Vertebral heart scale (24 dogs), and left atrial dimension and left ventricular end diastolic and end systolic diameters (18 dogs) and their rate of increase were measured at intervals over years to the onset of congestive heart failure. They were plotted against time to onset of congestive heart failure. Dimensions and rates of change of all parameters were highest at onset of congestive heart failure, the difference between observed and chance outcome being highly significant using a two-tailed chi-square test (P<0.001). The left heart chambers increase in size rapidly only in the last year before the onset of congestive heart failure. Increasing left ventricular end systolic dimension is suggestive of myocardial failure before the onset of congestive heart failure. Rate of increase of heart dimensions may be a useful indicator of impending congestive heart failure.

  1. Heart failure and atrial fibrillation: current concepts and controversies.

    PubMed Central

    Van den Berg, M. P.; Tuinenburg, A. E.; Crijns, H. J.; Van Gelder, I. C.; Gosselink, A. T.; Lie, K. I.

    1997-01-01

    Heart failure and atrial fibrillation are very common, particularly in the elderly. Owing to common risk factors both disorders are often present in the same patient. In addition, there is increasing evidence of a complex, reciprocal relation between heart failure and atrial fibrillation. Thus heart failure may cause atrial fibrillation, with electromechanical feedback and neurohumoral activation playing an important mediating role. In addition, atrial fibrillation may promote heart failure; in particular, when there is an uncontrolled ventricular rate, tachycardiomyopathy may develop and thereby heart failure. Eventually, a vicious circle between heart failure and atrial fibrillation may form, in which neurohumoral activation and subtle derangement of rate control are involved. Treatment should aim at unloading of the heart, adequate control of ventricular rate, and correction of neurohumoral activation. Angiotensin converting enzyme inhibitors may help to achieve these goals. Treatment should also include an attempt to restore sinus rhythm through electrical cardioversion, though appropriate timing of cardioversion is difficult. His bundle ablation may be used to achieve adequate rate control in drug refractory cases. PMID:9155607

  2. Shielding of the Hip Prosthesis During Radiation Therapy for Heterotopic Ossification is Associated with Increased Failure of Prophylaxis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balboni, Tracy A.; Gaccione, Peter; Gobezie, Reuben

    2007-04-01

Purpose: Radiation therapy (RT) is frequently administered to prevent heterotopic ossification (HO) after total hip arthroplasty (THA). The purpose of this study was to determine if there is an increased risk of HO after RT prophylaxis with shielding of the THA components. Methods and Materials: This is a retrospective analysis of THA patients undergoing RT prophylaxis of HO at Brigham and Women's Hospital between June 1994 and February 2004. Univariate and multivariate logistic regressions were used to assess the relationships of all variables to failure of RT prophylaxis. Results: A total of 137 patients were identified and 84 were eligible for analysis (61%). The median RT dose was 750 cGy in one fraction, and the median follow-up was 24 months. Eight of 40 unshielded patients (20%) developed any progression of HO compared with 21 of 44 shielded patients (48%) (p = 0.009). Brooker Grade III-IV HO developed in 5% of unshielded and 18% of shielded patients (p = 0.08). Multivariate analysis revealed shielding (p = 0.02) and THA for prosthesis infection (p = 0.03) to be significant predictors of RT failure, with a trend toward an increasing risk of HO progression with age (p = 0.07). There was no significant difference in the prosthesis failure rates between shielded and unshielded patients. Conclusions: A significantly increased risk of failure of RT prophylaxis for HO was noted in those receiving shielding of the hip prosthesis. Shielding did not appear to reduce the risk of prosthesis failure.

  3. Catastrophic Fault Recovery with Self-Reconfigurable Chips

    NASA Technical Reports Server (NTRS)

    Zheng, Will Hua; Marzwell, Neville I.; Chau, Savio N.

    2006-01-01

    Mission critical systems typically employ multi-string redundancy to cope with possible hardware failure. Such systems are only as fault tolerant as there are many redundant strings. Once a particular critical component exhausts its redundant spares, the multi-string architecture cannot tolerate any further hardware failure. This paper aims at addressing such catastrophic faults through the use of 'Self-Reconfigurable Chips' as a last resort effort to 'repair' a faulty critical component.

  4. A Generic Modeling Process to Support Functional Fault Model Development

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  5. Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X

    NASA Astrophysics Data System (ADS)

    Suryono, M. A. E.; Rosyidi, C. N.

    2018-03-01

PT. X uses automated machines which operate for sixteen hours per day; the machines must therefore be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machines in the filling lithos process. In this research, RCM is used as an analysis tool to identify the critical component and to find the inspection frequency that maximizes the machine's reliability. The analysis shows that the critical machine in the filling lithos process is the laser machine in Line 2. We then determined the causes of the machine's failures. The Lastube component has the highest Risk Priority Number (RPN) among components such as the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences; the others have hidden-failure and safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The analysis shows that inspection of the laser machine must be performed once a month as preventive maintenance to lower downtime.
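The Risk Priority Number used to rank components above is conventionally computed, FMEA-style, as severity × occurrence × detection, each rated on a 1-10 scale. A minimal sketch of that ranking; the ratings below are hypothetical illustrations, not data from the study:

```python
# FMEA-style Risk Priority Number: RPN = severity * occurrence * detection.
# Component names follow the record; the (S, O, D) ratings are invented
# for illustration and are NOT the study's actual data.
components = {
    "Lastube":      (8, 6, 5),
    "power supply": (7, 3, 3),
    "chiller":      (6, 4, 4),
    "mirror galvo": (5, 2, 6),
}

rpn = {name: s * o * d for name, (s, o, d) in components.items()}
critical = max(rpn, key=rpn.get)

for name in sorted(rpn, key=rpn.get, reverse=True):
    print(f"{name:12s} RPN = {rpn[name]}")
print("highest-priority component:", critical)
```

The component with the highest RPN is the one maintenance tasks are targeted at first.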

  6. Effects of self-graphing and goal setting on the math fact fluency of students with disabilities.

    PubMed

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure.

  7. Computerized system for assessing heart rate variability.

    PubMed

    Frigy, A; Incze, A; Brânzaniuc, E; Cotoi, S

    1996-01-01

The principal theoretical, methodological and clinical aspects of heart rate variability (HRV) analysis are reviewed. This method has been developed over the last 10 years as a useful noninvasive method of measuring the activity of the autonomic nervous system. The main components and the functioning of the computerized rhythm-analyzer system developed by our team are presented. The system is able to perform short-term (maximum 20 minutes) time-domain HRV analysis and statistical analysis of the ventricular rate in any rhythm, particularly in atrial fibrillation. The performance of our system is demonstrated using the graphics (RR histograms, delta RR histograms, RR scattergrams) and the statistical parameters resulting from the processing of three ECG recordings. These recordings were obtained from a normal subject, from a patient with advanced heart failure, and from a patient with atrial fibrillation.
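Time-domain HRV analysis of the kind described above reduces a series of RR intervals to a few summary statistics. A minimal sketch using the standard SDNN and RMSSD measures; the RR values are synthetic, and the abstract does not specify exactly which parameters the system computes:

```python
import statistics

# Synthetic RR intervals in milliseconds; a real system would extract
# these from an ECG recording.
rr = [812, 798, 805, 840, 822, 795, 810, 830, 818, 801]

mean_rr = statistics.fmean(rr)
sdnn = statistics.stdev(rr)                   # overall variability (SDNN)
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5  # beat-to-beat variability
mean_hr = 60000.0 / mean_rr                   # mean heart rate, beats per minute

print(f"mean RR = {mean_rr:.1f} ms, HR = {mean_hr:.1f} bpm")
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```

The RR histograms and delta-RR histograms mentioned in the record are simply distributions of `rr` and `diffs`, respectively.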

  8. Acute effects of Finnish sauna and cold-water immersion on haemodynamic variables and autonomic nervous system activity in patients with heart failure.

    PubMed

    Radtke, Thomas; Poerschke, Daniel; Wilhelm, Matthias; Trachsel, Lukas D; Tschanz, Hansueli; Matter, Friederike; Jauslin, Daniel; Saner, Hugo; Schmid, Jean-Paul

    2016-04-01

    The haemodynamic response to Finnish sauna and subsequent cold-water immersion in heart failure patients is unknown. Haemodynamic response to two consecutive Finnish sauna (80℃) exposures, followed by a final head-out cold-water immersion (12℃) was measured in 37 male participants: chronic heart failure (n = 12, 61.8 ± 9.2 years), coronary artery disease (n = 13, 61.2 ± 10.6 years) and control subjects (n = 12, 60.9 ± 8.9 years). Cardiac output was measured non-invasively with an inert gas rebreathing method prior to and immediately after the first sauna exposure and after cold-water immersion, respectively. Blood pressure was measured before, twice during and after sauna. The autonomic nervous system was assessed by power spectral analysis of heart rate variability. Total power, low-frequency and high-frequency components were evaluated. The low frequency/high frequency ratio was used as a marker of sympathovagal balance. Sauna and cold-water immersion were well tolerated by all subjects. Cardiac output and heart rate significantly increased in all groups after sauna and cold-water immersion (p < 0.05), except for coronary artery disease patients after sauna exposure. Systolic blood pressure during sauna decreased significantly in all groups with a nadir after 6 min (all p < 0.05). Cold-water immersion significantly increased systolic blood pressure in all groups (p < 0.05). No change in the low/high frequency ratio was found in chronic heart failure patients. In coronary artery disease patients and controls a prolonged increase in low frequency/high frequency ratio was observed after the first sauna exposure. Acute exposure to Finnish sauna and cold-water immersion causes haemodynamic alterations in chronic heart failure patients similarly to control subjects and in particular did not provoke an excessive increase in adrenergic activity or complex arrhythmias. © The European Society of Cardiology 2015.

  9. Research on fault characteristics about switching component failures for distribution electronic power transformers

    NASA Astrophysics Data System (ADS)

    Sang, Z. X.; Huang, J. Q.; Yan, J.; Du, Z.; Xu, Q. S.; Lei, H.; Zhou, S. X.; Wang, S. C.

    2017-11-01

Protection is essential for power devices, especially those in the power grid, as their failure may cause great losses to society. This paper presents a study of the voltage and current abnormalities that arise in the power electronic devices of a Distribution Electronic Power Transformer (D-EPT) during failures of its switching components, together with the operational principles of the 10 kV rectifier, the 10 kV/400 V DC-DC converter and the 400 V inverter in the D-EPT. Based on the discussion of the effects of voltage and current distortion, the fault characteristics and a fault diagnosis method for the D-EPT are introduced.

  10. Kinetic balance and variational bounds failure in the solution of the Dirac equation in a finite Gaussian basis set

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Faegri, Knut, Jr.

    1990-01-01

    The paper investigates bounds failure in calculations using Gaussian basis sets for the solution of the one-electron Dirac equation for the 2p1/2 state of Hg(79+). It is shown that bounds failure indicates inadequacies in the basis set, both in terms of the exponent range and the number of functions. It is also shown that overrepresentation of the small component space may lead to unphysical results. It is concluded that it is important to use matched large and small component basis sets with an adequate size and exponent range.
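The matched large- and small-component basis requirement noted above is conventionally expressed through the kinetic balance condition. The sketch below is the standard textbook form from relativistic quantum chemistry, not a formula quoted from this abstract:

```latex
% Kinetic balance: each small-component basis function is generated from a
% large-component one. \boldsymbol{\sigma} are the Pauli matrices and
% \mathbf{p} is the momentum operator.
\begin{equation}
  \chi^{S}_{i} \;\propto\; (\boldsymbol{\sigma}\cdot\mathbf{p})\,\chi^{L}_{i}
\end{equation}
```

A small-component set generated this way stays matched to the large-component set, which is why an unmatched or overrepresented small-component space can produce the bounds failures and unphysical results the paper describes.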

  11. Insomnia: psychological and neurobiological aspects and non-pharmacological treatments.

    PubMed

    Molen, Yara Fleury; Carvalho, Luciane Bizari Coin; Prado, Lucila Bizari Fernandes do; Prado, Gilmar Fernandes do

    2014-01-01

Insomnia involves difficulty in falling asleep, maintaining sleep or having refreshing sleep. This review gathers the existing information seeking to explain insomnia, including work that focuses on psychological aspects and work considered neurobiological. Insomnia has been defined in psychological terms (cognitive components, such as worries and rumination, and behavioral aspects, such as classic conditioning) and physiological terms (increased metabolic rate, with increased muscle tone, heart rate and temperature). From the neurobiological point of view, there are two perspectives: one which proposes that insomnia occurs in association with a failure to inhibit wakefulness and another that considers hyperarousal as having an important role in the physiology of sleep. The non-pharmacological interventions developed to address different aspects of insomnia are presented.

  12. Efficient 3-D finite element failure analysis of compression loaded angle-ply plates with holes

    NASA Technical Reports Server (NTRS)

    Burns, S. W.; Herakovich, C. T.; Williams, J. G.

    1987-01-01

Finite element stress analysis and the tensor polynomial failure criterion predict that failure always initiates at the interface between layers on the hole edge for notched angle-ply laminates loaded in compression. The angular location of initial failure is a function of the fiber orientation in the laminate. The dominant stress components initiating failure are shear. It is shown that approximate symmetry can be used to reduce the computer resources required for the case of uniaxial loading.
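The tensor polynomial failure criterion mentioned above is commonly written in the Tsai-Wu form. The sketch below is the standard general expression in contracted notation, not notation taken from this abstract:

```latex
% Tsai-Wu tensor polynomial failure criterion (contracted notation,
% i, j = 1,...,6); failure is predicted when the left-hand side
% reaches unity.
\begin{equation}
  F_{i}\,\sigma_{i} + F_{ij}\,\sigma_{i}\,\sigma_{j} \ge 1
\end{equation}
```

The strength tensors \(F_i\) and \(F_{ij}\) are fitted from uniaxial tension, compression, and shear strength data for the lamina, so evaluating the polynomial at each finite element stress state locates where failure initiates.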

  13. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses produced by the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability). These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios, in terms of how those scenarios might impact the cyber physical infrastructure network with respect to CIA.

  14. High complication rate in reconstruction of Paprosky type IIIa acetabular defects using an oblong implant with modular side plates and a hook.

    PubMed

    Babis, G C; Sakellariou, V I; Chatziantoniou, A N; Soucacos, P N; Megas, P

    2011-12-01

We report the results of 62 hips in 62 patients (17 males, 45 females) with a mean age of 62.4 years (37 to 81) who underwent revision of the acetabular component of a total hip replacement due to aseptic loosening between May 2003 and November 2007. All hips had a Paprosky type IIIa acetabular defect. Acetabular revision was undertaken using a Procotyl E cementless oblong implant with modular side plates and a hook, combined with impaction allografting. At a mean follow-up of 60.5 months (36 to 94), with no patients lost to follow-up and one death due to unrelated illness, the complication rate was 38.7%. Complications included aseptic loosening (19 hips), deep infection (3 hips), a broken hook and side plate (one hip) and a femoral nerve palsy (one hip). Further revision of the acetabular component was required in 18 hips (29.0%) and a further four hips (6.4%) are currently loose and awaiting revision. We observed unacceptably high rates of complication and failure in our group of patients and cannot recommend this implant or technique.

  15. Parameters affecting the resilience of scale-free networks to random failures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran

    2005-09-01

It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have a minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
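The experiment described above can be sketched as a configuration-model simulation: build a power-law network with minimum degree 1, delete nodes at random, and measure the giant component among the survivors. The parameters below (network size, exponent, deletion probabilities) are illustrative, not those of the paper:

```python
import random
from collections import defaultdict, deque

random.seed(1)

def powerlaw_degrees(n, gamma=2.5, kmax=100):
    # Sample n degrees with P(k) proportional to k**-gamma, minimum degree 1.
    ks = list(range(1, kmax + 1))
    weights = [k ** -gamma for k in ks]
    degs = random.choices(ks, weights=weights, k=n)
    if sum(degs) % 2:          # total stub count must be even for pairing
        degs[0] += 1
    return degs

def configuration_model(degs):
    # Randomly pair edge stubs; self-loops are dropped for simplicity.
    stubs = [i for i, d in enumerate(degs) for _ in range(d)]
    random.shuffle(stubs)
    adj = defaultdict(set)
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def largest_component(adj, alive):
    # BFS over surviving nodes; return the size of the largest component.
    seen, best = set(), 0
    for start in alive:
        if start in seen or start not in adj:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

n = 20000
adj = configuration_model(powerlaw_degrees(n))
for p in (0.0, 0.5, 0.9):
    alive = {i for i in range(n) if random.random() >= p}
    frac = largest_component(adj, alive) / max(1, len(alive))
    print(f"deletion p={p}: giant component holds {frac:.0%} of survivors")
```

The fraction of survivors left in the giant component falls sharply with the deletion probability, which is the qualitative effect the paper quantifies.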

  16. The reliability of the pass/fail decision for assessments comprised of multiple components.

    PubMed

    Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana

    2015-01-01

    The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When "conjunctively" combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached - for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements.
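The effect of conjunctively combining components when each may be attempted several times can be illustrated with a toy Monte Carlo sketch. This is not the Douglas and Mislevy procedure itself; the cutoff, noise level, and ability values are hypothetical:

```python
import random

random.seed(42)

# Toy model: an examinee passes the subject only if all three components are
# passed, and each component may be attempted up to three times. Each attempt
# is a noisy measurement of the examinee's true ability.
def attempt(true_ability, cutoff=0.5, noise=0.15):
    # Pass a single attempt if the observed score clears the cutoff.
    return true_ability + random.gauss(0.0, noise) >= cutoff

def passes_component(true_ability, max_attempts=3):
    return any(attempt(true_ability) for _ in range(max_attempts))

def passes_subject(true_ability, n_components=3):
    return all(passes_component(true_ability) for _ in range(n_components))

trials = 20000
nonmaster_ability = 0.40   # below the cutoff: this examinee should fail
false_pass = sum(passes_subject(nonmaster_ability) for _ in range(trials)) / trials
print(f"non-master passes overall in {false_pass:.0%} of trials")
```

Because retakes let a below-cutoff examinee keep sampling from the noise, the overall false-pass rate is far higher than the single-attempt rate would suggest, which is the mechanism behind the low classification accuracy reported above.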

  17. The reliability of the pass/fail decision for assessments comprised of multiple components

    PubMed Central

    Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana

    2015-01-01

    Objective: The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When “conjunctively” combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg’s Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Results: Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. 
    For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts are relatively low, with κ=0.49 and κ=0.47 respectively, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements fail the overall assessment, while the other half are able to continue their studies despite deficient knowledge and skills. Conclusion: The method put forth by Douglas and Mislevy allows the analysis of decision accuracy and consistency for complex combinations of scores from different components. Even when the individual components are highly reliable, it does not necessarily follow that the overall pass/fail decision is reliable, for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees who do not fulfill the minimum requirements. PMID:26483855
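    The effect described above (conjunctive combination plus retakes letting examinees with deficient skills through) can be illustrated with a small Monte Carlo sketch. This is a simplification under assumed parameters (standard-normal latent ability, three parallel components with reliability 0.75, an arbitrary cutoff), not a re-implementation of the Douglas and Mislevy method:

```python
import math
import random

def false_pass_rate(n=20000, reliability=0.75, cutoff=-1.0, attempts=3, seed=1):
    """Fraction of true non-masters who nonetheless pass all three components,
    each of which may be attempted up to `attempts` times (1 try + 2 retakes).
    Observed component scores load on a latent ability with the given reliability."""
    rng = random.Random(seed)
    load = math.sqrt(reliability)           # correlation of score with ability
    noise = math.sqrt(1.0 - reliability)
    passed = total = 0
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)         # latent ability
        if theta >= cutoff:                 # true master: not of interest here
            continue
        total += 1
        if all(any(load * theta + noise * rng.gauss(0.0, 1.0) >= cutoff
                   for _try in range(attempts))
               for _component in range(3)):
            passed += 1
    return passed / total

with_retakes = false_pass_rate(attempts=3)  # two retakes allowed
single_try = false_pass_rate(attempts=1)    # no retakes
```

    With retakes, the simulated false-pass rate rises sharply, mirroring the paper's observation that repetition options let a large share of non-masters through (the exact figure depends on the assumed parameters).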

  18. Model analysis of the link between interest rates and crashes

    NASA Astrophysics Data System (ADS)

    Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft

    2016-09-01

    We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.

  19. The influence of mandibular skeletal characteristics on inferior alveolar nerve block anesthesia.

    PubMed

    You, Tae Min; Kim, Kee-Deog; Huh, Jisun; Woo, Eun-Jung; Park, Wonse

    2015-09-01

    The inferior alveolar nerve block (IANB) is the most common anesthetic technique in dentistry; however, its success rate is low. The purpose of this study was to determine the correlation between IANB failure and mandibular skeletal characteristics. In total, 693 cases of lower third molar extraction (n = 575 patients) were examined in this study. The ratio of the condylar and coronoid distances from the mandibular foramen (condyle-coronoid ratio [CC ratio]) was calculated, and the mandibular skeleton was then classified as normal, retrognathic, or prognathic. The correlation between IANB failure and sex, treatment side, and the CC ratio was assessed. The IANB failure rates for normal, retrognathic, and prognathic mandibles were 7.3%, 14.5%, and 9.5%, respectively, and the failure rate was highest among those with a CC ratio < 0.8 (severely retrognathic mandible). The failure rate was significantly higher in the retrognathic group than in the normal group (P = 0.019), and there was no statistically significant difference between the other two groups. IANB failure could be attributable, in part, to the skeletal characteristics of the mandible. In addition, the failure rate was found to be significantly higher in the retrognathic group.

  20. The influence of mandibular skeletal characteristics on inferior alveolar nerve block anesthesia

    PubMed Central

    You, Tae Min; Kim, Kee-Deog; Huh, Jisun; Woo, Eun-Jung

    2015-01-01

    Background: The inferior alveolar nerve block (IANB) is the most common anesthetic technique in dentistry; however, its success rate is low. The purpose of this study was to determine the correlation between IANB failure and mandibular skeletal characteristics. Methods: In total, 693 cases of lower third molar extraction (n = 575 patients) were examined in this study. The ratio of the condylar and coronoid distances from the mandibular foramen (condyle-coronoid ratio [CC ratio]) was calculated, and the mandibular skeleton was then classified as normal, retrognathic, or prognathic. The correlation between IANB failure and sex, treatment side, and the CC ratio was assessed. Results: The IANB failure rates for normal, retrognathic, and prognathic mandibles were 7.3%, 14.5%, and 9.5%, respectively, and the failure rate was highest among those with a CC ratio < 0.8 (severely retrognathic mandible). The failure rate was significantly higher in the retrognathic group than in the normal group (P = 0.019), and there was no statistically significant difference between the other two groups. Conclusions: IANB failure could be attributable, in part, to the skeletal characteristics of the mandible. In addition, the failure rate was found to be significantly higher in the retrognathic group. PMID:28879267

  1. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure, we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
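    A minimal numerical sketch of such a compartment model is shown below. The functional forms follow the hive-bee/forager structure described (brood eclosion that saturates with colony size, recruitment to foraging that slows when foragers are plentiful), but the parameter values and the death rates used are illustrative assumptions, not the paper's fitted values:

```python
def step(H, F, L=2000.0, w=27000.0, alpha=0.25, sigma=0.75, m=0.24, dt=0.1):
    """One Euler step for hive bees H and foragers F (illustrative parameters)."""
    N = H + F
    eclosion = L * N / (w + N)                       # brood raised into hive bees
    recruit = max(alpha - sigma * (F / N), 0.0) if N > 0 else 0.0
    dH = eclosion - recruit * H                      # hive bees mature to foraging
    dF = recruit * H - m * F                         # foragers die at rate m
    return max(H + dt * dH, 0.0), max(F + dt * dF, 0.0)

def colony_size(forager_death_rate, days=500):
    H, F = 12000.0, 4000.0
    for _ in range(int(days / 0.1)):
        H, F = step(H, F, m=forager_death_rate)
    return H + F

stable = colony_size(0.24)      # below the critical death rate: stabilises
collapsing = colony_size(0.60)  # above it: the colony dwindles toward failure
```

    With the low death rate the population settles at a positive equilibrium; with the high rate no positive equilibrium exists and the colony declines toward zero, which is the threshold behaviour the abstract describes.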

  2. The Threat of Uncertainty: Why Using Traditional Approaches for Evaluating Spacecraft Reliability are Insufficient for Future Human Mars Missions

    NASA Technical Reports Server (NTRS)

    Stromgren, Chel; Goodliff, Kandyce; Cirillo, William; Owens, Andrew

    2016-01-01

    Through the Evolvable Mars Campaign (EMC) study, the National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). A key aspect of these missions is the strategy that is employed to maintain and repair the spacecraft systems, ensuring that they continue to function and support the crew. Long duration missions beyond LEO present unique and severe maintainability challenges due to a variety of factors, including: limited to no opportunities for resupply, the distance from Earth, mass and volume constraints of spacecraft, high sensitivity of transportation element designs to variation in mass, the lack of abort opportunities to Earth, limited hardware heritage information, and the operation of human-rated systems in a radiation environment with little to no experience. The current approach to maintainability, as implemented on ISS, which includes a large number of spares pre-positioned on ISS, a larger supply sitting on Earth waiting to be flown to ISS, and on-demand delivery of logistics from Earth, is not feasible for future deep space human missions. For missions beyond LEO, significant modifications to the maintainability approach will be required. Through the EMC evaluations, several key findings related to the reliability and safety of the Mars spacecraft have been made. The nature of random and induced failures presents significant issues for deep space missions. Because spare parts cannot be flown as needed for Mars missions, all required spares must be flown with the mission or pre-positioned. These spares must cover all anticipated failure modes and provide a level of overall reliability and safety that is satisfactory for human missions. This will require that a large amount of mass and volume be dedicated to storage and transport of spares for the mission. 
Further, there is, and will continue to be, a significant amount of uncertainty regarding failure rates for spacecraft components. This uncertainty makes it much more difficult to anticipate failures and will potentially require an even larger amount of spares to provide an acceptable level of safety. Ultimately, the approach to maintenance and repair applied to ISS, focusing on the supply of spare parts, may not be tenable for deep space missions. Other approaches, such as commonality of components, simplification of systems, and in-situ manufacturing will be required.
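    The spares-sizing pressure that rate uncertainty creates can be illustrated with a standard Poisson sufficiency calculation. The rates and mixture weights below are hypothetical; the point is that spreading epistemic probability across candidate failure rates, even around the same mean, pushes the required spare count up:

```python
from math import exp

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    term = total = exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def spares_needed(rates, weights, target=0.99):
    """Smallest spare count s with P(demand <= s) >= target, where the
    Poisson demand rate is epistemically uncertain (a discrete mixture)."""
    s = 0
    while sum(w * poisson_cdf(s, r) for r, w in zip(rates, weights)) < target:
        s += 1
    return s

point_estimate = spares_needed([4.0], [1.0])                 # rate known exactly
with_uncertainty = spares_needed([2.0, 4.0, 6.0], [0.25, 0.5, 0.25])  # same mean, spread out
```

    Here the mixture has the same mean demand as the point estimate, yet reaching the 99% probability-of-sufficiency target requires more spares, which is the mass and volume penalty the text describes.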

  3. Control of Risks Through the Use of Procedures: A Method for Evaluating the Change in Risk

    NASA Technical Reports Server (NTRS)

    Praino, Gregory T.; Sharit, Joseph

    2010-01-01

    This paper considers how procedures can be used to control risks faced by an organization and proposes a means of recognizing if a particular procedure reduces risk or contributes to the organization's exposure. The proposed method was developed out of the review of work documents and the governing procedures performed in the wake of the Columbia accident by NASA and the Space Shuttle prime contractor, United Space Alliance, LLC. A technique was needed to understand the rules, or procedural controls, in place at the time in the context of how important the role of each rule was. The proposed method assesses procedural risks, the residual risk associated with a hazard after a procedure's influence is accounted for, by considering each clause of a procedure as a unique procedural control that may be beneficial or harmful. For procedural risks with consequences severe enough to threaten the survival of the organization, the method measures the characteristics of each risk on a scale that is an alternative to the traditional consequence/likelihood couple. The dual benefits of the substitute scales are that they eliminate both the need to quantify a relationship between different consequence types and the need for the extensive history a probabilistic risk assessment would require. Control Value is used as an analog for the consequence, where the value of a rule is based on how well the control reduces the severity of the consequence when operating successfully. This value is composed of two parts: the inevitability of the consequence in the absence of the control, and the opportunity to intervene before the consequence is realized. High value controls will be ones where there is minimal need for intervention but maximum opportunity to actively prevent the outcome. Failure Likelihood is used as the substitute for the conventional likelihood of the outcome. 
    For procedural controls, a failure is considered to be any non-malicious violation of the rule, whether intended or not. The model used for describing the Failure Likelihood considers how well a task was established by evaluating that task on five components. The components selected to define a well established task are: that it be defined, assigned to someone capable, that they be trained appropriately, that the actions be organized to enable proper completion and that some form of independent monitoring be performed. Validation of the method was based on the information provided by a group of experts in Space Shuttle ground processing when they were presented with 5 scenarios that identified a clause from a procedure. For each scenario, they recorded their perception of how important the associated rule was and how likely it was to fail. They then rated the components of Control Value and Failure Likelihood for all the scenarios. The order in which each reviewer ranked the scenarios' Control Value and Failure Likelihood was compared to the order in which they ranked the scenarios for each of the associated components; inevitability and opportunity for Control Value, and definition, assignment, training, organization and monitoring for Failure Likelihood. This order comparison showed how the components contribute to the relative ranking on each substitute risk element. With the relationship established for Space Shuttle ground processing, this method can be used to gauge whether the introduction or removal of a particular rule will increase or decrease the risk associated with the hazard it is intended to control.
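    The validation step compares the order in which reviewers ranked scenarios on a substitute scale with the order implied by its component ratings. The paper does not name a statistic for this comparison; a rank correlation such as Kendall's tau is one conventional choice, sketched here with hypothetical reviewer data:

```python
def kendall_tau(a, b):
    """Rank-order agreement between two score lists over the same scenarios."""
    assert len(a) == len(b)
    concordant = discordant = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1    # pair ordered the same way in both lists
            elif s < 0:
                discordant += 1    # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical data for five scenarios: a reviewer's overall Control Value
# rating vs. an aggregate of the component ratings (arbitrary scales).
overall = [4, 2, 5, 1, 3]
components = [3.5, 2.0, 4.5, 1.0, 2.5]
agreement = kendall_tau(overall, components)
```

    A tau near +1 indicates the component ratings reproduce the overall ranking; a value near 0 would suggest the components do not explain the substitute risk element.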

  4. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter resulting in the escape of emissions exceeding permissible limits and extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  5. [Endoprosthesis failure in the ankle joint : Histopathological diagnostics and classification].

    PubMed

    Müller, S; Walther, M; Röser, A; Krenn, V

    2017-03-01

    Endoprostheses of the ankle joint show higher revision rates, at 3.29 revisions per 100 component years. The aims of this study were the application and modification of the consensus classification of the synovia-like interface membrane (SLIM) for periprosthetic failure of the ankle joint, the etiological clarification of periprosthetic pseudocysts and a detailed measurement of proliferative activity (Ki67) in the region of osteolysis. Tissue samples from 159 patients were examined according to the criteria of the standardized consensus classification. Of these, 117 cases were derived from periprosthetic membranes of the ankle. The control group included 42 tissue specimens from the hip and knee joints. Particle identification and characterization were carried out using the particle algorithm. An immunohistochemical examination of Ki67 proliferation was performed in all cases of ankle pseudocysts and 19 control cases. The consensus classification of SLIM is transferable to endoprosthetic failure of the ankle joint. Periprosthetic pseudocysts with the histopathological characteristics of the appropriate SLIM subtype were detectable in 39 cases of ankle joint endoprostheses (33.3%). The mean value of the Ki67 index was 14% and showed an increased proliferation rate in periprosthetic pseudocysts of the ankle (p-value 0.02037). In periprosthetic pseudocysts an above-average detection rate of abrasion-induced type 1 SLIM (51.3%) with an increased Ki67 proliferation fraction (p-value 0.02037) was found, which can be interpreted as a locally destructive intraosseous synovialitis. This can be the reason for formation of pseudocystic osteolysis caused by high mechanical stress in ankle endoprostheses. A simplified diagnostic classification scoring system for dysfunctional endoprostheses of the ankle is proposed for collation of periprosthetic pseudocysts, ossifications and the Ki67 proliferation fraction.

  6. Usefulness of peak exercise oxygen consumption and the heart failure survival score to predict survival in patients >65 years of age with heart failure.

    PubMed

    Parikh, Mona N; Lund, Lars H; Goda, Ayumi; Mancini, Donna

    2009-04-01

    Peak exercise oxygen consumption (VO2) and the Heart Failure (HF) Survival Score (HFSS) were developed in middle-aged patient cohorts with HF referred for heart transplantation. The prognostic value of peak VO2 in patients >65 years has not been well studied. Accordingly, the prognostic value of peak VO2 was evaluated in these patients with HF. A retrospective analysis of 396 patients with HF aged >65 years who underwent cardiopulmonary exercise testing was performed. Peak VO2 and the components of the HFSS (presence of coronary artery disease, left ventricular ejection fraction, heart rate, mean arterial blood pressure, presence of intraventricular conduction defects, and serum sodium) were collected. Follow-up averaged 1,038 ± 983 days. Outcome events were defined as death, implantation of a left ventricular assist device, or urgent transplantation. Patients were divided into risk strata for peak VO2 and HFSS based on previous cut-off points. Survival curves were derived using Kaplan-Meier analysis and compared using log-rank analysis. Survival differed markedly by VO2 stratum (p <0.0001), with significantly better survival rates for the low- (>14 ml/kg/min) versus medium- (10 to 14 ml/kg/min), low- versus high- (<10 ml/kg/min), and medium- versus high-risk strata (all p <0.05). Survival also differed markedly by HFSS stratum (p <0.0001), with significantly better survival rates for the low- (≥8.10) versus medium- (7.20 to 8.09), low- versus high- (≤7.19), and medium- versus high-risk strata (all p <0.0001). In conclusion, peak VO2 and the HFSS were both excellent parameters to predict survival in patients >65 years with HF.
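    The risk strata and cut-off points quoted in the abstract translate directly into code. One boundary is an assumption: exactly 14 ml/kg/min is placed in the medium stratum, since the abstract gives "10 to 14" for medium and ">14" for low:

```python
def vo2_stratum(peak_vo2):
    """Risk stratum from peak VO2 (ml/kg/min), using the abstract's cut-offs."""
    if peak_vo2 > 14:
        return "low"       # low risk: >14 ml/kg/min
    if peak_vo2 >= 10:
        return "medium"    # medium risk: 10 to 14 ml/kg/min
    return "high"          # high risk: <10 ml/kg/min

def hfss_stratum(score):
    """Risk stratum from the Heart Failure Survival Score, abstract cut-offs."""
    if score >= 8.10:
        return "low"       # low risk: >=8.10
    if score >= 7.20:
        return "medium"    # medium risk: 7.20 to 8.09
    return "high"          # high risk: <=7.19
```

    For example, a patient with peak VO2 of 12 ml/kg/min and HFSS of 7.5 falls in the medium stratum on both scales.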

  7. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    PubMed

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
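    The game structure described (a survival probability term plus a component cost term, with the attacker and provider choosing how many components to attack or reinforce) can be sketched as a small discrete game. The ratio-form contest success function used below is a common choice in attack/defence models but is an illustrative stand-in, not the paper's sum-form, product-form, or composite utilities:

```python
from itertools import product

def nash_equilibria(n=10, ca=0.05, cd=0.05):
    """Brute-force pure-strategy Nash equilibria of a toy attack/reinforce game:
    x components attacked, y reinforced, survival chance P = y / (x + y)."""
    def P(x, y):
        return 1.0 if x == 0 else y / (x + y)    # an unattacked system survives
    def u_att(x, y):
        return (1.0 - P(x, y)) - ca * x          # damage benefit minus attack cost
    def u_def(x, y):
        return P(x, y) - cd * y                  # survival benefit minus reinforcement cost
    eqs = []
    for x, y in product(range(n + 1), repeat=2):
        # (x, y) is an equilibrium if neither side can gain by deviating alone
        if (u_att(x, y) >= max(u_att(i, y) for i in range(n + 1)) - 1e-12 and
                u_def(x, y) >= max(u_def(x, j) for j in range(n + 1)) - 1e-12):
            eqs.append((x, y))
    return eqs

equilibria = nash_equilibria()
```

    With symmetric unit costs the brute-force search finds an interior equilibrium where both sides commit the same effort, the discrete analogue of the first-order equilibrium conditions the paper derives analytically.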

  8. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    PubMed Central

    Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.

    2018-01-01

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588

  9. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution of the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas reliability can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
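    The inverted-S relationship between optimum weight and reliability p can be reproduced with a toy stress-strength model. The normal load and strength distributions and the linear weight-to-strength scaling below are illustrative assumptions, not the SDO formulation:

```python
from statistics import NormalDist

def reliability(weight, mu_load=100.0, sd_load=15.0, k=10.0, cov=0.10):
    """P(strength > load) for a member whose mean strength grows with weight."""
    mu_s = k * weight
    sd_s = cov * mu_s
    return NormalDist().cdf((mu_s - mu_load) / (sd_s**2 + sd_load**2) ** 0.5)

def weight_for(p, lo=0.1, hi=1e6):
    """Invert reliability(weight) = p by bisection (reliability is monotone in weight)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if reliability(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

w50, w90, w999 = weight_for(0.5), weight_for(0.9), weight_for(0.999)
```

    Weight grows slowly through the middle of the curve and steeply as p approaches 1, which is the inverted-S behaviour described: a near-zero failure rate demands a disproportionately heavy design.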

  10. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. 
The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
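    The three models named above are simple enough to state directly. The sketch below shows the zero-fail binomial case, its constant-failure-rate (exponential) approximation, series-system reliability, and the Poisson less-than-or-equal case; the numbers used are illustrative:

```python
from math import exp, factorial, prod

def binomial_zero_fail(p_fail, n):
    """P(zero failures in n independent demands): the zero-fail binomial case."""
    return (1.0 - p_fail) ** n

def constant_rate_zero_fail(lam, t):
    """Exponential ('constant failure rate') equivalent: P(no failure by time t)."""
    return exp(-lam * t)

def series_reliability(component_reliabilities):
    """A series system works only if every component works."""
    return prod(component_reliabilities)

def poisson_at_most(k, lam):
    """P(at most k failures): the Poisson less-than-or-equal case."""
    return sum(lam**i * exp(-lam) / factorial(i) for i in range(k + 1))
```

    For small per-demand failure probabilities, the zero-fail binomial and the exponential model agree closely, which is why the text says either model can be used.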

  11. Failure: A Source of Progress in Maintenance and Design

    NASA Astrophysics Data System (ADS)

    Chaïb, R.; Taleb, M.; Benidir, M.; Verzea, I.; Bellaouar, A.

    This approach allows using failure as a source of progress in maintenance and design: to detect the most critical components in equipment, to determine the priority order of maintenance actions to be carried out, and to direct the exploitation procedure towards the most penalizing links in this equipment, even defining the necessary changes and recommendations for future improvement. Thus one can appreciate the pathological behaviour of the material and increase its availability, even increase its lifespan and improve its future design. In this context and in the light of these points, failures are important in managing the maintenance function. Indeed, it has become important to understand the phenomena of failure and degradation of equipment in order to establish an appropriate maintenance policy for the rational use of mechanical components and to move to the practice of proactive maintenance [1] and maintenance at the design stage [2].

  12. Preventing blood transfusion failures: FMEA, an effective assessment method.

    PubMed

    Najafpour, Zhila; Hasoumi, Mojtaba; Behzadi, Faranak; Mohamadi, Efat; Jafary, Mohamadreza; Saeedi, Morteza

    2017-06-30

    Failure Mode and Effect Analysis (FMEA) is a method used to assess the risk of failures and harms to patients during the medical process and to identify the associated clinical issues. The aim of this study was to conduct an assessment of the blood transfusion process in a teaching general hospital, using FMEA as the method. A structured FMEA was carried out in our study, performed in 2014, and corrective actions were implemented and re-evaluated after 6 months. Sixteen 2-h sessions were held to perform FMEA on the blood transfusion process, including five steps: establishing the context, selecting team members, analysing the processes, hazard analysis, and developing a risk reduction protocol for blood transfusion. Failure modes with the highest risk priority numbers (RPNs) were identified. The overall RPN scores ranged from 5 to 100, among which four failure modes were associated with RPNs over 75. The data analysis indicated that the failures with the highest RPNs were: labelling (RPN: 100), transfusion of blood or the component (RPN: 100), patient identification (RPN: 80) and sampling (RPN: 75). The results demonstrated that mis-transfusion of blood or a blood component is the most important error, which can lead to serious morbidity or mortality. Provision of training to the personnel on blood transfusion, raising awareness of hazards and appropriate preventative measures, as well as developing standard safety guidelines are essential, and must be implemented during all steps of blood and blood component transfusion.
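    The RPN arithmetic behind the study's ranking is simple to sketch. The severity/occurrence/detection ratings below are hypothetical, chosen only so that the products match the RPNs reported (labelling 100, transfusion 100, patient identification 80, sampling 75):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of three 1-10 ratings."""
    return severity * occurrence * detection

# Hypothetical S/O/D ratings whose products reproduce the reported RPNs.
modes = {
    "labelling":              rpn(5, 5, 4),  # 100
    "transfusion of product": rpn(5, 4, 5),  # 100
    "patient identification": rpn(5, 4, 4),  # 80
    "sampling":               rpn(5, 3, 5),  # 75
}
ranked = sorted(modes.items(), key=lambda kv: kv[1], reverse=True)
```

    Sorting by RPN gives the priority order for corrective actions, which is how the study selected the four failure modes above 75 for intervention.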

  13. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. 
    High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
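    The trade-off between detail levels can be sketched as two function-failure matrices built from the same records. The records and vocabulary terms below are hypothetical stand-ins for the Bell 206 data, showing how detailed entries can still answer high-level queries by aggregation:

```python
from collections import Counter

# Hypothetical function-failure records: (function term, observed failure mode),
# encoded at the detailed level of a standardized functional vocabulary.
detailed_records = [
    ("transmit torque", "fatigue"), ("transmit torque", "fracture"),
    ("regulate flow", "blockage"), ("transmit signal", "short"),
    ("transmit signal", "fatigue"),
]

def matrix(records, level="detailed"):
    """Function-failure matrix; 'high' keeps only the function verb."""
    kb = Counter()
    for func, failure in records:
        key = func if level == "detailed" else func.split()[0]
        kb[(key, failure)] += 1
    return kb

kb_detail = matrix(detailed_records)
kb_high = matrix(detailed_records, level="high")
```

    The detailed matrix distinguishes "transmit torque" from "transmit signal", while the high-level matrix pools them under "transmit"; encoding at the detailed level preserves both views, which matches the paper's finding that detailed models yield the more robust knowledge base.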

  14. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  15. Forensic engineering: applying materials and mechanics principles to the investigation of product failures.

    PubMed

    Hainsworth, S V; Fitzpatrick, M E

    2007-06-01

    Forensic engineering is the application of engineering principles or techniques to the investigation of materials, products, structures or components that fail or do not perform as intended. In particular, forensic engineering can involve providing solutions to forensic problems by the application of engineering science. A criminal aspect may be involved in the investigation but often the problems are related to negligence, breach of contract, or providing information needed in the redesign of a product to eliminate future failures. Forensic engineering may include the investigation of the physical causes of accidents or other sources of claims and litigation (for example, patent disputes). It involves the preparation of technical engineering reports, and may require giving testimony and providing advice to assist in the resolution of disputes affecting life or property. This paper reviews the principal methods available for the analysis of failed components and then gives examples of different component failure modes through selected case studies.

  16. Reduction of water losses by rehabilitation of water distribution network.

    PubMed

    Güngör, Mahmud; Yarar, Ufuk; Firat, Mahmut

    2017-09-11

    Physical (real) losses are arguably the most important component of the water losses occurring in a water distribution network (WDN). The objective of this study is to examine the effects of pipe material management and network rehabilitation on physical water losses and water loss management in a WDN. To this end, the Denizli WDN, which consists of very old pipes that have exhausted their economic life, is selected as the study area. The age of the current network results in decreased pressure strength, increased failure intensity, and inefficient use of water resources, prompting the application of a rehabilitation program. In Denizli, network renewal works have been carried out since 2009 under this program. The failure rate in regions where network renewal has been completed decreased to zero. Renewal of pipe material minimizes leakage losses as well as the failure rate. Moreover, the rehabilitation has the potential to amortize itself in a very short time when the initial investment cost of network renewal is weighed against the operating costs of the old and new systems and the cost of water losses. As a result, renewal of pipe material in water distribution systems and enhancement of the system's physical properties provide significant benefits, such as increased water and energy efficiency and more effective use of resources.

  17. Hemostatic efficacy of EVARREST™, Fibrin Sealant Patch vs. TachoSil® in a heparinized swine spleen incision model.

    PubMed

    Matonick, John P; Hammond, Jeffrey

    2014-12-01

    First-generation single-component hemostats such as oxidized regenerated cellulose (ORC), fibrin, collagen, and gelatin have evolved into second and third generations of combination hemostats. This study compares two FDA approved products: EVARREST™, Fibrin Sealant Patch, a hemostat composed of a matrix of nonwoven polyglactin 910 embedded in ORC coated with human fibrinogen and thrombin, and TachoSil® medicated sponge, an equine collagen pad coated with human fibrinogen and thrombin. Swine were anticoagulated with heparin to three times their baseline activated clotting time and a 15 mm long × 3 mm deep incision was made to create a consistent moderate bleeding pattern. Test material was then applied to the wound site and compressed manually for 3 min with just enough pressure to prevent continued bleeding. Hemostatic effectiveness was evaluated at 3 min and 10 min. At 3 min, the hemostasis success rate was 86% in the EVARREST™ group and 0% in the TachoSil® group, p < .0001. The overall success rate at 10 min was 100% with EVARREST™ and 4% with TachoSil®, p < .0001. Adhesive failure, in which the test material did not stick to the tissue, occurred in 96% of TachoSil® sites. In contrast, 100% of the EVARREST™ applications adhered to the test site. EVARREST™, Fibrin Sealant Patch demonstrated greater wound adhesion and more effective hemostasis than TachoSil®. Adhesive failure was the primary failure mode for TachoSil® in this model.

  18. Risk factors for eye bank preparation failure of Descemet membrane endothelial keratoplasty tissue.

    PubMed

    Vianna, Lucas M M; Stoeger, Christopher G; Galloway, Joshua D; Terry, Mark; Cope, Leslie; Belfort, Rubens; Jun, Albert S

    2015-05-01

    To assess the results of a single eye bank preparing a high volume of Descemet membrane endothelial keratoplasty (DMEK) tissues using multiple technicians, to provide an overview of the experience, and to identify possible risk factors for DMEK preparation failure. Design: cross-sectional study. Setting: Lions VisionGift and the Wilmer Eye Institute at Johns Hopkins Hospital. Participants: all 563 corneal tissues processed by technicians at Lions VisionGift for DMEK between October 2011 and May 2014 inclusive. Tissues were divided into two groups, DMEK preparation success and DMEK preparation failure, and donor characteristics, including past medical history, were compared. The overall tissue preparation failure rate was 5.2%. Univariate analysis showed that diabetes mellitus (P = .000028) and its duration (P = .023), hypertension (P = .021), and hyperlipidemia or obesity (P = .0004) were more common in the failure group. Multivariate analysis showed that diabetes mellitus (P = .0001) and hyperlipidemia or obesity (P = .0142) were more common in the failure group. Elimination of tissues from donors with either diabetes or hyperlipidemia or obesity reduced the failure rate from 5.2% to 2.2%. Trends toward lower failure rates with increased technician experience were also found. Our work showed that tissues from donors with diabetes mellitus (especially with longer disease duration) and hyperlipidemia or obesity were associated with higher failure rates in DMEK preparation. In addition, our data may provide useful initial guidelines and benchmark values for eye banks seeking to establish and maintain DMEK programs.
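The univariate comparisons reported above are the kind of 2×2 risk-factor tests for which Fisher's exact test is commonly used; a minimal stdlib sketch is below. The counts are wholly hypothetical, since the study's raw tables are not given in the abstract, and the study's actual test choice is not stated.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]
    (e.g. rows = preparation failure/success, columns = diabetes yes/no).
    Sums probabilities of all tables as or less likely than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c

    def p_table(x):  # hypergeometric probability of top-left cell value x
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)

    p_obs = p_table(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    # Small tolerance guards against float round-off when comparing probabilities.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 10 of 29 failed tissues vs. 50 of 534 successes exposed.
p = fisher_exact_two_sided(10, 19, 50, 484)
print(round(p, 5))
```

For the sample sizes in this study an exact test is attractive because the failure group is small (about 29 tissues), where chi-square approximations become unreliable.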

  19. Quality of Life for Saudi Patients With Heart Failure: A Cross-Sectional Correlational Study.

    PubMed

    AbuRuz, Mohannad Eid; Alaloul, Fawwaz; Saifan, Ahmed; Masa'deh, Rami; Abusalem, Said

    2015-06-25

    Heart failure is a major public health issue and a growing concern in developing countries, including Saudi Arabia. Most related research was conducted in Western cultures and may have limited applicability for individuals in Saudi Arabia. Thus, this study assesses the quality of life of Saudi patients with heart failure. A cross-sectional correlational design was used on a convenience sample of 103 patients with heart failure. Data were collected using the Short Form-36 and the Medical Outcomes Study-Social Support Survey. Overall, patients' scores were low in all domains of quality of life. Mean (±SD) Physical Component Summary and Mental Component Summary scores were 36.7±12.4 and 48.8±6.5, respectively, indicating poor quality of life. Left ventricular ejection fraction was the strongest predictor of both physical and mental summaries. Identifying factors that impact quality of life for Saudi heart failure patients is important in identifying and meeting their physical and psychosocial needs.

  20. Modeling joint restoration strategies for interdependent infrastructure systems.

    PubMed

    Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models account for failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed that determines an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from infrastructure failures. The utility of the model is illustrated with a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
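As one concrete, much-simplified illustration of prioritizing repairs to minimize economic loss: if a single crew repairs independent components that each incur a constant loss rate until restored, the classical weighted-shortest-processing-time rule (sort by loss rate divided by repair time) minimizes total loss. This deliberately ignores the interdependencies the paper's model captures; all component names and values are hypothetical.

```python
def restoration_order(components):
    """Order repairs to minimize total economic loss, assuming one repair
    crew, independent components, and a constant loss rate per component
    until it is restored. Under these assumptions the WSPT rule
    (sort by loss_rate / repair_time, descending) is optimal."""
    return sorted(components,
                  key=lambda c: c["loss_rate"] / c["repair_time"],
                  reverse=True)

# Illustrative components (hypothetical repair times in hours, loss in $k/h).
components = [
    {"name": "water pump",  "repair_time": 8.0, "loss_rate": 2.0},
    {"name": "substation",  "repair_time": 4.0, "loss_rate": 5.0},
    {"name": "feeder line", "repair_time": 2.0, "loss_rate": 1.0},
]

plan = restoration_order(components)
print([c["name"] for c in plan])  # → ['substation', 'feeder line', 'water pump']
```

The paper's contribution is precisely that interdependencies (e.g. a pump that cannot run until its substation is back) break this independence assumption, which is why a joint optimization over systems is needed.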

  1. A Mixed Methods Explanatory Study of the Failure/Drop Rate for Freshman STEM Calculus Students

    ERIC Educational Resources Information Center

    Worthley, Mary

    2013-01-01

    In a national context of high failure rates in freshman calculus courses, the purpose of this study was to understand who is struggling, and why. High failure rates are especially alarming given a local environment where students have access to a variety of academic and personal assistance. The sample consists of students at Colorado State…

  2. Corrosion studies of titanium in borated water for TPX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.F.; Pawel, S.J.; DeVan, J.H.

    1995-12-31

    Corrosion testing was performed to demonstrate the compatibility of the titanium vacuum vessel with borated water. Borated water is proposed to fill the annulus of the double wall vacuum vessel to provide effective radiation shielding. Borating the water with 110 grams of boric acid per liter is sufficient to reduce the nuclear heating in the Toroidal Field Coil set and limit the activation of components external to the vacuum vessel. Constant extension rate tensile (CERT) and electrochemical potentiodynamic tests were performed. Results of the CERT tests confirm that stress corrosion cracking is not significant for Ti-6Al-4V or Ti-3Al-2.5V. Welded and unwelded specimens were tested in air and in borated water at 150°C. Strength, elongation, and time to failure were nearly identical for all test conditions, and all the samples exhibited ductile failure. Potentiodynamic tests on Ti-6Al-4V and Ti in borated water as a function of temperature showed low corrosion rates over a wide passive potential range. Further, this passivity appeared stable to anodic potentials substantially greater than those expected from MHD effects.

  3. Training of residents in laparoscopic tubal sterilization: Long-term failure rates

    PubMed Central

    Rackow, Beth W.; Rhee, Maria C.; Taylor, Hugh S.

    2011-01-01

    Objectives Laparoscopic tubal sterilization with bipolar coagulation is a common and effective method of contraception, and a procedure much used to teach laparoscopic surgical skills to Obstetrics and Gynaecology residents (trainees); but it has an inherent risk of failure. This study investigated the long-term failure rate of this procedure when performed by Obstetrics and Gynaecology residents on women treated in their teaching clinics. Methods From 1991 to 1994, Obstetrics and Gynaecology residents carried out 386 laparoscopic tubal sterilizations with bipolar coagulation at Yale-New Haven Hospital. Six to nine years after the procedure, the women concerned were contacted by telephone and data were collected about sterilization failure. Results Two failures of laparoscopic tubal sterilization with bipolar coagulation were identified: an ectopic pregnancy and a spontaneous abortion. For this time period, the long-term sterilization failure rate was 1.9% (0–4.4%). Conclusions The long-term sterilization failure rate for laparoscopic tubal sterilization with bipolar coagulation performed by residents is comparable to the results of prior studies. These findings can be used to properly counsel women at a teaching clinic about the risks of sterilization failure with this procedure, and attest to the adequacy of residents’ training and supervision. PMID:18465476
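The parenthetical range reported with the 1.9% failure rate is consistent with a confidence interval around a small number of failures. A hedged sketch of a normal-approximation interval, clipped at zero, is shown below; the respondent count used is a hypothetical value, and the study's actual interval method is not stated in the abstract.

```python
import math

def failure_rate_ci(failures, n, z=1.96):
    """Normal-approximation 95% confidence interval for a failure
    proportion, with the lower bound clipped at 0. Illustrative only:
    exact (Clopper-Pearson) or Wilson intervals are preferred when
    the failure count is this small."""
    p = failures / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), p + half

# Hypothetical cohort: 2 observed failures among 104 contacted women.
p, lo, hi = failure_rate_ci(2, 104)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With two failures, the normal approximation is at the edge of its validity; the clipping at zero is itself a symptom of that, which is why exact binomial methods are the safer default for rare-event rates.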

  4. Cerclage wires or cables for the management of intraoperative fracture associated with a cementless, tapered femoral prosthesis: results at 2 to 16 years.

    PubMed

    Berend, Keith R; Lombardi, Adolph V; Mallory, Thomas H; Chonko, Douglas J; Dodds, Kathleen L; Adams, Joanne B

    2004-10-01

    Initial stability is critical for fixation and survival of cementless total hip arthroplasty. Occasionally, a split of the calcar occurs intraoperatively. A review of 1,320 primary total hip arthroplasties with 2-year follow-up, performed between August 1985 and February 2001 using the Mallory-Head Porous tapered femoral component, revealed 58 hips in 55 patients with an intraoperative calcar fracture managed with single or multiple cerclage wires or cables and immediate full weight bearing. At 7.5 years average follow-up (range, 2-16 years), there were no revisions of the femoral component, radiographic failures, or patients with severe thigh pain, for a stem survival rate of 100%. Average Harris hip score improvement was 33.8 points. Fracture of the proximal femur occurs in approximately 4% of primary THAs using the Mallory-Head Porous femoral component. When managed intraoperatively with cerclage wire or cable, the mid- to long-term results appear unaffected with 100% femoral component survival at up to 16 years.

  5. Sample features associated with success rates in population-based EGFR mutation testing.

    PubMed

    Shiau, Carolyn J; Babwah, Jesse P; da Cunha Santos, Gilda; Sykes, Jenna R; Boerner, Scott L; Geddie, William R; Leighl, Natasha B; Wei, Cuihong; Kamel-Reid, Suzanne; Hwang, David M; Tsao, Ming-Sound

    2014-07-01

    Epidermal growth factor receptor (EGFR) mutation testing has become critical in the treatment of patients with advanced non-small-cell lung cancer. This study examines a large, epidemiologically unselected series of EGFR mutation tests for patients with nonsquamous non-small-cell lung cancer in a North American population, undertaken to determine sample-related factors that influence success in clinical EGFR testing. Data from consecutive cases of Canadian province-wide testing at a centralized diagnostic laboratory for a 24-month period were reviewed. Samples were tested for exon-19 deletion and exon-21 L858R mutations using a validated polymerase chain reaction method with 1% to 5% detection sensitivity. From 2651 samples submitted, 2404 samples were tested with 2293 samples eligible for analysis (1780 histology and 513 cytology specimens). The overall test-failure rate was 5.4% with an overall mutation rate of 20.6%. No significant differences in the failure rate, mutation rate, or mutation type were found between histology and cytology samples. Although tumor cellularity was significantly associated with test-success or mutation rates in histology and cytology specimens, respectively, mutations could be detected in all specimen types. Significant rates of EGFR mutation were detected in cases with thyroid transcription factor (TTF)-1-negative immunohistochemistry (6.7%) and a mucinous component (9.0%). EGFR mutation testing should be attempted in any specimen, whether histologic or cytologic. Samples should not be excluded from testing based on TTF-1 status or histologic features. Pathologists should report the amount of available tumor for testing. However, suboptimal samples with a negative EGFR mutation result should be considered for repeat testing with an alternate sample.

  6. Analysis of failed and nickel-coated 3093 beam clamp components at the East Tennessee Technology Park (ETTP).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, D.; Pappacena, K.; Gaviria, J.

    2010-10-11

    The U.S. Department of Energy and its contractor, Bechtel Jacobs Company (BJC), are undertaking a major effort to clean up the former gaseous diffusion facility (K-25) located in Oak Ridge, TN. The decontamination and decommissioning activities require systematic removal of contaminated equipment and machinery followed by demolition of the buildings. As part of the cleanup activities, a beam clamp, used for horizontal lifelines (HLLs) for fall protection, was discovered to be fractured during routine inspection. The beam clamp (yoke and D-ring) was a component in the HLL system purchased from Reliance Industries LLC. Specifically, the U-shaped stainless steel yoke of the beam clamp failed in a brittle mode at less than 10% of the rated design capacity of 14,500 lb. The beam clamp had been in service for approximately 16 months. Bechtel Jacobs approached Argonne National Laboratory to assist in identifying the root cause of the failure of the beam clamp. The objectives of this study were to (1) review the prior reports and documents on the subject, (2) understand the possible failure mechanism(s) that resulted in the failed beam clamp components, (3) recommend approaches to mitigate the failure mechanism(s), and (4) evaluate the modified beam clamp assemblies. Energy dispersive x-ray analysis and chemical analysis of the corrosion products on the failed yoke and white residue on an in-service yoke indicated the presence of zinc, sulfur, and calcium. Analysis of rainwater in the complex, as conducted by BJC, indicated the presence of sulfur and calcium. It was concluded that, as a result of galvanic corrosion, zinc from the galvanized components of the beam clamp assembly (D-ring) migrated to the corroded region in the presence of the rainwater. Under mechanical stress, the corrosion process would have accelerated, resulting in the catastrophic failure of the yoke.
As suggested by Bechtel Jacobs personnel, hydrogen embrittlement as a consequence of corrosion was also explored as a failure mechanism. Corroded and failed yoke samples had hydrogen concentrations of 20-60 ppm. However, the hydrogen content reduced to 4-11 ppm (similar to baseline as-received yoke samples) when the corrosion products were polished off. The hydrogen content in the scraped-off corrosion product powders was >7000 ppm. These results indicate that hydrogen is primarily present in the corrosion products and not in the underlying steel. Rockwell hardness values on the corroded yoke and D-rings were Rc ≈ 41-46. It was recommended to the beam clamp manufacturer that the beam clamp components be annealed to reduce the hardness values so that they are less susceptible to brittle failure. Upon annealing, hardness values of the beam clamp components reduced to Rc ≈ 25. Several strategies were recommended and put in place to mitigate failure of the beam clamp components: (a) maintain hardness levels of both yokes and D-rings at Rc < 35, (b) coat the yoke and D-rings with a dual coating of nickel (with 10% phosphorus) to delay corrosion and aluminum to prevent galvanic corrosion since it is more anodic than zinc, and (c) optimize coating thicknesses for nickel and aluminum while maintaining the physical integrity of the coatings. Evaluation of the Al- and Ni-coated yoke and D-ring specimens indicated they appear to have met the recommendations. Average hardness values of the dual-coated yokes were Rc ≈ 25-35. Hardness values of the dual-coated D-ring were Rc ≈ 32. Measured average coating thicknesses for the aluminum and nickel coatings on yoke samples were 22 µm (0.9 mils) and 80 µm (3 mils), respectively. The D-rings showed similar coating thicknesses. Microscopic examination showed that the aluminum coating was well bonded to the underlying nickel coating.
Some observed damage was believed to be an artifact of the cutting-and-polishing steps during sample preparation for microscopy.

  7. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
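The time-discretized damage rule described above can be sketched as follows. The power-law rupture model and every constant here are hypothetical stand-ins for the modified Monkman-Grant criterion and the material data CARES/CREEP would actually use; only the accumulation logic mirrors the method described.

```python
def rupture_time(stress_mpa, c=1.0e6, n=2.0):
    """Hypothetical power-law creep-rupture model, t_r = C / sigma**n (hours).
    A stand-in for the modified Monkman-Grant criterion with real material data."""
    return c / stress_mpa ** n

def creep_life(stress_history, dt=12.5):
    """Discretize the life into steps of dt hours; stress (and hence the damage
    rate) is held constant within each step, as in the method described.
    Failure occurs when normalized accumulated damage reaches unity."""
    damage, t = 0.0, 0.0
    for stress in stress_history:
        damage += dt / rupture_time(stress)  # fraction of rupture life consumed
        t += dt
        if damage >= 1.0:
            return t, damage  # creep rupture life, damage at failure
    return None, damage  # component survives the analyzed history

# Hypothetical relaxing stress history: high initial stress, then relaxation.
history = [100.0] * 500 + [50.0] * 2000
life, damage = creep_life(history)
print(life, damage)  # → 100.0 1.0
```

In the real program the per-step stresses come from a finite element solution at each point of the component, and the life is governed by the worst point; this sketch tracks a single material point.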

  8. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.

  9. Environmental stress-corrosion cracking of fiberglass: lessons learned from failures in the chemical industry.

    PubMed

    Myers, T J; Kytömaa, H K; Smith, T R

    2007-04-11

    Fiberglass reinforced plastic (FRP) composite materials are often used to construct tanks, piping, scrubbers, beams, grating, and other components for use in corrosive environments. While FRP typically offers superior and cost effective corrosion resistance relative to other construction materials, the glass fibers traditionally used to provide the structural strength of the FRP can be susceptible to attack by the corrosive environment. The structural integrity of traditional FRP components in corrosive environments is usually dependent on the integrity of a corrosion-resistant barrier, such as a resin-rich layer containing corrosion resistant glass fibers. Without adequate protection, FRP components can fail under loads well below their design values through an environmental stress-corrosion cracking (ESCC) mechanism when simultaneously exposed to mechanical stress and a corrosive chemical environment. Failure of these components can result in significant releases of hazardous substances into plants and the environment. In this paper, we present two case studies where fiberglass components failed due to ESCC at small chemical manufacturing facilities. As is often typical, the small chemical manufacturing facilities relied largely on FRP component suppliers to determine materials appropriate for the specific process environment and to repair damaged in-service components. We discuss the lessons learned from these incidents and precautions companies should take when interfacing with suppliers and other parties during the specification, design, construction, and repair of FRP components in order to prevent similar failures and chemical releases from occurring in the future.

  10. An analysis of the value of spermicides in contraception.

    PubMed

    1979-11-01

    Development of the so-called modern methods of contraception has somewhat eclipsed interest in traditional methods. However, spermicides are still important for many couples and their use appears to be increasing. A brief history of the use of and research into spermicidal contraceptives is presented. The limitations of spermicides are the necessity for use at the time of intercourse and their high failure rate. Estimates of the failure rates of spermicides have ranged from 0.3 pregnancies per 100 woman-years of use to nearly 40, depending on the product used and the population tested. Just as their use depends on various social factors, so does their failure rate. Characteristics of the user determine failure rates. Motivation is important in lowering failure rates, as are education, the intracouple relationship, and previous experience with spermicides. Method failure is also caused by defects in the product, either in the active ingredient of the spermicide or in the base carrier. The main advantage of spermicidal contraception is its safety. Limited research is currently being conducted on spermicides. Areas for improvement in existing spermicides and areas for possible innovation are mentioned.

  11. An application of artificial intelligence theory to reconfigurable flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.

    1987-01-01

    Artificial intelligence techniques were used, along with statistical hypothesis testing and modern control theory, to help the pilot cope with the issues of information, knowledge, and capability in the event of a failure. An intelligent flight control system is being developed which utilizes knowledge of cause and effect relationships between all aircraft components. It will screen the information available to the pilot, supplement his knowledge, and most importantly, utilize the remaining flight capability of the aircraft following a failure. The list of failure types the control system will accommodate includes sensor failures, actuator failures, and structural failures.

  12. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    NASA Astrophysics Data System (ADS)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-06-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated in software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculation, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed in the component for reduction and removal of rib material to shift the center of gravity and centroid, using System C for mixed-level simulation and faster topological changes. The design process in System C can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.

  13. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    NASA Astrophysics Data System (ADS)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-02-01

    The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated in software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculation, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed in the component for reduction and removal of rib material to shift the center of gravity and centroid, using System C for mixed-level simulation and faster topological changes. The design process in System C can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.

  14. Influence of enamel preservation on failure rates of porcelain laminate veneers.

    PubMed

    Gurel, Galip; Sesma, Newton; Calamita, Marcelo A; Coachman, Christian; Morimoto, Susana

    2013-01-01

    The purpose of this study was to evaluate the failure rates of porcelain laminate veneers (PLVs) and the influence of clinical parameters on these rates in a retrospective survey of up to 12 years. Five hundred eighty laminate veneers were bonded in 66 patients. The following parameters were analyzed: type of preparation (depth and margin), crown lengthening, presence of restoration, diastema, crowding, discoloration, abrasion, and attrition. Survival was analyzed using the Kaplan-Meier method. Cox regression modeling was used to determine which factors would predict PLV failure. Forty-two veneers (7.2%) failed in 23 patients, and an overall cumulative survival rate of 86% was observed. A statistically significant association was noted between failure and the limits of the prepared tooth surface (margin and depth). The most frequent failure type was fracture (n = 20). The results revealed no significant influence of crown lengthening apically, presence of restoration, diastema, discoloration, abrasion, or attrition on failure rates. Multivariable analysis (Cox regression model) also showed that PLVs bonded to dentin and teeth with preparation margins in dentin were approximately 10 times more likely to fail than PLVs bonded to enamel. Moreover, coronal crown lengthening increased the risk of PLV failure by 2.3 times. A survival rate of 99% was observed for veneers with preparations confined to enamel and 94% for veneers with enamel only at the margins. Laminate veneers have high survival rates when bonded to enamel and provide a safe and predictable treatment option that preserves tooth structure.
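Survival rates like those above are computed with the Kaplan-Meier method named in the abstract; a self-contained sketch of the estimator follows, using hypothetical follow-up data rather than the study's veneer records.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: follow-up durations (years);
    events: True for an observed failure, False for a censored observation.
    Returns the step curve as (time, S(t)) pairs at each failure time.
    Censored observations at a failure time are removed after that time's
    deaths are counted, per the usual convention."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, s, curve = n, 1.0, []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = removed = 0
        while i < n and times[order[i]] == t:  # group ties at the same time
            if events[order[i]]:
                deaths += 1
            removed += 1
            i += 1
        if deaths:
            s *= (at_risk - deaths) / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve

# 10 hypothetical veneers: follow-up in years, True = failure, False = censored.
times = [1, 2, 2, 3, 4, 5, 6, 7, 8, 9]
events = [True, True, False, False, True, False, False, False, False, False]
print(kaplan_meier(times, events))
```

Censoring is the reason a plain failure fraction would understate risk here: veneers followed for only a few years have not yet had the chance to fail, and the Kaplan-Meier product-limit construction accounts for that shrinking at-risk set.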

  15. Revisiting the role of durable polymers in cardiovascular devices.

    PubMed

    Mori, Hiroyoshi; Otsuka, Fumiyuki; Gupta, Anuj; Jinnouchi, Hiroyuki; Torii, Sho; Harari, Emanuel; Virmani, Renu; Finn, Aloke V

    2017-11-01

    Polymers are an essential component of drug-eluting stents (DES), used to control drug release, but remain the most controversial component of DES technology. There are two types of polymers employed in DES: durable-polymer DES (DP-DES) and biodegradable-polymer DES (BP-DES). First-generation DES were exclusively durable-polymer devices and demonstrated increased rates of late stent failure due in part to poor polymer biocompatibility. Newer-generation DES use more biocompatible durable polymers or biodegradable polymers. Areas covered: We cover issues identified with first-generation DP-DES, areas of success and failure in second-generation DP-DES, and the promise and shortcomings of BP-DES. Briefly, fluorinated polymers used in second-generation DP-DES have excellent anti-thrombogenicity and better biocompatibility than first-generation DES polymers. However, these devices lead to persistent drug exposure of the endothelium, which impairs endothelial function and predisposes toward neoatherosclerosis. Meanwhile, BP-DES shorten the duration of drug exposure, which may benefit endothelial functional recovery and lead to less neoatherosclerosis. However, it remains uncertain whether the long-term biocompatibility of bare metal surfaces is better than that of polymer-coated metals. Expert commentary: Each technology has distinct advantages, which can be optimized depending upon the particular characteristics of the patient being treated.

  16. Qualification and issues with space flight laser systems and components

    NASA Astrophysics Data System (ADS)

    Ott, Melanie N.; Coyle, D. B.; Canham, John S.; Leidecker, Henning W., Jr.

    2006-02-01

    The art of flight-quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high-power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990s, there has been a flurry of activity from other manufacturers, but little effort focused on flight-quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here are lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.

  17. Qualification and Issues with Space Flight Laser Systems and Components

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.; Coyle, D. Barry; Canham, John S.; Leidecker, Henning W.

    2006-01-01

    The art of flight-quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high-power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990s, there has been a flurry of activity from other manufacturers, but little effort focused on flight-quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here are lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.

  19. Complications of short versus long cephalomedullary nail for intertrochanteric femur fractures, minimum 1 year follow-up.

    PubMed

    Vaughn, Josh; Cohen, Eric; Vopat, Bryan G; Kane, Patrick; Abbood, Emily; Born, Christopher

    2015-05-01

    Hip fractures are becoming increasingly common, resulting in significant morbidity, mortality, and rising healthcare costs. Both short and long cephalomedullary devices are currently employed to treat intertrochanteric hip fractures. However, which device is optimal continues to be debated, as each implant has unique characteristics and theoretical advantages. This study sought to identify rates of complications associated with both long and short cephalomedullary nails for the treatment of intertrochanteric hip fractures. We retrospectively reviewed charts from 2006 to 2011 and identified 256 patients with AO class 31.1-32.3 fractures. Sixty were treated with short nails and 196 with long nails. Radiographs and charts were then analysed for failures and hardware complications. Catastrophic failure and hardware complication rates were not statistically different between short and long cephalomedullary nails. The overall catastrophic failure rate was 3.1%; there was a 5% failure rate in the short-nail group compared with a 2.6% failure rate in the long-nail group (p = 0.191). There was a 3.33% secondary femur fracture rate in the short-nail group, compared with none in the long-nail cohort (p = 0.054). The rate of proximal fixation failure was 1.67% in the short-nail group and 2.0% in the long-nail group (p = 0.406). Our data suggest equivalent outcomes, as measured by similar catastrophic failure rates, between short and long cephalomedullary nails for intertrochanteric femur fractures. However, there was an increased risk of secondary femur fracture with short cephalomedullary nails compared with long nails that approached statistical significance.
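    The near-significant secondary-fracture comparison (2/60 short nails vs 0/196 long nails, p = 0.054) can be reproduced with a one-sided Fisher exact test; whether the authors used this exact test is an assumption:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    probability of observing >= a events in group 1 given the margins."""
    n1, n2 = a + b, c + d      # group sizes
    k = a + c                  # total events across both groups
    N = n1 + n2
    p = 0.0
    for x in range(a, min(k, n1) + 1):
        # hypergeometric probability of exactly x events in group 1
        p += comb(n1, x) * comb(n2, k - x) / comb(N, k)
    return p

# Secondary femur fractures: 2 of 60 short nails vs 0 of 196 long nails
p = fisher_one_sided(2, 58, 0, 196)
print(f"one-sided p = {p:.3f}")
```

    The computed one-sided p of about 0.054 matches the value reported in the abstract.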

  20. X-33 LH2 Tank Failure Investigation Findings

    NASA Technical Reports Server (NTRS)

    Niedermeyer, Mindy; Clinton, R. G., Jr. (Technical Monitor)

    2000-01-01

    This presentation covers the tank history, test objectives, failure description, investigation, and conclusions. The test objectives included verifying structural integrity at 105% of the expected flight load limit while varying the following parameters: cryogenic temperature, internal pressure, and mechanical loading. The failure description covers the structural component of the aft body, the quad-lobe design, and the sandwich honeycomb graphite-epoxy construction.

  1. Microscopic observations during longitudinal compression loading of single pulp fibers

    Treesearch

    Irving B. Sachs

    1986-01-01

    Paperboard components (linerboard and corrugating medium) fail in edgewise compression because of failure of single fibers as well as fiber-to-fiber bonds. While fiber-to-fiber bond failure has been studied extensively, little is known about the longitudinal compression failure of a single fiber. In this study, surface alterations on single loblolly pine kraft pulp...

  2. Obsolescence of electronics at the VLT

    NASA Astrophysics Data System (ADS)

    Hüdepohl, Gerhard; Haddad, Juan-Pablo; Lucuix, Christian

    2016-07-01

    The ESO Very Large Telescope Observatory (VLT) at Cerro Paranal in Chile had its first light in 1998. Most of the telescopes' electronics components were chosen and designed in the mid-1990s and are now around 20 years old. As a consequence, we are confronted with increasing failure rates due to aging and a lack of spare parts, since many of the components are no longer available on the market. The lifetime of large telescopes generally extends well beyond 25 years. Therefore the obsolescence of electronics components and modules becomes an issue sooner or later and forces the operations teams to upgrade the systems to new technology in order to prevent the telescope from becoming inoperable. A technology upgrade is a time- and money-consuming process, which in many cases is not straightforward and involves various complications. This paper shows the strategy, analysis, approach, timeline, complications, and progress in obsolescence-driven electronics upgrades at the ESO Very Large Telescope (VLT) at the Paranal Observatory.

  3. Identification of hip surface arthroplasty failures with TcSC/TcmDP radionuclide imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.J.; Amstutz, H.C.; Mai, L.L.

    1982-07-01

    The roentgenographic identification of femoral component loosening after hip surface arthroplasty is often impossible because the metallic femoral component obscures the bone-cement interface. The use of combined technetium sulfur colloid and technetium methylene diphosphonate radionuclide imaging has been especially useful in the diagnosis of loosening. In 40 patients, follow-up combined TcSC and TcmDP scans at an average of three, nine, and 27 months postoperation revealed significant differences in the isotope uptakes in patients who had loose prostheses compared with those without complications. Scans were evaluated by first dividing them into eight anatomical regions and then rating the uptake in each region or 'zone' on a five-point scale. Results were compared using the Student's t-test and differences were noted between normal controls and patients who had femoral component loosening. Combining both TcSC and TcmDP studies increased the statistical significance obtained when comparing patients who had complications to those in the control group.

  4. Improving online risk assessment with equipment prognostics and health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coble, Jamie B.; Liu, Xiaotong; Briere, Chris

    The current approach to evaluating the risk of nuclear power plant (NPP) operation relies on static probabilities of component failure, which are based on industry experience with the existing fleet of nominally similar light water reactors (LWRs). As the nuclear industry looks to advanced reactor designs that feature non-light water coolants (e.g., liquid metal, high temperature gas, molten salt), this operating history is not available. Many advanced reactor designs use advanced components, such as electromagnetic pumps, that have not been used in the US commercial nuclear fleet. Given the lack of rich operating experience, we cannot accurately estimate the evolving probability of failure for basic components to populate the fault trees and event trees that typically comprise probabilistic risk assessment (PRA) models. Online equipment prognostics and health management (PHM) technologies can bridge this gap to estimate the failure probabilities for components under operation. The enhanced risk monitor (ERM) incorporates equipment condition assessment into the existing PRA and risk monitor framework to provide accurate and timely estimates of operational risk.
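    One minimal way to let operating evidence update a static failure probability, in the spirit of the ERM described above, is a conjugate Beta-Binomial update; the prior parameters and evidence counts below are illustrative assumptions, not values from the report:

```python
def beta_update(alpha, beta, failures, demands):
    """Conjugate Beta-Binomial update of a failure-on-demand probability."""
    return alpha + failures, beta + demands - failures

def beta_mean(alpha, beta):
    """Posterior mean of the Beta distribution."""
    return alpha / (alpha + beta)

# Illustrative generic prior for a pump: mean ~1e-3 failures per demand
alpha, beta = 0.5, 499.5
print(f"prior mean p(fail)     = {beta_mean(alpha, beta):.2e}")

# PHM flags degradation; suppose 2 failures are then seen in 100 demands
alpha, beta = beta_update(alpha, beta, failures=2, demands=100)
print(f"posterior mean p(fail) = {beta_mean(alpha, beta):.2e}")
```

    The posterior mean rises roughly fourfold, which is the kind of timely shift in basic-event probability that a static industry-average PRA number cannot capture.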

  5. Vibration detection of component health and operability

    NASA Technical Reports Server (NTRS)

    Baird, B. C.

    1975-01-01

    In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance of environmental control system components in the shuttle orbiter program, some means of detecting incipient failure in these components is required. The use of vibrational/acoustic phenomena was investigated as one of the principal physical parameters on which to base the design of this instrumentation. Baseline vibration/acoustic data were collected from three aircraft-type fans and two aircraft-type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high-frequency bandpass acoustic signal, and amplitude distribution of the high-frequency bandpass acoustic signal. A total of eight bearing defects and two unbalance conditions were introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst-case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.
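    The detection rule reported above (a defect indication with at least a 2:1 margin over the worst-case baseline) can be sketched as a band-RMS threshold check; real instrumentation would also use the spectral and amplitude-distribution features described, and the signals here are synthetic:

```python
import math

def rms(signal):
    """Root-mean-square level of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def flag_defect(baseline_rms, test_signal, margin=2.0):
    """Flag incipient failure when band RMS exceeds margin x worst-case baseline."""
    return rms(test_signal) > margin * baseline_rms

# Illustrative: healthy fan tone vs the same signal plus a bearing-defect tone
healthy = [math.sin(2 * math.pi * 60 * t / 1000) for t in range(1000)]
defect  = [x + 2.0 * math.sin(2 * math.pi * 157 * t / 1000)
           for t, x in enumerate(healthy)]
base = rms(healthy)
print(flag_defect(base, healthy))  # False
print(flag_defect(base, defect))   # True
```

    A fixed 2:1 margin keeps the detector simple and conservative: normal unit-to-unit variation stays below the threshold while a genuine defect tone pushes the band energy well past it.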

  6. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies on photovoltaic systems have focused on how they operate and the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and the comparison of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was made using the Chapman-Kolmogorov method. Explicit expressions for mean time to system failure, steady-state availability, and cost-benefit measures were derived, based on the comparison. A ranking method was used to determine the optimal configuration of the systems. The results of analytical and numerical solutions for system availability and mean time to system failure were determined, and it was found that configuration I is the optimal configuration.
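    For a pure parallel arrangement of n identical units with exponential failure rate λ (the system failing only when every unit has failed), the Chapman-Kolmogorov equations reduce to the classic result MTTF = (1/λ) · Σ_{k=1..n} 1/k. A minimal sketch, ignoring repair, voltage ratings, and cost, all of which the paper's actual ranking takes into account:

```python
def parallel_mttf(n, lam):
    """MTTF of n identical parallel units with exponential rate lam.
    The death process visits states n, n-1, ..., 1; the sojourn time in
    state k has mean 1/(k*lam), so MTTF = sum over k=1..n of 1/(k*lam)."""
    return sum(1.0 / (k * lam) for k in range(1, n + 1))

lam = 0.01  # illustrative failure rate, failures per hour
for n in (2, 4):
    print(f"{n} units in parallel: MTTF = {parallel_mttf(n, lam):.1f} h")
```

    On raw MTTF alone a four-unit bank outlasts a two-unit one; the paper's finding that configuration I is optimal additionally weighs cost and availability, which this sketch omits.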

  7. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and subsystem reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and we present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, and unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem, and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
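    The engine-out trade described above can be sketched as a simple parametric model. The single-engine reliability, the catastrophic fraction, and the rule that exactly one benign shutdown is survivable are all illustrative assumptions, not values or structure taken from the paper:

```python
from math import comb

def stage_reliability(n, r, engine_out=True, catastrophic_fraction=0.1):
    """Probability the propulsion stage completes its burn.
    n: engines; r: single-engine reliability. With engine-out capability
    the stage tolerates one benign shutdown, but a fraction of engine
    failures is assumed catastrophic regardless."""
    p_all = r ** n
    if not engine_out:
        return p_all
    # exactly one failure, and that failure is benign (non-catastrophic)
    p_one_benign = comb(n, 1) * (1 - r) * r ** (n - 1) * (1 - catastrophic_fraction)
    return p_all + p_one_benign

for n in (3, 5, 9):
    print(f"{n} engines: P(success) = {stage_reliability(n, 0.99):.5f}")
```

    Even this toy model reproduces a key sensitivity from the driver list: adding engines helps only while the engine-out benefit outweighs the extra exposure to catastrophic failures, so stage reliability eventually falls as n grows.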

  8. Probabilistic finite elements for fracture and fatigue analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.

    1989-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure, or performance function, is stated as: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in it is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack tip. Performance and accuracy of the method are demonstrated on a classical mode I fatigue problem.
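    The failure criterion above (fatigue life must exceed service life) can be illustrated with a crude Monte Carlo estimate; the paper itself uses PFEM with an enriched crack-tip element rather than sampling, and the lognormal scatter and lives below are invented for illustration:

```python
import math, random

def prob_fatigue_failure(service_life, median_life, log_sigma,
                         n=100_000, seed=1):
    """Estimate P(fatigue life < service life) assuming lognormal life scatter."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        # sample a lognormal fatigue life around the median
        life = median_life * math.exp(random.gauss(0.0, log_sigma))
        if life < service_life:
            failures += 1
    return failures / n

# Illustrative: median fatigue life 1e6 cycles, service life 2e5 cycles
pf = prob_fatigue_failure(service_life=2e5, median_life=1e6, log_sigma=0.6)
print(f"P(failure) ~ {pf:.4f}")
```

    The point of the probabilistic framing is visible even here: a component whose median life is five times its service life still carries a failure probability of a few tenths of a percent once life scatter is modeled.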

  9. Artificial Immune System for Flight Envelope Estimation and Protection

    DTIC Science & Technology

    2014-12-31

    (No abstract available; the record text consists of table-of-contents fragments covering throttle-failure estimation, estimation algorithms for roll-rate sensor bias, reference feature patterns for a roll-rate sensor under low-severity failure, and pitch/roll response under high-magnitude stabilator failure.)

  10. [Girls are more successful than boys at the university. Gender group differences in models integrating motivational and aggressive components correlated with Test-Anxiety].

    PubMed

    Masson, A-M; Hoyois, Ph; Cadot, M; Nahama, V; Petit, F; Ansseau, M

    2004-01-01

    It is surprising to note the evolution of success rates in Belgian universities, especially in the first year. Men are less successful than women, and the differences are escalating in an alarming way. Dropouts follow the same direction, and women now represent a majority of university students. In a previous study, we assessed 616 first-year students at the University of Liège with Vasev, known in English as TASTE, a self-report questionnaire comprising four factors: anxiety, self-confidence, procrastination, and performance value. Anxiety particularly concerned the somatic expression of students before and during test evaluations; self-confidence was a cognitive component close to self-efficacy; procrastination was the behavioral component characterizing avoidance when students are confronted with the risk of failure; performance value referred to intrinsic and extrinsic motivations. French validation of TASTE led to an abbreviated 50-item version (THEE) consisting of five factors: the four of TASTE and a very consistent additional one, at first called depression because of its correlations with that dimension, then renamed sense of competence on account of its semantic content. Self-competence has been described in the achievement motivation literature and corresponds to expectancy and ability beliefs in the performance process; it is also close to self-efficacy, except for the element of comparison with others, which is not included in the latter construct. Self-competence has been considered an important part of the worry component of test anxiety. Some authors didn't hesitate to view causality as flowing from self-competence to test anxiety and have conceptualized the latter as a failure of the self, where one's sense of competence has been undermined as a result of experienced failure. In our study, only that factor was scored equally by women and men, whereas it was scored higher by failed students.
In other respects, anxiety and performance value were scored higher in women, and self-confidence and procrastination higher in men. Because TASTE didn't discriminate among the different components of motivation (performance value referred to intrinsic and extrinsic motivations without precise distinction), we decided to use the MPS (Multidimensional Perfectionism Scale), which distinguishes SOP (Self-Oriented Perfectionism), i.e., the self-imposed unrealistic standards, with an inability to accept faults, adopted in order to know and master a subject, corresponding to intrinsic motivation; SPP (Socially Prescribed Perfectionism), i.e., the exaggerated expectations of others, subjectively believed to be imposed and uncontrollable, leading to anxiety and feelings of failure or helplessness, corresponding to extrinsic motivation; and POO (Perfectionism Oriented to Others), i.e., the unrealistic demands expected from significant others, which especially characterized males. We assumed that women attached more importance to succeeding and submitted more to society's exigencies. In that way, extrinsic and intrinsic motivations were probably more combined, unlike in men who, dreading a loss of self-esteem, tried to avoid responsibility for failure by using self-handicapping or aggressive behaviours, thus separating motivation into an extrinsic part turned toward performance value and an intrinsic one more concerned with self-confidence and sense of competence, with the result that the motivational balance was surely disrupted in cases of high competition leading to failure or avoidance. In another previous study we established a structural model illustrating, according to gender, the correlations between anxiety, sense of incompetence, self-oriented perfectionism, and socially prescribed perfectionism.
Self-oriented perfectionism was less correlated with socially prescribed perfectionism in boys than in girls; furthermore, especially among those who had never failed, it was negatively correlated with sense of incompetence, thus leading to lower anxiety scores, while in girls, by contrast, no such correlation existed, thus involving higher anxiety. In that way, on the one hand, intrinsic and extrinsic motivations in female students operated complementarily on the sense of incompetence and consequently on anxiety, the emotional component of test anxiety; on the other hand, in male students, intrinsic motivation had a negative correlation with the sense of incompetence and a lower correlation with extrinsic motivation, thereby shedding some light on the problem of anxiety-level differences according to gender. Moreover, that observation corresponded well to the model of self-worth, in which test anxiety is understood as a manifestation of perceived incompetence and as a defensive way to ward off negative self-evaluation; that model suited boys particularly well and explained their attempts to maintain self-worth when risking academic failure. The present research assumes that the independence or combination of motivation components is also correlated with different expressions of aggressiveness: hostility, corresponding to threat and characterizing girls more, while physical aggression corresponds to personal challenge, a more masculine attribution. If fighting against the sense of incompetence actually characterizes men, and consequently also reflects competitive aspects of performance strong enough to mobilize intrinsic motivation, what should be expected regarding the notion of threat, suspected to be predominant in girls?
The idea of using a questionnaire discriminating the specific dimensions of aggressiveness, namely the Aggression Questionnaire, was meant to serve the following purposes: first, to establish a French version of the questionnaire, perform the factorial analyses, assess internal consistency, and compare them with previous samples; then to differentiate by gender in general and in failure versus success situations; and finally to include the different components of aggressiveness in the model described above and build a new one able to define, in boys, the explicit pathways between test anxiety, perfectionism, and aggressiveness. Statistical analyses confirmed, in a three-factor solution, the presence of emotional (anger), cognitive (hostility), and behavioural (physical aggression) components. Internal consistency was satisfactory. It was demonstrated that physical aggression characterizes boys (F=12.04, p=0.0001), while hostility (F=5.22, p=0.0015) and anger (F=0.49, p=0.0001) characterize girls; furthermore, physical aggression characterizes failed students more (F=13.43, p=0.0003). Four models (see figures 2, 3, 4, 5) were established, at first focused on distinguishing the correlations between motivation and the cognitive and emotional components in the samples of boys (n=268) and girls (n=348), then developed on the samples of successful students, male (n=193) and female (n=271). They describe the differentiated action of intrinsic and extrinsic motivations on the different components of aggressiveness and test anxiety according to gender and without experience of failure. The dynamic process of the organizational factors differs according to gender, and the psychopathology resulting from the combinations of behaviors, cognitions, and emotions is assumed to prioritize physical aggression and psychopathy in boys, and anxiety and depression in girls.
In any case, a fuller explanation of the evolution of the success rates of boys and girls in Belgian universities is proposed.

  11. Spironolactone for heart failure with preserved ejection fraction.

    PubMed

    Pitt, Bertram; Pfeffer, Marc A; Assmann, Susan F; Boineau, Robin; Anand, Inder S; Claggett, Brian; Clausell, Nadine; Desai, Akshay S; Diaz, Rafael; Fleg, Jerome L; Gordeev, Ivan; Harty, Brian; Heitner, John F; Kenwood, Christopher T; Lewis, Eldrin F; O'Meara, Eileen; Probstfield, Jeffrey L; Shaburishvili, Tamaz; Shah, Sanjiv J; Solomon, Scott D; Sweitzer, Nancy K; Yang, Song; McKinlay, Sonja M

    2014-04-10

    Mineralocorticoid-receptor antagonists improve the prognosis for patients with heart failure and a reduced left ventricular ejection fraction. We evaluated the effects of spironolactone in patients with heart failure and a preserved left ventricular ejection fraction. In this randomized, double-blind trial, we assigned 3445 patients with symptomatic heart failure and a left ventricular ejection fraction of 45% or more to receive either spironolactone (15 to 45 mg daily) or placebo. The primary outcome was a composite of death from cardiovascular causes, aborted cardiac arrest, or hospitalization for the management of heart failure. With a mean follow-up of 3.3 years, the primary outcome occurred in 320 of 1722 patients in the spironolactone group (18.6%) and 351 of 1723 patients in the placebo group (20.4%) (hazard ratio, 0.89; 95% confidence interval [CI], 0.77 to 1.04; P=0.14). Of the components of the primary outcome, only hospitalization for heart failure had a significantly lower incidence in the spironolactone group than in the placebo group (206 patients [12.0%] vs. 245 patients [14.2%]; hazard ratio, 0.83; 95% CI, 0.69 to 0.99, P=0.04). Neither total deaths nor hospitalizations for any reason were significantly reduced by spironolactone. Treatment with spironolactone was associated with increased serum creatinine levels and a doubling of the rate of hyperkalemia (18.7%, vs. 9.1% in the placebo group) but reduced hypokalemia. With frequent monitoring, there were no significant differences in the incidence of serious adverse events, a serum creatinine level of 3.0 mg per deciliter (265 μmol per liter) or higher, or dialysis. In patients with heart failure and a preserved ejection fraction, treatment with spironolactone did not significantly reduce the incidence of the primary composite outcome of death from cardiovascular causes, aborted cardiac arrest, or hospitalization for the management of heart failure. 
(Funded by the National Heart, Lung, and Blood Institute; TOPCAT ClinicalTrials.gov number, NCT00094302.).

  12. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is therefore inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. Nonlinear behavior that depends on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to such conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
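    The deterministic-versus-probabilistic contrast can be made concrete with a stress-strength interference sketch: a design with a comfortable deterministic factor of safety can still carry a non-negligible failure probability once scatter is modeled. All distribution parameters here are invented for illustration:

```python
import random

def interference_pf(mu_s, sd_s, mu_r, sd_r, n=200_000, seed=7):
    """Monte Carlo P(stress > strength) for normal stress and strength."""
    random.seed(seed)
    fails = sum(random.gauss(mu_s, sd_s) > random.gauss(mu_r, sd_r)
                for _ in range(n))
    return fails / n

# Deterministic factor of safety on the means = 300/200 = 1.5, and yet:
pf = interference_pf(mu_s=200.0, sd_s=30.0, mu_r=300.0, sd_r=30.0)
print(f"factor of safety 1.5, P(failure) ~ {pf:.4f}")
```

    With these illustrative spreads the failure probability comes out near one percent, which is exactly the information a single deterministic safety factor hides.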

  13. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.

    1986-01-01

    A methodology is established to predict thermal barrier coating life in an environment similar to that experienced by gas turbine airfoils. Experiments were conducted to determine failure modes of the thermal barrier coating. Analytical studies were employed to derive a life prediction model. A review of experimental and flight service components, as well as laboratory post-test evaluations, indicates that the predominant mode of TBC failure involves thermomechanical spallation of the ceramic coating layer. This ceramic spallation involves the formation of a dominant crack in the ceramic coating parallel to and closely adjacent to the topologically complex metal-ceramic interface. This mechanical failure mode is clearly influenced by thermal exposure effects, as shown in experiments conducted to study thermal pre-exposure and thermal cycle-rate effects. The preliminary life prediction model developed focuses on the two major damage modes identified in the critical experiments tasks. The first of these involves a mechanical driving force, resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads. The second is an environmental driving force based on experimental results, and is believed to be related to bond coat oxidation. It is also believed that the growth of this oxide scale influences the intensity of the mechanical driving force.

  14. A Summary of Taxonomies of Digital System Failure Modes Provided by the DigRel Task Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu T. L.; Yue M.; Postma, W.

    2012-06-25

    Recently, the CSNI directed WGRisk to set up a task group called DIGREL to initiate a new task on developing a taxonomy of failure modes of digital components for the purposes of PSA. It is an important step towards standardized digital I&C reliability assessment techniques for PSA. The objective of this paper is to provide a comparison of the failure mode taxonomies provided by the participants. The failure modes are classified in terms of their levels of detail. Software and hardware failure modes are discussed separately.

  15. A failure management prototype: DR/Rx

    NASA Technical Reports Server (NTRS)

    Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.

    1991-01-01

    This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.

  16. Spent fuel behavior under abnormal thermal transients during dry storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stahl, D.; Landow, M.P.; Burian, R.J.

    1986-01-01

    This study was performed to determine the effects of abnormally high temperatures on spent fuel behavior. Prior to testing, calculations using the CIRFI3 code were used to determine the steady-state fuel and cask component temperatures. The TRUMP code was used to determine transient heating rates under postulated abnormal events during which convection cooling of the cask surfaces was obstructed by a debris bed covering the cask. The peak rate of temperature rise during the first 6 h was calculated to be about 15°C/h, followed by a rate of about 1°C/h. A Turkey Point spent fuel rod segment was heated to approximately 800°C. The segment deformed uniformly with an average strain of 17% at failure and a local strain of 60%. Pretest characterization of the spent fuel consisted of visual examination, profilometry, eddy-current examination, gamma scanning, fission gas collection, void volume measurement, fission gas analysis, hydrogen analysis of the cladding, burnup analysis, cladding metallography, and fuel ceramography. Post-test characterization showed that the failure was a pinhole cladding breach. The results of the tests showed that spent fuel temperatures in excess of 700°C are required to produce a cladding breach in fuel rods pressurized to 500 psig (3.45 MPa) under postulated abnormal thermal transient cask conditions. The pinhole cladding breach that developed would be too small to compromise the confinement of spent fuel particles during an abnormal event or after normal cooling conditions are restored. This behavior is similar to that found in other slow ramp tests with irradiated and nonirradiated rod sections and nonirradiated whole rods under conditions that bracketed postulated abnormal heating rates. This similarity is attributed to annealing of the irradiation-strengthened Zircaloy cladding during heating. In both cases, the failure was a benign, ductile pinhole rupture.

  17. Implementation of an Outer Can Welding System for Savannah River Site FB-Line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, S.R.

    2003-03-27

    This paper details three phases of testing to confirm use of a Gas Tungsten Arc (GTA) system for closure welding the 3013 outer container used for stabilization/storage of plutonium metals and oxides. The outer container/lid closure joint was originally designed for laser welding, but for this application, the gas tungsten arc (GTA) welding process has been adapted. The testing progressed in three phases: (1) system checkout to evaluate system components for operational readiness, (2) troubleshooting to evaluate high weld failure rates and develop corrective techniques, and (3) pre-installation acceptance testing.

  18. Early detection of nonneurologic organ failure in patients with severe traumatic brain injury: Multiple organ dysfunction score or sequential organ failure assessment?

    PubMed

    Ramtinfar, Sara; Chabok, Shahrokh Yousefzadeh; Chari, Aliakbar Jafari; Reihanian, Zoheir; Leili, Ehsan Kazemnezhad; Alizadeh, Arsalan

    2016-10-01

    The aim of this study is to compare the discriminant function of the multiple organ dysfunction score (MODS) and sequential organ failure assessment (SOFA) components in predicting Intensive Care Unit (ICU) mortality and neurologic outcome. A descriptive-analytic study was conducted at a level I trauma center. Data were collected from patients with severe traumatic brain injury admitted to the neurosurgical ICU. Basic demographic data and SOFA and MOD scores were recorded daily for all patients. Odds ratios (ORs) were calculated to determine the relationship of each component score to mortality, and the area under the receiver operating characteristic (AUROC) curve was used to compare the discriminative ability of the two tools with respect to ICU mortality. The most common organ failure observed was respiratory, detected by SOFA in 26% and by MODS in 13% of patients; the second most common was cardiovascular, detected by SOFA in 18% and by MODS in 13%. No hepatic or renal failure occurred, and coagulation failure was reported as 2.5% by both SOFA and MODS. Cardiovascular failure defined by both tools correlated with ICU mortality, and the association was stronger for SOFA (OR = 6.9, CI = 3.6-13.3, P < 0.05 for SOFA; OR = 5, CI = 3-8.3, P < 0.05 for MODS; AUROC = 0.82 for SOFA; AUROC = 0.73 for MODS). The relationship of cardiovascular failure to dichotomized neurologic outcome was not statistically significant. ICU mortality was not associated with respiratory or coagulation failure. Cardiovascular failure defined by either tool was significantly related to ICU mortality. Compared to MODS, SOFA-defined cardiovascular failure was a stronger predictor of death. ICU mortality was not affected by respiratory or coagulation failures.
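    As a reminder of how the reported odds ratios are formed, here is a minimal sketch for a 2×2 table with a Wald-type confidence interval; the counts in the example are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = failure & died,    b = failure & survived,
    c = no failure & died, d = no failure & survived."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: OR = (20 * 20) / (10 * 10) = 4.
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```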

  19. Strategy for the management of uncomplicated retinal detachments: the European vitreo-retinal society retinal detachment study report 1.

    PubMed

    Adelman, Ron A; Parnes, Aaron J; Ducournau, Didier

    2013-09-01

    To study success and failure in the treatment of uncomplicated rhegmatogenous retinal detachments (RRDs). Nonrandomized, multicenter retrospective study. One hundred seventy-six surgeons from 48 countries spanning 5 continents provided information on the primary procedures for 7678 cases of RRDs including 4179 patients with uncomplicated RRDs. Reported data included specific clinical findings, the method of repair, and the outcome after intervention. Final failure of retinal detachment repair (level 1 failure rate), remaining silicone oil at the study's conclusion (level 2 failure rate), and need for additional procedures to repair the detachment (level 3 failure rate). Four thousand one hundred seventy-nine uncomplicated cases of RRD were included. Combining phakic, pseudophakic, and aphakic groups, those treated with scleral buckle alone (n = 1341) had a significantly lower final failure rate than those treated with vitrectomy, with or without a supplemental buckle (n = 2723; P = 0.04). In phakic patients, final failure rate was lower in the scleral buckle group compared with those who had vitrectomy, with or without a supplemental buckle (P = 0.028). In pseudophakic patients, the failure rate of the initial procedure was lower in the vitrectomy group compared with the scleral buckle group (P = 3×10^-8). There was no statistically significant difference in failure rate between segmental (n = 721) and encircling (n = 351) buckles (P = 0.5). Those who underwent vitrectomy with a supplemental scleral buckle (n = 488) had an increased failure rate compared with those who underwent vitrectomy alone (n = 2235; P = 0.048). Pneumatic retinopexy was found to be comparable with scleral buckle when a retinal hole was present (P = 0.65), but not in cases with a flap tear (P = 0.034). In the treatment of uncomplicated phakic retinal detachments, repair using scleral buckle may be a good option.
There was no significant difference between segmental and 360-degree buckles. For pseudophakic uncomplicated retinal detachments, the surgeon should balance the risks and benefits of vitrectomy versus scleral buckle and keep in mind that the single-surgery reattachment rate may be higher with vitrectomy. However, if a vitrectomy is to be performed, these data suggest that a supplemental buckle is not helpful. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
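    Comparisons of failure rates like those above reduce to testing two proportions. A minimal pooled two-proportion z-test (the normal approximation to the chi-square test; the counts shown are hypothetical, not the study's):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2 using the pooled standard error;
    returns the z statistic and the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical: 50/500 failures in one arm vs 30/500 in the other.
z_stat, p_val = two_proportion_z(50, 500, 30, 500)
```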

  20. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    PubMed

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient, 12.9% for inpatient, and 39.4% for disability exams. Failure rates were the highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and frequency and number of SPVT use. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. Frequency of the use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.

  1. Technological development of cylindrical and flat shaped high energy density capacitors. [using polymeric films

    NASA Technical Reports Server (NTRS)

    Zelik, J. A.; Parker, R. D.

    1977-01-01

    Cylindrical wound metallized film capacitors rated 2 µF, 500 VDC with an energy density greater than 0.3 J/g, and flat flexible metallized film capacitors rated 2 µF, 500 VDC with an energy density greater than 0.1 J/g, were developed. Polysulfone, polycarbonate, and polyvinylidene fluoride (PVF2) were investigated as dielectrics for the cylindrical units. PVF2 in 6.0 µm thickness was employed in the final components of both types. Capacitance and dissipation factor measurements were made over the range 25°C to 100°C and 10 Hz to 10 kHz. No pre-life-test burn-in was performed, and six of ten cylindrical units survived a 2500-hour AC-plus-DC life test. Three of the four failures were infant mortality. All but two of the flat components survived 400 hours. Finished energy densities were 0.104 J/g at 500 V and 0.200 J/g at 700 V, the energy density being limited by the availability of thin PVF2 films.
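    The quoted figures are easy to sanity-check from E = ½CV²: a 2 µF capacitor at 500 V stores 0.25 J, so the reported 0.104 J/g implies a packaged mass of roughly 2.4 g. A small sketch:

```python
def stored_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

energy = stored_energy_joules(2e-6, 500)  # 2 uF at 500 V -> 0.25 J
implied_mass_g = energy / 0.104           # from the reported 0.104 J/g
```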

  2. Thermal barrier coating life prediction model development, phase 1

    NASA Technical Reports Server (NTRS)

    Demasi, Jeanine T.; Ortiz, Milton

    1989-01-01

    The objective of this program was to establish a methodology to predict thermal barrier coating (TBC) life on gas turbine engine components. The approach involved experimental life measurement coupled with analytical modeling of relevant degradation modes. Evaluation of experimental and flight service components indicates the predominant failure mode to be thermomechanical spallation of the ceramic coating layer resulting from propagation of a dominant near-interface crack. Examination of fractionally exposed specimens indicated that dominant crack formation results from progressive structural damage in the form of subcritical microcrack link-up. Tests conducted to isolate important life drivers have shown MCrAlY oxidation to significantly affect the rate of damage accumulation. Mechanical property testing has shown the plasma-deposited ceramic to exhibit a non-linear stress-strain response, creep, and fatigue. The fatigue-based life prediction model developed accounts for the unusual ceramic behavior and also incorporates an experimentally determined oxide rate model. The model predicts that the growth of this oxide scale influences the intensity of the mechanical driving force resulting from cyclic strains and stresses caused by thermally induced and externally imposed mechanical loads.

  3. Scoping review: Hospital nursing factors associated with 30-day readmission rates of patients with heart failure.

    PubMed

    Jun, Jin; Faulkner, Kenneth M

    2018-04-01

    To review the current literature on hospital nursing factors associated with 30-day readmission rates of patients with heart failure. Heart failure is a common yet debilitating chronic illness with high mortality and morbidity. One in five patients with heart failure will experience an unplanned readmission to a hospital within 30 days. Given the significance of heart failure to individuals, families and the healthcare system, the Center for Medicare and Medicaid Services has made reducing 30-day readmission rates a priority. A scoping review, which maps the key concepts of a research area, was used. Published primary studies assessing factors related to nurses in hospitals and readmission of patients with heart failure were included; additional inclusion criteria were publication in English in peer-reviewed journals. The search resulted in 2,782 articles. After removing duplicates and applying the inclusion and exclusion criteria, five articles were selected. Three nursing workforce factors emerged: (i) nurse staffing, (ii) nursing care and work environment, and (iii) nurses' knowledge of heart failure. This is the first scoping review examining the association between hospital nursing factors and 30-day readmission rates of patients with heart failure. Further studies examining the extent of nursing structural and process factors influencing the outcomes of patients with heart failure are needed. Nurses are an integral part of the healthcare system. Identifying the factors related to nurses in hospitals is important to ensure comprehensive delivery of care to the chronically ill population. Hospital administrators, managers and policymakers can use the findings from this review to implement strategies to reduce 30-day readmission rates of patients with heart failure. © 2018 John Wiley & Sons Ltd.

  4. Comparative study of the failure rates among 3 implantable defibrillator leads.

    PubMed

    van Malderen, Sophie C H; Szili-Torok, Tamas; Yap, Sing C; Hoeks, Sanne E; Zijlstra, Felix; Theuns, Dominic A M J

    2016-12-01

    After the introduction of the Biotronik Linox S/SD high-voltage lead, several cases of early failure have been observed. The purpose of this article was to assess the performance of the Linox S/SD lead in comparison to 2 other contemporary leads. We used the prospective Erasmus MC ICD registry to identify all implanted Linox S/SD (n = 408), Durata (St. Jude Medical, model 7122) (n = 340), and Endotak Reliance (Boston Scientific, models 0155, 0138, and 0158) (n = 343) leads. Lead failure was defined by low- or high-voltage impedance, failure to capture, sense or defibrillate, or the presence of nonphysiological signals not due to external interference. During a median follow-up of 5.1 years, 24 Linox (5.9%), 5 Endotak (1.5%), and 5 Durata (1.5%) leads failed. At 5-year follow-up, the cumulative failure rate of Linox leads (6.4%) was higher than that of Endotak (0.4%; P < .0001) and Durata (2.0%; P = .003) leads. The incidence rate was higher in Linox leads (1.3 per 100 patient-years) than in Endotak and Durata leads (0.2 and 0.3 per 100 patient-years, respectively; P < .001). A log-log analysis of the cumulative hazard for Linox leads functioning at 3-year follow-up revealed a stable failure rate of 3% per year. The majority of failures consisted of noise (62.5%) and abnormal impedance (33.3%). This study demonstrates a higher failure rate of Linox S/SD high-voltage leads compared to contemporary leads. Although the mechanism of lead failure is unclear, the majority presents with abnormal electrical parameters. Comprehensive monitoring of Linox S/SD high-voltage leads includes remote monitoring to facilitate early detection of lead failure. Copyright © 2016. Published by Elsevier Inc.
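    The incidence rates quoted are failures per 100 patient-years. As a rough cross-check, approximating total exposure as the number of leads times the median follow-up (the registry used actual per-patient follow-up, so this is only an approximation):

```python
def incidence_rate_per_100py(events, patient_years):
    """Event rate expressed per 100 patient-years of follow-up."""
    return 100.0 * events / patient_years

# 24 Linox failures over roughly 408 leads x 5.1 years of follow-up.
linox_rate = incidence_rate_per_100py(24, 408 * 5.1)
```

This gives about 1.15 per 100 patient-years, in the neighborhood of the reported 1.3; the gap reflects the crude exposure approximation.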

  5. Effects of Self-Graphing and Goal Setting on the Math Fact Fluency of Students with Disabilities

    PubMed Central

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure. PMID:22477686

  6. KOH concentration effect on the cycle life of nickel-hydrogen cells. Part 4: Results of failure analyses

    NASA Technical Reports Server (NTRS)

    Lim, H. S.; Verzwyvelt, S. A.

    1989-01-01

    KOH concentration effects on the cycle life of Ni/H2 cells have been studied by carrying out a cycle life test of ten Ni/H2 boiler plate cells containing electrolytes of various KOH concentrations. Failure analyses of these cells were carried out after completion of the life test, which accumulated up to 40,000 cycles at an 80 percent depth of discharge over a period of 3.7 years. These failure analyses included studies of changes in the electrical characteristics of the test cells and component analyses after disassembly of the cells. The component analyses included visual inspections, dimensional changes, capacity measurements of nickel electrodes, scanning electron microscopy, BET surface area measurements, and chemical analyses. Results indicate that the failure mode and the changes in the nickel electrode varied with concentration, especially when the concentration was changed from 31 percent or higher to 26 percent or lower.

  7. Failure mode analysis to predict product reliability.

    NASA Technical Reports Server (NTRS)

    Zemanick, P. P.

    1972-01-01

    The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.
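    One common way to prioritize the output of such an analysis, in later FMEA practice (the paper itself does not prescribe this scheme, and the entries below are invented), is a risk priority number, RPN = severity × occurrence × detection:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    severity: int    # 1-10, consequence of the failure
    occurrence: int  # 1-10, likelihood of the failure
    detection: int   # 1-10, where 10 = hardest to detect before it matters

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical worksheet entries, ranked by RPN.
modes = [
    FailureMode("turbopump seal", "leakage", 8, 4, 6),
    FailureMode("valve actuator", "fails to open", 9, 2, 3),
]
worst = max(modes, key=lambda m: m.rpn)
```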

  8. Two-year outcome of a prospective, controlled study of a disease management programme for elderly patients with heart failure.

    PubMed

    Del Sindaco, Donatella; Pulignano, Giovanni; Minardi, Giovanni; Apostoli, Antonella; Guerrieri, Luca; Rotoloni, Marina; Petri, Gabriella; Fabrizi, Lino; Caroselli, Attilia; Venusti, Rita; Chiantera, Angelo; Giulivi, Alessia; Giovannini, Ezio; Leggio, Francesco

    2007-05-01

    Elderly heart failure patients are at high risk of events. Available studies and systematic reviews suggest that elderly patients benefit from disease management programmes (DMPs). However, important questions remain open, including the optimal follow-up intensity and duration, and whether such interventions are cost-effective during long-term follow-up and in different healthcare systems. The aim of this study was to determine the long-term efficacy of a hybrid DMP in consecutive older outpatients. The intervention consisted of combined hospital-based (cardiologists and nurse-coordinators from two heart failure clinics) and home-based (patient's general practitioner visits) care. The components of the DMP were the following: discharge planning, education, therapy optimisation, improved communication, and early attention to signs and symptoms. Intensive follow-up was based on scheduled hospital visits (starting within 14 days of discharge), nurse phone calls and home general practitioner visits. A group of 173 patients aged > or =70 years (mean age 77 +/- 6 years, 48% women) was randomly assigned to DMP (n = 86) or usual care (n = 87). At 2-year follow-up, a 36% reduction in all-cause death and heart failure hospital admissions was observed in the DMP group vs. usual care. All-cause and heart failure admissions as well as the length of hospital stay were also reduced. DMP patients reported, compared to baseline, significant improvements in functional status, quality of life and beta-blocker prescription rate. The intervention was cost-effective, with a mean saving of euro 982.04 per patient enrolled. A hybrid DMP for elderly heart failure patients improves outcomes and is cost-effective over long-term follow-up.

  9. Failure factors in non-life insurance companies in United Kingdom

    NASA Astrophysics Data System (ADS)

    Samsudin, Humaida Banu

    2013-04-01

    Failure in an insurance company is a condition of financial distress in which the company has difficulty paying off its financial obligations to its creditors. This study continues earlier research identifying the determinants of run-off for non-life insurance companies in the United Kingdom. The analysis identifies further variables that could lead companies into financial distress: macroeconomic factors (GDP rates, inflation rates and interest rates), the number of companies that failed in the preceding year, and the average size of failed companies. The results indicate that inflation rates, interest rates, the number of companies that failed in the preceding year, and the average size of failed companies are the best predictors. Early detection of failure can prevent companies from bankruptcy and allow management to take action to reduce failure costs.

  10. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
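    The expected-value criterion described here can be sketched directly: weight the controller's integral-square performance in each failure state by the probability of that state at the mission time, assuming independent exponential actuator failures with a common MTBF (the two-actuator cost table below is invented for illustration):

```python
import math

def state_probability(up_mask, t_over_mtbf):
    """Probability of a given actuator up/down pattern at mission time t,
    assuming independent exponential failures: each actuator survives
    with probability r = exp(-t / MTBF)."""
    r = math.exp(-t_over_mtbf)
    p = 1.0
    for up in up_mask:
        p *= r if up else (1.0 - r)
    return p

def expected_criterion(costs, t_over_mtbf):
    """costs maps each up/down tuple to the integral-square modal
    amplitude achieved by the re-tuned controller in that failure state."""
    return sum(state_probability(s, t_over_mtbf) * j for s, j in costs.items())

# Two actuators: performance degrades as actuators fail.
costs = {(1, 1): 1.0, (1, 0): 3.0, (0, 1): 3.0, (0, 0): 10.0}
ec = expected_criterion(costs, 0.1)  # mission time = 10% of MTBF
```

Graphing `ec` against the ratio of mission life to MTBF, as the paper describes, shows why the best actuator locations differ for short and long missions.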

  11. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management, and philosophies of prognostics used in other domains, will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of the constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.
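    A minimal sketch of the prediction step, using an assumed linear degradation trend extrapolated to a failure threshold (real prognostic algorithms for batteries and power semiconductors are considerably richer, e.g. filtering-based):

```python
def remaining_useful_life(times, values, threshold):
    """Fit a least-squares line through condition-monitoring samples and
    extrapolate to the failure threshold; returns the remaining useful
    life measured from the last sample."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = v_mean - slope * t_mean
    t_threshold = (threshold - intercept) / slope  # trend crosses threshold
    return t_threshold - times[-1]

# Hypothetical degradation samples growing toward a threshold of 10.0.
rul = remaining_useful_life([0, 1, 2, 3], [0.0, 1.1, 1.9, 3.0], 10.0)
```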

  12. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data are reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and to extract information and knowledge from that data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves resolution for the detection and isolation of failures in a system, subsystem, component, or process. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  13. A Critical Analysis of the Conventionally Employed Creep Lifing Methods

    PubMed Central

    Abdallah, Zakaria; Gray, Veronica; Whittaker, Mark; Perkins, Karen

    2014-01-01

    The deformation of structural alloys presents problems for power plant and aerospace applications due to the demand for elevated temperatures for higher efficiencies and reductions in greenhouse gas emissions. The materials used in such applications experience harsh environments which may lead to deformation and failure of critical components. To avoid such catastrophic failures and also increase efficiency, future designs must utilise novel/improved alloy systems with enhanced temperature capability. In recognising this issue, a detailed understanding of creep is essential to the success of these designs, ensuring that components do not experience excessive deformation which may ultimately lead to failure. To achieve this, a variety of parametric methods have been developed to quantify creep and creep fracture in high temperature applications. This study reviews a number of well-known, traditionally employed creep lifing methods, with some more recent approaches also included. The first section of this paper focuses on predicting long-term creep rupture properties, an area of interest for the power generation sector. The second section looks at pre-defined strains and the reproduction of full creep curves based on available data, which is pertinent to the aerospace industry where components are replaced before failure. PMID:28788623
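    One of the best-known parametric methods of the kind reviewed is the Larson-Miller parameter (the constant C ≈ 20 and the numbers in the example are conventional illustrations, not data from the paper):

```python
import math

def larson_miller(temp_kelvin, rupture_hours, c=20.0):
    """Larson-Miller parameter P = T * (C + log10(t_r)): a classic
    time-temperature parameter for correlating creep-rupture data."""
    return temp_kelvin * (c + math.log10(rupture_hours))

def rupture_life_hours(temp_kelvin, lmp, c=20.0):
    """Invert the parameter to predict rupture life at another temperature."""
    return 10.0 ** (lmp / temp_kelvin - c)

# A 1000 h rupture life at 1000 K gives P = 23,000; the same P then
# predicts the (much longer) rupture life at a lower temperature.
p = larson_miller(1000.0, 1000.0)
```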

  14. Failure Rates and Patterns of Recurrence in Patients With Resected N1 Non-Small-Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varlotto, John M., E-mail: jvarlotto@hmc.psu.edu; Medford-Davis, Laura Nyshel; Recht, Abram

    2011-10-01

    Purpose: To examine the local and distant recurrence rates and patterns of failure in patients undergoing potentially curative resection of N1 non-small-cell lung cancer. Methods and Materials: The study included 60 consecutive unirradiated patients treated from 2000 to 2006. Median follow-up was 30 months. Failure rates were calculated by the Kaplan-Meier method. A univariate Cox proportional hazard model was used to assess factors associated with recurrence. Results: Local and distant failure rates (as the first site of failure) at 2, 3, and 5 years were 33%, 33%, and 46%; and 26%, 26%, and 32%, respectively. The most common site of local failure was in the mediastinum; 12 of 18 local recurrences would have been included within proposed postoperative radiotherapy fields. Patients who received chemotherapy were found to be at increased risk of local failure, whereas those who underwent pneumonectomy or who had more positive nodes had significantly increased risks of distant failure. Conclusions: Patients with resected non-small-cell lung cancer who have N1 disease are at substantial risk of local recurrence as the first site of relapse, which is greater than the risk of distant failure. The role of postoperative radiotherapy in such patients should be revisited in the era of adjuvant chemotherapy.
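    The Kaplan-Meier failure-rate calculation named above is easy to make concrete. A minimal sketch with hypothetical follow-up data (not the study's actual patient records): cumulative failure is one minus the product-limit survival estimate, with censored patients leaving the risk set without counting as failures.

```python
def km_failure(times, events):
    """Kaplan-Meier cumulative failure estimate.

    times  -- follow-up time for each patient
    events -- 1 if failure was observed at that time, 0 if censored
    Returns a list of (time, cumulative failure probability) at failure times.
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0                # running product-limit survival estimate
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # failures at t
        r = sum(1 for tt, e in data if tt >= t)             # at risk at t
        if d > 0:
            surv *= 1.0 - d / r
            curve.append((t, 1.0 - surv))
        while i < n and data[i][0] == t:   # skip records tied at time t
            i += 1
    return curve

# 5 hypothetical patients: failures at months 12, 24, 48; censored at 36, 60
curve = km_failure([12, 24, 36, 48, 60], [1, 1, 0, 1, 0])
```

    Note how the censored patient at 36 months lowers the risk set for the later failure without contributing a failure of its own.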

  15. A comparative analysis of the outcomes of aortic cuffs and converters for endovascular graft migration.

    PubMed

    Thomas, Bradley G; Sanchez, Luis A; Geraghty, Patrick J; Rubin, Brian G; Money, Samuel R; Sicard, Gregorio A

    2010-06-01

    Proximal attachment failure, often leading to graft migration, is a severe complication of endovascular aneurysm repair (EVAR). Aortic cuffs have been used to treat proximal attachment failure with mixed results. The Zenith Renu AAA Ancillary Graft (Cook Inc, Bloomington, Ind) is available in two configurations: converter and main body extension. Both provide proximal extension with active fixation for the treatment of pre-existing endovascular grafts with failed or failing proximal fixation or seal in patients who are not surgical candidates. We prospectively compared the outcomes of patient treatment with these two device configurations. From September 2005 to May 2008, a prospective, nonrandomized, postmarket registry was conducted to collect data from 151 patients treated at 95 institutions for proximal aortic endovascular graft failure using the Renu graft. Treatment indications included inadequate proximal fixation or seal, for example, migration, and type I and III endoleak. A total of 136 patients (90%) had migration, 111 (74%) had endoleak, and 94 (62%) had endoleaks and graft migration. AneuRx grafts were present in 126 patients (83%), of which 89 (59%) were treated with a converter and 62 (41%) with a main body extension. Outcomes using converters vs main body extensions for endoleak rates, changes in aneurysm size, and ruptures were compared. Preprocedural demographics between the two groups did not differ significantly. Procedural success rates were 98% for the converter group and 100% for the main body extension group. At a mean follow-up of 12.8 +/- 7.5 months, no type III endoleaks (0%) were identified in the converter group, and five (8%) were identified in the main body extension group. There were no aneurysm ruptures in patients treated with converters (0%) and three ruptures (5%) in patients treated with main body extensions.
Each patient with aneurysm rupture had been treated with a Renu main body extension, developed a type III endoleak, and underwent surgical conversion. Two of the three patients died postoperatively. Proximal attachment failure and graft migration are potentially lethal complications of EVAR. Proximal graft extension using an aortic cuff is the easiest technique for salvaging an endovascular graft. Unfortunately, it has a predictable failure mode (development of a type III endoleak due to component separation) and is associated with a significantly higher failure rate than with the use of a converter. EVAR salvage with a converter and a femorofemoral bypass is a more complex but superior option for endovascular graft salvage.

  16. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
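    Why a mixture of exponentials has a decreasing failure rate can be shown directly. A minimal sketch with hypothetical parameter values (not fitted to any Trident data): the hazard of a two-component exponential mixture starts at the weighted-average rate and decays toward the smaller rate, because the weak sub-population fails first and the survivors are increasingly the strong units.

```python
import math

def mixed_exp_hazard(t, p, lam1, lam2):
    """Hazard rate of a two-component exponential mixture.

    Survival: S(t) = p*exp(-lam1*t) + (1-p)*exp(-lam2*t)
    Density:  f(t) = p*lam1*exp(-lam1*t) + (1-p)*lam2*exp(-lam2*t)
    Hazard:   h(t) = f(t)/S(t), decreasing from p*lam1 + (1-p)*lam2
    toward min(lam1, lam2).
    """
    s = p * math.exp(-lam1 * t) + (1 - p) * math.exp(-lam2 * t)
    f = p * lam1 * math.exp(-lam1 * t) + (1 - p) * lam2 * math.exp(-lam2 * t)
    return f / s

# equal mix of a "weak" (lam=1.0) and a "strong" (lam=0.1) sub-population
h0 = mixed_exp_hazard(0.0, 0.5, 1.0, 0.1)    # weighted average: 0.55
h10 = mixed_exp_hazard(10.0, 0.5, 1.0, 0.1)  # much closer to 0.1
```

    This is the behavior that makes a constant-failure-rate assumption "seem incorrect" for such equipment: early life data show a high observed rate that steadily drops.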

  17. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    DTIC Science & Technology

    2017-11-22

    High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. The views, opinions and/or findings contained in this report are those of the authors. Final Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University.

  18. Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankaskie, P. J.

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) Interactions (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on 1) standard statistical methods applied to available PCI fuel failure data and 2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: 1. pre-transient fuel rod power, P_I; 2. transient increase in fuel rod power, ΔP; 3. fuel burnup, Bu; and 4. the constitutive material property of the Zircaloy cladding, SEAF.

  19. Failure on the American Board of Surgery Examinations of General Surgery Residency Graduates Correlates Positively with States' Malpractice Risk.

    PubMed

    Dent, Daniel L; Al Fayyadh, Mohammed J; Rawlings, Jeremy A; Hassan, Ramy A; Kempenich, Jason W; Willis, Ross E; Stewart, Ronald M

    2018-03-01

    It has been suggested that in environments where there is greater fear of litigation, resident autonomy and education are compromised. Our aim was to examine failure rates on American Board of Surgery (ABS) examinations in comparison with medical malpractice payments in 47 US states/territories that have general surgery residency programs. We hypothesized higher ABS examination failure rates for general surgery residents who graduate from residencies in states with higher malpractice risk. We conducted a retrospective review of five-year (2010-2014) pass rates of first-time examinees of the ABS examinations. States' malpractice data were adjusted based on population. ABS examination failure rates for programs in states with above and below the median malpractice payments per capita were 31 and 24 per cent (P < 0.01), respectively. This difference was seen in university and independent programs regardless of size. Pearson correlation confirmed a significant positive correlation between board failure rates and malpractice payments per capita for the Qualifying Examination (P < 0.02), the Certifying Examination (P < 0.02), and the combined Qualifying and Certifying index (P < 0.01). Malpractice risk correlates positively with graduates' failure rates on ABS examinations regardless of program size or type. We encourage further examination of training environments and their relationship to surgical residency graduate performance.
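    The Pearson correlation underlying the study's main finding is simple to reproduce. A minimal sketch with hypothetical (x, y) pairs rather than the actual state-level payment and failure-rate data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# hypothetical: malpractice payments per capita vs. board failure rate
r = pearson_r([1.0, 2.0, 3.0, 4.0], [0.20, 0.24, 0.29, 0.31])
```

    A value of r near +1 would indicate the positive association the authors report; significance testing of r against a null of zero is a separate step not shown here.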

  20. "Failure Is a Major Component of Learning Anything": The Role of Failure in the Development of STEM Professionals

    ERIC Educational Resources Information Center

    Simpson, Amber; Maltese, Adam

    2017-01-01

    The term failure typically evokes negative connotations in educational settings and is likely to be accompanied by negative emotional states, low sense of confidence, and lack of persistence. These negative emotional and behavioral states may factor into an individual not pursuing a degree or career in science, technology, engineering, or…

  1. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  2. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  3. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  4. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  5. 40 CFR 51.352 - Basic I/M performance standard.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...% emission test failure rate among pre-1981 model year vehicles. (10) Waiver rate. A 0% waiver rate. (11... 20% emission test failure rate among pre-1981 model year vehicles. (11) Waiver rate. A 0% waiver rate... Requirements § 51.352 Basic I/M performance standard. (a) Basic I/M programs shall be designed and implemented...

  6. Characterization of cracking behavior using posttest fractographic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, T.; Shockey, D.A.

    A determination of time to initiation of stress corrosion cracking in structures and test specimens is important for performing structural failure analysis and for setting inspection intervals. Yet it is seldom possible to establish how much of a component's lifetime represents the time to initiation of fracture and how much represents postinitiation crack growth. This exploratory research project was undertaken to examine the feasibility of determining crack initiation times and crack growth rates from posttest examination of fracture surfaces of constant-extension-rate-test (CERT) specimens by using the fracture reconstruction applying surface topography analysis (FRASTA) technique. The specimens used in this study were Type 304 stainless steel fractured in several boiling water reactor (BWR) aqueous environments. 2 refs., 25 figs., 2 tabs.

  7. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. 
In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a non-Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
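    The Coulomb clock-advance property discussed above reduces to one line: under constant background stressing, a static stress step advances failure by the step divided by the stressing rate, independent of when the step occurs. A sketch with hypothetical loading numbers (not values from the paper):

```python
def coulomb_clock_advance(delta_tau, stressing_rate):
    """Clock advance dt for a Coulomb failure model.

    With constant tectonic loading at rate dtau/dt, a coseismic static
    stress step delta_tau brings the fault to its failure threshold
    earlier by dt = delta_tau / stressing_rate. Note t0, the time of the
    step, does not appear: the advance is independent of when the step
    occurs, which is the Coulomb signature tested in the paper.
    """
    return delta_tau / stressing_rate

# hypothetical: 0.1 MPa static stress step, 0.01 MPa/yr tectonic loading
dt = coulomb_clock_advance(0.1, 0.01)   # 10-year clock advance
```

    The rate-state models compared in the paper generally break this t0-independence; they approach it only as the constitutive parameter a becomes small relative to b.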

  8. A methodology for probabilistic remaining creep life assessment of gas turbine components

    NASA Astrophysics Data System (ADS)

    Liu, Zhimin

    Certain gas turbine components operate in harsh environments and various mechanisms may lead to component failure. It is common practice to use remaining life assessments to help operators schedule maintenance and component replacements. Creep is a major failure mechanism affecting the remaining life assessment, and the resulting life consumption of a component is highly sensitive to variations in the material stresses and temperatures, which fluctuate significantly due to changes in real operating conditions. In addition, variations in material properties and geometry will result in changes in the creep life consumption rate. The traditional method used for remaining life assessment assumes a set of fixed operating conditions at all times, and it fails to capture the variations in operating conditions. This translates into a significant loss of accuracy and unnecessarily high maintenance and replacement costs. A new method is developed that captures these variations and improves the prediction accuracy of remaining life. First, a metamodel is built to approximate the relationship between variables (operating conditions, material properties, geometry, etc.) and a creep response. The metamodel is developed using Response Surface Method/Design of Experiments methodology. Design of Experiments is an efficient sampling method, and for each sampling point a set of finite element analyses is used to compute the corresponding response value. Next, a low-order polynomial Response Surface Equation (RSE) is used to fit these values. Four techniques are suggested to dramatically reduce computational effort and to increase the accuracy of the RSE: smart meshing, automatic geometry parameterization, screening tests, and regional RSE refinement. The RSEs, along with a probabilistic method and a life fraction model, are used to compute current damage accumulation and remaining life.
By capturing the variations mentioned above, the new method results in much better accuracy than that available using the traditional method. After further development and proper verification the method should bring significant savings by reducing the number of inspections and deferring part replacement.
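    The low-order polynomial RSE fit described above can be illustrated in one variable. This is a generic least-squares sketch with hypothetical sampled response values, not the thesis's actual finite element data: build the normal equations for the basis [1, x, x²] and solve them by Gaussian elimination.

```python
def fit_quadratic_rse(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 (a one-variable response
    surface equation) via the normal equations. Returns (a, b, c)."""
    basis = lambda x: (1.0, x, x * x)
    # Normal equations: (X^T X) coef = X^T y
    A = [[sum(basis(x)[i] * basis(x)[j] for x in xs) for j in range(3)]
         for i in range(3)]
    rhs = [sum(basis(x)[i] * y for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, 3))) / A[r][r]
    return tuple(coef)

# hypothetical "design of experiments" sample of a creep response
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * x + 0.1 * x * x for x in xs]
a, b, c = fit_quadratic_rse(xs, ys)
```

    In the thesis the same idea runs over many variables at once, with each (x, y) sample coming from a finite element analysis rather than a formula; once fitted, the cheap polynomial replaces the expensive analysis inside the probabilistic loop.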

  9. Academic performance of ethnic minority candidates and discrimination in the MRCGP examinations between 2010 and 2012: analysis of data

    PubMed Central

    Roberts, Chris

    2013-01-01

    Objective To determine the difference in failure rates in the postgraduate examination of the Royal College of General Practitioners (MRCGP) by ethnic or national background, and to identify factors associated with pass rates in the clinical skills assessment component of the examination. Design Analysis of data provided by the Royal College of General Practitioners and the General Medical Council. Participants Cohort of 5095 candidates sitting the applied knowledge test and clinical skills assessment components of the MRCGP examination between November 2010 and November 2012. A further analysis was carried out on 1175 candidates not trained in the United Kingdom, who sat an English language capability test (IELTS) and the Professional and Linguistic Assessment Board (PLAB) examination (as required for full medical registration), controlling for scores on these examinations and relating them to pass rates of the clinical skills assessment. Setting United Kingdom. Results After controlling for age, sex, and performance in the applied knowledge test, significant differences persisted between white UK graduates and other candidate groups. Black and minority ethnic graduates trained in the UK were more likely to fail the clinical skills assessment at their first attempt than their white UK colleagues (odds ratio 3.536 (95% confidence interval 2.701 to 4.629), P<0.001; failure rate 17% v 4.5%). Black and minority ethnic candidates who trained abroad were also more likely to fail the clinical skills assessment than white UK candidates (14.741 (11.397 to 19.065), P<0.001; 65% v 4.5%). For candidates not trained in the UK, black or minority ethnic candidates were more likely to fail than white candidates, but this difference was no longer significant after controlling for scores in the applied knowledge test, IELTS, and PLAB examinations (adjusted odds ratio 1.580 (95% confidence interval 0.878 to 2.845), P=0.127). 
Conclusions Subjective bias due to racial discrimination in the clinical skills assessment may be a cause of failure for UK trained candidates and international medical graduates. The difference between British black and minority ethnic candidates and British white candidates in the pass rates of the clinical skills assessment, despite controlling for prior attainment, suggests that subjective bias could also be a factor. Changes to the clinical skills assessment could improve the perception of the examination as being biased against black and minority ethnic candidates. The difference in training experience and other cultural factors between candidates trained in the UK and abroad could affect outcomes. Consideration should be given to strengthening postgraduate training for international medical graduates. PMID:24072882

  10. Mechanical Properties of Transgenic Silkworm Silk Under High Strain Rate Tensile Loading

    NASA Astrophysics Data System (ADS)

    Chu, J.-M.; Claus, B.; Chen, W.

    2017-12-01

    Studies have shown that transgenic silkworm silk may be capable of having similar properties of spider silk while being mass-producible. In this research, the tensile stress-strain response of transgenic silkworm silk fiber is systematically characterized using a quasi-static load frame and a tension Kolsky bar over a range of strain-rates between 10^{-3} and 700/s. The results show that transgenic silkworm silk tends to have higher overall ultimate stress and failure strain at high strain rate (700/s) compared to quasi-static strain rates, indicating rate sensitivity of the material. The failure strain at the high strain rate is higher than that of spider silk. However, the stress levels are significantly below that of spider silk, and far below that of high-performance fiber. Failure surfaces are examined via scanning electron microscopy and reveal that the failure modes are similar to those of spider silk.

  11. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2017-06-30

    along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in...obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one...which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure

  12. Forensic applications of metallurgy - Failure analysis of metal screw and bolt products

    NASA Astrophysics Data System (ADS)

    Tiner, Nathan A.

    1993-03-01

    It is often necessary for engineering consultants in liability lawsuits to consider whether a component has a manufacturing and/or design defect, as judged by industry standards, as well as whether the component was strong enough to resist service loads. Attention is presently given to the principles that must be appealed to in order to clarify these two issues in the cases of metal screw and bolt failures, which are subject to fatigue and brittle fractures and ductile dimple rupture.

  13. Controlling stress corrosion cracking in mechanism components of ground support equipment

    NASA Technical Reports Server (NTRS)

    Majid, W. A.

    1988-01-01

    The selection of materials for mechanism components used in ground support equipment so that failures resulting from stress corrosion cracking will be prevented is described. A general criteria to be used in designing for resistance to stress corrosion cracking is also provided. Stress corrosion can be defined as combined action of sustained tensile stress and corrosion to cause premature failure of materials. Various aluminum, steels, nickel, titanium and copper alloys, and tempers and corrosive environment are evaluated for stress corrosion cracking.

  14. Modular space vehicle boards, control software, reprogramming, and failure recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin

    A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.

  15. Vocal fold tissue failure: preliminary data and constitutive modeling.

    PubMed

    Chan, Roger W; Siegmund, Thomas

    2004-08-01

    In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated nonlinear stress-strain and rate-dependent failure response with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.

  16. Studies on the Biotribological and Biological Behavior of Thermally Oxidized Ti6Al4V for Use in Artificial Cervical Disk

    NASA Astrophysics Data System (ADS)

    Wang, Song; Li, Junhui; Lu, Junzhe; Tyagi, Rajnesh; Liao, Zhenhua; Feng, Pingfa; Liu, Weiqiang

    2017-05-01

    The artificial cervical disk was simplified and considered as a ball-on-socket model with the material configuration of ultra-high molecular weight polyethylene and Ti6Al4V (PE-on-TC4). In order to improve the wear resistance, an optimized thermal oxidation (TO) coating was applied on the TC4 component. The long-term wear behavior of the model was assessed in vitro using a wear simulator over 10 million cycles (MC) of testing. The biological behavior was investigated by bone marrow-derived mesenchymal stem cell (BMSC) attachment and cell viability/proliferation assays. The total average wear rate for the PE/TC4 pair was found to be 0.81 mg/MC, whereas the same was about 0.96 mg/MC for the PE/TO pair. The wear rate of the metal was negligible in comparison with that of the mating polymer. The PE component was found to suffer severe damage characterized by scratches, fatigue cracks and arc-shaped wear grooves on the edge zone of the ball. The dominant wear mechanism was abrasion for the metal component, while the dominant failure mechanism was a mix of plowing, fatigue and plastic deformation for the polymer component. TO coating improved the cell attachment property of TC4, and the cell viability results were also quite good. TO coating protected TC4 from being plowed and avoided the release of toxic metal ions. However, it intensified the wear of the PE component. Considering the biotribological and biological behavior in totality, TO coating could still be promising when applied in articulation surfaces.

  17. Fracture Damage and Failure of Cannon Components by Service Loading

    DTIC Science & Technology

    1983-02-01

    the result of normal service conditions. Details of the failure and the redesign of the cannon have been described elsewhere. The brief review...here is intended to describe the extreme situation of very severe damage and failure of a cannon. In fact, this failure led to many fracture-safe ...criterion; elastic-perfectly plastic material properties. The experiments summarized in Figure 6 used cannon tubes in which a 6.4 mm deep semi

  18. ELECTRONIC COMPONENT COOLING ALTERNATIVES: COMPRESSED AIR AND LIQUID NITROGEN

    EPA Science Inventory

    The goal of this study was to evaluate topics used to troubleshoot circuit boards with known or suspected thermally intermittent components. Failure modes for thermally intermittent components are typically mechanical defects, such as cracks in solder paths or joints, or broken b...

  19. Failure to Rescue, Rescue Surgery and Centralization of Postoperative Complications: A Challenge for General and Acute Care Surgeons.

    PubMed

    Zago, Mauro; Bozzo, Samantha; Carrara, Giulia; Mariani, Diego

    2017-01-01

    To explore the current literature on the failure to rescue and rescue surgery concepts, to identify the key items for decreasing the failure to rescue rate and improving outcome, and to verify whether there is a rationale for centralization of patients suffering postoperative complications. There is a growing awareness of the need to assess and measure the failure to rescue rate on an institutional, regional and national basis. Many factors affect failure to rescue, and all should be individually analyzed and considered. Rescue surgery is one of these factors. Rescue surgery assumes an acute care surgery background. Measurement of the failure to rescue rate should become a standard for quality improvement programs. Implementation of all clinical and organizational items involved is the key to better outcomes. Preparedness for rescue surgery is a main pillar in this process. Centralization of management, audit, and communication are as important as patient centralization.

  20. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
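    The specimen-to-component workflow described above starts from statistical strength parameters. A minimal sketch of that first step under the standard two-parameter Weibull assumption, using median-rank regression on hypothetical rupture strengths (this is a generic textbook method, not the CARES/Life implementation):

```python
import math

def weibull_from_strengths(strengths):
    """Estimate two-parameter Weibull strength parameters (m, sigma0) from
    specimen rupture strengths by median-rank linear regression of
        ln(-ln(1 - F_i)) = m*ln(sigma_i) - m*ln(sigma0),
    with median ranks F_i = (i - 0.3)/(n + 0.4) for the i-th weakest."""
    s = sorted(strengths)
    n = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    sigma0 = math.exp(mx - my / m)   # from the intercept -m*ln(sigma0)
    return m, sigma0

def failure_probability(sigma, m, sigma0):
    """Probability of failure of a uniformly stressed specimen at stress sigma."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))
```

    A component-level prediction then integrates this specimen-level distribution over the stress field from a finite element analysis, which is the part the CARES/Life software automates.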

  1. Logic analysis of complex systems by characterizing failure phenomena to achieve diagnosis and fault-isolation

    NASA Technical Reports Server (NTRS)

    Wong, J. T.; Andre, W. L.

    1981-01-01

    A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure: a partially ordered set. It is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.

  2. Quality of Life for Saudi Patients With Heart Failure: A Cross-Sectional Correlational Study

    PubMed Central

    AbuRuz, Mohannad Eid; Alaloul, Fawwaz; Saifan, Ahmed; Masa’Deh, Rami; Abusalem, Said

    2016-01-01

    Introduction: Heart failure is a major public health issue and a growing concern in developing countries, including Saudi Arabia. Most related research was conducted in Western cultures and may have limited applicability for individuals in Saudi Arabia. Thus, this study assessed the quality of life of Saudi patients with heart failure. Materials and Methods: A cross-sectional correlational design was used with a convenience sample of 103 patients with heart failure. Data were collected using the Short Form-36 and the Medical Outcomes Study-Social Support Survey. Results: Overall, the patients’ scores were low for all domains of Quality of Life. The Physical Component Summary and Mental Component Summary mean scores (±SD) were 36.7±12.4 and 48.8±6.5, respectively, indicating poor Quality of Life. Left ventricular ejection fraction was the strongest predictor of both the physical and mental summaries. Conclusion: Identifying factors that impact quality of life for Saudi heart failure patients is important in identifying and meeting their physical and psychosocial needs. PMID:26493415

  3. Modeling joint restoration strategies for interdependent infrastructure systems

    PubMed Central

    Simonovic, Slobodan P.

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300

  4. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
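    The recommended Monte Carlo load-resistance step can be sketched as follows; the normal distributions and their parameters are illustrative assumptions, not values from the paper.

```python
import random

def mc_failure_probability(n=100_000, seed=42):
    """Estimate P(load > resistance) by Monte Carlo over a load-resistance model."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        resistance = rng.gauss(500.0, 40.0)   # hypothetical strength, MPa
        load = rng.gauss(350.0, 60.0)         # hypothetical service load, MPa
        if load > resistance:
            failures += 1
    return failures / n

p_f = mc_failure_probability()
# Analytically, load - resistance ~ N(-150, sqrt(40^2 + 60^2)), so p_f ~ 0.019.
```

    In practice the load and resistance distributions would be fitted from service data and material tests, and the estimated failure probability would feed the life cycle cost optimization.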

  5. Failure Rates of Orthodontic Fixed Lingual Retainers bonded with Two Flowable Light-cured Adhesives: A Comparative Prospective Clinical Trial.

    PubMed

    Talic, Nabeel F

    2016-08-01

    This comparative prospective randomized clinical trial examined the in vivo failure rates of fixed mandibular and maxillary lingual retainers bonded with two light-cured flowable composites over 6 months. Consecutive patients were divided into two groups on a 1:1 basis. Two hundred fixed lingual retainers were included, and their failures were followed for 6 months. One group (n = 50) received retainers bonded with a nano-hybrid composite based on nano-optimized technology (Tetric-N-Flow, Ivoclar Vivadent). The other group (n = 50) received retainers bonded with a low-viscosity (LV) composite (Transbond Supreme LV, 3M Unitek). There was no significant difference between the overall failure rates of mandibular retainers bonded with Transbond (8%) and those bonded with Tetric-N-Flow (18%). However, the odds of failure with Tetric-N-Flow were 2.52-fold greater than with Transbond. The failure rate of maxillary retainers bonded with Transbond was higher (14%) than that of maxillary retainers bonded with Tetric-N-Flow (10%), but the difference was not significant. There was no significant difference in the estimated mean survival times of the maxillary and mandibular retainers bonded with the two composites. Both composites tested in the current study can be used to bond fixed maxillary and mandibular lingual retainers, with low failure rates.
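    The reported odds ratio can be reproduced directly from the quoted failure rates; the helper below is a generic odds-ratio calculation, not the trial's statistical software.

```python
def odds_ratio(p1, p2):
    """Odds ratio for an event with probability p1 versus probability p2."""
    return (p1 / (1.0 - p1)) / (p2 / (1.0 - p2))

# Reproduces the reported mandibular figure: 18% vs. 8% failure gives OR ~ 2.52,
# i.e. the odds of failure (0.18/0.82) divided by (0.08/0.92).
or_mandibular = odds_ratio(0.18, 0.08)
```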

  6. Comparison of Extensive Thermal Cycling Effects on Microstructure Development in Micro-alloyed Sn-Ag-Cu Solder Joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Iver E.; Boesenberg, Adam; Harringa, Joel

    2011-09-28

    Pb-free solder alloys based on the Sn-Ag-Cu (SAC) ternary eutectic show promise for widespread adoption across assembly conditions and operating environments, but enhanced microstructural control is needed. Micro-alloying with elements such as Zn was demonstrated earlier to promote a preferred solidification path and joint microstructure in simple (Cu/Cu) solder joint studies at different cooling rates. This beneficial behavior has now been verified in reworked ball grid array (BGA) joints, using dissimilar SAC305 (Sn-3.0Ag-0.5Cu, wt.%) solder paste. After industrial assembly, BGA components joined with Sn-3.5Ag-0.74Cu-0.21Zn solder were tested in thermal cycling (-55 C/+125 C) along with baseline SAC305 BGA joints beyond 3000 cycles with continuous failure monitoring. Weibull analysis of the results demonstrated that BGA components joined with SAC + Zn/SAC305 have less joint integrity than SAC305 joints, but their lifetime is sufficient for severe applications in consumer, defense, and avionics electronic product field environments. Failure analysis of the BGA joints revealed that cracking did not deviate from the typical top area (BGA component side) of each joint, in spite of different Ag3Sn blade content. Thus, SAC + Zn solder has not shown any advantage over SAC305 solder in these thermal cycling trials, but other characteristics of SAC + Zn solder may make it more attractive for use across the full range of harsh conditions of avionics or defense applications.

  7. Tribological assessment of a flexible carbon-fibre-reinforced poly(ether-ether-ketone) acetabular cup articulating against an alumina femoral head.

    PubMed

    Scholes, S C; Inman, I A; Unsworth, A; Jones, E

    2008-04-01

    New material combinations have been introduced as the bearing surfaces of hip prostheses in an attempt to prolong their life by overcoming the problems of failure due to wear-particle-induced osteolysis. This will hopefully reduce the need for revision surgery. The study detailed here used a hip simulator to assess the volumetric wear rates of large-diameter carbon-fibre-reinforced pitch-based poly(ether-ether-ketone) (CFR-PEEK) acetabular cups articulating against alumina femoral heads. The joints were tested for 25 × 10⁶ cycles. Friction tests were also performed on these joints to determine the lubrication regime under which they operate. The average volumetric wear rate of the CFR-PEEK acetabular component of 54 mm diameter was 1.16 mm³/10⁶ cycles, compared with 38.6 mm³/10⁶ cycles for an ultra-high-molecular-weight polyethylene acetabular component of 28 mm diameter worn against a ceramic head. This extremely low wear rate was sustained over 25 × 10⁶ cycles (the equivalent of up to approximately 25 years in vivo). The frictional studies showed that the joints operated under the mixed-boundary lubrication regime. The low wear produced by these joints showed that this novel joint couple offers low wear rates and therefore may be an alternative material choice for the reduction of osteolysis.

  8. High frequency of brain metastases after adjuvant therapy for high-risk melanoma.

    PubMed

    Samlowski, Wolfram E; Moon, James; Witter, Merle; Atkins, Michael B; Kirkwood, John M; Othus, Megan; Ribas, Antoni; Sondak, Vernon K; Flaherty, Lawrence E

    2017-11-01

    The incidence of CNS progression in patients with high-risk regional melanoma (stages IIIA-N2a to IIIC) is not well characterized. Data from the S0008 trial provided an opportunity to examine the role of CNS progression in treatment failure and survival. All patients were surgically staged. Following wide excision and full regional lymphadenectomy, patients were randomized to receive adjuvant biochemotherapy (BCT) or high-dose interferon alfa-2b (HDI). CNS progression was retrospectively identified from data forms. Survival was measured from the date of CNS progression. A total of 402 eligible patients were included in the analysis (BCT: 199, HDI: 203). Median follow-up (if alive) was over 7 years (range: 1 month to 11 years). The site of initial progression was identifiable in 80% of relapsing patients. CNS progression was a component of systemic melanoma relapse in 59/402 patients (15% overall). In 34/402 patients (9%), CNS progression represented the initial site of treatment failure. CNS progression was a component of initial progression in 27% of all patients whose melanoma relapsed (59/221). The risk of CNS progression was highest within 3 years of randomization. The difference in CNS progression rates between treatment arms was not significant (BCT = 25, HDI = 34, P = 0.24). Lymph node macrometastases were strongly associated with CNS progression (P = 0.001), while ulceration and head and neck primaries were not significant predictors. This retrospective analysis of the S0008 trial identified a high brain metastasis rate (15%) in regionally advanced melanoma patients. Further studies are needed to establish whether screening plus earlier treatment would improve survival following CNS progression. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  9. Exercise Intervention for Cancer Survivors with Heart Failure: Two Case Reports

    PubMed Central

    Hughes, Daniel C.; Lenihan, Daniel J.; Harrison, Carol A.; Basen-Engquist, Karen M.

    2011-01-01

    Rationale Cardiotoxicity is a troubling long-term side effect of chemotherapy cancer treatment, affecting therapy and quality of life (QOL). Exercise is beneficial in heart failure (HF) patients and in cancer survivors without HF, but has not been tested in cancer survivors with treatment-induced HF. Methods We present case studies for two survivors: a 56-year-old female Hodgkin’s lymphoma survivor (Pt 1) and a 44-year-old male leukemia survivor (Pt 2). We conducted a 16-week exercise program with the goal of 30 minutes of exercise performed 3 times per week at a minimum intensity of 50% heart rate reserve (HRR) or a ‘12’ rating of perceived exertion (RPE). Results Pt 1 improved from 11.5 minutes of exercise split over two bouts at an RPE of 14 to a 30-minute bout at an RPE of 15. Pt 2 improved from 11 minutes of exercise split over two bouts at an RPE of 12 to an 18-minute bout at an RPE of 12. Both improved in VO2 peak (Pt 1: 13.9 to 14.3 ml O2/kg/min; Pt 2: 12.5 to 18.7 ml O2/kg/min). Ejection fraction increased for Pt 2 (25–30% to 35–40%) but not for Pt 1 (35–40%). QOL as assessed by the SF-36 Physical Component Scale (PCS) improved from 17.79 to 25.31 for Pt 1, and the Mental Component Scale (MCS) improved from 43.84 to 56.65 for Pt 1 and from 34.79 to 44.45 for Pt 2. Conclusions Properly designed exercise interventions can improve physical functioning and quality of life for this growing group of survivors. PMID:21709755

  10. Remote operation of an orbital maneuvering vehicle in simulated docking maneuvers

    NASA Technical Reports Server (NTRS)

    Brody, Adam R.

    1990-01-01

    Simulated docking maneuvers were performed to assess the effect of initial velocity on docking failure rate, mission duration, and delta v (fuel consumption). Subjects performed simulated docking maneuvers of an orbital maneuvering vehicle (OMV) to a space station. The effect of the removal of the range and rate displays (simulating a ranging instrumentation failure) was also examined. Naive subjects were capable of achieving a high success rate in performing simulated docking maneuvers without extensive training. Failure rate was a function of individual differences; there was no treatment effect on failure rate. The amount of time subjects reserved for final approach increased with starting velocity. Piloting of docking maneuvers was not significantly affected in any way by the removal of range and rate displays. Radial impulse was significant both by subject and by treatment. NASA's 0.1 percent rule, dictating an approach rate no greater than 0.1 percent of the range, is seen to be overly conservative for nominal docking missions.
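    The 0.1 percent rule mentioned in the conclusion has a simple quantitative consequence that helps explain why it is considered conservative: if the closing rate is always held at 0.1 percent of the current range, the range decays exponentially and final approach takes over an hour. The sketch below assumes the common per-second reading of the rule (rate in m/s, range in m), which is my interpretation rather than a statement from the paper.

```python
import math

def max_approach_rate(range_m):
    """0.1 percent rule: closing rate (m/s) limited to 0.1% of current range (m)."""
    return 0.001 * range_m

# Holding the limit exactly gives dr/dt = -0.001 * r, i.e. exponential decay
# r(t) = r0 * exp(-0.001 * t). Closing from 1000 m to 10 m at the limit takes
# ln(100) / 0.001 ~ 4605 s, well over an hour.
time_to_close_s = math.log(1000.0 / 10.0) / 0.001
```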

  11. Diminished auditory sensory gating during active auditory verbal hallucinations.

    PubMed

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and are considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired-click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptom Rating Scales (PSYRATS) AVH Total score. The results of a mixed-model ANOVA revealed an overall effect of AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. The PSYRATS score was significantly and negatively correlated with the N100 gating ratio only in the AVH-off state. These findings link the onset of AVH with a failure of an empirically defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
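    The S2/S1 gating-ratio computation described above is straightforward; the amplitudes below are hypothetical values chosen only to contrast strong and weak gating.

```python
def gating_ratio(s1_peak_uv, s2_peak_uv):
    """Sensory gating ratio: S2 peak amplitude divided by S1 peak amplitude.
    Higher ratios indicate less suppression of the second click."""
    if s1_peak_uv == 0:
        raise ValueError("S1 amplitude must be nonzero")
    return s2_peak_uv / s1_peak_uv

# Hypothetical P50 peak amplitudes (microvolts), S1 then S2:
strong = gating_ratio(4.0, 1.0)   # S2 well suppressed -> low ratio
weak = gating_ratio(4.0, 3.2)     # incomplete suppression -> high ratio
```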

  12. Advanced Sensor Platform to Evaluate Manloads For Exploration Suit Architectures

    NASA Technical Reports Server (NTRS)

    McFarland, Shane; Pierce, Gregory

    2016-01-01

    Space suit manloads are defined as the outer bounds of force that the human occupant of a suit is able to exert on the suit during motion. They are defined on a suit-component basis as a unit of maximum force that the suit component in question must withstand without failure. Existing legacy manloads requirements are specific to the suit architecture of the EMU and were developed in an iterative fashion; however, future exploration needs dictate a new suit architecture with bearings, load paths, and entry capability not previously used in any flight suit. No capability currently exists to easily evaluate manloads imparted by a suited occupant, which would be required to develop requirements for a flight-rated design. However, sensor technology has now progressed to the point where an easily deployable, repeatable, and flexible manloads measuring technique can be developed. INNOVATION: This development positively impacts schedule, cost, and safety risk associated with new exploration suit architectures. For a final flight design, a comprehensive and accurate manloads requirements set must be communicated to the contractor; failing that, a suit design which does not meet the necessary manloads limits is prone to failure during testing or, worse, during an EVA, which could cause catastrophic failure of the pressure garment and pose risk to the crew. This work facilitates a viable means of developing manloads requirements using a range of human sizes and strengths. OUTCOME / RESULTS: Performed sensor market research. Highlighted three viable options (primary, secondary, and flexible packaging option). Designed and fabricated a custom bracket to evaluate the primary option on a single suit axial. Completed manned suited manloads testing and verified the general approach.

  13. Analysis of factors affecting failure of glass cermet tunnel restorations in a multi-center study.

    PubMed

    Pilebro, C E; van Dijken, J W

    2001-06-01

    The aim of this study was to analyze factors influencing the failures of tunnel restorations performed with a glass cermet cement (Ketac Silver). Twelve dentists in eight clinics, clinically experienced and familiar with the tunnel technique, placed 374 restorations. The occlusal sections of fifty percent of the restorations were laminated with hybrid resin composite. The results of the yearly clinical and radiographic evaluations over the course of 3 years were correlated to factors that could influence the failure rate using logistic regression analysis. At the 3-year recall a cumulative number of 305 restorations were available. The cumulative replacement rate was 20%. The main reasons for replacement were marginal ridge fracture (14%) and dentin caries (3%). Another 7% of the restorations that had not been replaced were classified as failures because of untreated dentin caries. Caries activity, lesion size, tunnel cavity opening size, partial or total tunnel, composite lamination, and operating time showed no significant correlation to the failure rate. The only significant variable was the individual failure rate of the participating dentists, which varied between 9% and 50% (p=0.013).

  14. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
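    The back-to-back testing process itself — driving functionally equivalent components with common inputs and flagging any output discrepancy — can be sketched as below. The two implementations and the seeded fault are hypothetical, chosen only to show a discrepancy being detected.

```python
import random

def back_to_back_test(impl_a, impl_b, input_gen, n_cases=1000):
    """Run two functionally equivalent components on the same inputs and
    return the inputs on which their outputs disagree (potential faults)."""
    discrepancies = []
    for _ in range(n_cases):
        x = input_gen()
        if impl_a(x) != impl_b(x):
            discrepancies.append(x)
    return discrepancies

random.seed(0)
good = abs                                    # reference version
faulty = lambda x: x if x >= 0 else -x - 1    # variant with a seeded off-by-one fault
diffs = back_to_back_test(good, faulty, lambda: random.randint(-10, 10))
```

    Every disagreement points at an input where at least one version is wrong; with correlated faults, of course, both versions can agree and still be wrong, which is the dependence issue the paper's third model addresses.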

  15. Increase in hospital admission rates for heart failure in The Netherlands, 1980-1993.

    PubMed Central

    Reitsma, J. B.; Mosterd, A.; de Craen, A. J.; Koster, R. W.; van Capelle, F. J.; Grobbee, D. E.; Tijssen, J. G.

    1996-01-01

    OBJECTIVE: To study the trend in hospital admission rates for heart failure in the Netherlands from 1980 to 1993. DESIGN: All hospital admissions in the Netherlands with a principal discharge diagnosis of heart failure were analysed. In addition, individual records of heart failure patients from a subset of 7 hospitals were analysed to estimate the frequency and timing of readmissions. RESULTS: The total number of discharges for men increased from 7377 in 1980 to 13 022 in 1993, and for women from 7064 to 12 944. From 1980 through 1993 age adjusted discharge rates rose 48% for men and 40% for women. Age adjusted in-hospital mortality for heart failure decreased from 19% in 1980 to 15% in 1993. For all age groups in-hospital mortality for men was higher than for women. The mean length of hospital admissions in 1993 was 14.0 days for men and 16.4 days for women. A review of individual patient records from a 6.3% sample of all hospital admissions in the Netherlands indicated that within a 2 year period 18% of the heart failure patients were admitted more than once and 5% more than twice. CONCLUSIONS: For both men and women a pronounced increase in age adjusted discharge rates for heart failure was observed in the Netherlands from 1980 to 1993. Readmissions were a prominent feature among heart failure patients. Higher survival rates after acute myocardial infarction and the longer survival of patients with heart disease, including heart failure, may have contributed to the observed increase. The importance of advances in diagnostic tools and of possible changes in admission policy remains uncertain. PMID:8944582

  16. First time description of early lead failure of the Linox Smart lead compared to other contemporary high-voltage leads.

    PubMed

    Weberndörfer, Vanessa; Nyffenegger, Tobias; Russi, Ian; Brinkert, Miriam; Berte, Benjamin; Toggweiler, Stefan; Kobza, Richard

    2018-05-01

    Early lead failure has recently been reported in ICD patients with Linox SD leads. We aimed to compare the long-term performance of the follow-on lead model Linox Smart SD with other contemporary high-voltage leads. All patients receiving high-voltage leads at our center between November 2009 and May 2017 were retrospectively analyzed. Lead failure was defined as the occurrence of one or more of the following: non-physiological high-rate episodes, low- or high-voltage impedance anomalies, undersensing, or non-capture. In total, 220 patients were included (Linox Smart SD, n = 113; contemporary lead, n = 107). During a median follow-up of 3.8 years (IQR 1.6-5.9 years), a total of 16 lead failures occurred (14 in the Linox Smart SD group and 2 in the contemporary group), mostly due to non-physiological high-rate sensing or impedance abnormalities. Lead failure incidence rates per 100 person-years were 2.9 (95% CI 1.7-4.9) and 0.6 (95% CI 0.1-2.3) for Linox Smart SD and contemporary leads, respectively. Kaplan-Meier estimates of 5-year lead failure rates were 14.0% (95% CI 8.1-23.6%) and 1.3% (95% CI 0.2-8.9%), respectively (log-rank p = 0.028). Implantation of a Linox Smart SD lead increased the risk of lead failure with a hazard ratio (HR) of 4.53 (95% CI 1.03-19.95, p = 0.046) and 4.44 (95% CI 1.00-19.77, p = 0.05) in uni- and multivariable Cox models, respectively. The new Linox Smart SD lead model was associated with high failure rates and should be monitored closely to detect early signs of lead failure.
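    As a sanity check on figures like these, an incidence rate per 100 person-years is simply events divided by total follow-up time; the person-years below are back-calculated from the quoted 2.9 per 100 person-years purely for illustration, since the paper does not report them directly.

```python
def incidence_per_100py(events, total_person_years):
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100.0 * events / total_person_years

# 14 failures at 2.9 per 100 person-years implies roughly 14 / 0.029 ~ 483
# person-years of observation in the Linox Smart SD group (illustrative).
implied_py = 14 / 0.029
rate = incidence_per_100py(14, implied_py)
```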

  17. Analysis of Gas Turbine Engine Failure Modes.

    DTIC Science & Technology

    1974-01-01

    failure due to factors external (foreign) to the power plant. Because in practice it is virtually impossible to distinguish accurately between the two, all... Appendix E tabulates when failures were discovered for the J-79 engine and its high-failure components, including the compressor.

  18. Failure to activate the in-hospital emergency team: causes and outcomes.

    PubMed

    Barbosa, Vera; Gomes, Ernestina; Vaz, Senio; Azevedo, Gustavo; Fernandes, Gonçalo; Ferreira, Amélia; Araujo, Rui

    2016-01-01

    To determine the incidence of afferent limb failure of the in-hospital Medical Emergency Team, characterizing it and comparing the mortality between the population experiencing afferent limb failure and the population not experiencing afferent limb failure. A total of 478 activations of the Medical Emergency Team of Hospital Pedro Hispano occurred from January 2013 to July 2015. A sample of 285 activations was obtained after excluding incomplete records and activations for patients with less than 6 hours of hospitalization. The sample was divided into two groups: the group experiencing afferent limb failure and the group not experiencing afferent limb failure of the Medical Emergency Team. Both populations were characterized and compared. Statistical significance was set at p ≤ 0.05. Afferent limb failure was observed in 22.1% of activations. The causal analysis revealed significant differences in Medical Emergency Team activation criteria (p = 0.003) in the group experiencing afferent limb failure, with higher rates of Medical Emergency Team activation for cardiac arrest and cardiovascular dysfunction. Regarding patient outcomes, the group experiencing afferent limb failure had higher immediate mortality rates and higher mortality rates at hospital discharge, with no significant differences. No significant differences were found for the other parameters. The incidence of cardiac arrest and the mortality rate were higher in patients experiencing failure of the afferent limb of the Medical Emergency Team. This study highlights the need for health units to invest in the training of all healthcare professionals regarding the Medical Emergency Team activation criteria and emergency medical response system operations.

  19. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differs or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
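    A member of the "accelerating failure" class discussed above can be sketched with subcritical crack growth, in which the failure-controlling property (crack length) grows at an accelerating rate under constant loading until a critical size is reached. The functional form and all parameter values below are my own illustrative choices, not calculations from the study.

```python
def time_to_failure(a0, a_crit, stress, C=1e-12, n=10.0, dt=1.0):
    """Integrate subcritical crack growth da/dt = C * K^n with K ~ stress*sqrt(a)
    until the crack reaches the critical size; returns elapsed time."""
    a, t = a0, 0.0
    while a < a_crit:
        k = stress * (a ** 0.5)        # stress-intensity-like loading term
        a += C * (k ** n) * dt         # growth rate accelerates as a grows
        t += dt
        if t > 1e9:                    # guard against non-failing parameter sets
            return float("inf")
    return t
```

    Because the growth rate rises with crack length, failure time is finite and drops sharply with higher sustained stress; a brief stress transient, by contrast, barely advances the crack, which is the difficulty such models have with dynamic triggering.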

  20. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS

    PubMed Central

    Kuai, Moshen; Cheng, Gang; Li, Yong

    2018-01-01

    Because the planetary gear has the characteristics of small volume, light weight, and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. This paper proposes a method for diagnosing faults in planetary gears based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMF) and residual components by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined, and the optimal ANFIS is obtained. The overall recognition rate of the test sample used for the ANFIS is 90%, and the recognition rate of a gear with one missing tooth is relatively high. The recognition rates of different fault gears based on the method also achieve good results. Therefore, the proposed method can be applied effectively to planetary gear fault diagnosis. PMID:29510569
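    The permutation-entropy feature used here can be computed directly from the ordinal patterns of a time series. The sketch below is a generic normalized permutation entropy, with my own choice of embedding order and delay rather than the paper's settings.

```python
import math
from itertools import permutations

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy: complexity of ordinal patterns in a series.
    Returns 0 for a perfectly ordered series, up to 1 for maximal complexity."""
    counts = {p: 0 for p in permutations(range(order))}
    n_patterns = len(signal) - (order - 1) * delay
    for i in range(n_patterns):
        window = signal[i:i + order * delay:delay]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))  # argsort
        counts[pattern] += 1
    probs = [c / n_patterns for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))  # normalize to [0, 1]
```

    In the paper's pipeline, this quantity would be computed per IMF after CEEMDAN decomposition and the resulting feature vector fed to the ANFIS classifier.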

  2. An Efficient Implementation of Fixed Failure-Rate Ratio Test for GNSS Ambiguity Resolution.

    PubMed

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-06-23

    Ambiguity Resolution (AR) plays a vital role in precise GNSS positioning. Correctly-fixed integer ambiguities can significantly improve the positioning solution, while incorrectly-fixed integer ambiguities can bring large positioning errors and, therefore, should be avoided. The ratio test is an extensively used test to validate the fixed integer ambiguities. To choose proper critical values of the ratio test, the Fixed Failure-rate Ratio Test (FFRT) has been proposed, which generates critical values according to user-defined tolerable failure rates. This contribution provides easy-to-implement fitting functions to calculate the critical values. With a massive Monte Carlo simulation, the functions for many different tolerable failure rates are provided, which enriches the choices of critical values for users. Moreover, the fitting functions for the fix rate are also provided, which for the first time allows users to evaluate the conditional success rate, i.e., the success rate once the integer candidates are accepted by FFRT. The superiority of FFRT over the traditional ratio test regarding controlling the failure rate and preventing unnecessary false alarms is shown by a simulation and a real data experiment. In the real data experiment with a baseline of 182.7 km, FFRT achieved much higher fix rates (up to 30% higher) and the same level of positioning accuracy from fixed solutions as compared to the traditional critical value.
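    The underlying acceptance rule of the ratio test is simple; what FFRT changes is how the critical value is chosen. A hedged sketch of the rule (names and the illustrative threshold are assumptions, not from the paper):

```python
def ratio_test_accept(q_best, q_second, critical_value):
    """Accept the best integer ambiguity candidate only if the second-best
    candidate fits markedly worse. q_best and q_second are the squared norms
    of the float-minus-integer residuals in the metric of the ambiguity
    covariance matrix, so q_second >= q_best by construction."""
    return q_second / q_best >= critical_value
```

In the traditional ratio test `critical_value` is a fixed constant (e.g. 2 or 3); in FFRT it would instead be looked up from the paper's fitted functions for the user's tolerable failure rate and the current model strength.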

  3. Accelerated Aging Experiments for Capacitor Health Monitoring and Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose Ramon; Biswas, Gautam; Goebel, Kai

    2012-01-01

    This paper discusses experimental setups for health monitoring and prognostics of electrolytic capacitors under nominal operation and accelerated aging conditions. Electrolytic capacitors have higher failure rates than other components in electronic systems such as power drives and power converters. Our current work focuses on developing first-principles-based degradation models for electrolytic capacitors under varying electrical and thermal stress conditions. Prognostics and health management for electronic systems aims to predict the onset of faults, study causes for system degradation, and accurately compute remaining useful life. Accelerated life test methods are often used in prognostics research as a way to model multiple causes and assess the effects of the degradation process through time. They also allow for the identification and study of different failure mechanisms and their relationships under different operating conditions. The aging experiments are designed such that the degradation pattern induced by the aging can be monitored and analyzed. Experimental setups and data collection methods are presented to demonstrate this approach.

  4. Acute and Chronic Lateral Ankle Instability Diagnosis, Management, and New Concepts.

    PubMed

    Shakked, Rachel; Sheskier, Steven

    2017-01-01

    Lateral ankle instability is a common entity that can result in degenerative arthritis if left untreated. Acute ligament injuries should primarily be treated nonoperatively with a course of physical therapy and functional bracing. Chronic ankle instability is defined as mechanical or functional and can be diagnosed using a combination of history, physical examination, stress radiographs, and magnetic resonance imaging. After failure of nonoperative treatment, surgical treatment with anatomic ligament repair and inferior extensor retinaculum augmentation has the best clinical outcomes. Patients with high athletic demands, ligamentous instability, and failure of initial surgical treatment may do better with an anatomic ligament reconstruction or combined ligament repair with peroneus brevis transfer. Those patients with underlying foot deformity benefit from deformity correction in addition to ligament repair or reconstruction. Ankle arthroscopy is an important component of ankle instability surgery, used to treat the commonly associated intra-articular lesions; however, all-arthroscopic ligament repair is associated with a high complication rate, and its techniques are not yet perfected.

  5. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  6. Effects of Thermo-Mechanical Treatments on Deformation Behavior and IGSCC Susceptibility of Stainless Steels in Pwr Primary Water Chemistry

    NASA Astrophysics Data System (ADS)

    Nouraei, S.; Tice, D. R.; Mottershead, K. J.; Wright, D. M.

    Field experience of 300 series stainless steels (SS) in the primary circuit of PWR plant has been good. Stress corrosion cracking (SCC) of components has been infrequent and mainly associated with contamination by impurities/oxygen in occluded locations. However, some instances of failure have been observed which cannot necessarily be attributed to deviations in the water chemistry. These failures appear to be associated with the presence of cold-work produced by surface finishing and/or by welding-induced shrinkage. Recent data indicate that some heats of SS show an increased susceptibility to SCC; relatively high crack growth rates were observed even when the crack growth direction is orthogonal to the cold-work direction. SCC of cold-worked SS in PWR coolant is therefore determined by a complex interaction of material composition, microstructure, prior cold-work and heat treatment. This paper will focus on the interactions between these parameters on crack propagation in simulated PWR conditions.

  7. Component Position and Metal Ion Levels in Computer-Navigated Hip Resurfacing Arthroplasty.

    PubMed

    Mann, Stephen M; Kunz, Manuela; Ellis, Randy E; Rudan, John F

    2017-01-01

    Metal ion levels are used as a surrogate marker for wear in hip resurfacing arthroplasties. Improper component position, particularly on the acetabular side, plays an important role in problems with the bearing surfaces, such as edge loading, impingement on the acetabular component rim, lack of fluid-film lubrication, and acetabular component deformation. There are few data regarding femoral component position and its possible implications on wear and failure rates. The purpose of this investigation was to determine both femoral and acetabular component positions in our cohort of mechanically stable hip resurfacing arthroplasties and to determine if these were related to metal ion levels. One hundred fourteen patients who had undergone a computer-assisted metal-on-metal hip resurfacing were prospectively followed. Cobalt and chromium levels, Harris Hip, and UCLA activity scores in addition to measures of the acetabular and femoral component position and angles of the femur and acetabulum were recorded. Significant changes included increases in the position of the acetabular component compared to the native acetabulum; an increase in femoral vertical offset; and decreases in global offset, gluteus medius activation angle, and abductor arm angle (P < .05). Multiple regression analysis found no significant predictors of cobalt and chromium metal ion levels. Femoral and acetabular components placed in acceptable position failed to predict increased metal ion levels, and increased levels did not adversely impact patient function or satisfaction. Further research is necessary to clarify factors contributing to prosthesis wear. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Mitral regurgitation: anatomy is destiny.

    PubMed

    Athanasuleas, Constantine L; Stanley, Alfred W H; Buckberg, Gerald D

    2018-04-26

    Mitral regurgitation (MR) occurs when any of the valve and ventricular mitral apparatus components are disturbed. As MR progresses, left ventricular remodelling occurs, ultimately causing heart failure when the enlarging left ventricle (LV) loses its conical shape and becomes globular. Heart failure and lethal ventricular arrhythmias may develop if the left ventricular end-systolic volume index exceeds 55 ml/m2. These adverse changes persist despite satisfactory correction of the annular component of MR. Our goal was to describe this process and summarize evolving interventions that reduce the volume of the left ventricle and rebuild its elliptical shape. This 'valve/ventricle' approach addresses the spherical ventricular culprit and offsets the limits of treating MR by correcting only its annular component.

  9. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
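    As a rough illustration of estimating Weibull parameters from a failure population, a two-parameter median-rank regression fit can be sketched as below. This is an assumption-laden stand-in: the paper's procedure estimates three parameters and is not reproduced here.

```python
from math import exp, log

def weibull_rank_regression(failure_times):
    """Two-parameter Weibull fit (shape beta, scale eta) by median-rank
    regression on the linearized CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    t = sorted(failure_times)
    n = len(t)
    xs = [log(ti) for ti in t]
    # Median-rank estimate of the CDF at the i-th ordered failure.
    ys = [log(-log(1.0 - (i - 0.5) / n)) for i in range(1, n + 1)]
    mx = sum(xs) / n
    my = sum(ys) / n
    # Ordinary least squares: slope is the Weibull shape parameter.
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    eta = exp(mx - my / beta)
    return beta, eta
```

Given failure data that actually follow a Weibull law, the fitted shape and scale recover the generating parameters.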

  10. Failure prediction of thin beryllium sheets used in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.

    1991-01-01

    The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a higher-order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing the experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine the coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. These experiments and a complementary ultrasonic investigation are described in detail. Finally, the validity of the criterion and of the newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. The numerical simulation incrementally applies loads to the structural component being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.

  11. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  12. Real-time forecasting and predictability of catastrophic failure events: from rock failure to volcanoes and earthquakes

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Naylor, M.; Atkinson, M.; Filguera, R.; Meredith, P. G.; Brantut, N.

    2012-12-01

    Accurate prediction of catastrophic brittle failure in rocks and in the Earth presents a significant challenge on theoretical and practical grounds. The governing equations are not known precisely, but are known to produce highly non-linear behavior similar to that of near-critical dynamical systems, with a large and irreducible stochastic component due to material heterogeneity. In a laboratory setting, mechanical, hydraulic and rock physical properties are known to change in systematic ways prior to catastrophic failure, often with significant non-Gaussian fluctuations about the mean signal at a given time, for example in the rate of remotely-sensed acoustic emissions. The effectiveness of such signals in real-time forecasting has never been tested before in a controlled laboratory setting, and previous work has often been qualitative in nature and subject to retrospective selection bias, though it has often been invoked as a basis for forecasting natural hazard events such as volcanoes and earthquakes. Here we describe a collaborative experiment in real-time data assimilation to explore the limits of predictability of rock failure in a best-case scenario. Data are streamed from a remote rock deformation laboratory to a user-friendly portal, where several proposed physical/stochastic models can be analysed in parallel in real time, using a variety of statistical fitting techniques, including least squares regression, maximum likelihood fitting, Markov-chain Monte-Carlo and Bayesian analysis. The results are posted and regularly updated on the web site prior to catastrophic failure, to ensure a true and verifiable prospective test of forecasting power. Preliminary tests on synthetic data with known non-Gaussian statistics show how forecasting power is likely to evolve in the live experiments.
In general the predicted failure time does converge on the real failure time, illustrating the bias associated with the 'benefit of hindsight' in retrospective analyses. Inference techniques that account explicitly for non-Gaussian statistics significantly reduce the bias, and increase the reliability and accuracy, of the forecast failure time in prospective mode.
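    One classical inference of this kind, offered here only as an illustration and not necessarily the method used in the experiment, is Voight's failure forecast method: for accelerating precursory signals the inverse rate often falls roughly linearly with time, so the zero crossing of a least-squares line through (t, 1/rate) forecasts the failure time.

```python
def forecast_failure_time(times, rates):
    """Fit a least-squares line through (t, 1/rate) and return its zero
    crossing: the forecast failure time under the inverse-rate linearity
    assumption of the failure forecast method."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt = sum(times) / n
    mi = sum(inv) / n
    num = sum((t - mt) * (v - mi) for t, v in zip(times, inv))
    den = sum((t - mt) ** 2 for t in times)
    slope = num / den              # negative for an accelerating precursor
    intercept = mi - slope * mt
    return -intercept / slope      # time at which 1/rate extrapolates to zero
```

A rate that blows up as 1/(t_f - t) gives an exactly linear inverse rate, so the forecast converges on t_f; real non-Gaussian fluctuations are what make the prospective forecast biased and uncertain.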

  13. Microcircuit Device Reliability. Digital Failure Rate Data

    DTIC Science & Technology

    1981-01-01

    Prepared by IIT Research Institute staff under contract to Rome Air Development Center, Griffiss AFB, NY 13441 (ordering no. MDR-17). MDR-17 presents comparisons between actual field-experienced failure rates and MIL-HDBK-217C, Notice 1, predicted failure rates.

  14. Infraclavicular versus axillary nerve catheters: A retrospective comparison of early catheter failure rate.

    PubMed

    Quast, Michaela B; Sviggum, Hans P; Hanson, Andrew C; Stoike, David E; Martin, David P; Niesen, Adam D

    2018-05-01

    Continuous brachial plexus catheters are often used to decrease pain following elbow surgery. This investigation aimed to assess the rate of early failure of infraclavicular (IC) and axillary (AX) nerve catheters following elbow surgery. Retrospective study. Postoperative recovery unit and inpatient hospital floor. 328 patients who received IC or AX nerve catheters and underwent elbow surgery were identified by retrospective query of our institution's database. Data collected included unplanned catheter dislodgement, catheter replacement rate, postoperative pain scores, and opioid administration on postoperative day 1. Catheter failure was defined as unplanned dislodging within 24 h of placement or requirement for catheter replacement and evaluated using a covariate adjusted model. 119 IC catheters and 209 AX catheters were evaluated. There were 8 (6.7%) failed IC catheters versus 13 (6.2%) failed AX catheters. After adjusting for age, BMI, and gender there was no difference in catheter failure rate between IC and AX nerve catheters (p = 0.449). These results suggest that IC and AX nerve catheters do not differ in the rate of early catheter failure, despite differences in anatomic location and catheter placement techniques. Both techniques provided effective postoperative analgesia with median pain scores < 3/10 for patients following elbow surgery. Reasons other than rate of early catheter failure should dictate which approach is performed. Copyright © 2018. Published by Elsevier Inc.

  15. National trends in rates of death and hospital admissions related to acute myocardial infarction, heart failure and stroke, 1994–2004

    PubMed Central

    Tu, Jack V.; Nardi, Lorelei; Fang, Jiming; Liu, Juan; Khalid, Laila; Johansen, Helen

    2009-01-01

    Background Rates of death from cardiovascular and cerebrovascular diseases have been steadily declining over the past few decades. Whether such declines are occurring to a similar degree for common disorders such as acute myocardial infarction, heart failure and stroke is uncertain. We examined recent national trends in mortality and rates of hospital admission for these 3 conditions. Methods We analyzed mortality data from Statistics Canada’s Canadian Mortality Database and data on hospital admissions from the Canadian Institute for Health Information’s Hospital Morbidity Database for the period 1994–2004. We determined age- and sex-standardized rates of death and hospital admissions per 100 000 population aged 20 years and over as well as in-hospital case-fatality rates. Results The overall age- and sex-standardized rate of death from cardiovascular disease in Canada declined 30.0%, from 360.6 per 100 000 in 1994 to 252.5 per 100 000 in 2004. During the same period, the rate fell 38.1% for acute myocardial infarction, 23.5% for heart failure and 28.2% for stroke, with improvements observed across most age and sex groups. The age- and sex-standardized rate of hospital admissions decreased 27.6% for stroke and 27.2% for heart failure. The rate for acute myocardial infarction fell only 9.2%. In contrast, the relative decline in the in-hospital case-fatality rate was greatest for acute myocardial infarction (33.1%; p < 0.001). Much smaller relative improvements in case-fatality rates were noted for heart failure (8.1%) and stroke (8.9%). Interpretation The rates of death and hospital admissions for acute myocardial infarction, heart failure and stroke in Canada changed at different rates over the 10-year study period. Awareness of these trends may guide future efforts for health promotion and health care planning and help to determine priorities for research and treatment. PMID:19546444
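    The age- and sex-standardized rates reported above are conventionally computed by direct standardization: stratum-specific rates are averaged using a standard population's weights. A minimal sketch, where the two-stratum layout and reference population are illustrative assumptions rather than the study's actual strata:

```python
def directly_standardized_rate(deaths, population, standard_population,
                               per=100_000):
    """Direct standardization: weight each stratum's crude rate (deaths/pop)
    by that stratum's share of a standard reference population."""
    total_std = sum(standard_population)
    return per * sum((d / p) * (s / total_std)
                     for d, p, s in zip(deaths, population, standard_population))

# Hypothetical two-stratum example (e.g. ages 20-64 and 65+):
rate = directly_standardized_rate(deaths=[10, 90],
                                  population=[10_000, 10_000],
                                  standard_population=[5_000, 5_000])
```

Because every year is weighted by the same standard population, the standardized rates are comparable over 1994-2004 even as the real population ages.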

  16. Fixation strength of a polyetheretherketone femoral component in total knee arthroplasty.

    PubMed

    de Ruiter, Lennert; Janssen, Dennis; Briscoe, Adam; Verdonschot, Nico

    2017-11-01

    Introducing polyetheretherketone (PEEK) polymer as a material for femoral components in total knee arthroplasty (TKA) could potentially lead to a reduction of the cemented fixation strength. A PEEK implant is more likely to deform under high loads, rendering geometrical locking features less effective. Fixation strength may be enhanced by adding more undercuts or specific surface treatments. The aim of this study is to measure the initial fixation strength and investigate the associated failure patterns of three different iterations of PEEK-OPTIMA® implants compared with a Cobalt-Chromium (CoCr) component. Femoral components were cemented onto trabecular bone analogue foam blocks and preconditioned with 86,400 cycles of compressive loading (2600 N to 260 N at 1 Hz). They were then extracted while the force was measured and the initial failure mechanism was recorded. Four groups were compared: CoCr, regular PEEK, PEEK with an enhanced cement-bonding surface and the latter with additional surface primer. The mean pull-off forces for the four groups were 3814 N, 688 N, 2525 N and 2552 N, respectively. The initial failure patterns for groups 1, 3 and 4 were the same; posterior condylar foam fracture and cement-bone debonding. Implants from group 2 failed at the cement-implant interface. This study has shown that a PEEK-OPTIMA® femoral TKA component with enhanced macro- and microtexture is able to replicate the main failure mechanism of a conventional CoCr femoral implant. The fixation strength is lower than for a CoCr implant, but substantially higher than loads occurring under in-vivo conditions. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Using Generic Data to Establish Dormancy Failure Rates

    NASA Technical Reports Server (NTRS)

    Reistle, Bruce

    2014-01-01

    Many hardware items are dormant prior to being operated. The dormant period might be especially long, for example during missions to the moon or Mars. In missions with long dormant periods the risk incurred during dormancy can exceed the active risk contribution. Probabilistic Risk Assessments (PRAs) need to account for the dormant risk contribution as well as the active contribution. A typical method for calculating a dormant failure rate is to multiply the active failure rate by a constant, the dormancy factor. For example, some practitioners use a heuristic and divide the active failure rate by 30 to obtain an estimate of the dormant failure rate. To obtain a more empirical estimate of the dormancy factor, this paper uses the recently updated database NPRD-2011 [1] to arrive at a set of distributions for the dormancy factor. The resulting dormancy factor distributions are significantly different depending on whether the item is electrical, mechanical, or electro-mechanical. Additionally, this paper will show that using a heuristic constant fails to capture the uncertainty of the possible dormancy factors.
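    The heuristic described above can be made concrete. In the sketch below the component, rates and mission durations are hypothetical, chosen only to show how a long dormant phase can dominate the total risk even after the rate is divided by 30:

```python
from math import exp

def dormant_failure_rate(active_rate, dormancy_factor=30.0):
    """Heuristic dormancy adjustment: divide the active failure rate by a
    constant dormancy factor (the paper argues a distribution is better)."""
    return active_rate / dormancy_factor

def phase_failure_probability(rate_per_hour, hours):
    """Constant-rate (exponential) failure probability over one mission phase."""
    return 1.0 - exp(-rate_per_hour * hours)

# Hypothetical component: 100 h of operation, 10,000 h dormant in transit.
active_rate = 1.0e-5  # failures per hour while operating (illustrative)
p_active = phase_failure_probability(active_rate, 100.0)
p_dormant = phase_failure_probability(dormant_failure_rate(active_rate), 10_000.0)
```

With these numbers the dormant-phase probability exceeds the active one, which is the paper's motivation for modeling dormancy explicitly, and with a distribution rather than a single heuristic constant.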

  18. Relief and Recurrence of Congestion During and After Hospitalization for Acute Heart Failure: Insights From Diuretic Optimization Strategy Evaluation in Acute Decompensated Heart Failure (DOSE-AHF) and Cardiorenal Rescue Study in Acute Decompensated Heart Failure (CARRESS-HF).

    PubMed

    Lala, Anuradha; McNulty, Steven E; Mentz, Robert J; Dunlay, Shannon M; Vader, Justin M; AbouEzzeddine, Omar F; DeVore, Adam D; Khazanie, Prateeti; Redfield, Margaret M; Goldsmith, Steven R; Bart, Bradley A; Anstrom, Kevin J; Felker, G Michael; Hernandez, Adrian F; Stevenson, Lynne W

    2015-07-01

    Congestion is the most frequent cause for hospitalization in acute decompensated heart failure. Although decongestion is a major goal of acute therapy, it is unclear how the clinical components of congestion (eg, peripheral edema, orthopnea) contribute to outcomes after discharge or how well decongestion is maintained. A post hoc analysis was performed of 496 patients enrolled in the Diuretic Optimization Strategy Evaluation in Acute Decompensated Heart Failure (DOSE-AHF) and Cardiorenal Rescue Study in Acute Decompensated Heart Failure (CARRESS-HF) trials during hospitalization with acute decompensated heart failure and clinical congestion. A simple orthodema congestion score was generated based on symptoms of orthopnea (≥2 pillows=2 points, <2 pillows=0 points) and peripheral edema (trace=0 points, moderate=1 point, severe=2 points) at baseline, discharge, and 60-day follow-up. Orthodema scores were classified as absent (score of 0), low-grade (score of 1-2), and high-grade (score of 3-4), and the association with death, rehospitalization, or unscheduled medical visits through 60 days was assessed. At baseline, 65% of patients had high-grade orthodema and 35% had low-grade orthodema. At discharge, 52% of patients were free from orthodema (score of 0), and these patients had lower 60-day rates of death, rehospitalization, or unscheduled visits (50%) compared with those with low-grade or high-grade orthodema (52% and 68%, respectively; P=0.038). Of the patients without orthodema at discharge, 27% relapsed to low-grade orthodema and 38% to high-grade orthodema at 60-day follow-up. Increased severity of congestion by a simple orthodema assessment is associated with increased morbidity and mortality. Despite intent to relieve congestion, current therapy often fails to relieve orthodema during hospitalization or to prevent recurrence after discharge. URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00608491, NCT00577135.
© 2015 American Heart Association, Inc.
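    The orthodema score described above is simple to compute. A sketch following the point assignments and grade cut-offs stated in the abstract (the function names and the string encoding of edema severity are our own):

```python
def orthodema_score(pillows, edema):
    """Orthopnea: >=2 pillows = 2 points, <2 pillows = 0 points.
    Peripheral edema: trace = 0, moderate = 1, severe = 2 points."""
    orthopnea_points = 2 if pillows >= 2 else 0
    edema_points = {"trace": 0, "moderate": 1, "severe": 2}[edema]
    return orthopnea_points + edema_points

def orthodema_grade(score):
    """Classify as absent (0), low-grade (1-2) or high-grade (3-4) congestion."""
    if score == 0:
        return "absent"
    return "low-grade" if score <= 2 else "high-grade"
```

In the study the grade at discharge (not just at baseline) was what stratified 60-day death, rehospitalization and unscheduled-visit rates.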

  19. Physiology of respiratory disturbances in muscular dystrophies

    PubMed Central

    Lo Mauro, Antonella

    2016-01-01

    Muscular dystrophy is a group of inherited myopathies characterised by progressive skeletal muscle wasting, including of the respiratory muscles. Respiratory failure, i.e. when the respiratory system fails in its gas exchange functions, is a common feature in muscular dystrophy, being the main cause of death, and it is a consequence of lung failure, pump failure or a combination of the two. The former is due to recurrent aspiration, the latter to progressive weakness of the respiratory muscles and an increase in the load against which they must contract. In fact, both the resistive and elastic components of the work of breathing increase due to airway obstruction and chest wall and lung stiffening, respectively. The respiratory disturbances in muscular dystrophy are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. They can be present at different rates according to the type of muscular dystrophy and its progression, leading to different onset of each symptom, prognosis and degree of respiratory involvement. Key points: A common feature of muscular dystrophy is respiratory failure, i.e. the inability of the respiratory system to provide proper oxygenation and carbon dioxide elimination. In the lung, respiratory failure is caused by recurrent aspiration, and leads to hypoxaemia and hypercarbia. Ventilatory failure in muscular dystrophy is caused by increased respiratory load and respiratory muscle weakness. Respiratory load increases in muscular dystrophy because scoliosis makes chest wall compliance decrease, atelectasis and fibrosis make lung compliance decrease, and airway obstruction makes airway resistance increase. The consequences of respiratory pump failure are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. Educational aims: To understand the mechanisms leading to respiratory disturbances in patients with muscular dystrophy. To understand the impact of respiratory disturbances in patients with muscular dystrophy. To provide a brief description of the main forms of muscular dystrophy with their respiratory implications. PMID:28210319

  20. Physiology of respiratory disturbances in muscular dystrophies.

    PubMed

    Lo Mauro, Antonella; Aliverti, Andrea

    2016-12-01

    Muscular dystrophy is a group of inherited myopathies characterised by progressive skeletal muscle wasting, including of the respiratory muscles. Respiratory failure, i.e. when the respiratory system fails in its gas exchange functions, is a common feature in muscular dystrophy, being the main cause of death, and it is a consequence of lung failure, pump failure or a combination of the two. The former is due to recurrent aspiration, the latter to progressive weakness of the respiratory muscles and an increase in the load against which they must contract. In fact, both the resistive and elastic components of the work of breathing increase due to airway obstruction and chest wall and lung stiffening, respectively. The respiratory disturbances in muscular dystrophy are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. They can be present at different rates according to the type of muscular dystrophy and its progression, leading to different onset of each symptom, prognosis and degree of respiratory involvement. A common feature of muscular dystrophy is respiratory failure, i.e. the inability of the respiratory system to provide proper oxygenation and carbon dioxide elimination. In the lung, respiratory failure is caused by recurrent aspiration, and leads to hypoxaemia and hypercarbia. Ventilatory failure in muscular dystrophy is caused by increased respiratory load and respiratory muscle weakness. Respiratory load increases in muscular dystrophy because scoliosis makes chest wall compliance decrease, atelectasis and fibrosis make lung compliance decrease, and airway obstruction makes airway resistance increase. The consequences of respiratory pump failure are restrictive pulmonary function, hypoventilation, altered thoracoabdominal pattern, hypercapnia, dyspnoea, impaired regulation of breathing, inefficient cough and sleep disordered breathing. To understand the mechanisms leading to respiratory disturbances in patients with muscular dystrophy. To understand the impact of respiratory disturbances in patients with muscular dystrophy. To provide a brief description of the main forms of muscular dystrophy with their respiratory implications.

  1. Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Mohr, W. C.

    2015-11-01

    Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses, including a heat transfer analysis and a creep-fatigue analysis, were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line. Typically, the roll has a short life, <1 year, which has been a problem for a long time. The failure occurred in the weld joining an end bell to a roll shell and resulted in the complete 360° separation of the end bell from the roll shell. The heat transfer analysis was conducted to predict the temperature history of the roll by modeling heat convection from hot air inside the furnace. The creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a creep mechanism. The difference in material properties between the filler metal and the base metal, which induces higher creep strain and stress at the interface between the weld and the heat-affected zone (HAZ), is the root cause of the roll failure.
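    A standard way to reason about such failures is the linear creep-fatigue damage summation (cycle fractions plus creep time fractions), which is common background for analyses of this kind rather than this paper's specific model. A minimal sketch, with purely illustrative numbers:

```python
def creep_fatigue_damage(cycles_applied, cycles_to_failure,
                         hold_time_h, rupture_time_h):
    """Linear damage summation: D = n/N (fatigue) + t/t_r (creep).
    Total damage near or above 1.0 is read as predicted failure."""
    d_fatigue = cycles_applied / cycles_to_failure
    d_creep = hold_time_h / rupture_time_h
    return d_fatigue, d_creep, d_fatigue + d_creep

# Placeholder values, not from the roll analysis:
df, dc, total = creep_fatigue_damage(3000, 10000, 5000.0, 20000.0)
print(f"fatigue={df:.2f} creep={dc:.2f} total={total:.2f}")
```

    When creep and fatigue interact, design codes typically compare the (creep, fatigue) damage pair against a bilinear envelope rather than a simple sum.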

  2. Failure detection and recovery in the assembly/contingency subsystem

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1993-01-01

    The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system or in the external devices through which it communicates with ground-based systems will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements, as outlined in the current ACS software requirements specification document, are reviewed. The activities carried out in this review include: (1) an informal, but thorough, end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design and the specifications themselves in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) will initiate recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.

  3. Improving the Estimates of International Space Station (ISS) Induced K-Factor Failure Rates for On-Orbit Replacement Unit (ORU) Supportability Analyses

    NASA Technical Reports Server (NTRS)

    Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.

    2009-01-01

    This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic to leverage operational experience data by aggregating ORU, associated function (vehicle sub-system), and vehicle 'effective' k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements to the heuristic methods and potential benefits to ISS supportability engineering analysis.
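    As a rough illustration of such a heuristic (the function, threshold and numbers below are assumptions made for this sketch, not the authors' actual method), an effective k-factor can be taken as the ratio of observed to predicted failures, with no adjustment when too few failures have been seen:

```python
def effective_k_factor(observed_failures, predicted_failures, min_failures=5):
    """Ratio of actual to predicted failure counts for an ORU or sub-system,
    falling back to 1.0 (no adjustment) below a significance threshold."""
    if observed_failures < min_failures or predicted_failures <= 0:
        return 1.0  # too little operational evidence to revise the prediction
    return observed_failures / predicted_failures

print(effective_k_factor(12, 8))  # enough failures: scale the predicted rate up
print(effective_k_factor(2, 8))   # below threshold: keep the baseline prediction
```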

  4. Vagal Nerve Stimulation Therapy: What Is Being Stimulated?

    PubMed Central

    Kember, Guy; Ardell, Jeffrey L.; Armour, John A.; Zamir, Mair

    2014-01-01

    Vagal nerve stimulation in cardiac therapy involves delivering electrical current to the vagal sympathetic complex in patients experiencing heart failure. The therapy has shown promise but the mechanisms by which any benefit accrues are not understood. In this paper we model the response to increased levels of stimulation of individual components of the vagal sympathetic complex as a differential activation of each component in the control of heart rate. The model provides insight beyond what is available in the animal experiment, inasmuch as it allows the simultaneous assessment of neuronal activity throughout the cardiac neural axis. The results indicate that there is sensitivity of the neural network to low-level subthreshold stimulation. This leads us to propose that the chronic effects of vagal nerve stimulation therapy lie within the indirect pathways that target intrinsic cardiac local circuit neurons because they have the capacity for plasticity. PMID:25479368

  5. Vagal nerve stimulation therapy: what is being stimulated?

    PubMed

    Kember, Guy; Ardell, Jeffrey L; Armour, John A; Zamir, Mair

    2014-01-01

    Vagal nerve stimulation in cardiac therapy involves delivering electrical current to the vagal sympathetic complex in patients experiencing heart failure. The therapy has shown promise but the mechanisms by which any benefit accrues are not understood. In this paper we model the response to increased levels of stimulation of individual components of the vagal sympathetic complex as a differential activation of each component in the control of heart rate. The model provides insight beyond what is available in the animal experiment, inasmuch as it allows the simultaneous assessment of neuronal activity throughout the cardiac neural axis. The results indicate that there is sensitivity of the neural network to low-level subthreshold stimulation. This leads us to propose that the chronic effects of vagal nerve stimulation therapy lie within the indirect pathways that target intrinsic cardiac local circuit neurons because they have the capacity for plasticity.

  6. Aging of electronics with application to nuclear power plant instrumentation. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jr, R T; Thome, F V; Craft, C M

    1983-01-01

    A survey to identify areas of needed research to understand aging mechanisms for electronics in nuclear power plant instrumentation has been completed. The emphasis was on electronic components such as semiconductors, capacitors, and resistors used in safety-related instrumentation in the reactor containment area. The environmental and operational stress factors which may produce degradation during long-term operation were identified. Some attention was also given to humidity effects as related to seals and encapsulants, and to failures in printed circuit boards, bonds, and solder joints. Results suggest that neutron as well as gamma irradiations should be considered in simulating the aging environment for electronic components. Radiation dose-rate effects in semiconductor devices and organic capacitors need to be further investigated, as well as radiation-voltage bias synergistic effects in semiconductor devices and leakage and permeation of moisture through seals in electronics packages.

  7. Interim Report on the Examination of Corrosion Damage in Homes Constructed With Imported Wallboard: Examination of Samples Received September 28, 2009.

    PubMed

    Pitchure, D J; Ricker, R E; Williams, M E; Claggett, S A

    2010-01-01

    Since many household systems are fabricated out of metallic materials, changes to the household environment that accelerate corrosion rates will increase the frequency of failures in these systems. Recently, it has been reported that homes constructed with imported wallboard have increased failure rates in appliances and air conditioner heat exchanger coils, and visible corrosion on electrical wiring and other metal components. At the request of the Consumer Product Safety Commission (CPSC), the National Institute of Standards and Technology (NIST) became involved through the Interagency Agreement CPSC-1-09-0023 to perform metallurgical analyses on samples and corrosion products removed from homes constructed using imported wallboard. This document reports on the analysis of the first group of samples received by NIST from CPSC. The samples received by NIST on September 28, 2009 consisted of copper tubing for supplying natural gas and two air conditioner heat exchanger coils. The examinations performed by NIST consisted of photography, metallurgical cross-sectioning, optical microscopy, scanning electron microscopy (SEM), and x-ray diffraction (XRD). Leak tests were also performed on the air conditioner heat exchanger coils. The objective of these examinations was to determine the extent and nature of the corrosive attack, the chemical composition of the corrosion product, and the potential chemical reactions or environmental species responsible for accelerated corrosion. A thin black corrosion product was found on samples of the copper tubing. The XRD analysis of this layer indicated that this corrosion product was a copper sulfide phase and the diffraction peaks corresponded with those for the mineral digenite (Cu9S5). Corrosion products were also observed on other types of metals in the air conditioner coils where condensation would frequently wet the metals.
The thickness of the corrosion product layer on a copper natural gas supply pipe with a wall thickness of 1.2 mm ± 0.2 mm was between 5 μm and 10 μm. These results indicate that a chemical compound that contains reduced sulfur, such as hydrogen sulfide (H2S), is present in the environment to which these samples were exposed. The literature indicates that these species strongly influence corrosion rates of most metals and alloys even at low concentrations. None of the samples examined were failed components, and no evidence of imminent failure was found on any of the samples examined. All of the corrosion damage observed to date is consistent with a general attack form of corrosion that will progress in a uniform and relatively predictable manner. No evidence of localized attack was found, but these forms of attack typically require an incubation period before they initiate. Therefore, the number of samples examined to date is too small to draw a conclusion on the relative probability of these forms of corrosion being able to cause or not cause failure. Samples from failed systems or from laboratory tests conducted over a wide range of metallurgical and environmental conditions will be required to assess the probability of these other forms of corrosion causing failure.

  8. Interim Report on the Examination of Corrosion Damage in Homes Constructed With Imported Wallboard: Examination of Samples Received September 28, 2009

    PubMed Central

    Pitchure, D. J.; Ricker, R. E.; Williams, M. E.; Claggett, S. A.

    2010-01-01

    Since many household systems are fabricated out of metallic materials, changes to the household environment that accelerate corrosion rates will increase the frequency of failures in these systems. Recently, it has been reported that homes constructed with imported wallboard have increased failure rates in appliances and air conditioner heat exchanger coils, and visible corrosion on electrical wiring and other metal components. At the request of the Consumer Product Safety Commission (CPSC), the National Institute of Standards and Technology (NIST) became involved through the Interagency Agreement CPSC-1-09-0023 to perform metallurgical analyses on samples and corrosion products removed from homes constructed using imported wallboard. This document reports on the analysis of the first group of samples received by NIST from CPSC. The samples received by NIST on September 28, 2009 consisted of copper tubing for supplying natural gas and two air conditioner heat exchanger coils. The examinations performed by NIST consisted of photography, metallurgical cross-sectioning, optical microscopy, scanning electron microscopy (SEM), and x-ray diffraction (XRD). Leak tests were also performed on the air conditioner heat exchanger coils. The objective of these examinations was to determine the extent and nature of the corrosive attack, the chemical composition of the corrosion product, and the potential chemical reactions or environmental species responsible for accelerated corrosion. A thin black corrosion product was found on samples of the copper tubing. The XRD analysis of this layer indicated that this corrosion product was a copper sulfide phase and the diffraction peaks corresponded with those for the mineral digenite (Cu9S5). Corrosion products were also observed on other types of metals in the air conditioner coils where condensation would frequently wet the metals.
The thickness of the corrosion product layer on a copper natural gas supply pipe with a wall thickness of 1.2 mm ± 0.2 mm was between 5 μm and 10 μm. These results indicate that a chemical compound that contains reduced sulfur, such as hydrogen sulfide (H2S), is present in the environment to which these samples were exposed. The literature indicates that these species strongly influence corrosion rates of most metals and alloys even at low concentrations. None of the samples examined were failed components, and no evidence of imminent failure was found on any of the samples examined. All of the corrosion damage observed to date is consistent with a general attack form of corrosion that will progress in a uniform and relatively predictable manner. No evidence of localized attack was found, but these forms of attack typically require an incubation period before they initiate. Therefore, the number of samples examined to date is too small to draw a conclusion on the relative probability of these forms of corrosion being able to cause or not cause failure. Samples from failed systems or from laboratory tests conducted over a wide range of metallurgical and environmental conditions will be required to assess the probability of these other forms of corrosion causing failure. PMID:27134786

  9. Predicting Quarantine Failure Rates

    PubMed Central

    2004-01-01

    Preemptive quarantine through contact-tracing effectively controls emerging infectious diseases. Occasionally this quarantine fails, however, and infected persons are released. The probability of quarantine failure is typically estimated from disease-specific data. Here a simple, exact estimate of the failure rate is derived that does not depend on disease-specific parameters. This estimate is universally applicable to all infectious diseases. PMID:15109418

  10. High rate of virological failure and low rate of switching to second-line treatment among adolescents and adults living with HIV on first-line ART in Myanmar, 2005-2015

    PubMed Central

    Harries, Anthony D.; Kumar, Ajay M. V.; Oo, Myo Minn; Kyaw, Khine Wut Yee; Win, Than; Aung, Thet Ko; Min, Aung Chan; Oo, Htun Nyunt

    2017-01-01

    Background: The number of people living with HIV on antiretroviral treatment (ART) in Myanmar has been increasing rapidly in recent years. This study aimed to estimate rates of virological failure on first-line ART and switching to second-line ART due to treatment failure at the Integrated HIV Care program (IHC). Methods: Routinely collected data of all adolescent and adult patients living with HIV who were initiated on first-line ART at IHC between 2005 and 2015 were retrospectively analyzed. The cumulative hazards of virological failure on first-line ART and of switching to second-line ART were estimated. Crude and adjusted hazard ratios were calculated using the Cox regression model to identify risk factors associated with the two outcomes. Results: Of 23,248 adults and adolescents, 7,888 (34%) were tested for HIV viral load. The incidence rate of virological failure among those tested was 3.2 per 100 person-years of follow-up and the rate of switching to second-line ART among all patients was 1.4 per 100 person-years of follow-up. Factors associated with virological failure included: being adolescent; being lost to follow-up at least once; having WHO stage 3 and 4 at ART initiation; and having taken first-line ART elsewhere before coming to IHC. Of the 1,032 patients who met virological failure criteria, 762 (74%) switched to second-line ART. Conclusions: We found high rates of virological failure among the one third of patients in the cohort who were tested for viral load. Of those failing virologically on first-line ART, about one quarter were not switched to second-line ART. Routine viral load monitoring, especially for those identified as having a higher risk of treatment failure, should be considered in this setting to detect all patients failing on first-line ART. Strategies also need to be put in place to prevent treatment failure and to treat more of those patients who are actually failing. PMID:28182786
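    The per-100-person-years rates above come from a simple events-over-exposure calculation; the counts in this sketch are made up to match the reported scale, not taken from the study:

```python
def incidence_per_100_py(events, person_years):
    """Incidence rate expressed per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Illustrative only: 320 failures over 10,000 person-years gives 3.2 per 100 py
print(incidence_per_100_py(320, 10000))
```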

  11. Is rhythm-control superior to rate-control in patients with atrial fibrillation and diastolic heart failure?

    PubMed

    Kong, Melissa H; Shaw, Linda K; O'Connor, Christopher; Califf, Robert M; Blazing, Michael A; Al-Khatib, Sana M

    2010-07-01

    Although no clinical trial data exist on the optimal management of atrial fibrillation (AF) in patients with diastolic heart failure, it has been hypothesized that rhythm-control is more advantageous than rate-control due to the dependence of these patients' left ventricular filling on atrial contraction. We aimed to determine whether patients with AF and heart failure with preserved ejection fraction (EF) survive longer with rhythm versus rate-control strategy. The Duke Cardiovascular Disease Database was queried to identify patients with EF > 50%, heart failure symptoms and AF between January 1,1995 and June 30, 2005. We compared baseline characteristics and survival of patients managed with rate- versus rhythm-control strategies. Using a 60-day landmark view, Kaplan-Meier curves were generated and results were adjusted for baseline differences using Cox proportional hazards modeling. Three hundred eighty-two patients met the inclusion criteria (285 treated with rate-control and 97 treated with rhythm-control). The 1-, 3-, and 5-year survival rates were 93.2%, 69.3%, and 56.8%, respectively in rate-controlled patients and 94.8%, 78.0%, and 59.9%, respectively in rhythm-controlled patients (P > 0.10). After adjustments for baseline differences, no significant difference in mortality was detected (hazard ratio for rhythm-control vs rate-control = 0.696, 95% CI 0.453-1.07, P = 0.098). Based on our observational data, rhythm-control seems to offer no survival advantage over rate-control in patients with heart failure and preserved EF. Randomized clinical trials are needed to verify these findings and examine the effect of each strategy on stroke risk, heart failure decompensation, and quality of life.
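    The Kaplan-Meier survival estimates used in comparisons like this one can be computed directly from right-censored follow-up data. A minimal sketch on a tiny made-up cohort (not the study's data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from right-censored data.
    events[i] = 1 for an observed death, 0 for censoring; at tied times,
    deaths are processed before censorings (the usual convention)."""
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    for t, e in data:
        if e == 1:                            # observed death: step the curve down
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1                        # deaths and censorings both leave the risk set
    return curve

# Follow-up years and event flags for six hypothetical patients
for t, s in kaplan_meier([1, 2, 2, 3, 4, 5], [1, 0, 1, 1, 0, 1]):
    print(f"S({t}) = {s:.3f}")
```

    Landmark analyses and Cox adjustments of the kind described above then operate on curves of this form.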

  12. Limits on rock strength under high confinement

    NASA Astrophysics Data System (ADS)

    Renshaw, Carl E.; Schulson, Erland M.

    2007-06-01

    Understanding of deep earthquake source mechanisms requires knowledge of failure processes active under high confinement. Under low confinement the compressive strength of rock is well known to be limited by frictional sliding along stress-concentrating flaws. Under higher confinement strength is usually assumed limited by power-law creep associated with the movement of dislocations. In a review of existing experimental data, we find that when the confinement is high enough to suppress frictional sliding, rock strength increases as a power-law function only up to a critical normalized strain rate. Within the regime where frictional sliding is suppressed and the normalized strain rate is below the critical rate, both globally distributed ductile flow and localized brittle-like failure are observed. When frictional sliding is suppressed and the normalized strain rate is above the critical rate, failure is always localized in a brittle-like manner at a stress that is independent of the degree of confinement. Within the high-confinement, high-strain rate regime, the similarity in normalized failure strengths across a variety of rock types and minerals precludes both transformational faulting and dehydration embrittlement as strength-limiting mechanisms. The magnitude of the normalized failure strength corresponding to the transition to the high-confinement, high-strain rate regime and the observed weak dependence of failure strength on strain rate within this regime are consistent with a localized Peierls-type strength-limiting mechanism. At the highest strain rates the normalized strengths approach the theoretical limit for crystalline materials. Near-theoretical strengths have previously been observed only in nano- and micro-scale regions of materials that are effectively defect-free. 
Results are summarized in a new deformation mechanism map revealing that when confinement and strain rate are sufficient, strengths approaching the theoretical limit can be achieved in cm-scale sized samples of rocks rich in defects. Thus, non-frictional failure processes must be considered when interpreting rock deformation data collected under high confinement and low temperature. Further, even at higher temperatures the load-bearing ability of crustal rocks under high confinement may not be limited by a frictional process under typical geologic strain rates.
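    The power-law creep invoked above as a strength-limiting mechanism is conventionally written as a Dorn equation, strain rate = A * sigma^n * exp(-Q/RT). A sketch with placeholder constants (not fitted to any rock or mineral discussed here):

```python
import math

def power_law_creep_rate(stress_mpa, temp_k, A=1e-10, n=5.0, Q=300e3, R=8.314):
    """Dorn-type power-law creep: strain rate = A * stress**n * exp(-Q/(R*T)).
    A, n and Q are illustrative placeholders, not measured values."""
    return A * stress_mpa**n * math.exp(-Q / (R * temp_k))

# The steep stress dependence (n = 5 here) is why this regime is rate-limited
print(power_law_creep_rate(100.0, 900.0))
print(power_law_creep_rate(200.0, 900.0))
```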

  13. Movable bridge maintenance monitoring : [technical summary].

    DOT National Transportation Integrated Search

    2013-10-01

    Maintenance costs for movable bridges are considerably higher than for fixed bridges, mostly because of the complex interaction of mechanical, electrical, and structural components. Malfunction of any component can cause unexpected failure of bridge ...

  14. Failure investigations of failed valve plug SS410 steel due to cracking

    NASA Astrophysics Data System (ADS)

    Kalyankar, V. D.; Deshmukh, D. D.

    2017-12-01

    Premature and sudden in-service failure, due to crack formation, of a valve plug applied in a power plant has been investigated. The plug was tempered and heat treated; the crack originated at the centre, developed along the axis and propagated radially towards the outer surface of the plug. The expected life of the component is 10-15 years, yet the component failed just after installation, within 3 months of service. No corrosion products were observed on the crack interface or on the failed surface; hence, corrosion was ruled out as a cause of failure. This plug of a level separator control valve is welded to the stem by plasma-transferred arc welding, and as no crack was observed in the welding zone, failure due to welding residual stresses was also ruled out. The failed component discloses the exposed surface of a crack interface that originated from the centre and propagated radially. Microstructural observation, hardness testing and visual observation were carried out on specimens prepared from the failed section and the base portion. The microstructure from the cracked interface showed severe carbide formation along the grain boundaries. From the microstructural analysis of the failed sample, it is observed that acicular carbides formed along the grain boundaries due to improper tempering heat treatment.

  15. Program For Evaluation Of Reliability Of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, N.; Janosik, L. A.; Gyekenyesi, J. P.; Powers, Lynn M.

    1996-01-01

    CARES/LIFE predicts probability of failure of monolithic ceramic component as function of service time. Assesses risk that component fractures prematurely as result of subcritical crack growth (SCG). Effect of proof testing of components prior to service also considered. Coupled to such commercially available finite-element programs as ANSYS, ABAQUS, MARC, MSC/NASTRAN, and COSMOS/M. Also retains all capabilities of previous CARES code, which includes estimation of fast-fracture component reliability and Weibull parameters from inert strength (without SCG contributing to failure) specimen data. Estimates parameters that characterize SCG from specimen data as well. Written in ANSI FORTRAN 77 to be machine-independent. Program runs on any computer in which sufficient addressable memory (at least 8MB) and FORTRAN 77 compiler available. For IBM-compatible personal computer with minimum 640K memory, limited program available (CARES/PC, COSMIC number LEW-15248).
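    Fast-fracture reliability of the kind CARES evaluates is built on the two-parameter Weibull distribution; a generic sketch of that failure-probability formula (not the program's actual interface or units) is:

```python
import math

def weibull_failure_probability(stress, char_strength, weibull_modulus):
    """Two-parameter Weibull CDF: P_f = 1 - exp(-(stress/char_strength)**m).
    At stress equal to the characteristic strength, P_f = 1 - 1/e ~ 0.632."""
    return 1.0 - math.exp(-((stress / char_strength) ** weibull_modulus))

# A higher Weibull modulus means less strength scatter, so below the
# characteristic strength the failure probability is lower:
print(weibull_failure_probability(250.0, 300.0, 10.0))
print(weibull_failure_probability(250.0, 300.0, 20.0))
```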

  16. Narrowed indications improve outcomes for hip resurfacing arthroplasty.

    PubMed

    Johnson, Aaron J; Zywiel, Michael G; Hooper, Hassan; Mont, Michael A

    2011-01-01

    Hip resurfacing arthroplasty has had excellent clinical outcomes from multiple centers. However, controversy exists regarding the most appropriate patient selection criteria. Many proponents of hip resurfacing believe that narrowing the patient indications with strict inclusion and exclusion criteria may lead to improved outcomes and decreased complication rates. The purpose of this study was to review the results of resurfacing performed by an experienced surgeon to determine if implant survival and complication rates were different between subgroups of patients with different demographic factors. We evaluated 311 patients who had a hip resurfacing arthroplasty performed after the initial learning curve and who had a minimum follow-up of 5 years (mean, 93 months). These patients were compared to a group of 93 patients (96 hips) who underwent resurfacings, with newer selection criteria based on the findings of the first cohort. Overall, there were 10 failures in the first patient cohort (97% survivorship), compared to no failures in the second cohort. Higher revision rates were associated with patients who had osteonecrosis or rheumatoid arthritis. Patients who had femoral component sizes larger than 50 millimeters had lower revision rates. There were no revisions in patients who were under 50 years of age, had head sizes greater than 50 millimeters, and who had a primary diagnosis of osteoarthritis. After evaluating our initial experience after the learning curve, the ideal patient selection criteria was determined to be young males who have femoral head sizes greater than 50 millimeters. The early results are encouraging in that, although resurfacing may not be appropriate for all patients, it can provide predictable, excellent survivorship in these patients.

  17. On a Stochastic Failure Model under Random Shocks

    NASA Astrophysics Data System (ADS)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where applicable.
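    One way to explore such a model numerically is Monte Carlo: simulate shock arrivals from a nonhomogeneous Poisson process by thinning, and let each shock schedule a failure after a random delay. The intensity, delay law and horizon below are an assumed toy configuration, not an example from the paper:

```python
import random

def survives(t_end, rate_fn, rate_max, delay_sampler, rng):
    """One sample path of the delayed-shock model: shocks arrive from a
    nonhomogeneous Poisson process (simulated by thinning against rate_max);
    each accepted shock schedules a failure after a random delay.
    Returns True if no failure occurs by t_end."""
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)            # candidate shock time
        if t >= t_end:
            return True
        if rng.random() < rate_fn(t) / rate_max:  # thinning: accept with prob rate(t)/rate_max
            if t + delay_sampler(rng) <= t_end:   # delayed failure inside the horizon
                return False

rng = random.Random(42)
rate = lambda t: 0.2 + 0.1 * t                    # increasing shock intensity on [0, 5]
est = sum(survives(5.0, rate, 0.7, lambda r: r.expovariate(1.0), rng)
          for _ in range(20000)) / 20000
print(f"estimated survival probability at t = 5: {est:.3f}")
```

    The empirical estimate converges to S(t) = exp(-integral of lambda(u) * P(delay <= t - u) over [0, t]), the form of survival function such models yield analytically.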

  18. Relation between lowered colloid osmotic pressure, respiratory failure, and death.

    PubMed

    Tonnesen, A S; Gabel, J C; McLeavey, C A

    1977-01-01

    Plasma colloid osmotic pressure was measured each day in 84 intensive care unit patients. Probit analysis demonstrated a direct relationship between colloid osmotic pressure (COP) and survival. The COP associated with a 50% survival rate was 15.0 torr. COP was higher in survivors than in nonsurvivors without respiratory failure and in patients who recovered from respiratory failure. We conclude that lowered COP is associated with an elevated mortality rate. However, the relationship to death is not explained by the relationship to respiratory failure.

  19. Microstructures, Forming Limit and Failure Analyses of Inconel 718 Sheets for Fabrication of Aerospace Components

    NASA Astrophysics Data System (ADS)

    Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Naryana; Sharma, Sharad Chandra

    2017-04-01

    Recently, aerospace industries have shown increasing interest in forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selecting process parameters for successful fabrication of components. In the present work, the stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of a highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, the fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of the sheet metal.

  20. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occur in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
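    The 7-out-of-7 and 3-out-of-7 models referenced above are standard k-out-of-n structures; assuming independent, identical machines (an assumption, and the per-machine availability below is an illustrative value, not from the measurements), the system reliability is a binomial tail sum:

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n machines are operational,
    each independently up with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # assumed per-machine availability, for illustration only
print(f"7-out-of-7: {k_out_of_n_reliability(7, 7, p):.4f}")  # every machine required
print(f"3-out-of-7: {k_out_of_n_reliability(3, 7, p):.4f}")  # tolerates four machines down
```

    The gap between the two figures mirrors the paper's observation that the 3-out-of-7 model retains high reward far longer than the 7-out-of-7 model.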
