Sample records for reliability physics-of-failure based

  1. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

    The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product, the bathtub curve, reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant-mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes. Future work should include investigations of how the powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
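
    As a toy illustration of the deduction step described above, one can subtract an assumed statistics-related failure rate from the bathtub-curve ordinates to recover the physics-of-failure (aging) contribution. The sketch below takes the statistical part to be the hazard rate of a Rayleigh distribution and invents a post-burn-in bathtub curve; all function names and parameter values are illustrative assumptions, not taken from the paper (whose use of the Rayleigh law may differ in detail).

    ```python
    import numpy as np

    def bathtub_rate(t, lam0=0.002, k=1.0e-6):
        """Hypothetical post-burn-in bathtub curve ordinates: a flat useful-life
        rate plus a wear-out term that grows with time (failures per hour)."""
        return lam0 + k * t

    def rayleigh_hazard(t, sigma=2000.0):
        """Hazard (failure) rate of a Rayleigh distribution: h(t) = t / sigma**2."""
        return t / sigma**2

    t = np.linspace(0.0, 5000.0, 6)            # operating time, hours
    statistical = rayleigh_hazard(t)           # assumed statistics-related failure rate
    physics = bathtub_rate(t) - statistical    # physics-of-failure (aging) contribution

    for ti, bi, si, pi in zip(t, bathtub_rate(t), statistical, physics):
        print(f"t={ti:6.0f} h  bathtub={bi:.5f}  statistical={si:.5f}  physics={pi:.5f}")
    ```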

  2. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    Conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with higher reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
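
    A minimal sketch of the stated relationship: the expected loss given failure is the sum of the per-mode expected losses weighted by the conditional probabilities that each mutually exclusive failure mode initiates the failure. The failure modes and numbers below are invented for illustration.

    ```python
    # Mutually exclusive failure modes of a component: conditional probability that
    # the mode initiates failure, and the expected loss if it does (invented values).
    failure_modes = {
        "seal leak":      {"p": 0.55, "expected_loss": 12_000.0},
        "bearing seize":  {"p": 0.30, "expected_loss": 48_000.0},
        "shaft fracture": {"p": 0.15, "expected_loss": 150_000.0},
    }

    # The conditional initiation probabilities of mutually exclusive modes sum to one.
    assert abs(sum(m["p"] for m in failure_modes.values()) - 1.0) < 1e-9

    # Expected loss given failure: linear combination of per-mode expected losses
    # scaled by the conditional initiation probabilities.
    expected_loss_given_failure = sum(m["p"] * m["expected_loss"]
                                      for m in failure_modes.values())
    print(f"E[loss | failure] = {expected_loss_given_failure:,.0f}")
    ```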

  3. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component presents a positive correlation with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
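
    A rough sketch of how a failure-influence score of this kind could be computed by running PageRank over a directed cascading-failure graph, using the adjacency matrix and its transposition as the abstract describes. The component list, propagation paths, and damping factor are illustrative assumptions, not the machine-center data from the paper.

    ```python
    import numpy as np

    # adjacency[i, j] = 1 if a failure of component i can propagate to component j
    # (the four components and their propagation paths are invented for illustration).
    components = ["spindle", "feed axis", "tool magazine", "electrical cabinet"]
    A = np.array([
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
    ], dtype=float)

    def pagerank(adj, damping=0.85, tol=1e-10):
        """Power-iteration PageRank over a directed graph given by its adjacency matrix."""
        n = adj.shape[0]
        out_deg = adj.sum(axis=1, keepdims=True)
        # Rows with no outgoing edges (dangling nodes) spread their rank uniformly.
        transition = np.where(out_deg > 0, adj / np.where(out_deg == 0, 1.0, out_deg), 1.0 / n)
        rank = np.full(n, 1.0 / n)
        while True:
            new_rank = (1.0 - damping) / n + damping * transition.T @ rank
            if np.abs(new_rank - rank).sum() < tol:
                return new_rank
            rank = new_rank

    # PageRank on the adjacency matrix scores how strongly failures propagate INTO a
    # component; on its transposition, how strongly a component's failure reaches others.
    print(dict(zip(components, np.round(pagerank(A), 3))))
    print(dict(zip(components, np.round(pagerank(A.T), 3))))
    ```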

  4. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

    Physics-of-failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  5. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for system-level probabilistic PoF-based (i.e., PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within a dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate, or completely redefine their role in the analysis. This property and the ability to model interacting failure mechanisms of system elements make agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  6. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.

  7. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-associated preventive maintenance. The usual Weibull distribution is, however, not capable of modeling the complete lifecycle of a system with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel based on preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
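
    The point that the usual two-parameter Weibull cannot reproduce a full bathtub curve follows from its hazard function, h(t) = (beta/eta) * (t/eta)^(beta-1), which is strictly decreasing for beta < 1, constant for beta = 1, and strictly increasing for beta > 1, hence always monotone. A small sketch with illustrative parameter values:

    ```python
    import numpy as np

    def weibull_hazard(t, beta, eta):
        """Hazard rate of the two-parameter Weibull: h(t) = (beta/eta) * (t/eta)**(beta-1)."""
        return (beta / eta) * (t / eta) ** (beta - 1)

    t = np.array([100.0, 1000.0, 5000.0])   # operating hours (illustrative)
    for beta in (0.7, 1.0, 2.0):
        print(f"beta={beta}:", np.round(weibull_hazard(t, beta, eta=3000.0), 6))
    # Whatever beta is chosen, h(t) is monotone, so a single Weibull cannot capture the
    # decreasing-then-constant-then-increasing bathtub shape of a full equipment lifecycle.
    ```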

  8. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    The quality of software is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. The defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, making traditional statistical analysis unsuitable for evaluating software reliability. A statistical model was developed to provide a representation of the number, as well as the types, of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.

  9. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy in terms of failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measurement based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The obtained results are compared with existing methods and the effectiveness is validated. Some previously vague issues, such as network modelling, vulnerability identification, the evaluation criteria of repairable systems, and the PM policy during manufacturing system reliability analysis, are elaborated. This framework can support reliability optimisation and the rational allocation of maintenance resources in job shop manufacturing systems.

  10. Reliability Analysis of Systems Subject to First-Passage Failure

    NASA Technical Reports Server (NTRS)

    Lutes, Loren D.; Sarkani, Shahram

    2009-01-01

    An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
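
    A hedged sketch of what a first-passage failure estimate can look like: the system response is simulated as a simple discretized random process, and failure is declared the first time the demand exceeds the capacity within the service interval. The process, capacity value, and discretization below are invented for illustration and are not from the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Monte Carlo estimate of a first-passage failure probability: the response is a
    # simple discretized mean-reverting random process (purely illustrative) and the
    # system fails the first time the demand exceeds the capacity during the interval.
    capacity = 3.5                                 # capacity threshold (illustrative units)
    n_sim, n_steps, dt = 20_000, 1_000, 0.01

    x = np.zeros(n_sim)                            # response of each simulated history
    failed = np.zeros(n_sim, dtype=bool)
    for _ in range(n_steps):
        x += -0.5 * x * dt + np.sqrt(dt) * rng.normal(size=n_sim)
        failed |= x > capacity                     # record any up-crossing of the capacity

    print(f"Estimated first-passage failure probability: {failed.mean():.4f}")
    ```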

  11. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that rely on failure rates derived from similar equipment or simply on expert judgment.

  12. Performance and Reliability Analysis of Water Distribution Systems under Cascading Failures and the Identification of Crucial Pipes

    PubMed Central

    Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo

    2014-01-01

    As a means of supplying water, the water distribution system (WDS) is one of the most important complex infrastructures. Its stability and reliability are critical for urban activities. WDSs can be characterized as networks of multiple nodes (e.g., reservoirs and junctions) interconnected by physical links (e.g., pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of the WDS is evaluated by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory. The crucial pipes are identified eventually. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and on how long cascading failures persist as they propagate through the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than this threshold, the time period with the highest system reliability does not coincide with the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated with the properties of the WDS and the characteristics of cascading failures, so as to improve its ability to resist disasters. PMID:24551102

  13. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability, but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to the deliberate dismantling of traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  14. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  15. Common Cause Failures and Ultra Reliability

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of systems using off-line spare parts and on-line redundancy.

  16. Reliability and Validity of an Internet-based Questionnaire Measuring Lifetime Physical Activity

    PubMed Central

    De Vera, Mary A.; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-01-01

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005–2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity. PMID:20876666

  17. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity.

    PubMed

    De Vera, Mary A; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-11-15

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005-2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity.

  18. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both power system maintenance models and preventive maintenance models. PMID:23936346

  19. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries. However, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both power system maintenance models and preventive maintenance models.
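
    The fitted distribution reported in this record is a three-parameter Dagum, whose CDF can be written F(t) = (1 + (t/b)^(-a))^(-p) with scale b and shape parameters a and p, so the reliability at time t is R(t) = 1 - F(t). A small sketch follows; the parameter values are purely illustrative, since the fitted values are not preserved in this record.

    ```python
    def dagum_cdf(t, a, b, p):
        """CDF of the three-parameter Dagum distribution: scale b, shape parameters a and p."""
        return (1.0 + (t / b) ** (-a)) ** (-p)

    def reliability(t, a, b, p):
        """Survival function R(t) = 1 - F(t) for the time between failures t."""
        return 1.0 - dagum_cdf(t, a, b, p)

    # Illustrative parameter values only -- the fitted values are not preserved in this record.
    a, b, p = 1.8, 90.0, 0.6          # shapes a, p and scale b (days)
    for days in (30, 60, 90):
        print(f"R({days} days) = {reliability(days, a, b, p):.3f}")
    ```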

  20. LMI-based adaptive reliable H∞ static output feedback control against switched actuator failures

    NASA Astrophysics Data System (ADS)

    An, Liwei; Zhai, Ding; Dong, Jiuxiang; Zhang, Qingling

    2017-08-01

    This paper investigates the H∞ static output feedback (SOF) control problem for switched linear systems under arbitrary switching, where the actuator failure models are considered to depend on the switching signal. An active reliable control scheme is developed by combining the linear matrix inequality (LMI) method and an adaptive mechanism. First, by exploiting variable substitution and Finsler's lemma, new LMI conditions are given for designing the SOF controller. Compared to existing results, the proposed design conditions are more relaxed and can be applied to a wider class of no-fault linear systems. Then a novel adaptive mechanism is established, where the inverses of the switched failure scaling factors are estimated online to accommodate the effects of actuator failures on the system. Two main difficulties arise: the first is how to design the switched adaptive laws to prevent the loss of estimation information due to switching; the second is how to construct a common Lyapunov function based on a switched estimate error term. It is shown that the new method can give less conservative results than the traditional control design with fixed gain matrices. Finally, simulation results on the HiMAT aircraft are given to show the effectiveness of the proposed approaches.

  1. Failure mode analysis to predict product reliability.

    NASA Technical Reports Server (NTRS)

    Zemanick, P. P.

    1972-01-01

    The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.

  2. Reliable Broadcast under Cascading Failures in Interdependent Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Lee, Sangkeun; Chinthavali, Supriya

    Reliable broadcast is an essential tool to disseminate information among a set of nodes in the presence of failures. We present a novel study of reliable broadcast in interdependent networks, in which the failures in one network may cascade to another network. In particular, we focus on the interdependency between the communication network and the power grid network, where the power grid depends on the signals from the communication network for control and the communication network depends on the grid for power. In this paper, we build a resilient solution to handle crash failures in the communication network that may cause cascading failures and may even partition the network. In order to guarantee that all the correct nodes deliver the messages, we use soft links, which are inactive backup links to non-neighboring nodes that are only active when failures occur. At the core of our work is a fully distributed algorithm for the nodes to predict and collect the information of cascading failures so that soft links can be maintained to correct nodes prior to the failures. In the presence of failures, soft links are activated to guarantee message delivery and new soft links are built accordingly for long-term robustness. Our evaluation results show that the algorithm achieves a low packet drop rate and handles cascading failures with little overhead.

  3. Reliability assessment of slender concrete columns at the stability failure

    NASA Astrophysics Data System (ADS)

    Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin

    2018-01-01

    The European standard for designing concrete columns using non-linear methods shows deficiencies in terms of global reliability in the case that the concrete columns fail by loss of stability. Buckling failure is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering, SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with a slenderness of 90 and higher. The columns were designed according to the methods offered by EN 1992-1-1 [1]. The experiments were used as a basis for deterministic nonlinear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be utilized as thresholds for the loading of produced structural elements, and they aim to present probabilistic design as less conservative compared to classic partial-safety-factor-based design and the alternative ECOV method.

  4. On-chip high frequency reliability and failure test structures

    DOEpatents

    Snyder, Eric S.; Campbell, David V.

    1997-01-01

    Self-stressing test structures for realistic high frequency reliability characterizations. An on-chip high frequency oscillator, controlled by DC signals from off-chip, provides a range of high frequency pulses to test structures. The test structures provide information with regard to a variety of reliability failure mechanisms, including hot-carriers, electromigration, and oxide breakdown. The system is normally integrated at the wafer level to predict the failure mechanisms of the production integrated circuits on the same wafer.

  5. On-chip high frequency reliability and failure test structures

    DOEpatents

    Snyder, E.S.; Campbell, D.V.

    1997-04-29

    Self-stressing test structures for realistic high frequency reliability characterizations. An on-chip high frequency oscillator, controlled by DC signals from off-chip, provides a range of high frequency pulses to test structures. The test structures provide information with regard to a variety of reliability failure mechanisms, including hot-carriers, electromigration, and oxide breakdown. The system is normally integrated at the wafer level to predict the failure mechanisms of the production integrated circuits on the same wafer. 22 figs.

  6. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  7. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  8. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    The statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.

  9. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    NASA Astrophysics Data System (ADS)

    Michaelides, Stylianos

    -down devices without the underfill, based on the thorough understanding of the failure modes. Also, practical design guidelines for material, geometry and process parameters for reliable flip-chip devices have been developed.

  10. Predicting Failure Under Laboratory Conditions: Learning the Physics of Slow Frictional Slip and Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.

    2016-12-01

    Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials, and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density, and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick-slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF)1 to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts the upcoming failure time far in advance of a stick-slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to the prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build-up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick-slip cycle2. If the laboratory experiments represent Earth frictional conditions, it could well be that signals containing highly useful predictive information are being missed. 1Breiman
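
    A minimal sketch of the workflow described: compute statistical features over short windows of a continuous signal and train a Random Forest to regress the time remaining before the next failure, then apply it to unseen cycles. The synthetic signal, window length, and feature set below are stand-ins for the laboratory acoustic-emission data.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a continuous acoustic-emission record with a 100 s "seismic
    # cycle": signal variance grows as failure approaches, then resets after each slip.
    fs, cycle, n_cycles = 100, 100.0, 20                  # Hz, seconds, number of cycles
    t = np.arange(0, cycle * n_cycles, 1.0 / fs)
    time_to_failure = cycle - (t % cycle)                 # label: seconds until the next slip
    signal = rng.normal(0.0, 0.1 + 1.0 / (time_to_failure + 1.0), size=t.size)

    # Statistical features over non-overlapping short windows of the signal.
    win = 2 * fs                                          # 2 s windows
    X, y = [], []
    for start in range(0, t.size - win, win):
        seg = signal[start:start + win]
        X.append([seg.std(), seg.mean(), kurtosis(seg), skew(seg),
                  np.abs(seg).max(), np.percentile(np.abs(seg), 90)])
        y.append(time_to_failure[start + win - 1])
    X, y = np.asarray(X), np.asarray(y)

    # Train on the first cycles, predict on unseen ones (as in the abstract's workflow).
    split = int(0.7 * len(X))
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
    print("predicted vs true time-to-failure (s):",
          list(zip(np.round(rf.predict(X[split:split + 5]), 1),
                   np.round(y[split:split + 5], 1))))
    ```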

  11. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient for assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  12. Comprehensive Deployment Method for Technical Characteristics Based on Multi-failure Modes Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlative relation between failure mechanisms, soft failures, and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of the reliability index value, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.

  13. Some Aspects of the Failure Mechanisms in BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, David Donhang; Sampson, Michael J.

    2012-01-01

    The objective of this presentation is to gain insight into possible failure mechanisms in BaTiO3-based ceramic capacitors that may be associated with the reliability degradation that accompanies a reduction in dielectric thickness, as reported by Intel Corporation in 2010. The volumetric efficiency (microF/cm3) of a multilayer ceramic capacitor (MLCC) has been shown not to increase limitlessly, due to the grain-size effect on the dielectric constant of the ferroelectric ceramic BaTiO3 material. The reliability of an MLCC has been discussed with respect to its structure. MLCCs with higher numbers of dielectric layers pose more challenges for the reliability of the dielectric material, which is the case for most base-metal-electrode (BME) capacitors. A number of MLCCs manufactured using both precious-metal-electrode (PME) and BME technology, with a 25 V rating and various chip sizes and capacitances, were tested at accelerated stress levels. Most of these MLCCs had a failure behavior with two mixed failure modes: the well-known rapid dielectric wearout, and so-called "early failures." The two failure modes can be distinguished when the testing data are normalized to use level and presented in a 2-parameter Weibull plot. The early failures had a slope parameter of β > 1, indicating that the early failures are not infant mortalities. Early failures are triggered by external electrical overstress and become dominant as dielectric layer thickness decreases, accompanied by a dramatic reduction in reliability. This indicates that early failures are the main cause of the reliability degradation in MLCCs as dielectric layer thickness decreases. All of the early failures are characterized by an avalanche-like breakdown leakage current. The failures have been attributed to extrinsic minor construction defects introduced during fabrication of the capacitors. A reliability model including dielectric thickness and extrinsic defect feature size is proposed in this
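
    A small sketch of the kind of Weibull check described: fit a two-parameter Weibull to each subpopulation and to the pooled times-to-failure and read off the slope parameter beta (scipy's shape parameter c). The synthetic failure times below stand in for the MLCC test data.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)

    # Synthetic times-to-failure (arbitrary normalized units): an "early failure"
    # subpopulation with beta > 1 mixed with a wear-out subpopulation.
    early   = weibull_min.rvs(1.6, scale=200.0,  size=40,  random_state=rng)   # beta ~ 1.6
    wearout = weibull_min.rvs(4.0, scale=2000.0, size=160, random_state=rng)   # beta ~ 4
    data = np.concatenate([early, wearout])

    for label, sample in [("early subpopulation", early),
                          ("wear-out subpopulation", wearout),
                          ("pooled (mixed modes)", data)]:
        beta, _, eta = weibull_min.fit(sample, floc=0)    # 2-parameter fit (location fixed at 0)
        print(f"{label:24s} beta = {beta:4.2f}, eta = {eta:7.1f}")
    # beta > 1 for the early subpopulation is the abstract's signature that the early
    # failures are not infant mortalities (infant mortality would give beta < 1).
    ```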

  14. Validity and Reliability of Field-Based Measures for Assessing Movement Skill Competency in Lifelong Physical Activities: A Systematic Review.

    PubMed

    Hulteen, Ryan M; Lander, Natalie J; Morgan, Philip J; Barnett, Lisa M; Robertson, Samuel J; Lubans, David R

    2015-10-01

    It has been suggested that young people should develop competence in a variety of 'lifelong physical activities' to ensure that they can be active across the lifespan. The primary aim of this systematic review is to report the methodological properties, validity, reliability, and test duration of field-based measures that assess movement skill competency in lifelong physical activities. A secondary aim was to clearly define those characteristics unique to lifelong physical activities. A search of four electronic databases (Scopus, SPORTDiscus, ProQuest, and PubMed) was conducted between June 2014 and April 2015 with no date restrictions. Studies addressing the validity and/or reliability of lifelong physical activity tests were reviewed. Included articles were required to assess lifelong physical activities using process-oriented measures, as well as report either one type of validity or reliability. Assessment criteria for methodological quality were adapted from a checklist used in a previous review of sport skill outcome assessments. Movement skill assessments for eight different lifelong physical activities (badminton, cycling, dance, golf, racquetball, resistance training, swimming, and tennis) in 17 studies were identified for inclusion. Methodological quality, validity, reliability, and test duration (time to assess a single participant), for each article were assessed. Moderate to excellent reliability results were found in 16 of 17 studies, with 71% reporting inter-rater reliability and 41% reporting intra-rater reliability. Only four studies in this review reported test-retest reliability. Ten studies reported validity results; content validity was cited in 41% of these studies. Construct validity was reported in 24% of studies, while criterion validity was only reported in 12% of studies. Numerous assessments for lifelong physical activities may exist, yet only assessments for eight lifelong physical activities were included in this review

  15. Improving the reliability of inverter-based welding machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiedermayer, M.

    1997-02-01

    Although inverter-based welding power sources have been available since the late 1980s, many people hesitated to purchase them because of reliability issues. Unfortunately, their hesitancy had a basis, until now. Recent improvements give some inverters a reliability level that approaches that of traditional, transformer-based industrial welding machines, which have a failure rate of about 1%. Acceptance of inverter-based welding machines is important because, for many welding applications, they provide capabilities that solid-state, transformer-based machines cannot deliver. These advantages include enhanced pulsed gas metal arc welding (GMAW-P), lightweight portability, an ultrastable arc, and energy efficiency, all while producing highly aesthetic weld beads and delivering multiprocess capabilities.

  16. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  17. Photovoltaic module reliability improvement through application testing and failure analysis

    NASA Technical Reports Server (NTRS)

    Dumas, L. N.; Shumka, A.

    1982-01-01

    During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.

  18. An Evaluation method for C2 Cyber-Physical Systems Reliability Based on Deep Learning

    DTIC Science & Technology

    2014-06-01

    Based on the reliability testing data of the system, we obtain the prior distribution of the reliability and then apply Bayes' theorem.

  19. [Examination of safety improvement by failure record analysis that uses reliability engineering].

    PubMed

    Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo

    2010-08-20

    We verified how maintenance checks of medical systems, including start-of-work and end-of-work checks, are effective for preventive maintenance and safety improvement. In this research, data on device failures in multiple facilities were collected, and the trouble-repair records were analyzed using reliability engineering techniques. An analysis was performed on data from systems used in eight hospitals (8 general systems, 6 Angio systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems). The data collection period was the nine months from April to December 2008. Seven items were analyzed, including: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) the number of failures found by the morning check; and (5) failure occurrence time according to modality. By introducing reliability engineering, the classification of breakdowns per device, their incidence, and their trends could be understood. Analysis, evaluation, and feedback on the failure history are useful to keep downtime to a minimum and to ensure safety.
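
    A minimal sketch of how the first three analyzed quantities could be computed from a trouble-repair log; the log entries and observation window below are invented for illustration.

    ```python
    from datetime import datetime, timedelta

    # Invented repair-log entries for one modality: (failure time, repair-completed time).
    log = [
        (datetime(2008, 4, 10,  9, 0), datetime(2008, 4, 10, 13, 30)),
        (datetime(2008, 6,  2, 14, 0), datetime(2008, 6,  3,  9, 0)),
        (datetime(2008, 9, 21,  8, 0), datetime(2008, 9, 21, 10, 0)),
    ]
    observation = (datetime(2008, 4, 1), datetime(2008, 12, 31))

    downtime = sum((repaired - failed for failed, repaired in log), timedelta())
    uptime = (observation[1] - observation[0]) - downtime

    mtbf = uptime / len(log)      # mean time between failures (operating time per failure)
    mttr = downtime / len(log)    # mean time to repair
    # Mean down time would additionally include waiting/logistics time; this toy log only
    # records the repair interval, so MDT reduces to MTTR here.
    print(f"MTBF = {mtbf}, MTTR = {mttr}")
    ```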

  20. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Appendix E to Part 238—General Principles of Reliability-Based Maintenance Programs. ... maintenance programs are based on the following general principles. A failure is an unsatisfactory condition...

  1. Reliability and Failure Modes of a Hybrid Ceramic Abutment Prototype.

    PubMed

    Silva, Nelson Rfa; Teixeira, Hellen S; Silveira, Lucas M; Bonfante, Estevam A; Coelho, Paulo G; Thompson, Van P

    2018-01-01

    A ceramic and metal abutment prototype was fatigue tested to determine the probability of survival at various loads. Lithium disilicate CAD-milled abutments (n = 24) were cemented to titanium sleeve inserts and then screw-attached to titanium fixtures. The assembly was then embedded at a 30° angle in polymethylmethacrylate. Each (n = 24) was restored with a resin-cemented, machined, lithium disilicate all-ceramic central incisor crown. Single load (lingual-incisal contact) to failure was determined for three specimens. Fatigue testing (n = 21) was conducted employing the step-stress method with lingual mouth-motion loading. Failures were recorded, and reliability calculations were performed using proprietary software. Probability Weibull curves were calculated with 90% confidence bounds. Fracture modes were classified with a stereomicroscope, and representative samples were imaged with scanning electron microscopy. Fatigue results indicated that the limiting factor in the current design is the fatigue strength of the abutment screw, where screw fracture often leads to failure of the abutment metal sleeve and/or cracking in the implant fixture. Reliability for completion of a mission at a 200 N load for 50K cycles was 0.38 (90% CI: 0.52 to 0.25), and for 100K cycles it was only 0.12 (0.26 to 0.05); only 12% were predicted to survive. These results are similar to those from previous studies on metal-to-metal abutment/fixture systems, where screw failure is a limitation. No ceramic crown or ceramic abutment-initiated fractures occurred, supporting the research hypothesis. The limiting factor in performance was the screw failure in the metal-to-metal connection between the prototyped abutment and the fixture, indicating that this configuration should function clinically with no abutment ceramic complications. The combined ceramic with titanium sleeve abutment prototype performance was limited by the fatigue degradation of the abutment screw. In fatigue, no ceramic crown or ceramic

  2. Reliability of physical examination tests for the diagnosis of knee disorders: Evidence from a systematic review.

    PubMed

    Décary, Simon; Ouellet, Philippe; Vendittoli, Pascal-André; Desmeules, François

    2016-12-01

    Clinicians often rely on physical examination tests to guide them in the diagnostic process of knee disorders. However, reliability of these tests is often overlooked and may influence the consistency of results and overall diagnostic validity. Therefore, the objective of this study was to systematically review evidence on the reliability of physical examination tests for the diagnosis of knee disorders. A structured literature search was conducted in databases up to January 2016. Included studies needed to report reliability measures of at least one physical test for any knee disorder. Methodological quality was evaluated using the QAREL checklist. A qualitative synthesis of the evidence was performed. Thirty-three studies were included with a mean QAREL score of 5.5 ± 0.5. Based on low to moderate quality evidence, the Thessaly test for meniscal injuries reached moderate inter-rater reliability (k = 0.54). Based on moderate to excellent quality evidence, the Lachman for anterior cruciate ligament injuries reached moderate to excellent inter-rater reliability (k = 0.42 to 0.81). Based on low to moderate quality evidence, the Tibiofemoral Crepitus, Joint Line and Patellofemoral Pain/Tenderness, Bony Enlargement and Joint Pain on Movement tests for knee osteoarthritis reached fair to excellent inter-rater reliability (k = 0.29 to 0.93). Based on low to moderate quality evidence, the Lateral Glide, Lateral Tilt, Lateral Pull and Quality of Movement tests for patellofemoral pain reached moderate to good inter-rater reliability (k = 0.49 to 0.73). Many physical tests appear to reach good inter-rater reliability, but this is based on low-quality and conflicting evidence. High-quality research is required to evaluate the reliability of knee physical examination tests.

  3. Risk factors for early failure after peripheral endovascular intervention: application of a reliability engineering approach.

    PubMed

    Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B

    2013-01-01

    We apply an innovative analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow for improved prediction of outcome after peripheral endovascular intervention, specifically with regard to the identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2; P < 0.001), respectively. The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R(2) = 0.98084), whereas interventions for critical limb
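
    The reported mean time to failure is consistent with the standard two-parameter Weibull relation MTTF = theta * Gamma(1 + 1/beta); a quick check using the claudication-group parameters quoted in the abstract:

    ```python
    from math import gamma

    beta, theta = 1.22, 13.46                  # Weibull shape and scale (months) for claudication
    mttf = theta * gamma(1.0 + 1.0 / beta)     # Weibull mean time to failure
    print(f"MTTF = {mttf:.2f} months")         # ~12.6 months, in line with the reported 12.603
    ```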

  4. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
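
    A minimal sketch of the importance-sampling idea underlying the AIS method is given below. Only a single, non-adaptive sampling step with an assumed linear limit state is shown; the paper's method refines the sampling domain adaptively and also produces sensitivity coefficients, which are omitted here.

    ```python
    # Importance sampling for a failure probability in standard normal space.
    # The limit state g(u) and the shifted sampling density are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Failure when g(u) <= 0; for this linear g the exact Pf is Phi(-beta_target).
    beta_target = 3.0
    def g(u):
        return beta_target - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

    # Sampling density: standard normal shifted to the most probable failure point.
    u_star = beta_target * np.array([1.0, 1.0]) / np.sqrt(2.0)

    n = 20_000
    u = rng.normal(loc=u_star, scale=1.0, size=(n, 2))

    # Importance weights: joint PDF of the original variables / sampling PDF.
    log_w = (stats.norm.logpdf(u).sum(axis=1)
             - stats.norm.logpdf(u, loc=u_star).sum(axis=1))
    indicator = (g(u) <= 0.0).astype(float)

    pf_estimate = np.mean(indicator * np.exp(log_w))
    print(f"IS estimate : {pf_estimate:.3e}")
    print(f"Exact value : {stats.norm.cdf(-beta_target):.3e}")
    ```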

  5. Reliability and mode of failure of bonded monolithic and multilayer ceramics.

    PubMed

    Alessandretti, Rodrigo; Borba, Marcia; Benetti, Paula; Corazza, Pedro Henrique; Ribeiro, Raissa; Della Bona, Alvaro

    2017-02-01

    To evaluate the reliability of monolithic and multilayer ceramic structures used in the CAD-on technique (Ivoclar), and the mode of failure produced in ceramic structures bonded to a dentin analog material (NEMA-G10). Ceramic specimens were fabricated as follows (n=30): CAD-on: trilayer structure (IPS e.max ZirCAD/IPS e.max Crystall./Connect/IPS e.max CAD); YLD: bilayer structure (IPS e.max ZirCAD/IPS e.max Ceram); LDC: monolithic structure (IPS e.max CAD); and YZW: monolithic structure (Zenostar Zr Translucent). All ceramic specimens were bonded to G10 and subjected to compressive load in 37°C distilled water until the sound of the first crack, monitored acoustically. Failure load (Lf) values were recorded (N) and statistically analyzed using Weibull distribution, Kruskal-Wallis test, and Student-Newman-Keuls test (α=0.05). Lf values of CAD-on and YZW structures were statistically similar (p=0.917), but higher than YLD and LDC (p<0.01). Weibull modulus (m) values were statistically similar for all experimental groups. Monolithic structures (LDC and YZW) failed from radial cracks. Failures in the CAD-on and YLD groups showed, predominantly, both radial and cone cracks. Monolithic zirconia (YZW) and CAD-on structures showed similar failure resistance and reliability, but a different fracture behavior. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  6. Impaired chronotropic response to physical activities in heart failure patients.

    PubMed

    Shen, Hong; Zhao, Jianrong; Zhou, Xiaohong; Li, Jingbo; Wan, Qing; Huang, Jing; Li, Hui; Wu, Liqun; Yang, Shungang; Wang, Ping

    2017-05-25

    While exercise-based cardiac rehabilitation has a beneficial effect on heart failure hospitalization and mortality, it is limited by the presence of chronotropic incompetence (CI) in some patients. This study explored the feasibility of using wearable devices to assess impaired chronotropic response in heart failure patients. Forty patients with heart failure (left ventricular ejection fraction, LVEF: 44.6 ± 5.8; age: 54.4 ± 11.7) received ECG Holter and accelerometer to monitor heart rate (HR) and physical activities during symptom-limited treadmill exercise testing, 6-min hall walk (6MHW), and 24-h daily living. CI was defined as maximal HR during peak exercise testing failing to reach 70% of age-predicted maximal HR (APMHR, 220 - age). The correlation between HR and physical activities in Holter-accelerometer recording was analyzed. Of 40 enrolled patients, 26 were able to perform treadmill exercise testing. Based on exercise test reports, 13 (50%) of 26 patients did not achieve at least 70% of APMHR (CI patients). CI patients achieved a lower % APMHR (62.0 ± 6.3%) than non-CI patients who achieved 72.0 ± 1.2% of APMHR (P < 0.0001). When Holter-accelerometer recording was used to assess chronotropic response, the percent APMHR achieved during 6MHW and physical activities was significantly lower in CI patients than in non-CI patients. CI patients had a significantly shorter 6MHW distance and less physical activity intensity than non-CI patients. The study found impaired chronotropic response in 50% of heart failure patients who took treadmill exercise testing. The wearable Holter-accelerometer recording could help to identify impaired chronotropic response to physical activities in heart failure patients. ClinicalTrials.gov ID NCT02358603 . Registered 16 May 2014.

  7. Reliability Evaluation of Next Generation Inverter: Cooperative Research and Development Final Report, CRADA Number CRD-12-478

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paret, Paul

    The National Renewable Energy Laboratory (NREL) will conduct thermal and reliability modeling on three sets of power modules for the development of a next generation inverter for electric traction drive vehicles. These modules will be chosen by General Motors (GM) to represent three distinct technological approaches to inverter power module packaging. Likely failure mechanisms will be identified in each package and a physics-of-failure-based reliability assessment will be conducted.

  8. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, in which warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, RY(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
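
    As a hedged illustration of how MTTF and reliability are computed for this class of exponential repairable models, the sketch below uses a much simpler warm-standby system (one active unit, one warm standby, one repair facility) rather than the paper's full M/W/R model; all rates are assumed.

    ```python
    # Simplified warm-standby system as an absorbing continuous-time Markov chain.
    # States are indexed by the number of failed units; state 2 (both failed) is
    # absorbing system failure.
    import numpy as np
    from scipy.linalg import expm

    lam_active, lam_warm, mu = 0.01, 0.002, 0.2   # assumed rates per hour

    # Generator restricted to the transient states {0 failed, 1 failed}.
    Q_T = np.array([
        [-(lam_active + lam_warm),  lam_active + lam_warm],
        [ mu,                      -(lam_active + mu)    ],
    ])

    # Mean time to absorption (system failure) starting from state 0:
    # solve Q_T m = -1 for the vector of mean hitting times.
    m = np.linalg.solve(Q_T, -np.ones(2))
    print(f"MTTF = {m[0]:.1f} hours")

    # System reliability R(t): probability the chain is still in a transient state.
    def reliability(t):
        return expm(Q_T * t)[0, :].sum()

    for t in (100.0, 500.0, 1000.0):
        print(f"R({t:g} h) = {reliability(t):.4f}")
    ```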

  9. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and the Weibull distribution model plays an irreplaceable role in the reliability analysis of such systems. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model can introduce large errors. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a better statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, greatly improving the precision of the mixed-distribution reliability model. All of these features favor the adoption of the Weibull distribution model in engineering applications.
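
    A minimal sketch of a mixed (multi-population) Weibull reliability function of the kind described above is shown below; the weights and parameters are assumed for illustration and are not the paper's fitted values.

    ```python
    # Mixed Weibull reliability: R(t) = sum_i w_i * exp(-(t/eta_i)**beta_i),
    # with the weights w_i summing to 1. Each term represents one failure mode.
    import math

    def mixed_weibull_reliability(t, weights, betas, etas):
        return sum(w * math.exp(-((t / eta) ** beta))
                   for w, beta, eta in zip(weights, betas, etas))

    # Hypothetical failure-mode mix: an early, low-beta mode plus a wear-out mode.
    weights = [0.3, 0.7]
    betas   = [0.8, 3.5]
    etas    = [2_000.0, 12_000.0]   # characteristic lives in flight hours

    for t in (500.0, 5_000.0, 10_000.0):
        print(f"R({t:>7.0f} h) = {mixed_weibull_reliability(t, weights, betas, etas):.4f}")
    ```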

  10. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
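
    The core Monte Carlo idea can be sketched in a few lines: sample Weibull (non-constant failure rate) component lives and estimate system reliability for an assumed structure. This is only a toy illustration; MC-HARP additionally models fault/error handling and common-mode failures, which are omitted here.

    ```python
    # Crude Monte Carlo estimate of system reliability with Weibull component lives.
    import numpy as np

    rng = np.random.default_rng(42)

    def weibull_lives(beta, eta, size):
        # numpy's weibull() samples with unit scale; multiply by eta to set the scale.
        return eta * rng.weibull(beta, size)

    def simulate_system_reliability(t_mission, n_trials=100_000):
        # Assumed structure: component A in series with a parallel pair (B1, B2).
        a  = weibull_lives(beta=2.0, eta=5_000.0, size=n_trials)
        b1 = weibull_lives(beta=1.5, eta=3_000.0, size=n_trials)
        b2 = weibull_lives(beta=1.5, eta=3_000.0, size=n_trials)
        system_life = np.minimum(a, np.maximum(b1, b2))
        return np.mean(system_life > t_mission)

    print(f"Estimated R(1000 h) = {simulate_system_reliability(1_000.0):.4f}")
    ```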

  11. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research identifies and analyzes ICU process failure modes using a systematic approach to errors. Methods: In this descriptive research, data were gathered qualitatively through observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis, however, was quantitative, based on the failures' Risk Priority Number (RPN) within the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed using the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. Then, at the 90% reliability threshold (RPN ≥ 100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed with ECM. Conclusions: Applying a modified PFMEA to improve the reliability of processes in two ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions, and participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162
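
    The RPN ranking step at the heart of a process FMEA is simple to compute: RPN is the product of severity, occurrence, and detection scores, and modes at or above a threshold (RPN ≥ 100 in this study) are flagged for cause analysis. The failure modes and scores below are invented placeholders, not the study's data.

    ```python
    # Risk Priority Number (RPN) ranking for a process FMEA.
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        description: str
        severity: int     # 1 (none) .. 10 (catastrophic)
        occurrence: int   # 1 (remote) .. 10 (very frequent)
        detection: int    # 1 (almost certain detection) .. 10 (undetectable)

        @property
        def rpn(self) -> int:
            return self.severity * self.occurrence * self.detection

    modes = [
        FailureMode("Medication dose transcription error", 8, 4, 5),
        FailureMode("Ventilator alarm silenced without review", 9, 2, 3),
        FailureMode("Delayed lab result communication", 6, 5, 2),
    ]

    THRESHOLD = 100
    for fm in sorted(modes, key=lambda f: f.rpn, reverse=True):
        flag = "INVESTIGATE" if fm.rpn >= THRESHOLD else "acceptable"
        print(f"RPN={fm.rpn:3d}  [{flag}]  {fm.description}")
    ```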

  12. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
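
    A hedged numerical illustration of the central point: for a 2-out-of-3 voted multi-version system, the independence assumption gives a simple binomial tail, while even a small probability of a correlated (shared design) fault can dominate. The simple common-fault term below is an illustrative dependence model, not the dependence model estimated in this experiment.

    ```python
    # Failure probability of a 2-out-of-3 voted multi-version system per demand.

    def majority_failure_independent(p: float) -> float:
        """P(at least 2 of 3 independent versions fail on a demand)."""
        return 3 * p**2 * (1 - p) + p**3

    def majority_failure_with_common_fault(p: float, q: float) -> float:
        """With probability q a shared design fault fails all versions at once."""
        return q + (1 - q) * majority_failure_independent(p)

    p = 1e-3   # assumed per-demand failure probability of a single version
    for q in (0.0, 1e-5, 1e-4):
        pf = majority_failure_with_common_fault(p, q)
        print(f"q = {q:.0e}  ->  system failure probability = {pf:.3e}")
    # Even a small correlated-failure probability q can dominate the voted
    # system's failure probability, which is the caution raised by the study.
    ```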

  13. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making processes. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of External Tank (ET) foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  14. A probabilisitic based failure model for components fabricated from anisotropic graphite

    NASA Astrophysics Data System (ADS)

    Xiao, Chengfeng

    The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as transversely isotropic are another set of design tools for the next-generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data. Aspects in each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of

  15. School-based behavioral assessment tools are reliable and valid for measurement of fruit and vegetable intake, physical activity, and television viewing in young children.

    PubMed

    Economos, Christina D; Sacheck, Jennifer M; Kwan Ho Chui, Kenneth; Irizarry, Laura; Guillemont, Juliette; Collins, Jessica J; Hyatt, Raymond R

    2008-04-01

    Interventions aiming to modify the dietary and physical activity behaviors of young children require precise and accurate measurement tools. As part of a larger community-based project, three school-based questionnaires were developed to assess (a) fruit and vegetable intake, (b) physical activity and television (TV) viewing, and (c) perceived parental support for diet and physical activity. Test-retest reliability was performed on all questionnaires and validity was measured for fruit and vegetable intake, physical activity, and TV viewing. Eighty-four school children (8.3 ± 1.1 years) were studied. Test-retest reliability was performed by administering questionnaires twice, 1 to 2 hours apart. Validity of the fruit and vegetable questionnaire was measured by direct observation, while the physical activity and TV questionnaire was validated by a parent phone interview. All three questionnaires yielded excellent test-retest reliability (P<0.001). The majority of fruit and vegetable questions and the questions regarding specific physical activities and TV viewing were valid. Low validity scores were found for questions on watching TV during breakfast or dinner. These questionnaires are reliable and valid tools to assess fruit and vegetable intake, physical activity, and TV viewing behaviors in early elementary school-aged children. Methods for assessment of children's TV viewing during meals should be further investigated because of parent-child discrepancies.

  16. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  17. Reliability and failure modes of narrow implant systems.

    PubMed

    Hirata, Ronaldo; Bonfante, Estevam A; Anchieta, Rodolfo B; Machado, Lucas S; Freitas, Gileade; Fardin, Vinicius P; Tovar, Nick; Coelho, Paulo G

    2016-09-01

    Narrow implants are indicated in areas of limited bone width or when grafting is nonviable. However, the reduction of implant diameter may compromise their performance. This study evaluated the reliability of several narrow implant systems under fatigue after restoration with single-unit crowns. Narrow implant systems were divided (n = 18 each) as follows: Astra (ASC); BioHorizons (BSC); Straumann Roxolid (SNC); Intra-Lock (IMC); and Intra-Lock one-piece abutment (ILO). Maxillary central incisor crowns were cemented and subjected to step-stress accelerated life testing in water. Use-level probability Weibull curves and reliability for a mission of 100,000 cycles at 130- and 180-N loads (90% two-sided confidence intervals) were calculated. Scanning electron microscopy was used for fractography. Reliability for 100,000 cycles at 130 N was ∼99% in group ASC, ∼99% in BSC, ∼96% in SNC, ∼99% in IMC, and ∼100% in ILO. At 180 N, reliability was ∼34% for the ASC group, ∼91% for BSC, ∼53% for SNC, ∼70% for IMC, and ∼99% for ILO. Abutment screw fracture was the main failure mode for all groups. Reliability was not different between systems for 100,000 cycles at the 130-N load. A significant decrease was observed at the 180-N load for ASC, SNC, and IMC, whereas it was maintained for BSC and ILO. The investigated narrow implants presented mechanical performance under fatigue that suggests their safe use as single crowns in the anterior region.

  18. Fear of failure and self-handicapping in college physical education.

    PubMed

    Chen, Lung Hung; Chen, Mei-Yen; Lin, Meng-Shyan; Kee, Ying Hwa; Shui, Shang-Hsueh

    2009-12-01

    The purpose of this study was to examine the relationship between fear of failure and self-handicapping within the context of physical education. Participants were 103 college freshmen enrolled in aerobic dance physical education classes in Taiwan. They completed the Performance Failure Appraisal Inventory and Self-Handicapping Scale for Sport 3 mo. after entering the class. Hierarchical regression indicated that scores on fear of failure predicted self-handicapping scores.

  19. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. Design is formulated for an accepted level of risk or reliability. The design variables, weight and the constraints became functions of reliability. Uncertainties in the load, strength and the material properties, as well as the design variables, were modeled as random parameters with specified distributions, like normal, Weibull or Gumbel functions. The objective function and constraint, or a failure mode, became derived functions of the risk-level. Solution to the problem produced the optimum design with weight, variables and constraints as a function of the risk-level. Optimum weight versus reliability traced out an inverted-S shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  20. A Delay-Aware and Reliable Data Aggregation for Cyber-Physical Sensing

    PubMed Central

    Zhang, Jinhuan; Long, Jun; Zhang, Chengyuan; Zhao, Guihu

    2017-01-01

    Physical information sensed by various sensors in a cyber-physical system should be collected for further operation. In many applications, data aggregation should take reliability and delay into consideration. To address these problems, a novel Tiered Structure Routing-based Delay-Aware and Reliable Data Aggregation scheme named TSR-DARDA for spherical physical objects is proposed. By dividing the spherical network constructed by dispersed sensor nodes into circular tiers with specifically designed widths and cells, TSR-DARDA tries to enable as many nodes as possible to transmit data simultaneously. In order to ensure transmission reliability, lost packets are retransmitted. Moreover, to minimize the latency while maintaining reliability for data collection, in-network aggregation and broadcast techniques are adopted to deal with the transmission between data collecting nodes in the outer layer and their parent data collecting nodes in the inner layer. Thus, the optimization problem is transformed to minimize the delay under reliability constraints by controlling the system parameters. To demonstrate the effectiveness of the proposed scheme, we have conducted extensive theoretical analysis and comparisons to evaluate the performance of TSR-DARDA. The analysis and simulations show that TSR-DARDA leads to lower delay with reliability satisfaction. PMID:28218668

  1. MEMS reliability: The challenge and the promise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, W.M.; Tanner, D.M.; Miller, S.L.

    1998-05-01

    MicroElectroMechanical Systems (MEMS) that think, sense, act and communicate will open up a broad new array of cost effective solutions only if they prove to be sufficiently reliable. A valid reliability assessment of MEMS has three prerequisites: (1) statistical significance; (2) a technique for accelerating fundamental failure mechanisms, and (3) valid physical models to allow prediction of failures during actual use. These already exist for the microelectronics portion of such integrated systems. The challenge lies in the less well understood micromachine portions and their synergistic effects with microelectronics. This paper presents a methodology addressing these prerequisites and a description of the underlying physics of reliability for micromachines.

  2. Development of KSC program for investigating and generating field failure rates. Reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.

  3. Is self-reporting workplace activity worthwhile? Validity and reliability of occupational sitting and physical activity questionnaire in desk-based workers.

    PubMed

    Pedersen, Scott J; Kitic, Cecilia M; Bird, Marie-Louise; Mainsbridge, Casey P; Cooley, P Dean

    2016-08-19

    With the advent of workplace health and wellbeing programs designed to address prolonged occupational sitting, tools to measure behaviour change within this environment should derive from empirical evidence. In this study we measured aspects of validity and reliability for the Occupational Sitting and Physical Activity Questionnaire that asks employees to recount the percentage of work time they spend in the seated, standing, and walking postures during a typical workday. Three separate cohort samples (N = 236) were drawn from a population of government desk-based employees across several departmental agencies. These volunteers were part of a larger state-wide intervention study. Workplace sitting and physical activity behaviour was measured both subjectively against the International Physical Activity Questionnaire, and objectively against ActivPal accelerometers before the intervention began. Criterion validity and concurrent validity for each of the three posture categories were assessed using Spearman's rank correlation coefficients, and a bias comparison with 95 % limits of agreement. Test-retest reliability of the survey was reported with intraclass correlation coefficients. Criterion validity for this survey was strong for sitting and standing estimates, but weak for walking. Participants significantly overestimated the amount of walking they did at work. Concurrent validity was moderate for sitting and standing, but low for walking. Test-retest reliability of this survey proved to be questionable for our sample. Based on our findings we must caution occupational health and safety professionals about the use of employee self-report data to estimate workplace physical activity. While the survey produced accurate measurements for time spent sitting at work it was more difficult for employees to estimate their workplace physical activity.

  4. Product Reliability Trends, Derating Considerations and Failure Mechanisms with Scaled CMOS

    NASA Technical Reports Server (NTRS)

    White, Mark; Vu, Duc; Nguyen, Duc; Ruiz, Ron; Chen, Yuan; Bernstein, Joseph B.

    2006-01-01

    As microelectronics is scaled into the deep sub-micron regime, space and aerospace users of advanced technology CMOS are reassessing how scaling effects impact long-term product reliability. The effects of the electromigration (EM), time-dependent dielectric breakdown (TDDB), hot-carrier injection (HCI), and negative bias temperature instability (NBTI) wearout mechanisms on scaled technologies and product reliability are investigated, accelerated stress testing across several technology nodes is performed, and failure analysis (FA) is conducted to confirm the failure mechanism(s).
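
    As a hedged illustration of how one such wearout mechanism is commonly quantified, the sketch below uses Black's equation for electromigration median time to failure. The equation form is standard, but the prefactor, current exponent, activation energy, and stress/use conditions are illustrative assumptions, not values reported in this paper.

    ```python
    # Black's equation: MTTF = A * J^(-n) * exp(Ea / (k*T)).
    # The prefactor A absorbs the units and is process-dependent; only the
    # acceleration factor between two conditions is meaningful here.
    import math

    K_BOLTZMANN_EV = 8.617e-5   # eV/K

    def em_mttf(current_density: float, temp_c: float,
                a: float = 1.0, n: float = 2.0, ea_ev: float = 0.9) -> float:
        t_kelvin = temp_c + 273.15
        return a * current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

    # Acceleration factor between an assumed stress condition and a use condition
    # (current densities in consistent units, e.g. A/cm^2).
    mttf_stress = em_mttf(current_density=2.0e6, temp_c=150.0)
    mttf_use    = em_mttf(current_density=5.0e5, temp_c=85.0)
    print(f"Acceleration factor (stress -> use) = {mttf_use / mttf_stress:.1f}x")
    ```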

  5. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  6. Psychometric properties of the Symptom Status Questionnaire-Heart Failure.

    PubMed

    Heo, Seongkum; Moser, Debra K; Pressler, Susan J; Dunbar, Sandra B; Mudd-Martin, Gia; Lennie, Terry A

    2015-01-01

    Many patients with heart failure (HF) experience physical symptoms, poor health-related quality of life (HRQOL), and high rates of hospitalization. Physical symptoms are associated with HRQOL and are major antecedents of hospitalization. However, reliable and valid physical symptom instruments have not been established. Therefore, this study examined the psychometric properties of the Symptom Status Questionnaire-Heart Failure (SSQ-HF) in patients with HF. Data on symptoms using the SSQ-HF were collected from 249 patients (aged 61 years, 67% male, 45% in New York Heart Association functional class III/IV). Internal consistency reliability was assessed using Cronbach's α. Item homogeneity was assessed using item-total and interitem correlations. Construct validity was assessed using factor analysis and testing hypotheses on known relationships. Data on depressive symptoms (Beck Depression Inventory II), HRQOL (Minnesota Living With Heart Failure Questionnaire), and event-free survival were collected to test known relationships. Internal consistency reliability was supported: Cronbach's α was .80. Item-total correlation coefficients and interitem correlation coefficients were acceptable. Factor analysis supported the construct validity of the instrument. More severe symptoms were associated with more depressive symptoms, poorer HRQOL, and more risk for hospitalization, emergency department visit, or death, controlling for covariates. The findings of this study support the reliability and validity of the SSQ-HF. Clinicians and researchers can use this instrument to assess physical symptoms in patients with HF.
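
    For readers unfamiliar with the internal-consistency statistic used here, a minimal sketch of Cronbach's alpha is given below; the response matrix is random placeholder data, not SSQ-HF responses.

    ```python
    # Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(7)
    latent = rng.normal(size=(249, 1))                     # shared symptom burden
    responses = latent + 0.8 * rng.normal(size=(249, 10))  # 10 correlated items
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
    ```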

  7. Reliability and Maintainability (RAM) Training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

    The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  8. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  9. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

    Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analyzing the randomness in loads and their effects on structures. This paper presents a novel approach that combines SHM data with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system. In this study, the reliability of the steel girder of the cable-stayed bridge was expressed directly as a failure probability instead of the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of the failure probability was deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at the sensor locations based on the monitoring data was evaluated and verified. The Donghai Bridge was then taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the proposed approach were discussed in comparison with the first-order reliability method (FORM).
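
    A hedged sketch of the classical load-resistance integral that this type of failure-probability formulation builds on is shown below: Pf = P(R < S) = ∫ F_R(s) f_S(s) ds, where f_S is the PDF of the monitored extreme strain and F_R the CDF of the resistance. The distributions and parameters are assumed, not those derived in the paper.

    ```python
    # Numerical evaluation of Pf = integral of F_R(s) * f_S(s) ds.
    from scipy import stats
    from scipy.integrate import quad

    # Assumed monitored extreme strain (microstrain): Gumbel fitted to monitoring data.
    strain_pdf = stats.gumbel_r(loc=420.0, scale=35.0).pdf
    # Assumed resistance (strain capacity) of the girder detail: normal.
    resistance_cdf = stats.norm(loc=800.0, scale=60.0).cdf

    pf, _ = quad(lambda s: resistance_cdf(s) * strain_pdf(s), 200.0, 1500.0)
    print(f"Failure probability Pf = {pf:.2e}")
    ```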

  10. The memory failures of everyday questionnaire (MFE): internal consistency and reliability.

    PubMed

    Montejo Carrasco, Pedro; Montenegro, Peña Mercedes; Sueiro, Manuel J

    2012-07-01

    The Memory Failures of Everyday Questionnaire (MFE) is one of the most widely-used instruments to assess memory failures in daily life. The original scale has nine response options, making it difficult to apply; we created a three-point scale (0-1-2) with response choices that make it easier to administer. We examined the two versions' equivalence in a sample of 193 participants between 19 and 64 years of age. The test-retest reliability and internal consistency of the version we propose were also computed in a sample of 113 people. Several indicators attest to the two forms' equivalence: the correlation between the items' means (r = .94; p < .001) and the order of the items' frequencies (r = .92; p < .001). However, the correlation between global scores on the two forms was not very high (r = .67; p < .001). The results indicate this new version has adequate reliability and internal consistency (rxx = .83, p < .001; α = .83, p < .001) equivalent to those of the MFE 1-9. The MFE 0-2 provides a brief, simple evaluation, so we recommend it for use in clinical practice as well as research.

  11. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
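
    The expected-failure-count calculation described here follows directly from the power-law (Weibull-intensity) ROCOF: u(t) = (β/θ)(t/θ)^(β−1), so the expected number of failures in [t1, t2] is (t2/θ)^β − (t1/θ)^β. The sketch below uses assumed parameter values and an assumed cost per failure for illustration.

    ```python
    # Expected annual failures and costs for a power-law NHPP (Crow-AMSAA) model.

    def expected_failures(t1: float, t2: float, beta: float, theta: float) -> float:
        return (t2 / theta) ** beta - (t1 / theta) ** beta

    beta, theta = 1.6, 900.0          # beta > 1: the system is deteriorating
    cost_per_failure = 25_000.0       # assumed average cost of a corrective action

    annual_usage = 500.0              # operating hours per year
    for year in range(1, 6):
        t1, t2 = (year - 1) * annual_usage, year * annual_usage
        n = expected_failures(t1, t2, beta, theta)
        print(f"Year {year}: expected failures = {n:5.2f}, "
              f"expected cost = ${n * cost_per_failure:,.0f}")
    ```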

  12. Reliability of peripheral arterial tonometry in patients with heart failure, diabetic nephropathy and arterial hypertension.

    PubMed

    Weisrock, Fabian; Fritschka, Max; Beckmann, Sebastian; Litmeier, Simon; Wagner, Josephine; Tahirovic, Elvis; Radenovic, Sara; Zelenak, Christine; Hashemi, Djawid; Busjahn, Andreas; Krahn, Thomas; Pieske, Burkert; Dinh, Wilfried; Düngen, Hans-Dirk

    2017-08-01

    Endothelial dysfunction plays a major role in cardiovascular diseases and pulse amplitude tonometry (PAT) offers a non-invasive way to assess endothelial dysfunction. However, data about the reliability of PAT in cardiovascular patient populations are scarce. Thus, we evaluated the test-retest reliability of PAT using the natural logarithmic transformed reactive hyperaemia index (LnRHI). Our cohort consisted of 91 patients (mean age: 65 ± 9.7 years, 32% female), who were divided into four groups: those with heart failure with preserved ejection fraction (HFpEF) (n = 25), heart failure with reduced ejection fraction (HFrEF) (n = 22), diabetic nephropathy (n = 21), and arterial hypertension (n = 23). All subjects underwent two separate PAT measurements at a median interval of 7 days (range 4-14 days). LnRHI derived by PAT showed good reliability in subjects with diabetic nephropathy (intra-class correlation (ICC) = 0.863) and satisfactory reliability in patients with both HFpEF (ICC = 0.557) and HFrEF (ICC = 0.576). However, in subjects with arterial hypertension, reliability was poor (ICC = 0.125). We demonstrated that PAT is a reliable technique to assess endothelial dysfunction in adults with diabetic nephropathy, HFpEF or HFrEF. However, in subjects with arterial hypertension, we did not find sufficient reliability, which can possibly be attributed to variations in heart rate and the respective time of the assessments. Clinical Trial Registration Identifier: NCT02299960.

  13. Physical Properties of Granulates Used in Analogue Experiments of Caprock Failure and Sediment Remobilisation

    NASA Astrophysics Data System (ADS)

    Kukowski, N.; Warsitzka, M.; May, F.

    2014-12-01

    Geological systems consisting of a porous reservoir and a low-permeable caprock are prone to hydraulic fracturing if pore pressure rises to the level of the effective stress. Under certain conditions, hydraulic fracturing is associated with sediment remobilisation, e.g. sand injections or pipes, leading to reduced seal capacity of the caprock. In dynamically scaled analogue experiments using granular materials and air pressure, we intend to investigate strain patterns and deformation mechanisms during caprock failure and fluidisation of shallow over-pressured reservoirs. The aim of this study is to improve the understanding of the leakage potential of a sealing formation and the fluidisation potential of a reservoir formation depending on rock properties and effective stress. For reliable interpretation of analogue experiments, physical properties of analogue materials, e.g. frictional strength, cohesion, density, permeability etc., have to be correctly scaled according to those of their natural equivalents. The simulation of caprock requires that the analogue material possesses a low permeability and is capable of both shear failure and tensile failure. In contrast, materials representing the reservoir have to possess high porosity and low shear strength. In order to find suitable analogue materials, we measured the stress-strain behaviour and the permeability of over 25 different types of natural and artificial granular materials, e.g. glass powder, siliceous microspheres, diatomite powder, loess, or plastic granulate. Here, we present data of frictional parameters, compressibility and permeability of these granular materials characterized as a function of sphericity, grain size, and density. The repertoire of different types of granulates facilitates the adjustment of accurate mechanical properties in the analogue experiments. Furthermore, conditions during seal failure and fluidisation can be examined depending on the wide range of varying physical properties.

  14. Test-retest reliability of Yale Physical Activity Survey among older Mexican American adults: a pilot investigation.

    PubMed

    Pennathur, Arunkumar; Magham, Rohini; Contreras, Luis Rene; Dowling, Winifred

    2004-01-01

    The objective of the work reported in this paper is to assess the test-retest reliability of the Yale Physical Activity Survey Total Time, Estimated Energy Expenditure, Activity Dimension Indices, and Activities Check-list in older Mexican American men and women. A convenience-based healthy sample of 49 older Mexican American adults (42 women and 7 men), aged 68 to 80 years and recruited from senior recreation centers, volunteered to participate in this pilot study. Forty-nine older Mexican American adults filled out the Yale Physical Activity Survey for this study. Fifteen (12 women and 3 men) of the 49 volunteers responded twice to the Yale Physical Activity Survey after a 2-week period, and helped assess the test-retest reliability of the Yale Physical Activity Survey. Results indicate that, based on a 2-week test-retest administration, the Yale Physical Activity Survey was found to have moderate (ρI = .424, p < .05) to good reliability (rs = .789, p < .01) for physical activity assessment in older Mexican American adults who responded.

  15. Physically based DC lifetime model for lead zirconate titanate films

    NASA Astrophysics Data System (ADS)

    Garten, Lauren M.; Hagiwara, Manabu; Ko, Song Won; Trolier-McKinstry, Susan

    2017-09-01

    Accurate lifetime predictions for Pb(Zr0.52Ti0.48)O3 thin films are critical for a number of applications, but current reliability models are not consistent with the resistance degradation mechanisms in lead zirconate titanate. In this work, the reliability and lifetime of chemical solution deposited (CSD) and sputtered Pb(Zr0.52Ti0.48)O3 thin films are characterized using highly accelerated lifetime testing (HALT) and leakage current-voltage (I-V) measurements. Temperature dependent HALT results and impedance spectroscopy show activation energies of approximately 1.2 eV for the CSD films and 0.6 eV for the sputtered films. The voltage dependent HALT results are consistent with previous reports, but do not clearly indicate what causes device failure. To understand more about the underlying physical mechanisms leading to degradation, the I-V data are fit to known conduction mechanisms, with Schottky emission having the best-fit and realistic extracted material parameters. Using the Schottky emission equation as a base, a unique model is developed to predict the lifetime under highly accelerated testing conditions based on the physical mechanisms of degradation.

  16. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank

  17. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The
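
    To illustrate the acceleration-function part of such a model, the sketch below uses the Prokopowicz-Vaskas empirical form commonly applied to ceramic capacitor life testing; it is not necessarily the exact function used in this work, and the exponent n and activation energy Ea are assumed values.

    ```python
    # Voltage-temperature acceleration factor for MLCC life projection:
    #   t_use / t_stress = (V_stress / V_use)**n * exp((Ea/k) * (1/T_use - 1/T_stress))
    import math

    K_BOLTZMANN_EV = 8.617e-5   # eV/K

    def mlcc_acceleration_factor(v_use, t_use_c, v_stress, t_stress_c,
                                 n=3.0, ea_ev=1.1):
        t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
        voltage_term = (v_stress / v_use) ** n
        thermal_term = math.exp((ea_ev / K_BOLTZMANN_EV)
                                * (1.0 / t_use_k - 1.0 / t_stress_k))
        return voltage_term * thermal_term

    # Example: accelerated test at 3x rated voltage and 125 C, use at rated voltage and 45 C.
    af = mlcc_acceleration_factor(v_use=6.3, t_use_c=45.0, v_stress=18.9, t_stress_c=125.0)
    print(f"Acceleration factor = {af:,.0f}")

    # A Weibull scale parameter measured under accelerated stress can then be
    # projected to use conditions by multiplying it by this acceleration factor.
    ```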

  18. Long-term reliability study and failure analysis of quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Xie, Feng; Nguyen, Hong-Ky; Leblanc, Herve; Hughes, Larry; Wang, Jie; Miller, Dean J.; Lascola, Kevin

    2017-02-01

    Here we present lifetime test results for 4 groups of quantum cascade lasers (QCLs) under various aging conditions, including an accelerated life test. The total accumulated lifetime exceeds 1.5 million device·hours, making this the largest QCL reliability study ever reported. The longest single-device aging time was 46.5 thousand hours (without failure) in the room-temperature test. Four failures were found in a group of 19 devices subjected to the accelerated life test with a heat-sink temperature of 60 °C and a continuous-wave current of 1 A. Visual inspection of the laser facets of the failed devices revealed a previously unreported phenomenon, manifested as a dark belt of an unknown substance appearing on the facets. Although initially assumed to be contamination from the environment, failure analysis revealed that the dark substance is a thermally induced oxide of InP in the buried-heterostructure semi-insulating layer. When the oxidized material starts to cover the core and block the light emission, it begins to cause the failure of QCLs in the accelerated test. An activation energy of 1.2 eV is derived from the dependence of the failure rate on laser core temperature. With this activation energy, the mean time to failure of the quantum cascade lasers operating at a current density of 5 kA/cm² and a heat-sink temperature of 25 °C is expected to be 809 thousand hours.

  19. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure.

    PubMed

    Mordecai, Yaniv; Dori, Dov

    2017-07-17

    The cyber-physical gap (CPG) is the difference between the 'real' state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to the modeling and analysis of the 1979 Three Mile Island Unit 2 nuclear accident, and show how its meltdown could be mitigated. We use ISO 19450:2015 Object-Process Methodology as our conceptual modeling framework.

  20. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to the inception of intrinsic wear-out failures in high-quality capacitors has become much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) to predict reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.

  1. Reliability based design of the primary structure of oil tankers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casella, G.; Dogliani, M.; Guedes Soares, C.

    1996-12-31

    The present paper describes the reliability analysis carried out for two oil tanker ships having comparable dimensions but different designs. The scope of the analysis was to derive indications on the value of the reliability index obtained for existing, typical, and well-designed oil tankers, as well as to apply the tentative rule-checking formulation developed within the CEC-funded SHIPREL Project. The checking formula was adopted to redesign the midships section of one of the considered ships, upgrading her in order to meet the target failure probability considered in the rule development process. The resulting structure, in view of an upgrading of the steel grade in the central part of the deck, led to a convenient reliability level. The results of the analysis clearly showed that a large scatter presently exists in the design safety levels of ships, even when the Classification Societies' unified requirements are satisfied. A reliability-based approach for the calibration of the rules for the global strength of ships is therefore proposed, in order to assist designers and Classification Societies in the process of producing ships which are more optimized with respect to ensured safety levels. Based on the work reported in the paper, the feasibility and usefulness of a reliability-based approach in the development of ship longitudinal strength requirements has been demonstrated.

  2. Survey of critical failure events in on-chip interconnect by fault tree analysis

    NASA Astrophysics Data System (ADS)

    Yokogawa, Shinji; Kunii, Kyousuke

    2018-07-01

    In this paper, a framework based on reliability physics is proposed for applying fault tree analysis (FTA) to the on-chip interconnect system of a semiconductor. By integrating expert knowledge and experience regarding the possibilities of failure of basic events, critical issues of on-chip interconnect reliability are evaluated by FTA. In particular, FTA is used to identify the minimal cut sets with high risk priority. Critical events affecting on-chip interconnect reliability are identified and discussed from the viewpoint of long-term reliability assessment. The impact of moisture is evaluated as an external event.
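
    A minimal sketch of how minimal cut sets from such a fault tree can be ranked and combined is shown below. The basic events, probabilities, and cut sets are invented placeholders, not the paper's FTA results.

    ```python
    # Ranking minimal cut sets and bounding the top-event probability.
    from functools import reduce
    from operator import mul

    # Assumed per-lifetime probabilities of basic events.
    basic_events = {
        "EM_void_in_via":        1e-4,
        "TDDB_low_k_dielectric": 5e-5,
        "moisture_ingress":      2e-3,
        "barrier_layer_defect":  8e-4,
        "thermal_cycling_crack": 3e-4,
    }

    # Minimal cut sets: the top event occurs if every event in any one set occurs.
    minimal_cut_sets = [
        {"EM_void_in_via"},
        {"moisture_ingress", "barrier_layer_defect"},
        {"TDDB_low_k_dielectric", "thermal_cycling_crack"},
    ]

    def cut_set_probability(cut):
        return reduce(mul, (basic_events[e] for e in cut), 1.0)

    # Rank cut sets by probability (a simple risk-priority ranking).
    for cut in sorted(minimal_cut_sets, key=cut_set_probability, reverse=True):
        print(f"{cut_set_probability(cut):.2e}  {sorted(cut)}")

    # Min-cut upper bound on the top-event probability.
    p_top = 1.0 - reduce(mul, (1.0 - cut_set_probability(c) for c in minimal_cut_sets), 1.0)
    print(f"Top event probability (upper bound) = {p_top:.2e}")
    ```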

  3. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those

  4. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  5. International physical activity questionnaire: reliability and validity of the Turkish version.

    PubMed

    Saglam, Melda; Arikan, Hulya; Savci, Sema; Inal-Ince, Deniz; Bosnak-Guclu, Meral; Karabulut, Erdem; Tokgozoglu, Lale

    2010-08-01

    Physical inactivity is a global problem which is related to many chronic health disorders. Physical activity scales which allow cross-cultural comparisons have been developed. The goal was to assess the reliability and validity of a Turkish version of the International Physical Activity Questionnaire (IPAQ). 1,097 university students (721 women, 376 men; ages 18-32) volunteered. Short and long forms of the IPAQ gave good agreement and comparable 1-wk. test-retest reliabilities. Caltrac accelerometer data were compared with IPAQ scores in 80 participants with good agreement for short and long forms. Turkish versions of the IPAQ short and long forms are reliable and valid in assessment of physical activity.

  6. Reliability and failure modes of implant-supported zirconium-oxide fixed dental prostheses related to veneering techniques

    PubMed Central

    Baldassarri, Marta; Zhang, Yu; Thompson, Van P.; Rekow, Elizabeth D.; Stappert, Christian F. J.

    2011-01-01

    Summary Objectives To compare fatigue failure modes and reliability of hand-veneered and over-pressed implant-supported three-unit zirconium-oxide fixed dental prostheses (FDPs). Methods Sixty-four custom-made zirconium-oxide abutments (n=32/group) and thirty-two zirconium-oxide FDP-frameworks were CAD/CAM manufactured. Frameworks were veneered with hand-built-up or over-pressed porcelain (n=16/group). Step-stress-accelerated-life-testing (SSALT) was performed in water applying a distributed contact load at the buccal cusp-pontic-area. Post-failure examinations were carried out using optical (polarized-reflected-light) and scanning electron microscopy (SEM) to visualize crack propagation and failure modes. Reliability was compared using cumulative-damage step-stress analysis (Alta-7-Pro, Reliasoft). Results Crack propagation was observed in the veneering porcelain during fatigue. The majority of zirconium-oxide FDPs demonstrated porcelain chipping as the dominant failure mode. Nevertheless, fracture of the zirconium-oxide frameworks was also observed. Over-pressed FDPs failed earlier, at a mean failure load of 696 ± 149 N, relative to hand-veneered FDPs at 882 ± 61 N (profile I). Weibull stress-number of cycles-unreliability curves were generated. The reliability (2-sided at 90% confidence bounds) for a 400 N load at 100K cycles indicated values of 0.84 (0.98-0.24) for the hand-veneered FDPs and 0.50 (0.82-0.09) for their over-pressed counterparts. Conclusions Both zirconium-oxide FDP systems were resistant under accelerated-life-time testing. Over-pressed specimens were more susceptible to fatigue loading, with earlier veneer chipping. PMID:21557985
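
    For readers unfamiliar with how a point such as "reliability of 0.84 at 400 N and 100K cycles" is read off such curves, the Python sketch below evaluates a two-parameter Weibull reliability function; the shape and scale values are invented for illustration and are not the fitted SSALT parameters.

      import math

      def weibull_reliability(cycles, shape, scale):
          """R(n) = exp(-(n / scale) ** shape) for a Weibull cycles-to-failure model."""
          return math.exp(-((cycles / scale) ** shape))

      # Hypothetical parameters for a single load profile (not the study's fit)
      print(round(weibull_reliability(100_000, shape=1.5, scale=450_000), 3))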

  7. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper identifies reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper discusses hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion discusses the high-level FPGA programming languages and software/code reliability growth. The radiation portion discusses FPGA susceptibility to space environment radiation.

  8. Reliability of smartphone-based gait measurements for quantification of physical activity/inactivity levels.

    PubMed

    Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota

    2017-11-25

    Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) relationship between actual step count and step count detected by sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and driving. On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504 and the relative error ratios of the sensor-detected step count to the actual step count were ±48.7%-49.4%. The level of agreement between the same models was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. These results suggest the need for appropriate regulation of step counts measured by sensors, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation.
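
    As a concrete reference for the agreement statistic used above, the Python sketch below computes ICC(3,1) (two-way mixed effects, consistency, single measurement) from a subjects-by-devices matrix; the step-count data are made up, and the formula is the standard one rather than code from the study.

      import numpy as np

      def icc_3_1(x):
          """ICC(3,1) for an (n subjects x k devices) matrix of measurements."""
          x = np.asarray(x, dtype=float)
          n, k = x.shape
          grand = x.mean()
          ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
          ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between devices
          ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
          ms_rows = ss_rows / (n - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

      # Hypothetical 6-minute-walk step counts: 4 walkers x 2 phones of one model
      print(round(icc_3_1([[652, 655], [700, 696], [588, 590], [731, 735]]), 3))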

  9. Reliability and Validity of the Evidence-Based Practice Confidence (EPIC) Scale

    ERIC Educational Resources Information Center

    Salbach, Nancy M.; Jaglal, Susan B.; Williams, Jack I.

    2013-01-01

    Introduction: The reliability, minimal detectable change (MDC), and construct validity of the evidence-based practice confidence (EPIC) scale were evaluated among physical therapists (PTs) in clinical practice. Methods: A longitudinal mail survey was conducted. Internal consistency and test-retest reliability were estimated using Cronbach's alpha…

  10. Failure Prevention of Hydraulic System Based on Oil Contamination

    NASA Astrophysics Data System (ADS)

    Singh, M.; Lathkar, G. S.; Basu, S. K.

    2012-07-01

    Oil contamination is the major source of failure and wear of hydraulic system components. According to the literature, approximately 70 % of hydraulic system failures are caused by oil contamination. Hence, to operate a hydraulic system reliably, the hydraulic oil should be kept in good condition. This requires a proper `Contamination Management System' which involves monitoring of various parameters such as oil viscosity, oil temperature and contamination level. A study has been carried out on a vehicle-mounted, hydraulically operated system used for articulation of a heavy article after the platform is levelled with outrigger cylinders. It is observed that proper monitoring of the contamination level gives a considerable increase in reliability, economy of operation and service life, and prevents frequent failures of the hydraulic system.

  11. Physical nature of longevity of light actinides in dynamic failure phenomenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchaev, A. Ya., E-mail: uchaev@expd.vniief.ru; Punin, V. T.; Selchenkova, N. I.

    It is shown in this work that the physical nature of the longevity of light actinides under extreme conditions, in a range of nonequilibrium states of t ∼ 10⁻⁶–10⁻¹⁰ s, is determined by the time needed for the formation of a critical concentration of a cascade of failure centers, which changes the connectivity of the body. These centers form a percolation cluster. The longevity is composed of the waiting time t_w for the appearance of failure centers and the clusterization time t_c of the cascade of failure centers, when connectivity in the system of failure centers and the percolation cluster arise. A unique mechanism of the dynamic failure process, a unique order parameter, and an equal dimensionality of the space in which the process occurs determine the physical nature of the longevity of metals, including fissionable materials.
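
    Restating the decomposition above in compact form (a restatement in the abstract's notation, not an additional result):

      t = t_w + t_c, \qquad t \sim 10^{-6}\text{--}10^{-10}\ \mathrm{s}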

  12. European Symposium on Reliability of Electron Devices, Failure Physics and Analysis (5th)

    DTIC Science & Technology

    1994-10-07

    Programme fragment (sessions and contributors recoverable from the extracted table of contents): characterisation and modelling; hot carriers; oxide states; power devices, including a workshop on the reliability of power semiconductors for traction applications; contributions from Sandia National Laboratories, Albuquerque, New Mexico, USA.

  13. Electric propulsion reliability: Statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates

    NASA Astrophysics Data System (ADS)

    Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.

    2017-10-01

    With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters. It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both
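
    As a minimal illustration of the kind of summary statistics quoted above (anomaly rates and MTTA), the Python sketch below computes a crude per-unit-year anomaly rate and a naive mean time to anomaly; the numbers are invented, and a real analysis must account for censored units, which this toy deliberately ignores.

      def anomaly_rate(n_anomalies, total_unit_years):
          """Anomalies per unit-year of accumulated on-orbit exposure."""
          return n_anomalies / total_unit_years

      def naive_mtta(times_to_anomaly_years):
          """Mean of observed times to anomaly; biased if censored units are dropped."""
          return sum(times_to_anomaly_years) / len(times_to_anomaly_years)

      # Hypothetical tallies (not from the dataset described above)
      print(round(anomaly_rate(12, 830.0), 4))        # anomalies per thruster-year
      print(naive_mtta([0.3, 1.8, 4.2, 6.5]))         # years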

  14. Reliability and Validity of the Transport and Physical Activity Questionnaire (TPAQ) for Assessing Physical Activity Behaviour

    PubMed Central

    Adams, Emma J.; Goad, Mary; Sahlqvist, Shannon; Bull, Fiona C.; Cooper, Ashley R.; Ogilvie, David

    2014-01-01

    Background No current validated survey instrument allows a comprehensive assessment of both physical activity and travel behaviours for use in interdisciplinary research on walking and cycling. This study reports on the test-retest reliability and validity of physical activity measures in the transport and physical activity questionnaire (TPAQ). Methods The TPAQ assesses time spent in different domains of physical activity and using different modes of transport for five journey purposes. Test-retest reliability of eight physical activity summary variables was assessed using intra-class correlation coefficients (ICC) and Kappa scores for continuous and categorical variables respectively. In a separate study, the validity of three survey-reported physical activity summary variables was assessed by computing Spearman correlation coefficients using accelerometer-derived reference measures. The Bland-Altman technique was used to determine the absolute validity of survey-reported time spent in moderate-to-vigorous physical activity (MVPA). Results In the reliability study, ICC for time spent in different domains of physical activity ranged from fair to substantial for walking for transport (ICC = 0.59), cycling for transport (ICC = 0.61), walking for recreation (ICC = 0.48), cycling for recreation (ICC = 0.35), moderate leisure-time physical activity (ICC = 0.47), vigorous leisure-time physical activity (ICC = 0.63), and total physical activity (ICC = 0.56). The proportion of participants estimated to meet physical activity guidelines showed acceptable reliability (k = 0.60). In the validity study, comparison of survey-reported and accelerometer-derived time spent in physical activity showed strong agreement for vigorous physical activity (r = 0.72, p<0.001), fair but non-significant agreement for moderate physical activity (r = 0.24, p = 0.09) and fair agreement for MVPA (r = 0.27, p = 0.05). Bland-Altman analysis

  15. Reliability and validity of the transport and physical activity questionnaire (TPAQ) for assessing physical activity behaviour.

    PubMed

    Adams, Emma J; Goad, Mary; Sahlqvist, Shannon; Bull, Fiona C; Cooper, Ashley R; Ogilvie, David

    2014-01-01

    No current validated survey instrument allows a comprehensive assessment of both physical activity and travel behaviours for use in interdisciplinary research on walking and cycling. This study reports on the test-retest reliability and validity of physical activity measures in the transport and physical activity questionnaire (TPAQ). The TPAQ assesses time spent in different domains of physical activity and using different modes of transport for five journey purposes. Test-retest reliability of eight physical activity summary variables was assessed using intra-class correlation coefficients (ICC) and Kappa scores for continuous and categorical variables respectively. In a separate study, the validity of three survey-reported physical activity summary variables was assessed by computing Spearman correlation coefficients using accelerometer-derived reference measures. The Bland-Altman technique was used to determine the absolute validity of survey-reported time spent in moderate-to-vigorous physical activity (MVPA). In the reliability study, ICC for time spent in different domains of physical activity ranged from fair to substantial for walking for transport (ICC = 0.59), cycling for transport (ICC = 0.61), walking for recreation (ICC = 0.48), cycling for recreation (ICC = 0.35), moderate leisure-time physical activity (ICC = 0.47), vigorous leisure-time physical activity (ICC = 0.63), and total physical activity (ICC = 0.56). The proportion of participants estimated to meet physical activity guidelines showed acceptable reliability (k = 0.60). In the validity study, comparison of survey-reported and accelerometer-derived time spent in physical activity showed strong agreement for vigorous physical activity (r = 0.72, p<0.001), fair but non-significant agreement for moderate physical activity (r = 0.24, p = 0.09) and fair agreement for MVPA (r = 0.27, p = 0.05). Bland-Altman analysis showed a mean overestimation of

  16. Reliability of smartphone-based gait measurements for quantification of physical activity/inactivity levels

    PubMed Central

    Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota

    2017-01-01

    Objectives: Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. Methods: To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) relationship between actual step count and step count detected by sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and driving. Results: On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504 and the relative error ratios of the sensor-detected step count to the actual step count were ±48.7%-49.4%. The level of agreement between the same models was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. Conclusions: These results suggest the need for appropriate regulation of step counts measured by sensors, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation. PMID:28835575

  17. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure

    PubMed Central

    2017-01-01

    The cyber-physical gap (CPG) is the difference between the ‘real’ state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer’s ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to modeling and analysis of the 1979 Three Mile Island 2 nuclear accident, and show how its meltdown could be mitigated. We use ISO 19450:2015 Object-Process Methodology as our conceptual modeling framework. PMID:28714910

  18. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, such a situation rarely occurs in practical problems. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
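
    The Python sketch below is a generic alpha-cut interval calculation for series and parallel reliability with triangular fuzzy component reliabilities; it is a simplified stand-in for intuition only, not the authors' intuitionistic-fuzzy, nonlinear-programming algorithm.

      def alpha_cut(tri, alpha):
          """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
          a, b, c = tri
          return (a + alpha * (b - a), c - alpha * (c - b))

      def series(intervals):
          """Series system: R = product of component reliabilities (monotone increasing)."""
          lo = hi = 1.0
          for l, h in intervals:
              lo, hi = lo * l, hi * h
          return (lo, hi)

      def parallel(intervals):
          """Parallel system: R = 1 - product of component unreliabilities."""
          q_lo = q_hi = 1.0
          for l, h in intervals:
              q_lo, q_hi = q_lo * (1 - l), q_hi * (1 - h)
          return (1 - q_lo, 1 - q_hi)

      # Two hypothetical components, evaluated at the alpha = 0.5 cut
      comps = [alpha_cut((0.90, 0.95, 0.99), 0.5), alpha_cut((0.85, 0.90, 0.96), 0.5)]
      print(series(comps), parallel(comps))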

  19. Multi-mode reliability-based design of horizontal curves.

    PubMed

    Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed

    2016-08-01

    Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance. Copyright © 2016 Elsevier Ltd. All rights reserved.
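
    A multi-mode (series-system) probability of noncompliance is the probability of the union of the individual noncompliance events. The Python sketch below estimates it by Monte Carlo with two toy Gaussian safety margins standing in for sight distance and skidding; the distributions are invented and are not the case-study models.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000

      # Toy safety margins (compliance when margin >= 0); assumed distributions only
      g_sight = rng.normal(loc=15.0, scale=8.0, size=n)    # available - required sight distance (m)
      g_skid = rng.normal(loc=0.06, scale=0.03, size=n)    # supplied - demanded side friction

      p_sight = np.mean(g_sight < 0)
      p_skid = np.mean(g_skid < 0)
      p_system = np.mean((g_sight < 0) | (g_skid < 0))     # union of the noncompliance modes

      print(f"P_nc sight={p_sight:.4f}  skid={p_skid:.4f}  system={p_system:.4f}")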

  20. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, due to the transfer of corrosive fluid or gas and interaction with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated based on the ILI data through the Bayesian updating method with a Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to consider newly generated defects since the last inspection. Results of this part of the study show that both the depth and length growth models can predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (or sub-system), where each sub-system is considered as a series system of detected and newly generated defects within that sub
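
    To illustrate how such a power-law depth growth model feeds a failure-probability estimate, the Python sketch below propagates assumed (not posterior) parameter distributions through d(t) = k (t - t0)^n and checks a small-leak-type criterion of depth exceeding 80% of wall thickness.

      import numpy as np

      rng = np.random.default_rng(1)
      m = 100_000
      wall = 9.5                      # mm, assumed wall thickness
      t, t0 = 25.0, 5.0               # years in service / assumed defect initiation time

      # Illustrative parameter uncertainty (stand-ins for the Bayesian posteriors)
      k = rng.lognormal(mean=np.log(0.25), sigma=0.3, size=m)   # mm / year**n
      n = rng.normal(loc=0.9, scale=0.1, size=m)                # growth exponent

      depth = k * (t - t0) ** n
      print("P(small leak) ~", np.mean(depth > 0.8 * wall))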

  1. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper dwells upon urgent issues of evaluating the impact of the actions of operators of complex technological systems on safe operation, considering the application of condition monitoring systems to elements and sub-systems of petrochemical production facilities. The main task of the research is to distinguish factors and criteria describing monitoring system properties that would allow the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery to be evaluated, and to find objective criteria for the monitoring system class that take the human factor into account. On the basis of real-time condition monitoring concepts of sudden failure skipping risk and the static and dynamic errors of monitoring systems, one may solve the task of evaluating the impact of personnel qualification on monitoring system operation, in terms of errors in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. The operator is considered as a part of the technological system. Personnel behavior is usually a combination of the following parameters: input signal (information perceiving), reaction (decision making), and response (decision implementing). Based on several studies of the behavior of nuclear power station operators in the USA, Italy and other countries, as well as on research conducted by Russian scientists, the required data on operator reliability were selected for the analysis of operator behavior with diagnostics and monitoring systems at technological facilities. The calculations revealed that for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, considering all related factors of data on the reliability of information perception, decision-making, and reaction, is 0.037, in case when all the facilities and error probability are under
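
    The Python sketch below shows one assumed way of combining monitoring-system error bounds with stage-wise operator error probabilities into an overall failure-skipping probability; the combination rule and the operator values are illustrative assumptions, not the paper's model, although the system error bounds echo the figures quoted above.

      # System error bounds quoted above; operator stage probabilities are invented
      p_static, p_dynamic = 0.01, 0.001
      p_perceive, p_decide, p_act = 0.01, 0.01, 0.005

      # Monitoring system misses the failure if either error occurs (assumed independent)
      p_system_miss = 1 - (1 - p_static) * (1 - p_dynamic)

      # Operator misses a correctly flagged failure if any stage fails (assumed independent)
      p_operator_miss = 1 - (1 - p_perceive) * (1 - p_decide) * (1 - p_act)

      # Failure is "skipped" if the system misses it, or it is flagged but the operator fails
      p_skip = p_system_miss + (1 - p_system_miss) * p_operator_miss
      print(round(p_skip, 4))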

  2. Reliability and validity of Web-SPAN, a web-based method for assessing weight status, diet and physical activity in youth.

    PubMed

    Storey, K E; McCargar, L J

    2012-02-01

    Web-based surveys are becoming increasingly popular. The present study aimed to assess the reliability and validity of the Web-Survey of Physical Activity and Nutrition (Web-SPAN) for self-report of height and weight, diet and physical activity by youth. School children aged 11-15 years (grades 7-9; n=459) participated in the school-based research (boys, n=225; girls, n=233; mean age, 12.8 years). Students completed Web-SPAN (self-administered) twice and participated in on-site school assessments [height, weight, 3-day food/pedometer record, Physical Activity Questionnaire for Older Children (PAQ-C), shuttle run]. Intraclass (ICC) and Pearson's correlation coefficients and paired samples t-tests were used to assess the test-retest reliability of Web-SPAN and to compare Web-SPAN with the on-site assessments. Test-retest reliabilities for height (ICC=0.90), weight (ICC=0.98) and the PAQ-C (ICC=0.79) were high, whereas correlations for nutrients were not as strong (ICC=0.37-0.64). There were no differences between Web-SPAN times 1 and 2 for height and weight, although there were differences for the PAQ-C and most nutrients. Web-SPAN was strongly correlated with the on-site assessments, including height (ICC=0.88), weight (ICC=0.93) and the PAQ-C (ICC=0.70). Mean differences for height and the PAQ-C were not significant, whereas mean differences for weight were significant, resulting in an underestimation of overweight/obesity prevalence (84% agreement). Correlations for nutrients were in the range 0.24-0.40; mean differences were small but generally significantly different. Correlations were weak between the web-based PAQ-C and 3-day pedometer record (r=0.28) and 20-m shuttle run (r=0.28). Web-SPAN is a time- and cost-effective method that can be used to assess the diet and physical activity status of youth in large cross-sectional studies and to assess group trends (weight status). © 2011 The Authors. Journal of Human Nutrition and Dietetics © 2011 The

  3. Reliable communication in the presence of failures

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses of the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by our approach.

  4. Limit states and reliability-based pipeline design. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure mode combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads which act on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. This work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3) and provides results of the reliability analyses that were undertaken in order to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples in order to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice. It has been structured so that, in future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where, to facilitate understanding of the LSD method, they have been included with the text.
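
    As a reminder of the basic load-and-resistance-factor form such procedures take, the Python sketch below performs a generic factored check; the factor values and load effects are invented and are not the calibrated factors from the report.

      def lrfd_check(resistance, loads, phi, gammas):
          """Pass when factored resistance >= sum of factored load effects."""
          return phi * resistance >= sum(g * q for g, q in zip(gammas, loads))

      # Hypothetical check: one resistance, two load effects (e.g. pressure + bending)
      print(lrfd_check(resistance=550.0, loads=[220.0, 130.0],
                       phi=0.9, gammas=[1.25, 1.5]))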

  5. Physical Exercise and Patients with Chronic Renal Failure: A Meta-Analysis.

    PubMed

    Qiu, Zhenzhen; Zheng, Kai; Zhang, Haoxiang; Feng, Ji; Wang, Lizhi; Zhou, Hao

    2017-01-01

    Chronic renal failure is a severe clinical problem which has a significant socioeconomic impact worldwide, and hemodialysis is an important way to maintain patients' health state, but it is difficult to improve in a short time. Considering this, the aim of our research is to update and evaluate the effects of exercise on the health of patients with chronic renal failure. Databases were searched for the relevant studies in English or Chinese, and the association between physical exercise and the health state of patients with chronic renal failure was investigated. A random-effects model was used to compare physical function and capacity in exercise and control groups. Exercise is helpful in ameliorating blood pressure in patients with renal failure and significantly reduces VO2 in patients with renal failure. The results of subgroup analyses show that, in the age group >50, physical activity can significantly reduce blood pressure in patients with renal failure. An activity program containing warm-up, strength, and aerobic exercises has benefits for blood pressure among sick people and improves their maximal oxygen consumption level. These can help patients in physical function and aerobic capacity and may give them further benefits.

  6. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.
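
    As a greatly simplified stand-in for the three-dimensional fracture-mechanics lifing models mentioned above, the Python sketch below integrates a one-dimensional Paris-law crack growth relation to a critical size; the material constants and stress range are illustrative assumptions.

      import math

      def cycles_to_failure(a0, a_crit, d_stress, C=1e-12, m=3.0, Y=1.12, dN=1000):
          """Integrate da/dN = C * dK**m with dK = Y * d_stress * sqrt(pi * a)."""
          a, n = a0, 0
          while a < a_crit:
              dK = Y * d_stress * math.sqrt(math.pi * a)   # MPa * sqrt(m)
              a += C * dK ** m * dN                        # crack extension over dN cycles
              n += dN
          return n

      # Hypothetical case: 0.5 mm initial flaw, 5 mm critical size, 300 MPa stress range
      print(cycles_to_failure(a0=0.5e-3, a_crit=5e-3, d_stress=300.0))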

  7. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    DOE PAGES

    Guthrie, Michael A.

    2013-01-01

    A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
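
    For the linear-Gaussian special case, the Hasofer-Lind index reduces to the mean of the limit state function divided by its standard deviation; the Python sketch below evaluates that ratio for an assumed capacity-minus-demand margin in strain-energy terms (the numbers are illustrative, not from the examples above).

      import math

      def hasofer_lind_linear(a0, a, mu, sigma):
          """Reliability index for g(X) = a0 + sum(a_i X_i) with independent normal X_i."""
          mean_g = a0 + sum(ai * mi for ai, mi in zip(a, mu))
          std_g = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
          return mean_g / std_g

      # g = capacity (strain energy at failure) - demand (peak modal strain energy)
      print(round(hasofer_lind_linear(a0=0.0, a=[1.0, -1.0],
                                      mu=[12.0, 7.0], sigma=[1.5, 2.0]), 2))  # -> 2.0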

  8. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

    Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how BME capacitors' construction and microstructure affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 µm, which is much less than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures, characterized by avalanche breakdown, were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10⁵ years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25V. This may suggest that the number of grains per dielectric layer is more critical than the

  9. Validity and reliability of a self-report instrument to assess social support and physical environmental correlates of physical activity in adolescents.

    PubMed

    Reimers, Anne K; Jekauc, Darko; Mess, Filip; Mewes, Nadine; Woll, Alexander

    2012-08-29

    The purpose of this study was to examine the internal consistency, test-retest reliability, construct validity and predictive validity of a new German self-report instrument to assess the influence of social support and the physical environment on physical activity in adolescents. Based on theoretical considerations, the short scales on social support and physical environment were developed and cross-validated in two independent study samples of 9- to 17-year-old girls and boys. The longitudinal sample of Study I (n = 196) was recruited from a German comprehensive school, and subjects in this study completed the questionnaire twice with a between-test interval of seven days. Cronbach's alphas were computed to determine the internal consistency of the factors. Test-retest reliability of the latent factors was assessed using intraclass correlation coefficients. Factorial validity of the scales was assessed using principal components analysis. Construct validity was determined using a cross-validation technique by performing confirmatory factor analysis with the independent nationwide cross-sectional sample of Study II (n = 430). Correlations between factors and three measures of physical activity (objectively measured moderate-to-vigorous physical activity (MVPA), self-reported habitual MVPA and self-reported recent MVPA) were calculated to determine the predictive validity of the instrument. Construct validity of the social support scale (two factors: parental support and peer support) and the physical environment scale (four factors: convenience, public recreation facilities, safety and private sport providers) was shown. Both scales had moderate test-retest reliability. The factors of the social support scale also had good internal consistency and predictive validity. Internal consistency and predictive validity of the physical environment scale were low to acceptable. The results of this study indicate moderate to good reliability and construct validity of the

  10. Field reliability of Ricor microcoolers

    NASA Astrophysics Data System (ADS)

    Pundak, N.; Porat, Z.; Barak, M.; Zur, Y.; Pasternak, G.

    2009-05-01

    Over the past 25 years Ricor has fielded in excess of 50,000 Stirling cryocoolers, of which approximately 30,000 units are of the micro integral rotary-driven type. The statistical population of the fielded units is counted in hundreds to thousands per application category. In contrast to MTTF values gathered and presented on the basis of standard reliability demonstration tests, where the failure of the weakest component dictates the end of product life, field reliability, where design and workmanship failures are counted and considered, is usually reported as the number of failures per million hours of operation. These values are important and relevant to the prediction of service capabilities and planning.

  11. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
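
    The symmetric rank-one (SR1) update mentioned above replaces explicit Hessian evaluation with a cheap correction built from the step and the gradient change; the Python sketch below shows the generic formula with a standard skip rule, not the article's full RBDO loop.

      import numpy as np

      def sr1_update(B, s, y, eps=1e-8):
          """SR1 update of Hessian approximation B from step s and gradient change y."""
          r = y - B @ s
          denom = r @ s
          if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
              return B                       # skip ill-conditioned updates
          return B + np.outer(r, r) / denom

      # Toy update starting from the identity (values are arbitrary)
      B = sr1_update(np.eye(2), np.array([0.1, -0.2]), np.array([0.4, -0.1]))
      print(B)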

  12. Validity and reliability of a video questionnaire to assess physical function in older adults.

    PubMed

    Balachandran, Anoop; N Verduin, Chelsea; Potiaumpai, Melanie; Ni, Meng; Signorile, Joseph F

    2016-08-01

    Self-report questionnaires are widely used to assess physical function in older adults. However, they often lack a clear frame of reference and hence interpreting and rating task difficulty levels can be problematic for the responder. Consequently, the usefulness of traditional self-report questionnaires for assessing higher-level functioning is limited. Video-based questionnaires can overcome some of these limitations by offering a clear and objective visual reference for the performance level against which the subject is to compare his or her perceived capacity. Hence the purpose of the study was to develop and validate a novel, video-based questionnaire to assess physical function in older adults independently living in the community. A total of 61 community-living adults, 60 years or older, were recruited. To examine validity, 35 of the subjects completed the video questionnaire and two types of physical performance tests: a test of instrumental activity of daily living (IADL) included in the Short Physical Functional Performance battery (PFP-10), and a composite of 3 performance tests (30-s chair stand, single-leg balance and usual gait speed). To ascertain reliability, two-week test-retest reliability was assessed in the remaining 26 subjects who did not participate in validity testing. The video questionnaire showed a moderate correlation with the IADLs (Spearman rho=0.64, p<0.001; 95% CI (0.4, 0.8)), and a lower correlation with the composite score of physical performance tests (Spearman rho=0.49, p<0.01; 95% CI (0.18, 0.7)). The test-retest assessment yielded an intra-class correlation (ICC) of 0.87 (p<0.001; 95% CI (0.70, 0.94)) and a Cronbach's alpha of 0.89, demonstrating good reliability and internal consistency. Our results show that the video questionnaire developed to evaluate physical function in community-living older adults is a valid and reliable assessment tool; however, further validation is needed for definitive conclusions. Copyright © 2016

  13. Improving the Validity and Reliability of a Health Promotion Survey for Physical Therapists

    PubMed Central

    Stephens, Jaca L.; Lowman, John D.; Graham, Cecilia L.; Morris, David M.; Kohler, Connie L.; Waugh, Jonathan B.

    2013-01-01

    Purpose Physical therapists (PTs) have a unique opportunity to intervene in the area of health promotion. However, no instrument has been validated to measure PTs’ views on health promotion in physical therapy practice. The purpose of this study was to evaluate the content validity and test-retest reliability of a health promotion survey designed for PTs. Methods An expert panel of PTs assessed the content validity of “The Role of Health Promotion in Physical Therapy Survey” and provided suggestions for revision. Item content validity was assessed using the content validity ratio (CVR) as well as the modified kappa statistic. Therapists then participated in the test-retest reliability assessment of the revised health promotion survey, which was assessed using a weighted kappa statistic. Results Based on feedback from the expert panelists, significant revisions were made to the original survey. The expert panel reached at least a majority consensus agreement for all items in the revised survey and the survey-CVR improved from 0.44 to 0.66. Only one item on the revised survey had substantial test-retest agreement, with 55% of the items having moderate agreement and 43% poor agreement. Conclusions All items on the revised health promotion survey demonstrated at least fair validity, but few items had reasonable test-retest reliability. Further modifications should be made to strengthen the validity and improve the reliability of this survey. PMID:23754935
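
    For reference, the item-level content validity ratio used above is Lawshe's CVR; the Python sketch below computes it for a hypothetical panel count (the numbers are not from the study).

      def cvr(n_essential, n_panelists):
          """Lawshe's content validity ratio: (n_e - N/2) / (N/2), in [-1, 1]."""
          half = n_panelists / 2
          return (n_essential - half) / half

      print(round(cvr(n_essential=7, n_panelists=9), 2))   # -> 0.56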

  14. Reliable Control Using Disturbance Observer and Equivalent Transfer Function for Position Servo System in Current Feedback Loop Failure

    NASA Astrophysics Data System (ADS)

    Ishikawa, Kaoru; Nakamura, Taro; Osumi, Hisashi

    A reliable control method is proposed for a multiple-loop control system. After a feedback loop failure, such as when a sensor breaks down, the control system becomes unstable and exhibits large fluctuations even if it has a disturbance observer. To cope with this problem, the proposed method uses an equivalent transfer function (ETF) as active redundancy compensation after the loop failure. The ETF is designed so that it does not change the transfer function of the whole system before and after the loop failure. In this paper, the characteristics of the reliable control system that uses an ETF and a disturbance observer are examined through an experiment using a DC servo motor with a current feedback loop failure in the position servo system.

  15. Reliability and Validity of the Physical Education Activities Scale

    ERIC Educational Resources Information Center

    Thomason, Diane L.; Feng, Du

    2016-01-01

    Background: Measuring adolescent perceptions of physical education (PE) activities is necessary in understanding determinants of school PE activity participation. This study assessed reliability and validity of the Physical Education Activities Scale (PEAS), a 41-item visual analog scale measuring high school adolescent perceptions of school PE…

  16. Reliable classification of facial phenotypic variation in craniofacial microsomia: a comparison of physical exam and photographs.

    PubMed

    Birgfeld, Craig B; Heike, Carrie L; Saltzman, Babette S; Leroux, Brian G; Evans, Kelly N; Luquetti, Daniela V

    2016-03-31

    Craniofacial microsomia (CFM) is a common congenital condition for which children receive longitudinal, multidisciplinary team care. However, little is known about the etiology of craniofacial microsomia and few outcome studies have been published. In order to facilitate large, multicenter studies in craniofacial microsomia, we assessed the reliability of phenotypic classification based on photographs by comparison with direct physical examination. Thirty-nine children with craniofacial microsomia underwent a physical examination and photography according to a standardized protocol. Three clinicians completed ratings during the physical examination and, at least a month later, using the respective photographs for each participant. We used descriptive statistics for participant characteristics and intraclass correlation coefficients (ICCs) to assess reliability. The agreement between ratings on photographs and physical exam was greater than 80 % for all 15 categories included in the analysis. The ICC estimates were higher than 0.6 for most features. Features with the highest ICC included: presence of epibulbar dermoids, ear abnormalities, and colobomas (ICC 0.85, 0.81, and 0.80, respectively). Orbital size, presence of pits, tongue abnormalities, and strabismus had the lowest ICC values (0.17 or less). There was not a strong tendency for either type of rating, physical exam or photograph, to be more likely to designate a feature as abnormal. The agreement between photographs and physical exam regarding the presence of a prior surgery was greater than 90 % for most features. Our results suggest that categorization of facial phenotype in children with CFM based on photographs is reliable relative to physical examination for most facial features.

  17. Physical Exercise and Patients with Chronic Renal Failure: A Meta-Analysis

    PubMed Central

    Qiu, Zhenzhen; Zheng, Kai; Zhang, Haoxiang; Feng, Ji; Wang, Lizhi

    2017-01-01

    Chronic renal failure is a severe clinical problem which has a significant socioeconomic impact worldwide, and hemodialysis is an important way to maintain patients' health state, but it is difficult to improve in a short time. Considering this, the aim of our research is to update and evaluate the effects of exercise on the health of patients with chronic renal failure. Databases were searched for the relevant studies in English or Chinese, and the association between physical exercise and the health state of patients with chronic renal failure was investigated. A random-effects model was used to compare physical function and capacity in exercise and control groups. Exercise is helpful in ameliorating blood pressure in patients with renal failure and significantly reduces VO2 in patients with renal failure. The results of subgroup analyses show that, in the age group >50, physical activity can significantly reduce blood pressure in patients with renal failure. An activity program containing warm-up, strength, and aerobic exercises has benefits for blood pressure among sick people and improves their maximal oxygen consumption level. These can help patients in physical function and aerobic capacity and may give them further benefits. PMID:28316986

  18. The reliability, validity, and feasibility of physical activity measurement in adults with traumatic brain injury: an observational study.

    PubMed

    Hassett, Leanne; Moseley, Anne; Harmer, Alison; van der Ploeg, Hidde P

    2015-01-01

    To determine the reliability and validity of the Physical Activity Scale for Individuals with a Physical Disability (PASIPD) in adults with severe traumatic brain injury (TBI) and estimate the proportion of the sample participants who fail to meet the World Health Organization guidelines for physical activity. A single-center observational study recruited a convenience sample of 30 community-based ambulant adults with severe TBI. Participants completed the PASIPD on 2 occasions, 1 week apart, and wore an accelerometer (ActiGraph GT3X; ActiGraph LLC, Pensacola, Florida) for the 7 days between these 2 assessments. The PASIPD test-retest reliability was substantial (intraclass correlation coefficient = 0.85; 95% confidence interval, 0.70-0.92), and the correlation with the accelerometer ranged from too low to be meaningful (R = 0.09) to moderate (R = 0.57). From device-based measurement of physical activity, 56% of participants failed to meet the World Health Organization physical activity guidelines. The PASIPD is a reliable measure of the type of physical activity people with severe TBI participate in, but it is not a valid measure of the amount of moderate to vigorous physical activity in which they engage. Accelerometers should be used to quantify moderate to vigorous physical activity in people with TBI.

  19. Predictors of validity and reliability of a physical activity record in adolescents

    PubMed Central

    2013-01-01

    Background Poor to moderate validity of self-reported physical activity instruments is commonly observed in young people in low- and middle-income countries. However, the reasons for such low validity have not been examined in detail. We tested the validity of a self-administered daily physical activity record in adolescents and assessed if personal characteristics or the convenience level of reporting physical activity modified the validity estimates. Methods The study comprised a total of 302 adolescents from an urban and rural area in Ecuador. Validity was evaluated by comparing the record with accelerometer recordings for seven consecutive days. Test-retest reliability was examined by comparing registrations from two records administered three weeks apart. Time spent on sedentary (SED), low (LPA), moderate (MPA) and vigorous (VPA) intensity physical activity was estimated. Bland Altman plots were used to evaluate measurement agreement. We assessed if age, sex, urban or rural setting, anthropometry and convenience of completing the record explained differences in validity estimates using a linear mixed model. Results Although the record provided higher estimates for SED and VPA and lower estimates for LPA and MPA compared to the accelerometer, it showed an overall fair measurement agreement for validity. There was modest reliability for assessing physical activity in each intensity level. Validity was associated with adolescents’ personal characteristics: sex (SED: P = 0.007; LPA: P = 0.001; VPA: P = 0.009) and setting (LPA: P = 0.000; MPA: P = 0.047). Reliability was associated with the convenience of completing the physical activity record for LPA (low convenience: P = 0.014; high convenience: P = 0.045). Conclusions The physical activity record provided acceptable estimates for reliability and validity on a group level. Sex and setting were associated with validity estimates, whereas convenience to fill out the record was

  20. Reliability and Validity Testing of the Physical Resilience Measure

    ERIC Educational Resources Information Center

    Resnick, Barbara; Galik, Elizabeth; Dorsey, Susan; Scheve, Ann; Gutkin, Susan

    2011-01-01

    Objective: The purpose of this study was to test reliability and validity of the Physical Resilience Scale. Methods: A single-group repeated measure design was used and 130 older adults from three different housing sites participated. Participants completed the Physical Resilience Scale, Hardy-Gill Resilience Scale, 14-item Resilience Scale,…

  1. Validity and Reliability of the School Physical Activity Environment Questionnaire

    ERIC Educational Resources Information Center

    Martin, Jeffrey J.; McCaughtry, Nate; Flory, Sara; Murphy, Anne; Wisdom, Kimberlydawn

    2011-01-01

    The goal of the current study was to establish the factor validity of the Questionnaire Assessing School Physical Activity Environment (Robertson-Wilson, Levesque, & Holden, 2007) using confirmatory factor analysis procedures. Another goal was to establish internal reliability and test-retest reliability. The confirmatory factor analysis…

  2. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    PubMed Central

    Barrese, James C; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P

    2016-01-01

    Objective Brain–computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed

  3. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates

    NASA Astrophysics Data System (ADS)

    Barrese, James C.; Rao, Naveen; Paroo, Kaivon; Triebwasser, Corey; Vargas-Irwin, Carlos; Franquemont, Lachlan; Donoghue, John P.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) using chronically implanted intracortical microelectrode arrays (MEAs) have the potential to restore lost function to people with disabilities if they work reliably for years. Current sensors fail to provide reliably useful signals over extended periods of time for reasons that are not clear. This study reports a comprehensive retrospective analysis from a large set of implants of a single type of intracortical MEA in a single species, with a common set of measures in order to evaluate failure modes. Approach. Since 1996, 78 silicon MEAs were implanted in 27 monkeys (Macaca mulatta). We used two approaches to find reasons for sensor failure. First, we classified the time course leading up to complete recording failure as acute (abrupt) or chronic (progressive). Second, we evaluated the quality of electrode recordings over time based on signal features and electrode impedance. Failure modes were divided into four categories: biological, material, mechanical, and unknown. Main results. Recording duration ranged from 0 to 2104 days (5.75 years), with a mean of 387 days and a median of 182 days (n = 78). Sixty-two arrays failed completely with a mean time to failure of 332 days (median = 133 days) while nine array experiments were electively terminated for experimental reasons (mean = 486 days). Seven remained active at the close of this study (mean = 753 days). Most failures (56%) occurred within a year of implantation, with acute mechanical failures the most common class (48%), largely because of connector issues (83%). Among grossly observable biological failures (24%), a progressive meningeal reaction that separated the array from the parenchyma was most prevalent (14.5%). In the absence of acute interruptions, electrode recordings showed a slow progressive decline in spike amplitude, noise amplitude, and number of viable channels that predicts complete signal loss by about eight years. Impedance measurements showed

  4. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization is applicable only to panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This in turn demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near the constraint boundaries, using adaptive sampling strategies, for accurate representation of the constraints. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of

  5. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

    The sealing performance of an aircraft electromechanical system has a great influence on flight safety, so the reliability of its typical seal structures is analyzed by researchers. In this paper, a reciprocating seal structure is taken as the research object for the study of structural reliability. Based on finite element numerical simulation, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is established, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model trained with the EFF learning mechanism is used to describe the failure probability of the seal ring and thereby evaluate the reliability of the sealing structure. This article proposes a new approach to the numerical evaluation of sealing-structure reliability and also provides a theoretical basis for the optimal design of sealing structures.
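    A minimal sketch of the surrogate-based step described above, assuming a hypothetical limit-state (margin) function for the seal and using a plain Gaussian-process (Kriging) model with one-shot training plus Monte Carlo sampling rather than the paper's adaptive EFF learning scheme; all variable names and numbers are illustrative only.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def limit_state(x):
        # Hypothetical margin: contact-stress capacity minus demand as a
        # function of hydraulic pressure x[:, 0] and a friction factor x[:, 1].
        return 3.0 - 0.8 * x[:, 0] - 0.5 * x[:, 1] ** 2

    # Small design of experiments to train the Kriging (Gaussian process) surrogate.
    x_train = rng.normal(size=(40, 2))
    y_train = limit_state(x_train)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(x_train, y_train)

    # Monte Carlo on the surrogate: failure when the predicted margin is <= 0.
    x_mc = rng.normal(size=(100_000, 2))
    g_hat = gp.predict(x_mc)
    pf = float(np.mean(g_hat <= 0.0))
    print(f"Estimated failure probability: {pf:.4f}")
    ```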

  6. Obtaining Reliable Estimates of Ambulatory Physical Activity in People with Parkinson's Disease.

    PubMed

    Paul, Serene S; Ellis, Terry D; Dibble, Leland E; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Cavanaugh, James T

    2016-05-05

    We determined the number of days required, and whether to include weekdays and/or weekends, to obtain reliable measures of ambulatory physical activity in people with Parkinson's disease (PD). Ninety-two persons with PD wore a step activity monitor for seven days. The number of days required to obtain a reliable estimate of daily activity was determined from the mean intraclass correlation (ICC2,1) for all possible combinations of 1-6 consecutive days of monitoring. Two days of monitoring were sufficient to obtain reliable daily activity estimates (ICC2,1 > 0.9). Amount (p = 0.03) but not intensity (p = 0.13) of ambulatory activity was greater on weekdays than weekends. Activity prescription based on amount rather than intensity may be more appropriate for people with PD.

  7. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use would be based on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards, 2) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures, 3) the type and availability of data, and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). Human Reliability Assessments should be the first step in reducing, mitigating or eliminating costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  8. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
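    To illustrate the kind of object the search algorithm produces, here is a brute-force sketch (not the patented method) that enumerates minimal edge cut sets for a small example network and applies the usual first-order, rare-event approximation to the unreliability. It computes two-terminal rather than all-terminal reliability, assumes independent link failures with a common probability q, and is practical only for tiny graphs.

    ```python
    from itertools import combinations

    import networkx as nx

    def minimal_cut_sets(graph, source, target):
        """Enumerate minimal edge cut sets separating source from target."""
        edges = list(graph.edges())
        cuts = []
        for r in range(1, len(edges) + 1):
            for subset in combinations(edges, r):
                # Skip supersets of cuts already found: they are not minimal.
                if any(set(c) <= set(subset) for c in cuts):
                    continue
                g = graph.copy()
                g.remove_edges_from(subset)
                if not nx.has_path(g, source, target):
                    cuts.append(subset)
        return cuts

    # Example: a small bridge network with per-link failure probability q.
    G = nx.Graph([("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")])
    q = 0.01
    cuts = minimal_cut_sets(G, "s", "t")
    # First-order approximation: sum over minimal cut sets of the product
    # of the link failure probabilities in each cut.
    p_fail = sum(q ** len(c) for c in cuts)
    print(len(cuts), "minimal cut sets; approx. s-t unreliability:", p_fail)
    ```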

  9. [The reliability of a questionnaire regarding Colombian children's physical activity].

    PubMed

    Herazo-Beltrán, Aliz Y; Domínguez-Anaya, Regina

    2012-10-01

    Reporting the test-retest reliability and internal consistency of the Physical Activity Questionnaire for schoolchildren (PAQ-C). This was a descriptive study of 100 school-aged children aged 9 to 11 years old attending a school in Cartagena, Colombia. The sample was randomly selected. The PAQ-C was given twice, one week apart, after the informed consent forms had been signed by the children's parents and school officials. Cronbach's alpha coefficient was used for assessing internal consistency and an intra-class correlation coefficient for test-retest reliability; SPSS (version 17.0) was used for statistical analysis. The questionnaire had an internal consistency of 0.73 at the first measurement and 0.78 at the second; the intra-class correlation coefficient was 0.60. There were differences between boys and girls in both measurements. The PAQ-C had acceptable internal consistency and test-retest reliability, thereby making it useful for measuring children's self-reported physical activity and a valuable tool for population studies in Colombia.

  10. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of these layers. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In the star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve their reliability problem. However, a division technique is applied to the mesh network because its complexity is higher than that of the others. A mesh network using the division technique is classified into several non-reducible series systems and edge parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability of mesh networks increases when the number of edges in the parallel systems increases, while the reliability quickly drops when the number of edges and the number of nodes increase for all three networks. Greater use of resources is another factor that decreases reliability. However, lower network reliability will occur due to
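    The series and parallel reliability-block-diagram calculations referred to above reduce to two short formulas. The sketch below applies them to a hypothetical four-layer stack with one redundant pair of routes; all reliability values are invented for illustration.

    ```python
    def series(reliabilities):
        """Reliability of a series system: every block must work."""
        r = 1.0
        for ri in reliabilities:
            r *= ri
        return r

    def parallel(reliabilities):
        """Reliability of a parallel (redundant) system: at least one block works."""
        q = 1.0
        for ri in reliabilities:
            q *= (1.0 - ri)
        return 1.0 - q

    # Hypothetical ZigBee-like stack: PHY, MAC, network and application layers
    # in series, with two redundant routes modelled in parallel at the network layer.
    r_network = parallel([0.95, 0.95])
    print(series([0.999, 0.995, r_network, 0.99]))
    ```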

  11. Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program

    NASA Astrophysics Data System (ADS)

    Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward

    1993-08-01

    A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.

  12. Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program

    NASA Technical Reports Server (NTRS)

    Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward

    1993-01-01

    A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.

  13. Validity and reliability of a self-report instrument to assess social support and physical environmental correlates of physical activity in adolescents

    PubMed Central

    2012-01-01

    Background The purpose of this study was to examine the internal consistency, test-retest reliability, construct validity and predictive validity of a new German self-report instrument to assess the influence of social support and the physical environment on physical activity in adolescents. Methods Based on theoretical considerations, the short scales on social support and physical environment were developed and cross-validated in two independent study samples of 9 to 17 year-old girls and boys. The longitudinal sample of Study I (n = 196) was recruited from a German comprehensive school, and subjects in this study completed the questionnaire twice with a between-test interval of seven days. Cronbach’s alphas were computed to determine the internal consistency of the factors. Test-retest reliability of the latent factors was assessed using intra-class coefficients. Factorial validity of the scales was assessed using principal components analysis. Construct validity was determined using a cross-validation technique by performing confirmatory factor analysis with the independent nationwide cross-sectional sample of Study II (n = 430). Correlations between factors and three measures of physical activity (objectively measured moderate-to-vigorous physical activity (MVPA), self-reported habitual MVPA and self-reported recent MVPA) were calculated to determine the predictive validity of the instrument. Results Construct validity of the social support scale (two factors: parental support and peer support) and the physical environment scale (four factors: convenience, public recreation facilities, safety and private sport providers) was shown. Both scales had moderate test-retest reliability. The factors of the social support scale also had good internal consistency and predictive validity. Internal consistency and predictive validity of the physical environment scale were low to acceptable. Conclusions The results of this study indicate moderate to good

  14. Determining Functional Reliability of Pyrotechnic Mechanical Devices

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Multhaup, Herbert A.

    1997-01-01

    This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive, successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing the functional margin are analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost savings and improved understanding compared with go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
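    One common way to turn a functional margin into a reliability number is a stress-strength (interference) calculation. The sketch below assumes the delivered and required energies are approximately normal and independent; that assumption, and all of the numbers, belong to this illustration rather than to the paper's method.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def functional_reliability(mu_delivered, sd_delivered, mu_required, sd_required):
        """Probability that the pyrotechnic source delivers more energy than the
        mechanism requires, assuming both energies are normal and independent."""
        margin = mu_delivered - mu_required
        sd = sqrt(sd_delivered ** 2 + sd_required ** 2)
        return NormalDist().cdf(margin / sd)

    # Hypothetical small-sample estimates (in joules) from ~20 unit tests.
    print(functional_reliability(12.0, 1.0, 8.0, 0.8))
    ```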

  15. Failure mechanisms of fibrin-based surgical tissue adhesives

    NASA Astrophysics Data System (ADS)

    Sierra, David Hugh

    A series of studies was performed to investigate the potential impact of heterogeneity in the matrix of multiple-component fibrin-based tissue adhesives upon their mechanical and biomechanical properties both in vivo and in vitro. Investigations into the failure mechanisms by stereological techniques demonstrated that heterogeneity could be measured quantitatively and that the variation in heterogeneity could be altered both by the means of component mixing and delivery and by the formulation of the sealant. Ex vivo tensile adhesive strength was found to be inversely proportional to the amount of heterogeneity. In contrast, in vivo tensile wound-closure strength was found to be relatively unaffected by the degree of heterogeneity, while in vivo parenchymal organ hemostasis in rabbits was found to be affected: greater heterogeneity appeared to correlate with an increase in hemostasis time and amount of sealant necessary to effect hemostasis. Tensile testing of the bulk sealant showed that mechanical parameters were proportional to fibrin concentration and that the physical characteristics of the failure supported a ductile mechanism. Strain hardening as a function of percentage of strain, and strain rate was observed for both concentrations, and syneresis was observed at low strain rates for the lower fibrin concentration. Blister testing demonstrated that burst pressure and failure energy were proportional to fibrin concentration and decreased with increasing flow rate. Higher fibrin concentration demonstrated predominately compact morphology debonds with cohesive failure loci, demonstrating shear or viscous failure in a viscoelastic rubbery adhesive. The lower fibrin concentration sealant exhibited predominately fractal morphology debonds with cohesive failure loci, supporting an elastoviscous material condition. The failure mechanism for these was hypothesized and shown to be flow-induced ductile fracture. Based on these findings, the failure mechanism was

  16. [Reliability of nursing outcomes classification label "Knowledge: cardiac disease management (1830)" in outpatients with heart failure].

    PubMed

    Cañón-Montañez, Wilson; Oróstegui-Arenas, Myriam

    2015-01-01

    To determine the reliability (internal consistency, inter-rater reproducibility and level of agreement) of the nursing outcome "Knowledge: cardiac disease management (1830)", in the version published in Spanish, in outpatients with heart failure. A reliability study was conducted on 116 outpatients with heart failure. Six indicators of the nursing outcome were operationalized. All participants were assessed simultaneously by two evaluators. Three evaluation periods were defined: initial (at baseline), final (a month later), and follow-up (two months later). Internal consistency was assessed with Cronbach's alpha coefficient, inter-rater reproducibility with the intraclass correlation coefficient of reproducibility or agreement, and level of agreement using the 95% limits of Bland and Altman. Cronbach's alpha was 0.83 (95% CI: 0.77 - 0.89) in the final evaluation, and follow-up values of 0.85 (95% CI: 0.82 - 0.89) and 0.83 (95% CI: 0.78 - 0.88) were found for the first and second evaluator, respectively. The intraclass correlation coefficient showed values greater than 0.9 in the three evaluation periods in both the random and mixed models. The Bland-Altman 95% limits of agreement were close to zero in the three evaluations performed. The questionnaire operationalized to assess the nursing outcome "Knowledge: cardiac disease management (1830)", in its Spanish version, is a reliable method to measure skills and knowledge in outpatients with heart failure in the Colombian context. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
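    For reference, the internal-consistency statistic used above can be computed directly from the indicator scores. This is a generic sketch of Cronbach's alpha (not the authors' code), assuming a patients-by-indicators score matrix with the six operationalized indicators as columns.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_patients, n_indicators) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each indicator
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
        return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

    # Illustrative scores for four patients on six indicators (1-5 scale).
    scores = np.array([[3, 4, 3, 2, 4, 3],
                       [5, 5, 4, 4, 5, 5],
                       [2, 3, 2, 2, 3, 2],
                       [4, 4, 5, 3, 4, 4]])
    print(round(cronbach_alpha(scores), 2))
    ```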

  17. The predictive value of physical examination findings in patients with suspected acute heart failure syndrome.

    PubMed

    Jang, Timothy B; Aubin, Chandra; Naunheim, Rosanne; Lewis, Lawrence M; Kaji, Amy H

    2012-06-01

    It can be difficult to differentiate acute heart failure syndrome (AHFS) from other causes of acute dyspnea, especially when patients present in extremis. The objective of the study was to determine the predictive value of physical examination findings for pulmonary edema and elevated B-type natriuretic peptide (BNP) levels in patients with suspected AHFS. This was a secondary analysis of a previously reported prospective study of jugular vein ultrasonography in patients with suspected AHFS. Charts were reviewed for physical examination findings, which were then compared to pulmonary edema on chest radiography (CXR) read by radiologists blinded to clinical information and BNP levels measured at presentation. The predictive value of every sign and combination of signs for pulmonary edema on CXR or an elevated BNP was poor. Since physical examination findings alone are not predictive of pulmonary edema or an elevated BNP, clinicians should have a low threshold for using CXR or BNP in clinical evaluation. This brief research report suggests that no physical examination finding or constellation of findings can be used to reliably predict pulmonary edema or an elevated BNP in patients with suspected AHFS.

  18. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    NASA Astrophysics Data System (ADS)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs that can serve the needs of safety-critical applications requiring both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
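    A back-of-the-envelope way to see why the trade-off above still improves MWBF: under a constant particle flux, the mean time between radiation-induced failures scales as 1/(cross section x flux), and MWBF scales with that time multiplied by the workload rate. The sketch below uses this simple relation, which is an assumption of the illustration rather than the paper's exact metric definition, and invented numbers mirroring the 5,000x throughput / 8x cross-section figures.

    ```python
    def mean_workload_between_failures(cross_section_cm2, flux_per_cm2_s, workload_per_s):
        """MWBF estimate: MTBF = 1 / (sigma * phi); MWBF = MTBF * workload rate."""
        mtbf_s = 1.0 / (cross_section_cm2 * flux_per_cm2_s)
        return mtbf_s * workload_per_s

    # Hypothetical values: the HLS design has 8x the cross section but 5000x
    # the throughput of the processor-based baseline.
    baseline = mean_workload_between_failures(1e-9, 1e3, 1.0)
    hls_design = mean_workload_between_failures(8e-9, 1e3, 5000.0)
    print(hls_design / baseline)  # 625x more work completed between failures
    ```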

  19. Validity and reliability of a modified english version of the physical activity questionnaire for adolescents.

    PubMed

    Aggio, Daniel; Fairclough, Stuart; Knowles, Zoe; Graves, Lee

    2016-01-01

    Adaptation of physical activity self-report questionnaires is sometimes required to reflect the activity behaviours of diverse populations. The processes used to modify self-report questionnaires though are typically underreported. This two-phased study used a formative approach to investigate the validity and reliability of the Physical Activity Questionnaire for Adolescents (PAQ-A) in English youth. Phase one examined test content and response process validity and subsequently informed a modified version of the PAQ-A. Phase two assessed the validity and reliability of the modified PAQ-A. In phase one, focus groups (n = 5) were conducted with adolescents (n = 20) to investigate test content and response processes of the original PAQ-A. Based on evidence gathered in phase one, a modified version of the questionnaire was administered to participants (n = 169, 14.5 ± 1.7 years) in phase two. Internal consistency and test-retest reliability were assessed using Cronbach's alpha and intra-class correlations, respectively. Spearman correlations were used to assess associations between modified PAQ-A scores and accelerometer-derived physical activity, self-reported fitness and physical activity self-efficacy. Phase one revealed that the original PAQ-A was unrepresentative for English youth and that item comprehension varied. Contextual and population/cultural-specific modifications were made to the PAQ-A for use in the subsequent phase. In phase two, modified PAQ-A scores had acceptable internal consistency (α = 0.72) and test-retest reliability (ICC = 0.78). Modified PAQ-A scores were significantly associated with objectively assessed moderate-to-vigorous physical activity (r = 0.39), total physical activity (r = 0.42), self-reported fitness (r = 0.35), and physical activity self-efficacy (r = 0.32) (p ≤ 0.01). The modified PAQ-A had acceptable internal consistency and test-retest reliability. Modified PAQ-A scores

  20. Incidence of School Failure According to Baseline Leisure-Time Physical Activity Practice: Prospective Study

    PubMed Central

    Rombaldi, Airton J.; Clark, Valerie L.; Reichert, Felipe F.; Araújo, Cora L.P.; Assunção, Maria C.; Menezes, Ana M.B.; Horta, Bernardo L.; Hallal, Pedro C.

    2012-01-01

    Purpose To evaluate the prospective association between leisure-time physical activity practice at 11 years of age and incidence of school failure from 11 to 15 years of age. Methods The sample comprised >4,300 adolescents followed up from birth to 15 years of age participating in a birth cohort study in Pelotas, Brazil. The incidence of school failure from age 11 to 15 years was calculated by first excluding from the analyses all subjects who experienced a school failure before 11 years of age, and then categorizing as “positive” all those who reported repeating a grade at school from 11 to 15 years of age. Leisure-time physical activity was measured using a validated questionnaire. Results The incidence of school failure was 47.9% among boys and 38.2% among girls. Adolescents in the top quartile of leisure-time physical activity practice at 11 years of age had a higher likelihood of school failure (OR: 1.36; 95% CI: 1.06, 1.75) compared with the least active adolescents. In adjusted analyses stratified by sex, boys in the top quartile of leisure-time physical activity practice at 11 years of age were also more likely to have failed at school from age 11 to 15 years (OR: 1.60; 95% CI: 1.09, 2.33). Conclusions Adolescents allocating >1,000 min/wk to leisure-time physical activity were more likely to experience a school failure from 11 to 15 years of age. Although this finding does not advocate against physical activity promotion, it indicates that excess time allocated to physical activity may jeopardize school performance among adolescents. PMID:23283155

  1. The determination of measures of software reliability

    NASA Technical Reports Server (NTRS)

    Maxwell, F. D.; Corn, B. C.

    1978-01-01

    Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  2. Systems Reliability Framework for Surface Water Sustainability and Risk Management

    NASA Astrophysics Data System (ADS)

    Myers, J. R.; Yeghiazarian, L.

    2016-12-01

    With microbial contamination posing a serious threat to the availability of clean water across the world, it is necessary to develop a framework that evaluates the safety and sustainability of water systems with respect to non-point source fecal microbial contamination. The concept of water safety is closely related to the concept of failure in reliability theory. In water quality problems, the event of failure can be defined as the concentration of microbial contamination exceeding a certain standard for usability of water. It is pertinent in watershed management to know the likelihood of such an event of failure occurring at a particular point in space and time. Microbial fate and transport are driven by environmental processes taking place in complex, multi-component, interdependent environmental systems that are dynamic and spatially heterogeneous, which means these processes and therefore their influences upon microbial transport must be considered stochastic and variable through space and time. A physics-based stochastic model of microbial dynamics is presented that propagates uncertainty using a unique sampling method based on artificial neural networks to produce a correlation between watershed characteristics and spatial-temporal probabilistic patterns of microbial contamination. These results are used to address the question of water safety through several sustainability metrics: reliability, vulnerability, resilience and a composite sustainability index. System reliability is described uniquely through the temporal evolution of risk along watershed points or pathways. Probabilistic resilience describes how long the system is above a certain probability of failure, and the vulnerability metric describes how the temporal evolution of risk changes throughout a hierarchy of failure levels. Additionally, our approach allows for the identification of contributions to microbial contamination and uncertainty from specific pathways and sources. We expect that this
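    To make the reliability/resilience/vulnerability vocabulary concrete, here is a sketch using the generic definitions common in water-resources sustainability analysis, applied to a time series of failure probabilities; the abstract's own metric definitions may differ in detail, and the threshold and series values are invented.

    ```python
    import numpy as np

    def sustainability_metrics(pf_series, threshold=0.05):
        """Generic reliability, resilience and vulnerability summaries of a
        time series of failure probabilities (contamination exceeding a standard)."""
        pf = np.asarray(pf_series, dtype=float)
        unsafe = pf > threshold
        reliability = 1.0 - unsafe.mean()                 # fraction of time "safe"
        recoveries = np.sum(unsafe[:-1] & ~unsafe[1:])    # unsafe -> safe transitions
        resilience = recoveries / max(int(unsafe.sum()), 1)
        vulnerability = pf[unsafe].mean() if unsafe.any() else 0.0
        return reliability, resilience, vulnerability

    print(sustainability_metrics([0.01, 0.02, 0.08, 0.09, 0.03, 0.01, 0.07]))
    ```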

  3. Fibre Break Failure Processes in Unidirectional Composites. Part 2: Failure and Critical Damage State Induced by Sustained Tensile Loading

    NASA Astrophysics Data System (ADS)

    Thionnet, A.; Chou, H. Y.; Bunsell, A.

    2015-04-01

    The purpose of these three papers is not to just revisit the modelling of unidirectional composites. It is to provide a robust framework based on physical processes that can be used to optimise the design and long term reliability of internally pressurised filament wound structures. The model presented in Part 1 for the case of monotonically loaded unidirectional composites is further developed to consider the effects of the viscoelastic nature of the matrix in determining the kinetics of fibre breaks under slow or sustained loading. It is shown that the relaxation of the matrix around fibre breaks leads to locally increasing loads on neighbouring fibres and in some cases their delayed failure. Although ultimate failure is similar to the elastic case in that clusters of fibre breaks ultimately control composite failure the kinetics of their development varies significantly from the elastic case. Failure loads have been shown to reduce when loading rates are lowered.

  4. Incidence of school failure according to baseline leisure-time physical activity practice: prospective study.

    PubMed

    Rombaldi, Airton J; Clark, Valerie L; Reichert, Felipe F; Araújo, Cora L P; Assunção, Maria C; Menezes, Ana M B; Horta, Bernardo L; Hallal, Pedro C

    2012-12-01

    To evaluate the prospective association between leisure-time physical activity practice at 11 years of age and incidence of school failure from 11 to 15 years of age. The sample comprised >4,300 adolescents followed up from birth to 15 years of age participating in a birth cohort study in Pelotas, Brazil. The incidence of school failure from age 11 to 15 years was calculated by first excluding from the analyses all subjects who experienced a school failure before 11 years of age, and then categorizing as "positive" all those who reported repeating a grade at school from 11 to 15 years of age. Leisure-time physical activity was measured using a validated questionnaire. The incidence of school failure was 47.9% among boys and 38.2% among girls. Adolescents in the top quartile of leisure-time physical activity practice at 11 years of age had a higher likelihood of school failure (OR: 1.36; 95% CI: 1.06, 1.75) compared with the least active adolescents. In adjusted analyses stratified by sex, boys in the top quartile of leisure-time physical activity practice at 11 years of age were also more likely to have failed at school from age 11 to 15 years (OR: 1.60; 95% CI: 1.09, 2.33). Adolescents allocating >1,000 min/wk to leisure-time physical activity were more likely to experience a school failure from 11 to 15 years of age. Although this finding does not advocate against physical activity promotion, it indicates that excess time allocated to physical activity may jeopardize school performance among adolescents. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  5. Development of KSC program for investigating and generating field failure rates. Volume 2: Recommended format for reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results presented in this handbook are also included.

  6. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    The oil and gas equipment and electric actuators in particular frequently perform in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate the ambiguity, reliability model parameters could be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuators failure data in accordance with the amount of work, completed before the failure, instead of failure time. Also, this paper provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming Kumaraswamy complementary Weibull geometric distribution as a lifetime (reliability) model for electric actuators.
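    A simplified sketch of the fuzzy-valued reliability idea: the paper assumes a Kumaraswamy complementary Weibull geometric lifetime model with the amount of completed work as the variable, whereas the code below keeps the amount-of-work idea but substitutes a plain two-parameter Weibull and represents the fuzzy parameters by a single interval (one alpha-cut) each; parameter values are invented.

    ```python
    import numpy as np

    def weibull_reliability(work, scale, shape):
        """R(w) = exp(-(w/eta)^beta), with w the amount of completed work."""
        return np.exp(-(work / scale) ** shape)

    def fuzzy_reliability_interval(work, scale_interval, shape_interval, n_grid=50):
        """Interval (alpha-cut style) bounds on R(w) when the scale and shape
        parameters are only known as intervals."""
        scales = np.linspace(*scale_interval, n_grid)
        shapes = np.linspace(*shape_interval, n_grid)
        values = [weibull_reliability(work, s, b) for s in scales for b in shapes]
        return min(values), max(values)

    print(fuzzy_reliability_interval(work=800.0,
                                     scale_interval=(900.0, 1100.0),
                                     shape_interval=(1.2, 1.8)))
    ```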

  7. The Physical Activity Scale for Individuals with Physical Disabilities: test-retest reliability and comparison with an accelerometer.

    PubMed

    van der Ploeg, Hidde P; Streppel, Kitty R M; van der Beek, Allard J; van der Woude, Luc H V; Vollenbroek-Hutten, Miriam; van Mechelen, Willem

    2007-01-01

    The objective was to determine the test-retest reliability and criterion validity of the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD). Forty-five non-wheelchair dependent subjects were recruited from three Dutch rehabilitation centers. Subjects' diagnoses were: stroke, spinal cord injury, whiplash, and neurological-, orthopedic- or back disorders. The PASIPD is a 7-d recall physical activity questionnaire that was completed twice, 1 wk apart. During this week, physical activity was also measured with an Actigraph accelerometer. The test-retest reliability Spearman correlation of the PASIPD was 0.77. The criterion validity Spearman correlation was 0.30 when compared to the accelerometer. The PASIPD had test-retest reliability and criterion validity that is comparable to well established self-report physical activity questionnaires from the general population.

  8. Reliability of the OSCE for Physical and Occupational Therapists

    PubMed Central

    Sakurai, Hiroaki; Kanada, Yoshikiyo; Sugiura, Yoshito; Motoya, Ikuo; Wada, Yosuke; Yamada, Masayuki; Tomita, Masao; Tanabe, Shigeo; Teranishi, Toshio; Tsujimura, Toru; Sawa, Syunji; Okanishi, Tetsuo

    2014-01-01

    [Purpose] To examine agreement rates between faculty members and clinical supervisors as OSCE examiners. [Subjects] The examinees were physical and occupational therapists who had worked in clinical environments for 1 to 5 years after graduating from training schools; the examiners were a physical or occupational therapy faculty member and a clinical supervisor. Another clinical supervisor acted as a simulated patient. [Methods] The agreement rate between the examiners for each OSCE item was calculated based on Cohen’s kappa coefficient to confirm inter-rater reliability. [Results] The agreement rates for the behavioral aspects of the items were higher in the second than in the first examination. Similar increases were also observed in the agreement rates for the technical aspects until the initiation of each activity; however, the rates decreased during the middle to terminal stages of continuous movements. [Conclusion] The results may reflect the recent implementation of measures for the integration of therapist education in training schools and clinical training facilities. PMID:25202170

  9. Failure Modes Effects and Criticality Analysis, an Underutilized Safety, Reliability, Project Management and Systems Engineering Tool

    NASA Astrophysics Data System (ADS)

    Mullin, Daniel Richard

    2013-09-01

    The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single point failures, system hazards and critical components and functions. However, in the author's ten years' experience as a space systems safety and reliability engineer, the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. Important qualitative and quantitative components that can provide useful data to all project stakeholders are also often missing. These include probability of occurrence, probability of detection, time to effect, time to detect and, finally, the Risk Priority Number. This is unfortunate as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible in conjunction with writing the top level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all involved stakeholders in a given project and can provide several benefits, including efficient project management with respect to cost and schedule, system engineering and requirements management
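    The Risk Priority Number mentioned above is conventionally the product of severity, occurrence and detection rankings. The sketch below computes and ranks it for a few made-up failure modes; the names and scores are illustrative only, not drawn from any real FMECA.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        name: str
        severity: int    # 1 (negligible) .. 10 (catastrophic)
        occurrence: int  # 1 (remote) .. 10 (frequent)
        detection: int   # 1 (almost certain detection) .. 10 (undetectable)

        @property
        def rpn(self) -> int:
            """Risk Priority Number: severity x occurrence x detection."""
            return self.severity * self.occurrence * self.detection

    modes = [
        FailureMode("Connector contamination", 8, 5, 6),
        FailureMode("Power regulator latch-up", 9, 2, 3),
        FailureMode("Harness chafing", 6, 4, 7),
    ]
    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"{m.name}: RPN = {m.rpn}")
    ```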

  10. Physical performance tests after stroke: reliability and validity.

    PubMed

    Maeda, A; Yuasa, T; Nakamura, K; Higuchi, S; Motohashi, Y

    2000-01-01

    To evaluate the reliability and validity of the modified physical performance tests for stroke survivors who live in a community. The subjects included 40 stroke survivors and 40 apparently healthy independent elderly persons. The physical performance tests for the stroke survivors comprised two physical capacity evaluation tasks that represented physical abilities necessary to perform the main activities of daily living, e.g., standing-up ability (time needed to stand up from bed rest) and walking ability (time needed to walk 10 m). Regarding the reliability of the tests, significant correlations were confirmed between test and retest of the physical performance tests with both short and long intervals in individuals after stroke. Regarding the validity of the tests, the authors studied the significant correlations between the maximum isometric strength of the quadriceps muscle and the time needed to walk 10 m, centimeters reached while sitting and reaching, and the time needed to stand up from bed rest. The authors confirmed that there were significant correlations between the instrumental activities of daily living and the time needed to stand up from bed rest, along with the time needed to walk 10 m, for the stroke survivors. These physical performance tests are useful guides for evaluating the level of activities of daily living and physical frailty of stroke survivors living in a community.

  11. Analytical Study of different types Of network failure detection and possible remedies

    NASA Astrophysics Data System (ADS)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

    Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. Procedures for protection and restoration of SRLG failures using a backup re-provisioning algorithm are also discussed.
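    To make the "unique combination of failed paths" idea concrete, here is a toy check (not taken from the paper) that an SRLG failure can be uniquely localized when every SRLG triggers a distinct, non-empty set of failed monitoring paths; the link and path names are made up.

    ```python
    def failure_signature(srlg, monitoring_paths):
        """Indices of monitoring paths that fail (carry at least one SRLG link)."""
        return frozenset(
            i for i, path_links in enumerate(monitoring_paths)
            if srlg & path_links
        )

    def localizable(srlgs, monitoring_paths):
        """True iff every SRLG yields a distinct, non-empty set of failed paths."""
        signatures = [failure_signature(s, monitoring_paths) for s in srlgs]
        return all(signatures) and len(set(signatures)) == len(signatures)

    # Hypothetical link sets: three SRLGs monitored by three paths.
    srlgs = [{"l1", "l2"}, {"l3"}, {"l2", "l4"}]
    monitoring_paths = [{"l1", "l3"}, {"l2"}, {"l4", "l3"}]
    print(localizable(srlgs, monitoring_paths))  # True: each failure is distinguishable
    ```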

  12. Multi-hop routing mechanism for reliable sensor computing.

    PubMed

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min

    2009-01-01

    Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large number of sensors and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.

  13. Reliability of the Serbian version of the International Physical Activity Questionnaire for older adults.

    PubMed

    Milanović, Zoran; Pantelić, Saša; Trajković, Nebojša; Jorgić, Bojan; Sporiš, Goran; Bratić, Milovan

    2014-01-01

    The purpose of this study was to determine the test-retest reliability of the International Physical Activity Questionnaire (IPAQ) for older adults in Serbia. Six hundred and sixty older adults (352 men, 53%; 308 women, 47%; mean age 67.65±5.76 years) participated in the study. To examine test-retest reliability, the participants were asked to complete the IPAQ on two occasions 2 weeks apart. Moderate reliability was observed between the repeated IPAQ administrations, with intraclass correlation coefficients ranging from 0.53 to 0.91. Reliability was lowest for leisure-time activity (0.53) and highest for the transport domain (0.91). Men and women had similar intraclass correlation coefficients for total physical activity (0.71 versus 0.74, respectively), while the biggest difference was obtained for housework in men (0.68) and in women (0.90). Our study shows that the long version of the IPAQ is a reliable instrument for assessing physical activity levels in older adults and that it may be useful for generating internationally comparable data.

  14. Daily physical activity in stable heart failure patients.

    PubMed

    Dontje, Manon L; van der Wal, Martje H L; Stolk, Ronald P; Brügemann, Johan; Jaarsma, Tiny; Wijtvliet, Petra E P J; van der Schans, Cees P; de Greef, Mathieu H G

    2014-01-01

    Physical activity is the only nonpharmacological therapy that is proven to be effective in heart failure (HF) patients in reducing morbidity. To date, little is known about the levels of daily physical activity in HF patients and about related factors. The objectives of this study were to (a) describe performance-based daily physical activity in HF patients, (b) compare it with physical activity guidelines, and (c) identify related factors of daily physical activity. The daily physical activity of 68 HF patients was measured using an accelerometer (SenseWear) for 48 hours. Psychological characteristics (self-efficacy, motivation, and depression) were measured using questionnaires. To provide an indication of how to interpret the daily physical activity levels of the study sample, time spent on moderate- to vigorous-intensity physical activities was compared with the 30-minute activity guideline. The number of steps per day was compared with the criteria for healthy adults, in the absence of HF-specific criteria. Linear regression analyses were used to identify related factors of daily physical activity. Forty-four percent were active for less than 30 min/d, whereas 56% were active for more than 30 min/d. Fifty percent took fewer than 5000 steps per day, 35% took 5000 to 10 000 steps per day, and 15% took more than 10 000 steps per day. Linear regression models showed that New York Heart Association classification and self-efficacy were the most important factors explaining variance in daily physical activity. The variance in daily physical activity in HF patients is considerable. Approximately half of the patients had a sedentary lifestyle. Higher New York Heart Association classification and lower self-efficacy are associated with less daily physical activity. These findings contribute to the understanding of daily physical activity behavior of HF patients and can help healthcare providers to promote daily physical activity in sedentary HF patients.

  15. Interexaminer reliability in physical examination of patients with low back pain.

    PubMed

    Strender, L E; Sjöblom, A; Sundell, K; Ludwig, R; Taube, A

    1997-04-01

    Seventy-one patients with low back pain were examined by two physiotherapists (50 patients) and two physicians (21 patients). The two physiotherapists had worked together for many years, but the two physicians had not. The interexaminer reliability of the clinical tests included in the physical examination was evaluated. To evaluate the interexaminer reliability of clinical tests used in the physical examination of patients with low back pain under ideal circumstances, which was the case for the physiotherapists. Numerous clinical tests are used in the evaluation of patients with low back pain. To reach the correct diagnosis, only tests with an acceptable validity and reliability should be used. Previous studies have mainly shown low reliability. It is important that clinical tests not be rejected because of low reliability caused by differences between examiners in performance of the examination and in their definition of normal results. Two examiners, either two physiotherapists or two physicians, independently examined patients with low back pain. In approximately half of the clinical tests studied, an acceptable reliability was demonstrated. On the basis of the physiotherapists' series, the reliability was acceptable for a number of clinical tests that are used in the evaluation of patients with low back pain. The results suggest that clinical tests should be standardized to a much higher degree than they are today.

  16. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  17. The importance of daily physical activity for improved exercise tolerance in heart failure patients with limited access to centre-based cardiac rehabilitation.

    PubMed

    Sato, Noriaki; Origuchi, Hideki; Yamamoto, Umpei; Takanaga, Yasuhiro; Mohri, Masahiro

    2012-09-01

    Supervised cardiac rehabilitation provided at dedicated centres ameliorates exercise intolerance in patients with chronic heart failure. To correlate the amount of physical activity outside the hospital with improved exercise tolerance in patients with limited access to centre-based programs. Forty patients (median age 69 years) with stable heart failure due to systolic left ventricular dysfunction participated in cardiac rehabilitation once per week for five months. Using a validated single-axial accelerometer, the number of steps and physical activity-related energy expenditures on nonrehabilitation days were determined. Median (interquartile range) peak oxygen consumption was increased from 14.4 mL/kg/min (range 12.9 mL/kg/min to 17.8 mL/kg/min) to 16.4 mL/kg/min (range 13.9 mL/kg/min to 19.1 mL/kg/min); P<0.0001, in association with a decreased slope of the minute ventilation to carbon dioxide production plot (34.2 [range 31.3 to 38.1] versus 32.7 [range 30.3 to 36.5]; P<0.0001). Changes in peak oxygen consumption were correlated with the daily number of steps (P<0.01) and physical activity-related energy expenditures (P<0.05). Furthermore, these changes were significantly correlated with total exercise time per day and time spent for light (≤3 metabolic equivalents) exercise, but not with time spent for moderate/vigorous (>3 metabolic equivalents) exercise. The number of steps and energy expenditures outside the hospital were correlated with improved exercise capacity. An accelerometer may be useful for guiding home-based cardiac rehabilitation.

  18. A mid-layer model for human reliability analysis : understanding the cognitive causes of human failure events.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Song-Hua; Chang, James Y. H.; Boring, Ronald L.

    2010-03-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  19. Physical activity surveillance in the European Union: reliability and validity of the European Health Interview Survey-Physical Activity Questionnaire (EHIS-PAQ).

    PubMed

    Baumeister, Sebastian E; Ricci, Cristian; Kohler, Simone; Fischer, Beate; Töpfer, Christine; Finger, Jonas D; Leitzmann, Michael F

    2016-05-23

    The current study examined the reliability and validity of the European Health Interview Survey-Physical Activity Questionnaire (EHIS-PAQ), a novel questionnaire for the surveillance of physical activity (PA) during work, transportation, leisure time, sports, health-enhancing and muscle-strengthening activities over a typical week. Reliability was assessed by administering the 8-item questionnaire twice to a population-based sample of 123 participants aged 15-79 years at a 30-day interval. Concurrent (inter-method) validity was examined in 140 participants by comparisons with self-report (International Physical Activity Questionnaire-Long Form (IPAQ-LF), 7-day Physical Activity Record (PAR)) and objective criterion measures (GT3X+ accelerometer, physical work capacity at 75% (PWC(75%)) from submaximal cycle ergometer test, hand grip strength). The EHIS-PAQ showed acceptable reliability, with a median intraclass correlation coefficient across PA domains of 0.55 (range 0.43-0.73). Compared to the GT3X+ (counts/minutes/day), the EHIS-PAQ underestimated moderate-to-vigorous PA (median difference -11.7, p-value = 0.054). Spearman correlation coefficients (ρ) for validity were moderate-to-strong (ρ's > 0.41) for work-related PA (IPAQ = 0.64, GT3X+ = 0.43, grip strength = 0.48), transportation-related PA (IPAQ = 0.62, GT3X+ = 0.43), walking (IPAQ = 0.58), and health-enhancing PA (IPAQ = 0.58, PAR = 0.64, GT3X+ = 0.44, PWC(75%) = 0.48), and fair-to-poor (ρ's < 0.41) for moderate-to-vigorous aerobic recreational and muscle-strengthening PA. The EHIS-PAQ showed good evidence for reliability and validity for the measurement of PA levels at work, during transportation and health-enhancing PA.

  20. Simulating stick-slip failure in a sheared granular layer using a physics-based constitutive model

    DOE PAGES

    Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.; ...

    2017-01-14

    In this paper, we model laboratory earthquakes in a biaxial shear apparatus using the Shear-Transformation-Zone (STZ) theory of dense granular flow. The theory is based on the observation that slip events in a granular layer are attributed to grain rearrangement at soft spots called STZs, which can be characterized according to principles of statistical physics. We model lab data on granular shear using STZ theory and document direct connections between the STZ approach and rate-and-state friction. We discuss the stability transition from stable shear to stick-slip failure and show that stick slip is predicted by STZ when the applied shear load exceeds a threshold value that is modulated by elastic stiffness and frictional rheology. Finally, we also show that STZ theory mimics fault zone dilation during the stick phase, consistent with lab observations.

  1. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, thereby becomes a function of reliability. The primitive variables like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
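
    The stochastic treatment described above can be illustrated with a minimal Monte Carlo sketch of a single stress-strength failure mode, in which load, geometry, and strength are random variables defined by mean values and standard deviations. The distributions and numbers below are assumptions chosen for illustration and are not taken from the NASA study.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000

      # Assumed (illustrative) normal distributions: mean, standard deviation.
      load = rng.normal(100.0, 15.0, n)      # applied load, kN
      area = rng.normal(2.0, 0.05, n)        # member cross-section, cm^2
      strength = rng.normal(80.0, 6.0, n)    # material strength, kN/cm^2

      stress = load / area
      p_failure = np.mean(stress > strength)
      print(f"P(failure) ~ {p_failure:.2e}, reliability ~ {1.0 - p_failure:.6f}")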

  2. Validity and reliability of the Self-Reported Physical Fitness (SRFit) survey.

    PubMed

    Keith, NiCole R; Clark, Daniel O; Stump, Timothy E; Miller, Douglas K; Callahan, Christopher M

    2014-05-01

    An accurate physical fitness survey could be useful in research and clinical care. To estimate the validity and reliability of a Self-Reported Fitness (SRFit) survey; an instrument that estimates muscular fitness, flexibility, cardiovascular endurance, BMI, and body composition (BC) in adults ≥ 40 years of age. 201 participants completed the SF-36 Physical Function Subscale, International Physical Activity Questionnaire (IPAQ), Older Adults' Desire for Physical Competence Scale (Rejeski), the SRFit survey, and the Rikli and Jones Senior Fitness Test. BC, height and weight were measured. SRFit survey items described BC, BMI, and Senior Fitness Test movements. Correlations between the Senior Fitness Test and the SRFit survey assessed concurrent validity. Cronbach's Alpha measured internal consistency within each SRFit domain. SRFit domain scores were compared with SF-36, IPAQ, and Rejeski survey scores to assess construct validity. Intraclass correlations evaluated test-retest reliability. Correlations between SRFit and the Senior Fitness Test domains ranged from 0.35 to 0.79. Cronbach's Alpha scores were .75 to .85. Correlations between SRFit and other survey scores were -0.23 to 0.72 and in the expected direction. Intraclass correlation coefficients were 0.79 to 0.93. All P-values were 0.001. Initial evaluation supports the SRFit survey's validity and reliability.

  3. Reliability enhancement of APR+ diverse protection system regarding common cause failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Y. G.; Kim, Y. M.; Yim, H. S.

    2012-07-01

    The Advanced Power Reactor Plus (APR+) nuclear power plant design has been developed on the basis of the APR1400 (Advanced Power Reactor 1400 MWe) to further enhance safety and economics. For the mitigation of Anticipated Transients Without Scram (ATWS) as well as Common Cause Failures (CCF) within the Plant Protection System (PPS) and the Engineered Safety Features - Component Control System (ESF-CCS), several design improvement features have been implemented for the Diverse Protection System (DPS) of the APR+ plant. As compared to the APR1400 DPS design, the APR+ DPS has been designed to provide the Safety Injection Actuation Signal (SIAS) considering a large break LOCA accident concurrent with the CCF. Additionally, several design improvement features, such as a channel structure with redundant processing modules and changes of the system communication and auto-system test methods, are introduced to enhance the functional reliability of the DPS. Therefore, it is expected that the APR+ DPS can provide enhanced safety and reliability regarding possible CCF in the safety-grade I&C systems as well as in the DPS itself. (authors)

  4. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
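
    A simplified, single-network illustration of the percolation-style reliability measure used above is sketched below: nodes are removed at random and the relative size of the largest remaining connected (giant functioning) component is recorded. A random graph from the networkx package serves as a stand-in topology; the interdependent coupling between networks that the paper analyzes is not reproduced here.

      import random
      import networkx as nx

      def giant_fraction_after_failures(graph, fraction_failed, seed=0):
          # Remove a random fraction of nodes and return the relative size
          # of the largest connected component that remains.
          rng = random.Random(seed)
          g = graph.copy()
          n = g.number_of_nodes()
          failed = rng.sample(list(g.nodes), int(fraction_failed * n))
          g.remove_nodes_from(failed)
          if g.number_of_nodes() == 0:
              return 0.0
          largest = max(nx.connected_components(g), key=len)
          return len(largest) / n

      grid = nx.erdos_renyi_graph(2000, 0.002, seed=42)   # stand-in for a grid topology
      for f in (0.1, 0.3, 0.5, 0.7, 0.9):
          print(f, round(giant_fraction_after_failures(grid, f), 3))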

  5. Reliability and Validity of the Early Years Physical Activity Questionnaire (EY-PAQ)

    PubMed Central

    Bingham, Daniel D.; Collings, Paul J.; Clemes, Stacy A.; Costa, Silvia; Santorelli, Gillian; Griffiths, Paula; Barber, Sally E.

    2016-01-01

    Measuring physical activity (PA) and sedentary time (ST) in young children (<5 years) is complex. Objective measures have high validity but require specialist expertise, are expensive, and can be burdensome for participants. A proxy-report instrument for young children that accurately measures PA and ST is needed. The aim of this study was to assess the reliability and validity of the Early Years Physical Activity Questionnaire (EY-PAQ). In a setting where English and Urdu are the predominant languages spoken by parents of young children, a sample of 196 parents and their young children (mean age 3.2 ± 0.8 years) from Bradford, UK took part in the study. A total of 156 (79.6%) questionnaires were completed in English and 40 (20.4%) were completed in transliterated Urdu. A total of 109 parents took part in the reliability aspect of the study, which involved completion of the EY-PAQ on two occasions (7.2 days apart; standard deviation (SD) = 1.1). All 196 participants took part in the validity aspect, which involved comparison of EY-PAQ scores against accelerometry. Validity analysis used all data as well as data falling within specific MVPA and ST boundaries. Reliability was assessed using intra-class correlations (ICC) and validity by Bland–Altman plots and rank correlation coefficients. The test-retest reliability of the EY-PAQ was moderate for ST (ICC = 0.47) and fair for moderate-to-vigorous physical activity (MVPA) (ICC = 0.35). The EY-PAQ had poor agreement with accelerometer-determined ST (mean difference = −87.5 min·day−1) and good agreement for MVPA (mean difference = 7.1 min·day−1); limits of agreement were wide for all variables. The rank correlation coefficient was non-significant for ST (rho = 0.19) and significant for MVPA (rho = 0.30). The EY-PAQ has comparable validity and reliability to other PA self-report tools and is a promising population-based measure of young children’s habitual MVPA but not ST. In situations when objective methods are not
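
    For orientation, the sketch below computes a one-way, single-measure intraclass correlation coefficient, ICC(1,1), and the Bland-Altman bias with 95% limits of agreement from paired test-retest data. The data are synthetic and the EY-PAQ scoring itself is not reproduced; the ICC form used in the study may differ.

      import numpy as np

      def icc_oneway(x):
          # One-way random-effects, single-measure ICC(1,1) for an
          # (n subjects x k trials) array.
          x = np.asarray(x, dtype=float)
          n, k = x.shape
          grand = x.mean()
          subj_means = x.mean(axis=1)
          ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
          ms_within = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))
          return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

      def bland_altman(a, b):
          # Mean difference (bias) and 95% limits of agreement between two methods.
          diff = np.asarray(a, float) - np.asarray(b, float)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, (bias - half_width, bias + half_width)

      # Synthetic test-retest MVPA minutes/day for 8 children (illustrative only).
      test = np.array([35.0, 50.0, 20.0, 60.0, 45.0, 30.0, 55.0, 40.0])
      retest = np.array([30.0, 55.0, 25.0, 58.0, 40.0, 35.0, 50.0, 42.0])
      print("ICC(1,1):", round(icc_oneway(np.column_stack([test, retest])), 2))
      print("Bland-Altman bias and limits:", bland_altman(test, retest))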

  6. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
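
    A minimal sketch of the parts-count style of standard-based prediction described above: part failure rates (illustrative values, not taken from any military or commercial handbook) are multiplied by part quantities, summed into a system failure rate, and converted to a mission reliability under the usual constant-failure-rate assumption.

      import math

      # (part type, failure rate per million hours, quantity) -- illustrative values only.
      parts = [
          ("ceramic capacitor", 0.002, 120),
          ("film resistor",     0.001, 240),
          ("microcircuit",      0.050, 12),
          ("connector pin",     0.0004, 300),
      ]

      lambda_system = sum(rate * qty for _, rate, qty in parts)   # failures per 1e6 hours
      mission_hours = 10_000
      reliability = math.exp(-lambda_system * mission_hours / 1e6)

      print(f"system failure rate: {lambda_system:.3f} per 1e6 h")
      print(f"mission reliability over {mission_hours} h: {reliability:.4f}")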

  7. Space Vehicle Reliability Modeling in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tornga, Shawn Robert

    When modeling system performance of space-based detection systems it is important to consider spacecraft reliability. As space vehicles age, the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  8. Reliability modelling and analysis of thermal MEMS

    NASA Astrophysics Data System (ADS)

    Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour in specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MATLAB and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified defects into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behaviour of these MEMS as a function of the number of cycles of operation and specific operational conditions.

  9. Hybrid PV HgCdTe Detectors: Technology Reliability and Failure Physics Program

    DTIC Science & Technology

    1988-01-01

  10. Reliability and availability analysis of a 10 kW@20 K helium refrigerator

    NASA Astrophysics Data System (ADS)

    Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.

    2017-02-01

    A 10 kW@20 K helium refrigerator has been established in the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator’s reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected and failure rate distributions are fitted using the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed and the critical components with higher risks are identified. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, turbine and vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance and operation are proposed to reduce the major and medium risks.
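
    The Weibull fitting step described above can be sketched with open-source tools as follows. scipy stands in for Weibull++, the failure times are synthetic, and censored records, which real maintenance data usually contain, are not handled.

      import numpy as np
      from scipy import stats

      # Synthetic time-to-failure data for one component type, in hours.
      failures = np.array([812.0, 1260.0, 1970.0, 2450.0, 3000.0,
                           3410.0, 4120.0, 5230.0, 6075.0, 7400.0])

      # Two-parameter Weibull fit (location fixed at zero).
      shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
      print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")

      # Reliability at an arbitrary operating time; beta > 1 indicates wear-out behaviour.
      t = 2000.0
      reliability_t = np.exp(-(t / scale) ** shape)
      print(f"R({t:.0f} h) = {reliability_t:.3f}")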

  11. U.S. Army Physical Demands Study: Reliability of Simulations of Physically Demanding Tasks Performed by Combat Arms Soldiers.

    PubMed

    Foulis, Stephen A; Redmond, Jan E; Frykman, Peter N; Warr, Bradley J; Zambraski, Edward J; Sharp, Marilyn A

    2017-12-01

    Foulis, SA, Redmond, JE, Frykman, PN, Warr, BJ, Zambraski, EJ, and Sharp, MA. U.S. Army physical demands study: reliability of simulations of physically demanding tasks performed by combat arms soldiers. J Strength Cond Res 31(12): 3245-3252, 2017-Recently, the U.S. Army has mandated that soldiers must successfully complete the physically demanding tasks of their job to graduate from their Initial Military Training. Evaluating individual soldiers in the field is difficult; however, simulations of these tasks may aid in the assessment of soldiers' abilities. The purpose of this study was to determine the reliability of simulated physical soldiering tasks relevant to combat arms soldiers. Three cohorts of ∼50 soldiers repeated a subset of 8 simulated tasks 4 times over 2 weeks. Simulations included: sandbag carry, casualty drag, and casualty evacuation from a vehicle turret, move under direct fire, stow ammunition on a tank, load the main gun of a tank, transferring ammunition with a field artillery supply vehicle, and a 4-mile foot march. Reliability was assessed using intraclass correlation coefficients (ICCs), standard errors of measurement (SEMs), and 95% limits of agreement. Performance of the casualty drag and foot march did not improve across trials (p > 0.05), whereas improvements, suggestive of learning effects, were observed on the remaining 6 tasks (p ≤ 0.05). The ICCs ranged from 0.76 to 0.96, and the SEMs ranged from 3 to 16% of the mean. These 8 simulated tasks show high reliability. Given proper practice, they are suitable for evaluating the ability of Combat Arms Soldiers to complete the physical requirements of their jobs.

  12. On the use and the performance of software reliability growth models

    NASA Technical Reports Server (NTRS)

    Keiller, Peter A.; Miller, Douglas R.

    1991-01-01

    We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the relative error of the predicted number of failures over future finite time intervals relative to the number of failures eventually observed during the intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
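
    As an illustration of the prediction task described above, the sketch below fits one common growth model, the Goel-Okumoto non-homogeneous Poisson process, to observed failure times by maximum likelihood and predicts the expected number of failures in a future interval. It is offered only as an example of the model class; it is not necessarily one of the models evaluated in the paper, and the failure times are synthetic.

      import numpy as np
      from scipy.optimize import minimize

      # Cumulative failure times (e.g. CPU-hours of testing) observed up to time T (synthetic).
      times = np.array([9.0, 21.0, 32.0, 50.0, 74.0, 103.0,
                        143.0, 190.0, 260.0, 355.0, 470.0, 620.0])
      T = 700.0

      def neg_log_lik(params):
          # Goel-Okumoto mean function m(t) = a * (1 - exp(-b t)), intensity a*b*exp(-b t).
          log_a, log_b = params
          a, b = np.exp(log_a), np.exp(log_b)
          return -(len(times) * (log_a + log_b) - b * times.sum() - a * (1.0 - np.exp(-b * T)))

      res = minimize(neg_log_lik, x0=[np.log(20.0), np.log(0.005)], method="Nelder-Mead")
      a_hat, b_hat = np.exp(res.x)

      s = 300.0   # length of the future interval
      predicted = a_hat * (np.exp(-b_hat * T) - np.exp(-b_hat * (T + s)))
      print(f"a = {a_hat:.1f} expected total faults, b = {b_hat:.4f}")
      print(f"expected failures in the next {s:.0f} hours: {predicted:.1f}")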

  13. Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.

    PubMed

    Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J

    The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.

  14. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  15. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), termed DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design approach based on DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  16. Three-dimensional Simulation and Prediction of Solenoid Valve Failure Mechanism Based on Finite Element Model

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao

    2018-01-01

    The solenoid valve is a basic automation component that is widely applied. Analyzing and predicting its degradation and failure mechanisms is important for improving solenoid valve reliability and for research on prolonging its life. In this paper, a three-dimensional finite element analysis model of a solenoid valve is established based on the ANSYS Workbench software. A sequential coupling method for calculating the temperature field and the mechanical stress field of the solenoid valve is put forward. The simulation results show that the sequential coupling method can calculate and analyze the temperature and stress distributions of the solenoid valve accurately, which has been verified through an accelerated life test. The Kalman filtering algorithm is introduced into the data processing, which can effectively reduce measurement deviation and recover more accurate data. Based on different driving currents, a failure mechanism that can easily cause the degradation of the coils is obtained and an optimized design scheme for the electro-insulating rubbers is also proposed. The high temperature generated by the driving current and the thermal stress resulting from thermal expansion can easily cause the degradation of the coil wires, which lowers the electrical resistance of the coils and results in the eventual failure of the solenoid valve. The finite element analysis method can be applied to fault diagnosis and prognostics of various solenoid valves and can improve the reliability of solenoid valve health management.
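
    The Kalman filtering step mentioned above can be illustrated with a minimal scalar filter that smooths a noisy measurement trace, for example a coil-resistance reading recorded during an accelerated life test. The random-walk state model and the noise variances are assumptions made for illustration, not values from the paper.

      import numpy as np

      def kalman_smooth(measurements, process_var=1e-3, meas_var=0.25):
          # Scalar Kalman filter with a random-walk state model:
          #   x_k = x_{k-1} + w,   z_k = x_k + v.
          x = measurements[0]    # initial state estimate
          p = 1.0                # initial estimate variance
          estimates = []
          for z in measurements:
              p = p + process_var            # predict
              gain = p / (p + meas_var)      # update
              x = x + gain * (z - x)
              p = (1.0 - gain) * p
              estimates.append(x)
          return np.array(estimates)

      rng = np.random.default_rng(3)
      true_resistance = np.linspace(10.0, 10.8, 200)           # slow drift as the coil degrades
      noisy = true_resistance + rng.normal(0.0, 0.5, 200)      # noisy resistance readings
      filtered = kalman_smooth(noisy)
      print("last raw reading:", round(noisy[-1], 2), "filtered:", round(filtered[-1], 2))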

  17. Measuring the Environment for Friendliness Toward Physical Activity: A Comparison of the Reliability of 3 Questionnaires

    PubMed Central

    Brownson, Ross C.; Chang, Jen Jen; Eyler, Amy A.; Ainsworth, Barbara E.; Kirtland, Karen A.; Saelens, Brian E.; Sallis, James F.

    2004-01-01

    Objectives. We tested the reliability of 3 instruments that assessed social and physical environments. Methods. We conducted a test–retest study among US adults (n = 289). We used telephone survey methods to measure suitableness of the perceived (vs objective) environment for recreational physical activity and nonmotorized transportation. Results. Most questions in our surveys that attempted to measure specific characteristics of the built environment showed moderate to high reliability. Questions about the social environment showed lower reliability than those that assessed the physical environment. Certain blocks of questions appeared to be selectively more reliable for urban or rural respondents. Conclusions. Despite differences in content and in response formats, all 3 surveys showed evidence of reliability, and most items are now ready for use in research and in public health surveillance. PMID:14998817

  18. Reliability of physical examination for diagnosis of myofascial trigger points: a systematic review of the literature.

    PubMed

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Bogduk, Nikolai

    2009-01-01

    Trigger points are promoted as an important cause of musculoskeletal pain. There is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting. To systematically review the literature on the reliability of physical examination for the diagnosis of trigger points. MEDLINE, EMBASE, and other sources were searched for articles reporting the reliability of physical examination for trigger points. Included studies were evaluated for their quality and applicability, and reliability estimates were extracted and reported. Nine studies were eligible for inclusion. None satisfied all quality and applicability criteria. No study specifically reported reliability for the identification of the location of active trigger points in the muscles of symptomatic participants. Reliability estimates varied widely for each diagnostic sign, for each muscle, and across each study. Reliability estimates were generally higher for subjective signs such as tenderness (kappa range, 0.22-1.0) and pain reproduction (kappa range, 0.57-1.00), and lower for objective signs such as the taut band (kappa range, -0.08-0.75) and local twitch response (kappa range, -0.05-0.57). No study to date has reported the reliability of trigger point diagnosis according to the currently proposed criteria. On the basis of the limited number of studies available, and significant problems with their design, reporting, statistical integrity, and clinical applicability, physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points. The reliability of trigger point diagnosis needs to be further investigated with studies of high quality that use current diagnostic criteria in clinically relevant patients.

  19. The influence of microstructure on the probability of early failure in aluminum-based interconnects

    NASA Astrophysics Data System (ADS)

    Dwyer, V. M.

    2004-09-01

    For electromigration in short aluminum interconnects terminated by tungsten vias, the well-known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure, where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based on the theory of runs, which has proved to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4-0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
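
    The runs-based calculation described above can be mimicked with a toy Monte Carlo: grain lengths are drawn from a lognormal distribution with median d50 and shape factor sigma_d, a grain is counted as polygranular when it is smaller than the linewidth, and a line fails early when a run of consecutive polygranular grains is longer than Lcrit. All parameter values are illustrative and this sketch is not the analytic theory-of-runs model used in the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      def early_failure_probability(line_length_um=1000.0, linewidth_um=0.5,
                                    d50_um=0.6, sigma_d=0.5, l_crit_um=3.0,
                                    n_lines=2000):
          # A line fails "early" if any run of consecutive polygranular grains
          # (grain size < linewidth) adds up to more than Lcrit.
          mu = np.log(d50_um)
          failures = 0
          for _ in range(n_lines):
              grains = rng.lognormal(mu, sigma_d, size=int(3 * line_length_um / d50_um))
              grains = grains[np.cumsum(grains) <= line_length_um]
              cluster, failed = 0.0, False
              for g in grains:
                  if g < linewidth_um:       # grain does not span the line: polygranular
                      cluster += g
                      if cluster > l_crit_um:
                          failed = True
                          break
                  else:                      # spanning (bamboo) grain ends the cluster
                      cluster = 0.0
              failures += failed
          return failures / n_lines

      for sd in (0.27, 0.5):
          print(f"sigma_d = {sd}: Pcf ~ {early_failure_probability(sigma_d=sd):.3f}")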

  20. Reliability and validity of the instrument used in BRFSS to assess physical activity.

    PubMed

    Yore, Michelle M; Ham, Sandra A; Ainsworth, Barbara E; Kruger, Judy; Reis, Jared P; Kohl, Harold W; Macera, Caroline A

    2007-08-01

    State-level statistics of adherence to the physical activity objectives in Healthy People 2010 are derived from the Behavioral Risk Factor Surveillance System (BRFSS) data. BRFSS physical activity questions were updated in 2001 to include domains of leisure time, household, and transportation-related activity of moderate- and vigorous intensity, and walking questions. This article reports the reliability and validity of these questions. The BRFSS Physical Activity Study (BPAS) was conducted from September 2000 to May 2001 in Columbia, SC. Sixty participants were followed for 22 d; they answered the physical activity questions three times via telephone, wore a pedometer and accelerometer, and completed a daily physical activity log for 1 wk. Measures for moderate, vigorous, recommended (i.e., met the criteria for moderate or vigorous), and strengthening activities were created according to Healthy People 2010 operational definitions. Reliability and validity were assessed using Cohen's kappa (kappa) and Pearson correlation coefficients. Seventy-three percent of participants met the recommended activity criteria compared with 45% in the total U.S. population. Test-retest reliability (kappa) was 0.35-0.53 for moderate activity, 0.80-0.86 for vigorous activity, 0.67-0.84 for recommended activity, and 0.85-0.92 for strengthening. Validity (kappa) of the survey (using the accelerometer as the standard) was 0.17-0.22 for recommended activity. Validity (kappa) of the survey (using the physical activity log as the standard) was 0.40-0.52 for recommended activity. The validity and reliability of the BRFSS physical activity questions suggests that this instrument can classify groups of adults into the levels of recommended and vigorous activity as defined by Healthy People 2010. Repeated administration of these questions over time will help to identify trends in physical activity.

  1. Depressive symptoms and the relationship of inflammation to physical signs and symptoms in heart failure patients.

    PubMed

    Heo, Seongkum; Moser, Debra K; Pressler, Susan J; Dunbar, Sandra B; Dekker, Rebecca L; Lennie, Terry A

    2014-09-01

    Depressive symptoms in patients with heart failure can affect the relationship between physical signs and symptoms and inflammation. To examine the relationship between soluble tumor necrosis factor receptor I and physical signs and symptoms and the effects of depressive symptoms on this relationship in patients with heart failure. Data on physical signs and symptoms (Symptom Status Questionnaire-Heart Failure), depressive symptoms (Beck Depression Inventory-II), and levels of the receptor (blood samples) were collected from 145 patients with heart failure. Data on the receptor were square root transformed to achieve normality. Patients were divided into 2 groups according to their scores for depressive symptoms (nondepressed <14 and depressed ≥14). Hierarchical multiple regression was used to analyze the data. In the total sample, with controls for covariates, higher levels of the receptor were significantly related to more severe physical signs and symptoms (F = 7.915; P < .001). In subgroup analyses, with controls for covariates, levels of the receptor were significantly related to physical signs and symptoms only in the patients without depression (F = 3.174; P = .005). Both depressive symptoms and inflammation should be considered along with physical signs and symptoms in patients with heart failure. Further studies are needed to determine the effects of improvement in inflammation on improvement in physical signs and symptoms, with consideration given to the effects of depressive symptoms. ©2014 American Association of Critical-Care Nurses.

  2. Reliable dual-redundant sensor failure detection and identification for the NASA F-8 DFBW aircraft

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.; Desai, M. N.; Deyst, J. J., Jr.; Willsky, A. S.

    1978-01-01

    A technique was developed which provides reliable failure detection and identification (FDI) for a dual-redundant subset of the flight control sensors onboard the NASA F-8 digital fly-by-wire (DFBW) aircraft. The technique was successfully applied to simulated sensor failures on the real-time F-8 digital simulator and to sensor failures injected on telemetry data from a test flight of the F-8 DFBW aircraft. For failure identification the technique utilized the analytic redundancy which exists as functional and kinematic relationships among the various quantities being measured by the different control sensor types. The technique can be used not only in a dual-redundant sensor system, but also in a more highly redundant system after conventional voting techniques have reduced the number of unfailed sensors of a particular type to two. In addition, the technique can be easily extended to the case in which only one sensor of a particular type is available.

  3. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    PubMed

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review of the reliability of clinical examination tests to evaluate the integrity of the ACL has been missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. The search yielded 110 hits, of which seven articles met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test was the physical examination test with the highest intrarater reliability (Cohen's k = 1.00), and the Lachman test performed in the prone position the test with the highest interrater reliability (Cohen's k = 0.81). The included studies were partly of low methodological quality. A meta-analysis could not be performed due to the heterogeneity in study populations, reliability measures and methodological quality of included studies. Systematic investigations of the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Reliability, Compliance, and Security in Web-Based Course Assessments

    ERIC Educational Resources Information Center

    Bonham, Scott

    2008-01-01

    Pre- and postcourse assessment has become a very important tool for education research in physics and other areas. The web offers an attractive alternative to in-class paper administration, but concerns about web-based administration include reliability due to changes in medium, student compliance rates, and test security, both question leakage…

  5. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes the development of new Web-based failure database software for orthopaedic implants. The software is based on the B/S (browser/server) mode; ASP dynamic web technology is used as the main development language to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend and upgrade. The design and development ideas behind the software, its working process and functions, and its relevant technical features are presented. With this software, many different types of failure events of orthopaedic implants can be stored and the failure data can be statistically analyzed. At the macroscopic level, the software can be used to evaluate the reliability of orthopaedic implants and operations, and it can ultimately guide doctors in improving the level of clinical treatment.

  6. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting the system reliability. Weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of the bottom events. It applies fault-tree analysis, the α-cut of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. In the numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The results of the proposed method are compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Reliability of the Brazilian version of the Physical Activity Checklist Interview in children.

    PubMed

    Adami, Fernando; Cruciani, Fernanda; Douek, Michelle; Sewell, Carolina Dumit; Mariath, Aline Brandão; Hinnig, Patrícia de Fragas; Freaza, Silvia Rafaela Mascarenhas; Bergamaschi, Denise Pimentel

    2011-04-01

    To assess the reliability of the Lista de Atividades Físicas (Brazilian version of the Physical Activity Checklist Interview) in children. The study is part of a cross-cultural adaptation of the Physical Activity Checklist Interview, conducted with 83 school children aged between seven and ten years, enrolled between the 2nd and 5th grades of primary education in the city of São Paulo, Southeastern Brazil, in 2008. The questionnaire was responded by children through individual interviews. It is comprised of a list of 21 moderate to vigorous physical activities performed on the previous day, it is divided into periods (before, during and after school) and it has a section for interview assessment. This questionnaire enables the quantification of time spent in physical and sedentary activities and the total and weighed metabolic costs. Reliability was assessed by comparing two interviews conducted with a mean interval of three hours. For the interview assessment, data from the first interview and those from an external evaluator were compared. Bland-Altman's proposal, the intraclass correlation coefficient and Lin's concordance correlation coefficient were used to assess reliability. The intraclass correlation coefficient lower limits for the outcomes analyzed varied from 0.84 to 0.96. Precision and agreement varied between 0.83 and 0.97 and between 0.99 and 1, respectively. The line estimated from the pairs of values obtained in both interviews indicates high data precision. The interview item showing the poorest result was the ability to estimate time (fair in 27.7% of interviews). Interview assessment items showed intraclass correlation coefficients between 0.60 and 0.70, except for level of cooperation (0.46). The Brazilian version of the Physical Activity Checklist Interview shows high reliability to assess physical and sedentary activity on the previous day in children.

  8. Intelligent failure-tolerant control

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1991-01-01

    An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degree of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.

  9. Measuring physical activity during pregnancy - Cultural adaptation of the Pregnancy Physical Activity Questionnaire (PPAQ) and assessment of its reliability in Polish conditions.

    PubMed

    Krzepota, Justyna; Sadowska, Dorota; Sempolska, Katarzyna; Pelczar, Małgorzata

    2017-12-23

    The assessment of physical activity during pregnancy is crucial in perinatal care and it is an important research topic. Unfortunately, in Poland there is no commonly accepted questionnaire of physical activity during pregnancy. The aim of this study was to adapt the Pregnancy Physical Activity Questionnaire (PPAQ) to Polish conditions and assess the reliability of its Polish version (PPAQ-PL). The PPAQ was translated from English into Polish and its reliability tested. Sixty-four questionnaires that were correctly completed twice, one week apart, qualified for analysis. Test-retest reliability was assessed using the Intraclass Correlation Coefficient (ICC). As a result of the adaptation and psychometric assessment, the number of questions in the Polish version was reduced from 36 to 35 by removing the question concerning 'mowing the lawn while on a riding mower'. The ICC value for total activity was 0.75, which confirms a substantial level of reliability. The ICC values for the intensity subscales ranged from 0.53 (light) to 0.86 (vigorous). For the type subscales, ICC values ranged from 0.59 (transportation) to 0.89 (household/caregiving). The PPAQ-PL can be accepted as a reliable tool for assessing the physical activity of pregnant women in Poland. Information obtained using the questionnaire might be helpful in monitoring health behaviours, preventing obesity, as well as designing and promoting physical activity programmes for pregnant women.

  10. Gender-Specific Physical Symptom Biology in Heart Failure.

    PubMed

    Lee, Christopher S; Hiatt, Shirin O; Denfeld, Quin E; Chien, Christopher V; Mudd, James O; Gelow, Jill M

    2015-01-01

    There are several gender differences that may help explain the link between biology and symptoms in heart failure (HF). The aim of this study was to examine gender-specific relationships between objective measures of HF severity and physical symptoms. Detailed clinical data, including left ventricular ejection fraction and left ventricular internal end-diastolic diameter, and HF-specific physical symptoms were collected as part of a prospective cohort study. Gender interaction terms were tested in linear regression models of physical symptoms. The sample (101 women and 101 men) averaged 57 years of age and most participants (60%) had class III/IV HF. Larger left ventricle size was associated with better physical symptoms for women and worse physical symptoms for men. Decreased ventricular compliance may result in worse physical HF symptoms for women and dilation of the ventricle may be a greater progenitor of symptoms for men with HF.

  11. Physical Functioning, Physical Activity, Exercise Self-Efficacy, and Quality of Life Among Individuals With Chronic Heart Failure in Korea: A Cross-Sectional Descriptive Study.

    PubMed

    Lee, Haejung; Boo, Sunjoo; Yu, Jihyoung; Suh, Soon-Rim; Chun, Kook Jin; Kim, Jong Hyun

    2017-04-01

    Both the beneficial relationship between exercise and quality of life and the important role played by exercise self-efficacy in maintaining an exercise regimen among individuals with chronic heart failure are well known. However, most nursing interventions for Korean patients with chronic heart failure focus only on providing education related to risk factors and symptoms. Little information is available regarding the influence of physical functions, physical activity, and exercise self-efficacy on quality of life. This study was conducted to examine the impact of physical functioning, physical activity, and exercise self-efficacy on quality of life among individuals with chronic heart failure. This study used a cross-sectional descriptive design. Data were collected from 116 outpatients with chronic heart failure in Korea. Left ventricular ejection fraction and New York Heart Association classifications were chart reviewed. Information pertaining to levels of physical activity, exercise self-efficacy, and quality of life were collected using self-administered questionnaires. Data were analyzed using descriptive statistics, t tests, analyses of variance, correlations, and hierarchical multiple regressions. About 60% of participants were physically inactive, and most showed relatively low exercise self-efficacy. The mean quality-of-life score was 80.09. The significant correlates for quality of life were poverty, functional status, physical inactivity, and exercise self-efficacy. Collectively, these four variables accounted for 50% of the observed total variance in quality of life. Approaches that focus on enhancing exercise self-efficacy may improve patient-centered outcomes in those with chronic heart failure. In light of the low level of exercise self-efficacy reported and the demonstrated ability of this factor to predict quality of life, the development of effective strategies to enhance exercise self-efficacy offers a novel and effective approach to improving

  12. Feasibility and Reliability of Physical Fitness Tests in Older Adults with Intellectual Disability: A Pilot Study

    ERIC Educational Resources Information Center

    Hilgenkamp, Thessa I. M.; van Wijck, Ruud; Evenhuis, Heleen M.

    2012-01-01

    Background: Physical fitness is relevant for wellbeing and health, but knowledge on the feasibility and reliability of instruments to measure physical fitness for older adults with intellectual disability is lacking. Methods: Feasibility and test-retest reliability of a physical fitness test battery (Box and Block Test, Response Time Test, walking…

  13. Approach to developing reliable space reactor power systems

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Shinbrot, Charles H.

    1991-01-01

    During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.

  14. Reliability and validity of a physical activity scale among urban pregnant women in eastern China.

    PubMed

    Jiang, Hong; He, Gengsheng; Li, Mu; Fan, Yanyan; Jiang, Hongyi; Bauman, Adrian; Qian, Xu

    2015-03-01

    This study aimed to determine the reliability and validity of the physical activity scale adapted from a Danish scale for assessing physical activity among urban pregnant women in eastern China. Participants recruited in an urban setting of eastern China were asked to complete the physical activity scale, the activity diary, and to wear a pedometer for the same 4 days, followed by repeating the activity scale for another 4 days within 2 weeks. A total of 109 pregnant women completed data recording. Good reliability of the physical activity scale was observed (intraclass correlation coefficient = .87). There was also a good comparability between the activity scale and the activity diary (Spearman's r = .75 for total energy expenditure). The agreement between the scale and pedometer reading was acceptable (Spearman's r = .45). The adapted physical activity scale is a reliable and reasonably accurate instrument for estimating physical activity among urban pregnant women in eastern China. © 2012 APJPH.

  15. [Validity and reliability of a scale to assess self-efficacy for physical activity in elderly].

    PubMed

    Borges, Rossana Arruda; Rech, Cassiano Ricardo; Meurer, Simone Teresinha; Benedetti, Tânia Rosane Bertoldo

    2015-04-01

    This study aimed to analyze the confirmatory factor validity and reliability of a self-efficacy scale for physical activity in a sample of 118 elderly (78% women) from 60 to 90 years of age. Mplus 6.1 was used to evaluate the confirmatory factor analysis. Reliability was tested by internal consistency and temporal stability. The original scale consisted of five items with dichotomous answers (yes/no), independently for walking and moderate and vigorous physical activity. The analysis excluded the item related to confidence in performing physical activities when on vacation. Two constructs were identified, called "self-efficacy for walking" and "self-efficacy for moderate and vigorous physical activity", with a factor load ≥ 0.50. Internal consistency was adequate both for walking (> 0.70) and moderate and vigorous physical activity (> 0.80), and temporal stability was adequate for all the items. In conclusion, the self-efficacy scale for physical activity showed adequate validity, reliability, and internal consistency for evaluating this construct in elderly Brazilians.

  16. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied to estimate the human error probability (HEP) for the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and the operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
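
    The competing-random-variable formulation described above can be illustrated with a short Monte Carlo sketch in which the human error probability is the probability that the crew's performance time exceeds the phenomenological time window. The distributions and parameter values below are stand-ins chosen for illustration; in the study they were derived from thermal-hydraulic simulation results and operator interviews.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 1_000_000

      # Assumed stand-in distributions, in minutes (illustrative only).
      phenomenological = rng.weibull(2.5, n) * 40.0   # time available before core damage
      performance = rng.lognormal(mean=np.log(18.0), sigma=0.45, size=n)   # time the crew needs

      hep = np.mean(performance > phenomenological)
      print(f"estimated human error probability: {hep:.3f}")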

  17. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.

  18. Reliability and relative validity of three physical activity questionnaires in Taizhou population of China: the Taizhou Longitudinal Study.

    PubMed

    Hu, B; Lin, L F; Zhuang, M Q; Yuan, Z Y; Li, S Y; Yang, Y J; Lu, M; Yu, S Z; Jin, L; Ye, W M; Wang, X F

    2015-09-01

    To examine the test-retest reliabilities and relative validities of the Chinese versions of the short International Physical Activity Questionnaire (IPAQ-S-C), the Global Physical Activity Questionnaire (GPAQ-C), and the Total Energy Expenditure Questionnaire (TEEQ-C) in a population-based prospective study, the Taizhou Longitudinal Study (TZLS). A longitudinal comparative study. A total of 205 participants (male: 38.54%) aged 30-70 years completed the three questionnaires twice (day one and day nine) and a physical activity log (PA-log) over seven consecutive days. The test-retest reliabilities were evaluated using intra-class correlation coefficients (ICCs) and the relative validities were estimated by comparing the data from the physical activity questionnaires (PAQs) and the PA-log. Good reliabilities were observed between the repeated PAQs. The ICCs ranged from 0.51 to 0.80 for IPAQ-S-C, 0.67 to 0.85 for GPAQ-C, and 0.74 to 0.94 for TEEQ-C, respectively. Energy expenditure in most PA domains estimated by the three PAQs correlated moderately with the results recorded by the PA-log, except for the walking domain of IPAQ-S-C. The partial correlation coefficients between the PAQs and the PA-log ranged from 0.44 to 0.58 for IPAQ-S-C, 0.26 to 0.52 for GPAQ-C, and 0.41 to 0.72 for TEEQ-C, respectively. Bland-Altman plots showed acceptable agreement between the three PAQs and the PA-log. The three PAQs, especially TEEQ-C, were relatively reliable and valid for the assessment of physical activity and could be used in TZLS. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among components placed in a line or in a circle. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, to compute the reliability of linear and circular systems with a large number of components.
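
    For the linear case, the failure probability the abstract refers to can be computed with a short dynamic program over the length of the trailing run of failed components; the system fails as soon as that run reaches k. The paper provides R code for this purpose; the sketch below is an independent Python illustration of the same quantity under the usual assumption of independent components with identical reliability.

    ```python
    def rel_consec_k_of_n_F_linear(n: int, k: int, p: float) -> float:
        """Reliability of a linear consecutive k-out-of-n:F system:
        the system fails iff at least k consecutive components fail.
        Components are assumed independent with identical reliability p."""
        q = 1.0 - p
        # dp[j] = P(system still working and trailing run of failed components has length j)
        dp = [1.0] + [0.0] * (k - 1)
        for _ in range(n):
            new = [0.0] * k
            new[0] = sum(dp) * p          # next component works: run resets to 0
            for j in range(k - 1):
                new[j + 1] = dp[j] * q    # next component fails: run grows, still < k
            dp = new                      # mass dp[k-1]*q (run reaches k) is system failure
        return sum(dp)

    # Example: 50 components in a line, reliability 0.95 each, failure if 3 in a row fail
    print(rel_consec_k_of_n_F_linear(50, 3, 0.95))
    ```

    A circular arrangement can be handled similarly by additionally conditioning on the run of failed components that wraps around the first position, at the cost of a slightly longer recursion.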

  20. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
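
    For the time-terminated case, the maximum-likelihood estimates of the Weibull (power-law) intensity parameters have a simple closed form, which the sketch below evaluates for an illustrative, entirely hypothetical set of cumulative failure times; the shuttle engine data groupings analyzed in the report are not reproduced here. A shape estimate below 1 indicates reliability growth.

    ```python
    import math

    def weibull_process_mle(failure_times, T):
        """MLEs for the power-law intensity u(t) = lam * beta * t**(beta - 1)
        of an inhomogeneous Poisson process, time-terminated testing on [0, T]."""
        n = len(failure_times)
        beta = n / sum(math.log(T / t) for t in failure_times)
        lam = n / T ** beta
        mtbf_at_T = 1.0 / (lam * beta * T ** (beta - 1))  # instantaneous MTBF at time T
        return beta, lam, mtbf_at_T

    # Hypothetical cumulative test hours at which failures occurred:
    times = [55.0, 160.0, 310.0, 620.0, 1100.0, 1900.0]
    beta, lam, mtbf = weibull_process_mle(times, T=2500.0)
    print(f"beta = {beta:.3f}, lambda = {lam:.5f}, current MTBF = {mtbf:.0f} h")
    ```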

  1. Top-down and bottom-up definitions of human failure events in human reliability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald Laurids

    2014-10-01

    In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditional human-factors-driven approaches tend to look first for opportunities for human error in a task analysis and then identify which of those errors are risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down, defined as a subset of the PRA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

  2. Reliability of Health-Related Physical Fitness Tests among Colombian Children and Adolescents: The FUPRECOL Study.

    PubMed

    Ramírez-Vélez, Robinson; Rodrigues-Bezerra, Diogo; Correa-Bautista, Jorge Enrique; Izquierdo, Mikel; Lobelo, Felipe

    2015-01-01

    Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in Latin American youth populations. This study's aim was to examine the reliability of the health-related physical fitness tests that were used in the Colombian health promotion "Fuprecol study". Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: 1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; 2) musculoskeletal component: handgrip and standing long jump test; 3) motor component: speed/agility test (4x10 m shuttle run); 4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); 5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed twice, 1 week apart on the same day of the week, except for the SRT, which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed for the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland-Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% in all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic error (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit-and-reach test (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the "Fuprecol

  3. Reliability of Health-Related Physical Fitness Tests among Colombian Children and Adolescents: The FUPRECOL Study

    PubMed Central

    Ramírez-Vélez, Robinson; Rodrigues-Bezerra, Diogo; Correa-Bautista, Jorge Enrique; Izquierdo, Mikel; Lobelo, Felipe

    2015-01-01

    Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in Latin American youth populations. This study’s aim was to examine the reliability of the health-related physical fitness tests that were used in the Colombian health promotion “Fuprecol study”. Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: 1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; 2) musculoskeletal component: handgrip and standing long jump test; 3) motor component: speed/agility test (4x10 m shuttle run); 4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); 5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed twice, 1 week apart on the same day of the week, except for the SRT, which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed for the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland–Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% in all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic error (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit-and-reach test (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the

  4. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method employing a knowledge base of human expertise, derived from a reliability model analysis and implemented in diagnostic routines, is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and which provides a knowledge base that can be used to solve other artificial intelligence problems.

  5. Validity and reliability of the Short Physical Performance Battery (SPPB)

    PubMed Central

    Curcio, Carmen-Lucía; Alvarado, Beatriz; Zunzunegui, María Victoria; Guralnik, Jack

    2013-01-01

    Objectives: To assess the validity (convergent and construct) and reliability of the Short Physical Performance Battery (SPPB) among non-disabled adults between 65 and 74 years of age residing in the Andes Mountains of Colombia. Methods: Design: Validation study. Participants: 150 subjects aged 65 to 74 years recruited from elderly associations (day-centers) in Manizales, Colombia. Measurements: The SPPB tests, namely balance, time to walk 4 meters, and time required to stand from a chair 5 times, were administered to all participants. Reliability was analyzed with a 7-day interval between assessments and use of repeated ANOVA testing. Construct validity was assessed using factor analysis and by testing the relationship between SPPB and depressive symptoms, cognitive function, and self-rated health (SRH), while concurrent validity was measured through relationships with mobility limitations and disability in Activities of Daily Living (ADL). ANOVA tests were used to establish these associations. Results: Test-retest reliability of the SPPB was high: 0.87 (CI95%: 0.77-0.96). A one-factor solution was found with the three SPPB tests. SPPB was related to self-rated health, limitations in walking and climbing steps, and indicators of disability, as well as to cognitive function and depression. There was a graded decrease in the mean SPPB score with increasing disability and poorer health. Conclusion: The Spanish version of the SPPB is reliable and valid for assessing physical performance among older adults from our region. Future studies should establish its clinical applications and explore usage in population studies. PMID:24892614

  6. Reliability and Validity of the Flemish Physical Activity Computerized Questionnaire in Adults

    ERIC Educational Resources Information Center

    Matton, Lynn; Wijndaele, Katrien; Duvigneaud, Nathalie; Duquet, William; Philippaerts, Renaat; Thomis, Martine; Lefevre, Johan

    2007-01-01

    The purpose of this study was to investigate the test-retest reliability and concurrent validity of the Flemish Physical Activity Computerized Questionnaire (FPACQ) in employed/unemployed and retired people. The FPACQ was developed to assess detailed information on several dimensions of physical activity and sedentary behavior over a usual week. A…

  7. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  8. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  9. An Examination of the Reliability and Factor Structure of the Physical Activity Scale for Individuals With Physical Disabilities (PASIPD) Among Individuals Living With Parkinson's Disease.

    PubMed

    Jimenez-Pardo, J; Holmes, J D; Jenkins, M E; Johnson, A M

    2015-07-01

    Physical activity is generally thought to be beneficial to individuals with Parkinson's disease (PD). There is, however, limited information regarding current rates of physical activity among individuals with PD, possibly due to a lack of well-validated measurement tools. In the current study we sampled 63 individuals (31 women) living with PD between the ages of 52 and 87 (M = 70.97 years, SD = 7.53), and evaluated the amount of physical activity in which they engaged over a 7-day period using a modified form of the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD). The PASIPD was demonstrated to be a reliable measure within this population, with three theoretically defensible factors: (1) housework and home-based outdoor activities; (2) recreational and fitness activities; and (3) occupational activities. These results suggest that the PASIPD may be useful for monitoring physical activity involvement among individuals with PD, particularly within large-scale questionnaire-based studies.

  10. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
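
    The abstract's combination of a response surface surrogate with sampling can be illustrated in miniature: draw the random variables, evaluate a surrogate strength model, and count the fraction of samples in which the buckling strength falls below the applied load. Every distribution, constant, and the surrogate expression below is an invented placeholder (the report fits a second-order response surface to buckling analyses of the actual cylinder); the sketch only shows the Monte Carlo step and the conversion of the failure probability to a reliability index.

    ```python
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(3)
    N = 1_000_000

    # Placeholder random variables (means and standard deviations are invented):
    E1   = rng.normal(18.5e6, 0.74e6, N)   # fiber-direction modulus [psi]
    t    = rng.normal(0.060, 0.0018, N)    # wall thickness [in]
    load = rng.normal(60.0e3, 9.0e3, N)    # applied axial load [lb]

    # Placeholder surrogate for axial buckling strength (not the fitted response surface):
    strength = 0.276 * E1 * t**1.5

    pf = np.mean(strength < load)          # probability of buckling failure
    beta = -NormalDist().inv_cdf(pf)       # corresponding reliability index
    print(f"P(failure) = {pf:.3e}, reliability index = {beta:.2f}")
    ```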

  11. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish the relationship between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several failure modes, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that probabilistic methods can eliminate existing over-design or under-design. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
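
    A minimal illustration of how a safety factor can be expressed through a required reliability, under the assumptions of the report's Part 1 (random actual stress, deterministic yield stress) and the additional assumption of a normally distributed stress, is:

    ```latex
    % Random stress S ~ N(\mu_S, \sigma_S), deterministic yield stress Y,
    % central safety factor s = Y / \mu_S, coefficient of variation V_S = \sigma_S / \mu_S.
    \[
      R \;=\; P(S \le Y)
        \;=\; \Phi\!\left(\frac{Y - \mu_S}{\sigma_S}\right)
        \;=\; \Phi\!\left(\frac{s - 1}{V_S}\right),
      \qquad\text{so}\qquad
      s \;=\; 1 + V_S\,\Phi^{-1}(R).
    \]
    ```

    For example, a required reliability of R = 0.999 with V_S = 0.10 gives s of roughly 1.31, which makes an otherwise ad hoc safety factor explicit in probabilistic terms.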

  12. Comparative analysis of different configurations of PLC-based safety systems from reliability point of view

    NASA Technical Reports Server (NTRS)

    Tapia, Moiez A.

    1993-01-01

    A comparative analysis of distinct multiplex and fault-tolerant configurations for a PLC-based safety system from a reliability point of view is presented. It considers simplex, duplex, and fault-tolerant triple-redundancy configurations. In the duplex configuration, the standby unit has a failure rate that is k times the failure rate of the main unit, with k varying from 0 to 1. For distinct values of the MTTR and MTTF of the main unit, the MTBF and availability of these configurations are calculated. The effect on the configuration MTBF of duplexing only the PLC module, or only the sensor and actuator modules, is also presented. The results are summarized, and the merits and demerits of the various configurations under distinct environments are discussed.
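
    As a rough companion to the duplex case, the sketch below estimates the mean time to failure of a two-unit warm-standby arrangement by Monte Carlo and checks it against the closed form. It assumes exponential failure times, perfect switchover, and no repair, which is a simplification of the repairable MTBF/availability models evaluated in the paper; the rates are illustrative.

    ```python
    import numpy as np

    def duplex_mttf_mc(lam=1e-3, k=0.3, n=200_000, seed=1):
        """Monte Carlo MTTF of a duplex with one warm standby and no repair.
        The active unit fails at rate lam; the standby fails at rate k*lam while
        idle and at rate lam once it takes over (perfect switching assumed)."""
        rng = np.random.default_rng(seed)
        t_first = rng.exponential(1.0 / ((1.0 + k) * lam), n)   # first unit lost
        t_second = rng.exponential(1.0 / lam, n)                # survivor runs at rate lam
        return float((t_first + t_second).mean())

    lam, k = 1e-3, 0.3
    print("simulated MTTF :", duplex_mttf_mc(lam, k))
    print("closed form    :", 1.0 / ((1.0 + k) * lam) + 1.0 / lam)
    ```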

  13. Identification of delamination failure of boride layer on common Cr-based steels

    NASA Astrophysics Data System (ADS)

    Taktak, Sukru; Tasgetiren, Suleyman

    2006-10-01

    Adhesion is an important aspect of the reliability of coated components. With low adhesion at the interfaces, different crack paths may develop depending on the local stress field at the interface and the fracture toughness of the coating, substrate, and interface. In the current study, an attempt has been made to identify the delamination failure of Cr-based steels coated by boronizing. For this reason, two commonly used steels (AISI H13, AISI 304) are considered; they contain 5.3 and 18.3 wt.% Cr, respectively. The boriding treatment is carried out in a slurry salt bath consisting of borax, boric acid, and ferrosilicon in the temperature range of 800-950 °C for 3, 5, and 7 h. The general properties of the boron coating are obtained by mechanical and metallographic characterization tests. For identification of coating layer failure, fracture toughness tests and the Daimler-Benz Rockwell-C adhesion test are used.

  14. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attributes and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a newly proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
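
    The workflow described above (attributes in, cycles to failure out, with a train/validation split) can be illustrated with a few lines of scikit-learn. The attribute names, the synthetic failure model, and all numbers below are hypothetical placeholders for the microengine data used in the research; the sketch only shows the shape of the approach.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical attributes: gap [um], drive voltage [V], humidity [%RH], duty cycle.
    X = rng.uniform([1.0, 20.0, 5.0, 0.1], [5.0, 120.0, 60.0, 1.0], size=(n, 4))
    # Synthetic "cycles to failure", purely for illustration.
    y = 1e6 * X[:, 0] / (X[:, 1] * X[:, 2] * X[:, 3]) * rng.lognormal(0.0, 0.2, n)

    # Train on the majority of the data, hold out the remainder for validation.
    X_tr, X_val, y_tr, y_val = train_test_split(X, np.log(y), test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    print("validation R^2 on log(cycles to failure):", round(model.score(X_val, y_val), 3))
    ```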

  15. Reliability of physical functioning tests in patients with low back pain: a systematic review.

    PubMed

    Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane

    2018-01-01

    The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was finalized in four different databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have an overall good inter-rater reliability. None of the identified clinical tests could be concluded to have a good intrarater reliability. Further investigation should focus on a better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the

  16. Demonstrating the Safety and Reliability of a New System or Spacecraft: Incorporating Analyses and Reviews of the Design and Processing in Determining the Number of Tests to be Conducted

    NASA Technical Reports Server (NTRS)

    Vesely, William E.; Colon, Alfredo E.

    2010-01-01

    Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.

  17. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Basic events include operator mistakes, physical damage, and design problems. The analysis methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. Mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
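
    Once minimal cut sets and basic-event probabilities are available, the top-event probability follows from a standard independence approximation. The cut sets and probabilities below are hypothetical stand-ins, not values from the Tehran West Town study; the sketch only shows the numerical step.

    ```python
    from math import prod

    # Hypothetical basic-event probabilities (per year), for illustration only.
    p = {
        "operator_error": 0.05,
        "aeration_blower_failure": 0.02,
        "clarifier_mechanical_failure": 0.01,
        "design_deficiency": 0.005,
        "power_outage": 0.03,
    }

    # Hypothetical minimal cut sets for the top event "effluent BOD violation".
    cut_sets = [
        {"operator_error"},
        {"aeration_blower_failure", "power_outage"},
        {"clarifier_mechanical_failure", "design_deficiency"},
    ]

    def top_event_probability(cut_sets, probs):
        """Assuming independent basic events, approximate the top-event
        probability as 1 - prod_i (1 - P(cut set i))."""
        return 1.0 - prod(1.0 - prod(probs[e] for e in cs) for cs in cut_sets)

    print(f"P(top event) = {top_event_probability(cut_sets, p):.4f}")
    ```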

  18. Evaluative frailty index for physical activity (EFIP): a reliable and valid instrument to measure changes in level of frailty.

    PubMed

    de Vries, Nienke M; Staal, J Bart; Olde Rikkert, Marcel G M; Nijhuis-van der Sanden, Maria W G

    2013-04-01

    Physical activity is assumed to be important in the prevention and treatment of frailty. It is unclear, however, to what extent frailty can be influenced because instruments designed to assess frailty have not been validated as evaluative outcome instruments in clinical practice. The aims of this study were: (1) to develop a frailty index (i.e., the evaluative frailty index for physical activity [EFIP]) based on the method of deficit accumulation and (2) to test the clinimetric properties of the EFIP. The content of the EFIP was determined using a written Delphi procedure. Intrarater reliability, interrater reliability, and construct validity were determined in an observational study (n=24). Intrarater reliability and interrater reliability were calculated using Cohen kappa and intraclass correlation coefficients (ICCs). Construct validity was determined by correlating the score on the EFIP with those on the timed "up & go" test (TUG), the performance-oriented mobility assessment (POMA), and the Cumulative Illness Rating Scale for Geriatrics (CIRS-G). Fifty items were included in the EFIP. Interrater reliability (Cohen kappa=0.72, ICC=.96) and intrarater reliability (Cohen kappa=0.77 and 0.80, ICC=.93 and .98) were good. As expected, a fair to moderate correlation with the TUG, POMA, and CIRS-G was found (.61, -.70, and .66, respectively). Reliability and validity of the EFIP have been tested in a small sample. These and other clinimetric properties, such as responsiveness, will be assessed or reassessed in a larger study population. The EFIP is a reliable and valid instrument to evaluate the effect of physical activity on frailty in research and in clinical practice.

  19. Reliability and validity of the international physical activity questionnaire for assessing walking.

    PubMed

    van der Ploeg, Hidde P; Tudor-Locke, Catrine; Marshall, Alison L; Craig, Cora; Hagströmer, Maria; Sjöström, Michael; Bauman, Adrian

    2010-03-01

    Physical inactivity and its accompanying adverse sequelae (e.g., obesity and diabetes) are global health concerns. The single most commonly reported physical activity in public health surveys is walking (Centers for Disease Control and Prevention, 2000; Rafferty, Reeves, McGee, & Pivarnik, 2002). As evidence accumulates that walking is important for preventing weight gain (Levine et al., 2008) and reducing the risk of diabetes (Jeon, Lokken, Hu, & van Dam, 2007), there is increased need to capture this behavior in a valid and reliable manner. Although the disadvantages of a self-report methodology are well known (Sallis & Saelens, 2000), it still represents the most feasible approach for conducting population-level surveillance across developed and developing countries. The International Physical Activity Questionnaire (IPAQ) was created and evaluated as a standardized instrument for this purpose. Although two versions of the IPAQ were designed and evaluated (short: nine items; and long: 31 items), the short form was recommended for population monitoring (Craig et al., 2003). However, it has not been recommended for intervention or research studies that require precise physical activity quantification to examine changes in physical activity at the individual level. IPAQ was also not intended to replace instruments that are more responsive to individual changes in activity level, such as objective measures. In addition to walking behaviors, IPAQ also assesses time spent in moderate- and vigorous-intensity activity as well as sitting behaviors, although the latter is not the focus of this analysis. Aggregated IPAQ data have been previously validated compared to accelerometers, and overall reliability was confirmed across 12 countries (Craig et al., 2003). Previous research showed criterion validity Spearman correlations with a median of 0.30 and test-retest reliability Spearman correlations clustered around 0.8 (Craig et al., 2003). The purpose of this study, however

  20. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  1. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
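
    A rough flavor of this kind of analysis can be reproduced with networkx: generate a Barabási-Albert topology of a few thousand nodes, remove a set of randomly failed nodes, and measure how much of the network stays connected. This simple random-removal index is a stand-in for, not a reproduction of, the failure propagation model and the power system reliability index used by the authors; the sizes and parameters below are arbitrary.

    ```python
    import random
    import networkx as nx

    def surviving_fraction(n=5000, m=2, n_failures=100, trials=20, seed=7):
        """Mean fraction of nodes left in the largest connected component of a
        Barabasi-Albert graph after removing n_failures random nodes."""
        random.seed(seed)
        fractions = []
        for t in range(trials):
            g = nx.barabasi_albert_graph(n, m, seed=seed + t)
            failed = random.sample(list(g.nodes), n_failures)
            g.remove_nodes_from(failed)
            largest = max(nx.connected_components(g), key=len)
            fractions.append(len(largest) / n)
        return sum(fractions) / trials

    print(f"mean surviving fraction after random failures: {surviving_fraction():.3f}")
    ```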

  2. Constructing the 'Best' Reliability Data for the Job - Developing Generic Reliability Data from Alternative Sources Early in a Product's Development Phase

    NASA Technical Reports Server (NTRS)

    Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.

    2016-01-01

    Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with a lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible, so analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia, and vendors across many areas have spent decades collecting, analyzing, and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but it can also be compartmented by industry and difficult to find out about, access, or manipulate. One method incorporates processes for reviewing these existing data sources, identifying the available information on similar equipment, and then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may be updated based on test data or observed operational constraints and failures, i.e., using Bayesian techniques. This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents

  3. Achieving High Reliability with People, Processes, and Technology.

    PubMed

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  4. Reliability of stiffened structural panels: Two examples

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Davis, D. Dale, Jr.; Maring, Lise D.; Krishnamurthy, Thiagaraja; Elishakoff, Isaac

    1992-01-01

    The reliability of two graphite-epoxy stiffened panels that contain uncertainties is examined. For one panel, the effect of an overall bow-type initial imperfection is studied. The size of the bow is assumed to be a random variable. The failure mode is buckling. The benefits of quality control are explored by using truncated distributions. For the other panel, the effect of uncertainties in a strain-based failure criterion is studied. The allowable strains are assumed to be random variables. A geometrically nonlinear analysis is used to calculate a detailed strain distribution near an elliptical access hole in a wing panel that was tested to failure. Calculated strains are used to predict failure. Results are compared with the experimental failure load of the panel.

  5. Reliability and Validity of Objective Measures of Physical Activity in Youth With Cerebral Palsy Who Are Ambulatory.

    PubMed

    O'Neil, Margaret E; Fragala-Pinkham, Maria; Lennon, Nancy; George, Ameeka; Forman, Jeffrey; Trost, Stewart G

    2016-01-01

    Physical therapy for youth with cerebral palsy (CP) who are ambulatory includes interventions to increase functional mobility and participation in physical activity (PA). Thus, reliable and valid measures are needed to document PA in youth with CP. The purpose of this study was to evaluate the inter-instrument reliability and concurrent validity of 3 accelerometer-based motion sensors with indirect calorimetry as the criterion for measuring PA intensity in youth with CP. Fifty-seven youth with CP (mean age=12.5 years, SD=3.3; 51% female; 49.1% with spastic hemiplegia) participated. Inclusion criteria were: aged 6 to 20 years, ambulatory, Gross Motor Function Classification System (GMFCS) levels I through III, able to follow directions, and able to complete the full PA protocol. Protocol activities included standardized activity trials with increasing PA intensity (resting, writing, household chores, active video games, and walking at 3 self-selected speeds), as measured by weight-relative oxygen uptake (in mL/kg/min). During each trial, participants wore bilateral accelerometers on the upper arms, waist/hip, and ankle and a portable indirect calorimeter. Intraclass correlation coefficients (ICCs) were calculated to evaluate inter-instrument reliability (left-to-right accelerometer placement). Spearman correlations were used to examine concurrent validity between accelerometer output (activity and step counts) and indirect calorimetry. Friedman analyses of variance with post hoc pair-wise analyses were conducted to examine the validity of accelerometers to discriminate PA intensity across activity trials. All accelerometers exhibited excellent inter-instrument reliability (ICC=.94-.99) and good concurrent validity (rho=.70-.85). All accelerometers discriminated PA intensity across most activity trials. This PA protocol consisted of controlled activity trials. Accelerometers provide valid and reliable measures of PA intensity among youth with CP. © 2016 American

  6. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  7. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half requires doubling the test and redesign time for finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
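
    The arithmetic above compounds quickly, as the short sketch below shows: if each remaining failure cause occurs at half the rate of the previous one, the expected test time needed to observe (and then fix) each successive cause doubles, so the cumulative test time grows roughly in proportion to the reciprocal of the final failure rate. The rates used are illustrative, not drawn from any specific program.

    ```python
    # Expected cumulative test time to observe each failure cause at least once,
    # assuming causes are removed one at a time in order of decreasing rate and
    # that observing a cause once is enough to fix it.  Rates are per 1000 hours.
    rates = [2.0, 1.0, 0.5, 0.25, 0.125]   # each cause half as frequent as the last

    cumulative = 0.0
    for lam in rates:
        cumulative += 1.0 / lam            # expected wait (MTBF) for this cause
        print(f"cause rate {lam:6.3f}/khr -> cumulative test time {cumulative:6.2f} khr")
    ```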

  8. Telemetry Option in the Measurement of Physical Activity for Patients with Heart Failure

    ERIC Educational Resources Information Center

    Melczer, Csaba; Melczer, László; Oláh, András; Sélleyné-Gyúró, Mónika; Welker, Zsanett; Ács, Pongrác

    2015-01-01

    Measurement of physical activity among patients with heart failure typically requires a special approach due to the patients' physical status. Nowadays, a technology is already available that can measure the kinematic movements in 3-D by a pacemaker and implantable defibrillator giving an assessment on software. The telemetry data can be…

  9. HitPredict version 4: comprehensive reliability scoring of physical protein-protein interactions from more than 100 species.

    PubMed

    López, Yosvany; Nakai, Kenta; Patil, Ashwini

    2015-01-01

    HitPredict is a consolidated resource of experimentally identified, physical protein-protein interactions with confidence scores to indicate their reliability. The study of genes and their inter-relationships using methods such as network and pathway analysis requires high quality protein-protein interaction information. Extracting reliable interactions from most of the existing databases is challenging because they either contain only a subset of the available interactions, or a mixture of physical, genetic and predicted interactions. Automated integration of interactions is further complicated by varying levels of accuracy of database content and lack of adherence to standard formats. To address these issues, the latest version of HitPredict provides a manually curated dataset of 398 696 physical associations between 70 808 proteins from 105 species. Manual confirmation was used to resolve all issues encountered during data integration. For improved reliability assessment, this version combines a new score derived from the experimental information of the interactions with the original score based on the features of the interacting proteins. The combined interaction score performs better than either of the individual scores in HitPredict as well as the reliability score of another similar database. HitPredict provides a web interface to search proteins and visualize their interactions, and the data can be downloaded for offline analysis. Data usability has been enhanced by mapping protein identifiers across multiple reference databases. Thus, the latest version of HitPredict provides a significantly larger, more reliable and usable dataset of protein-protein interactions from several species for the study of gene groups. Database URL: http://hintdb.hgc.jp/htp. © The Author(s) 2015. Published by Oxford University Press.

  10. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed

  11. Reliability-Based Life Assessment of Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Halford, Gary R.; Korovaichuk, Igor

    2004-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions require reliable design lifetimes of up to 14 yr. The structurally critical heater head of the high-efficiency Stirling power convertor has undergone extensive computational analysis of operating temperatures, stresses, and creep resistance of the thin-walled Inconel 718 bill of material. A preliminary assessment of the effect of uncertainties in the material behavior was also performed. Creep failure resistance of the thin-walled heater head could show variation due to small deviations in the manufactured thickness and in uncertainties in operating temperature and pressure. Durability prediction and reliability of the heater head are affected by these deviations from nominal design conditions. Therefore, it is important to include the effects of these uncertainties in predicting the probability of survival of the heater head under mission loads. Furthermore, it may be possible for the heater head to experience rare incidences of small temperature excursions of short duration. These rare incidences would affect the creep strain rate and, therefore, the life. This paper addresses the effects of such rare incidences on the reliability. In addition, the sensitivities of variables affecting the reliability are quantified, and guidelines developed to improve the reliability are outlined. Heater head reliability is being quantified with data from NASA Glenn Research Center's accelerated benchmark testing program.

  12. Validity and reliability of the International Physical Activity Questionnaire among adults in Mexico.

    PubMed

    Medina, Catalina; Barquera, Simón; Janssen, Ian

    2013-07-01

    To determine the test-retest reliability and validity of the Spanish version of the short-form International Physical Activity Questionnaire (IPAQ) among adults in Mexico. This was a cross-sectional study of a convenience sample of 267 adult factory workers in Mexico City. Participants were 19-68 years of age; 48% were female. Participants wore an accelerometer for 9 consecutive days and were administered the Spanish version of the short form IPAQ on two occasions (IPAQ1 and IPAQ2, separated by 9 days). The relation and differences between moderate-to-vigorous physical activity (MVPA) measures obtained from IPAQ1, IPAQ2, and the accelerometer were determined using correlations, linear regression, and paired t-tests. IPAQ1 and IPAQ2 measures of MVPA were significantly correlated to each other (r = 0.55, P < 0.01). However, MVPA was 44 ± 408 minutes/week lower in IPAQ1 than in IPAQ2, although this difference did not reach statistical significance (P = 0.08). The MVPA (min/week) measures from IPAQ1 and IPAQ2 were only modestly correlated with the accelerometer measures (r = 0.26 and r = 0.31, P < 0.01), and by comparison to accelerometer measures, MVPA values were higher when based on IPAQ1 (174 ± 357 min/week, P < 0.01) than for IPAQ2 (135 ± 360 min/week, P < 0.01). The percentage of participants who were classified as physically inactive according to the World Health Organization guidelines was 18.0% in IPAQ1, 25.1% in IPAQ2, and 28.2% based on the accelerometer. Similar to what has been observed in other populations, the short form IPAQ has modest reliability and poor validity for assessing MVPA among Mexican adults.

  13. How do cardiorespiratory fitness improvements vary with physical training modality in heart failure patients? A quantitative guide

    PubMed Central

    Smart, Neil A

    2013-01-01

    BACKGROUND: Peak oxygen consumption (VO2) is the gold standard measure of cardiorespiratory fitness and a reliable predictor of survival in chronic heart failure patients. Furthermore, any form of physical training usually improves cardiorespiratory fitness, although the magnitude of improvement in peak VO2 may vary across different training prescriptions. OBJECTIVE: To quantify, and subsequently rank, the magnitude of improvement in peak VO2 for different physical training prescriptions using data from published meta-analyses and randomized controlled trials. METHODS: Prospective randomized controlled parallel trials and meta-analyses of exercise training in chronic heart failure patients that provided data on change in peak VO2 for nine a priori comparative analyses were examined. RESULTS: All forms of physical training were beneficial, although the improvement in peak VO2 varied with modality. High-intensity interval exercise yielded the largest increase in peak VO2, followed in descending order by moderate-intensity aerobic exercise, functional electrical stimulation, inspiratory muscle training, combined aerobic and resistance training, and isolated resistance training. With regard to setting, the present study was unable to determine whether outpatient or unsupervised home exercise provided greater benefits in terms of peak VO2 improvement. CONCLUSIONS: Interval exercise is not suitable for all patients, especially the high-intensity variety; however, when indicated, this form of exercise should be adopted to optimize peak VO2 adaptations. Other forms of activity, such as functional electrical stimulation, may be more appropriate for patients who are not capable of high-intensity interval training, especially for severely deconditioned patients who are initially unable to exercise. PMID:24294043

  14. Assuring Electronics Reliability: What Could and Should Be Done Differently

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    The following “ten commandments” for the predicted and quantified reliability of aerospace electronic and photonic products are addressed and discussed: 1) The best product is the best compromise between the needs for reliability, cost effectiveness and time-to-market; 2) Reliability cannot be low, need not be higher than necessary, but has to be adequate for a particular product; 3) When reliability is imperative, the ability to quantify it is a must, especially if optimization is considered; 4) One cannot design a product with quantified, optimized and assured reliability by limiting the effort to highly accelerated life testing (HALT), which does not quantify reliability; 5) Reliability is conceived at the design stage and should be taken care of, first of all, at this stage, when a “genetically healthy” product should be created; reliability evaluations and assurances cannot be delayed until the product is fabricated and shipped to the customer, i.e., cannot be left to the prognostics-and-health-monitoring/managing (PHM) stage; it is too late at this stage to change the design or the materials for improved reliability; that is why, when reliability is imperative, users re-qualify parts to assess their lifetime and use redundancy to build a highly reliable system out of insufficiently reliable components; 6) Design, fabrication, qualification and PHM efforts should consider and be specific for particular products and their most likely actual or at least anticipated application(s); 7) Probabilistic design for reliability (PDfR) is an effective means for improving the state-of-the-art in the field: nothing is perfect, and the difference between an unreliable product and a robust one is “merely” the probability of failure (PoF); 8) Highly cost-effective and highly focused failure oriented accelerated testing (FOAT) geared to a particular pre-determined reliability model and aimed at understanding the physics of failure anticipated by this model is an

  15. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment

    PubMed Central

    Seo, Aria; Kim, Yeichang

    2017-01-01

    As the sharing economy market grows, the number of users is also increasing, but many problems arise in terms of reliability between providers and users in the processing of services. Existing methods provide sharing economy systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a sharing economy environment to solve these existing problems. In order to implement a system that can measure and control users’ situation in a sharing economy environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economy environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709

  16. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.

    PubMed

    Seo, Aria; Jeong, Junho; Kim, Yeichang

    2017-08-13

    As the sharing economy market grows, the number of users is also increasing, but many problems arise in terms of reliability between providers and users in the processing of services. Existing methods provide sharing economy systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a sharing economy environment to solve these existing problems. In order to implement a system that can measure and control users' situation in a sharing economy environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economy environment is implemented through analysis of the factors to consider when constructing a CPS.

  17. Reliability Issues and Solutions in Flexible Electronics Under Mechanical Fatigue

    NASA Astrophysics Data System (ADS)

    Yi, Seol-Min; Choi, In-Suk; Kim, Byoung-Joon; Joo, Young-Chang

    2018-07-01

    Flexible devices are of significant interest due to their potential expansion of the application of smart devices into various fields, such as energy harvesting, biological applications and consumer electronics. Due to the mechanically dynamic operations of flexible electronics, their mechanical reliability must be thoroughly investigated to understand their failure mechanisms and lifetimes. Reliability issues caused by bending fatigue, one of the typical operational limitations of flexible electronics, have been studied using various test methodologies; however, the electromechanical evaluations that are essential to assess the reliability of electronic devices for flexible applications had not been investigated because the testing method was not established. By employing the in situ bending fatigue test, we have studied the failure mechanism for various conditions and parameters, such as bending strain, fatigue area, film thickness, and lateral dimensions. Moreover, various methods for improving the bending reliability have been developed based on the failure mechanism. Nanostructures such as holes, pores, wires and composites of nanoparticles and nanotubes have been suggested for better reliability. Flexible devices were also investigated to find the potential failures initiated by complex structures under bending fatigue strain. In this review, the recent advances in test methodology, mechanism studies, and practical applications are introduced. Additionally, perspectives, including the future advance toward stretchable electronics, are discussed based on the current achievements in research.

  18. Reliability Issues and Solutions in Flexible Electronics Under Mechanical Fatigue

    NASA Astrophysics Data System (ADS)

    Yi, Seol-Min; Choi, In-Suk; Kim, Byoung-Joon; Joo, Young-Chang

    2018-03-01

    Flexible devices are of significant interest due to their potential expansion of the application of smart devices into various fields, such as energy harvesting, biological applications and consumer electronics. Due to the mechanically dynamic operations of flexible electronics, their mechanical reliability must be thoroughly investigated to understand their failure mechanisms and lifetimes. Reliability issues caused by bending fatigue, one of the typical operational limitations of flexible electronics, have been studied using various test methodologies; however, the electromechanical evaluations that are essential to assess the reliability of electronic devices for flexible applications had not been investigated because the testing method was not established. By employing the in situ bending fatigue test, we have studied the failure mechanism for various conditions and parameters, such as bending strain, fatigue area, film thickness, and lateral dimensions. Moreover, various methods for improving the bending reliability have been developed based on the failure mechanism. Nanostructures such as holes, pores, wires and composites of nanoparticles and nanotubes have been suggested for better reliability. Flexible devices were also investigated to find the potential failures initiated by complex structures under bending fatigue strain. In this review, the recent advances in test methodology, mechanism studies, and practical applications are introduced. Additionally, perspectives, including the future advance toward stretchable electronics, are discussed based on the current achievements in research.

  19. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2017-07-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated to Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425), and the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  20. Virtually-synchronous communication based on a weak failure suspector

    NASA Technical Reports Server (NTRS)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that an FS with very weak semantics (i.e., one that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; on top of it, (2a) a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.

  1. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

    The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Nowadays, micro-devices are commonly applied systems, especially in the automotive industry, taking advantage of utilizing both the mechanical structure and electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function, simultaneously satisfying the constraint on material strength. The restriction of the maximum equivalent stresses was introduced with the conditionally formulated objective function with a penalty component. The yielded results were successfully verified with a global uniform search through the input design domain.

  2. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2003-12-01

    MEMS technologies have been applied to many areas, such as optical communications, gyroscopes and bio-medical components. In terms of applications in the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by a silicon micro-machining process, precise fibre alignment and a sophisticated packaging process. Because a VOA is composed of many structures with various materials, it is difficult to make the devices reliable. We have developed MEMS type VOAs with many failure mode considerations (FMEA: Failure Mode and Effect Analysis) in the initial design step, predicted critical failure factors and revised the design, and confirmed the reliability by preliminary testing. These predicted failure factors were moisture, the bonding strength of the wire connecting the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-test and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  3. Investigation of improving MEMS-type VOA reliability

    NASA Astrophysics Data System (ADS)

    Hong, Seok K.; Lee, Yeong G.; Park, Moo Y.

    2004-01-01

    MEMS technologies have been applied to many areas, such as optical communications, gyroscopes and bio-medical components. In terms of applications in the optical communication field, MEMS technologies are essential, especially in multi-dimensional optical switches and Variable Optical Attenuators (VOAs). This paper describes the process for the development of MEMS type VOAs with good optical performance and improved reliability. Generally, MEMS VOAs have been fabricated by a silicon micro-machining process, precise fibre alignment and a sophisticated packaging process. Because a VOA is composed of many structures with various materials, it is difficult to make the devices reliable. We have developed MEMS type VOAs with many failure mode considerations (FMEA: Failure Mode and Effect Analysis) in the initial design step, predicted critical failure factors and revised the design, and confirmed the reliability by preliminary testing. These predicted failure factors were moisture, the bonding strength of the wire connecting the MEMS chip and the TO-CAN, and instability of the supplied signals. Statistical quality control tools (ANOVA, t-test and so on) were used to control these potential failure factors and produce optimum manufacturing conditions. To sum up, we have successfully developed reliable MEMS type VOAs with good optical performance by controlling potential failure factors and using statistical quality control tools. As a result, the developed VOAs passed international reliability standards (Telcordia GR-1221-CORE).

  4. Validity and Reliability of the Turkish Version of Needs Based Biopsychosocial Distress Instrument for Cancer Patients (CANDI)

    PubMed Central

    Beyhun, Nazim Ercument; Can, Gamze; Tiryaki, Ahmet; Karakullukcu, Serdar; Bulut, Bekir; Yesilbas, Sehbal; Kavgaci, Halil; Topbas, Murat

    2016-01-01

    Background Needs based biopsychosocial distress instrument for cancer patients (CANDI) is a scale based on needs arising due to the effects of cancer. Objectives The aim of this research was to determine the reliability and validity of the CANDI scale in the Turkish language. Patients and Methods The study was performed with the participation of 172 cancer patients aged 18 and over. Factor analysis (principal components analysis) was used to assess construct validity. Criterion validities were tested by computing Spearman correlation between CANDI and hospital anxiety depression scale (HADS), and brief symptom inventory (BSI) (convergent validity) and quality of life scales (FACT-G) (divergent validity). Test-retest reliabilities and internal consistencies were measured with intraclass correlation (ICC) and Cronbach-α. Results A three-factor solution (emotional, physical and social) was found with factor analysis. Internal reliability (α = 0.94) and test-retest reliability (ICC = 0.87) were significantly high. Correlations between CANDI and HADS (rs = 0.67), and BSI (rs = 0.69) and FACT-G (rs = -0.76) were moderate and significant in the expected direction. Conclusions CANDI is a valid and reliable scale in cancer patients with a three-factor structure (emotional, physical and social) in the Turkish language. PMID:27621931

  5. Landslide early warning based on failure forecast models: the example of Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-02-01

    We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements on different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method, and show an example of application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on the herein presented case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of the landslide emergency.
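
    A minimal sketch of the inverse-velocity idea described above (not the authors' implementation): the reciprocal of the surface velocity is fitted with a straight line over several temporal windows and extrapolated to zero to obtain a spread of forecast failure times. The monitoring data and window lengths are hypothetical.

      # Hedged sketch of an inverse-velocity (Fukuzono-type) failure forecast.
      # Hypothetical monitoring data: time in days, surface velocity in mm/day.
      import numpy as np

      t = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)
      v = np.array([2.0, 2.6, 3.4, 4.7, 6.8, 10.5, 18.0, 35.0])   # accelerating

      def forecast_failure_time(t, v):
          """Fit 1/v = a*t + b and return the zero crossing t_f = -b/a."""
          a, b = np.polyfit(t, 1.0 / v, 1)
          return -b / a

      # Repeat the fit on different trailing windows to get a spread of estimates.
      estimates = [forecast_failure_time(t[-n:], v[-n:]) for n in (4, 5, 6, 8)]
      print("forecast failure times (days):", np.round(estimates, 1))
      print("mean +/- std: %.1f +/- %.1f days" % (np.mean(estimates), np.std(estimates)))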

  6. Reliability and validity of a nutrition and physical activity environmental self-assessment for child care

    PubMed Central

    Benjamin, Sara E; Neelon, Brian; Ball, Sarah C; Bangdiwala, Shrikant I; Ammerman, Alice S; Ward, Dianne S

    2007-01-01

    Background Few assessment instruments have examined the nutrition and physical activity environments in child care, and none are self-administered. Given the emerging focus on child care settings as a target for intervention, a valid and reliable measure of the nutrition and physical activity environment is needed. Methods To measure inter-rater reliability, 59 child care center directors and 109 staff completed the self-assessment concurrently, but independently. Three weeks later, a repeat self-assessment was completed by a sub-sample of 38 directors to assess test-retest reliability. To assess criterion validity, a researcher-administered environmental assessment was conducted at 69 centers and was compared to a self-assessment completed by the director. A weighted kappa test statistic and percent agreement were calculated to assess agreement for each question on the self-assessment. Results For inter-rater reliability, kappa statistics ranged from 0.20 to 1.00 across all questions. Test-retest reliability of the self-assessment yielded kappa statistics that ranged from 0.07 to 1.00. The inter-quartile kappa statistic ranges for inter-rater and test-retest reliability were 0.45 to 0.63 and 0.27 to 0.45, respectively. When percent agreement was calculated, questions ranged from 52.6% to 100% for inter-rater reliability and 34.3% to 100% for test-retest reliability. Kappa statistics for validity ranged from -0.01 to 0.79, with an inter-quartile range of 0.08 to 0.34. Percent agreement for validity ranged from 12.9% to 93.7%. Conclusion This study provides estimates of criterion validity, inter-rater reliability and test-retest reliability for an environmental nutrition and physical activity self-assessment instrument for child care. Results indicate that the self-assessment is a stable and reasonably accurate instrument for use with child care interventions. We therefore recommend the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC
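
    The agreement statistics quoted above can be illustrated with a small sketch (hypothetical ratings, not study data) that computes a weighted kappa and percent agreement for a single self-assessment item scored by two raters.

      # Illustrative sketch (hypothetical ratings): weighted kappa and percent
      # agreement for one item scored by two raters on a 1-4 scale.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      director = np.array([3, 2, 4, 1, 3, 3, 2, 4, 1, 2])
      staff    = np.array([3, 2, 3, 1, 4, 3, 2, 4, 2, 2])

      kappa = cohen_kappa_score(director, staff, weights="linear")  # weighted kappa
      agreement = np.mean(director == staff) * 100                  # percent agreement
      print(f"weighted kappa = {kappa:.2f}, percent agreement = {agreement:.0f}%")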

  7. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
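
    As a toy illustration of the redundancy idea (not the optimal-robustness method of the paper), the sketch below forms parity relations for a static measurement model y = Cx + fault: any matrix W with WC = 0 yields residuals Wy that are insensitive to the unknown state and respond only to sensor faults. The matrices and fault values are hypothetical.

      # Toy parity-relation residual for failure detection (hypothetical numbers).
      import numpy as np

      C = np.array([[1.0, 0.0],      # four sensors measuring a 2-dimensional state
                    [0.0, 1.0],
                    [1.0, 1.0],
                    [1.0, -1.0]])

      # Rows of W span the left null space of C, i.e. W @ C = 0 (parity relations).
      _, _, Vt = np.linalg.svd(C.T)
      W = Vt[2:]                      # rows associated with zero singular values

      x = np.array([0.7, -1.2])                 # unknown true state
      fault = np.array([0.0, 0.0, 0.5, 0.0])    # bias fault on sensor 3
      y = C @ x + fault

      residual = W @ y                # ~0 if no fault is present
      print("||W C|| =", np.linalg.norm(W @ C))     # ~0: relations hold exactly
      print("residual:", np.round(residual, 3))     # nonzero: reveals the fault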

  8. Reliability and commercialization of oxidized VCSEL

    NASA Astrophysics Data System (ADS)

    Li, Alice; Pan, Jin-Shan; Lai, Horng-Ching; Lee, Bor-Lin; Wu, Jack; Lin, Yung-Sen; Huo, Tai-Chan; Wu, Calvin; Huang, Kai-Feng

    2003-06-01

    The reliability of the oxidized VCSEL is similar to that of the implanted VCSEL. This paper presents our work on reliability data for the oxidized VCSEL device and a comparison with the implanted VCSEL. The MTTF of the oxidized VCSEL is 2.73 × 10^6 hrs at 55°C and 6 mA, with a failure rate of ~1 FIT for the first 2 years of operation. The reliability data for the oxidized VCSEL, including activation energy, MTTF (mean time to failure), failure rate prediction, and 85°C/85% humidity test results, are presented below. Commercialization of the oxidized VCSEL is demonstrated in terms of VCSEL structure, manufacturing facility, and packaging. A cost-effective approach is key to its success in applications such as Datacomm.

  9. Nurses' decision making in heart failure management based on heart failure certification status.

    PubMed

    Albert, Nancy M; Bena, James F; Buxbaum, Denise; Martensen, Linda; Morrison, Shannon L; Prasun, Marilyn A; Stamp, Kelly D

    Research findings on the value of nurse certification were based on subjective perceptions or biased by correlations of certification status and global clinical factors. In heart failure, the value of certification is unknown. To examine the value of certification based on nurses' decision-making. Cross-sectional study of nurses who completed heart failure clinical vignettes that reflected decision-making in clinical heart failure scenarios. Statistical tests included multivariable linear, logistic and proportional odds logistic regression models. Of the nurses (N = 605), 29.1% were heart failure certified, 35.0% were certified in another specialty/job role and 35.9% were not certified. In multivariable modeling, nurses certified in heart failure (versus not heart failure certified) had higher clinical vignette scores (p = 0.002), reflecting higher evidence-based decision making; nurses with another specialty/role certification (versus no certification) did not (p = 0.62). Heart failure certification, but not certification in other specialty/job roles, was associated with decisions that reflected delivery of high-quality care. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  11. Physically-based failure analysis of shallow layered soil deposits over large areas

    NASA Astrophysics Data System (ADS)

    Cuomo, Sabatino; Castorino, Giuseppe Claudio; Iervolino, Aniello

    2014-05-01

    In the last decades, the analysis of slope stability conditions over large areas has become popular among scientists and practitioners (Cascini et al., 2011; Cuomo and Della Sala, 2013). This is due to the availability of new computational tools (Baum et al., 2002; Godt et al., 2008; Baum and Godt, 2012; Salciarini et al., 2012) - implemented in GIS (Geographic Information System) platforms - which allow taking into account the major hydraulic and mechanical issues related to slope failure, even for unsaturated soils, as well as the spatial variability of both topography and soil properties. However, the effectiveness of the above methods (Sorbino et al., 2010) is still controversial for landslide forecasting, especially depending on the accuracy of the DTM (Digital Terrain Model) and on the chance that distinct triggering mechanisms may occur over a large area. Among the major uncertainties, the layering of soil deposits is of primary importance due to soil layer conductivity contrasts and differences in shear strength. This work deals with the hazard analysis of shallow landslides over large areas, considering two distinct schematizations of soil stratigraphy, i.e. homogeneous or layered. To this purpose, the physically-based model TRIGRS (Baum et al., 2002) is firstly used, then extended to the case of a layered deposit: specifically, a unique set of hydraulic properties is assumed, while distinct soil unit weight and shear strength are considered for each soil layer. Both models are applied to a significant study area of Southern Italy, about 4 km2 in area, where shallow deposits of air-fall volcanic (pyroclastic) soils have been affected by several landslides, causing victims, damage and economic losses. The achieved results highlight that the soil volume globally mobilized over the study area highly depends on the local stratigraphy of the shallow deposits. This relates to the depth of the critical slip surface, which rarely corresponds to the bedrock contact where cohesionless coarse

  12. Degradation mechanisms in high-power multi-mode InGaAs-AlGaAs strained quantum well lasers for high-reliability applications

    NASA Astrophysics Data System (ADS)

    Sin, Yongkun; Presser, Nathan; Brodie, Miles; Lingley, Zachary; Foran, Brendan; Moss, Steven C.

    2015-03-01

    Laser diode manufacturers perform accelerated multi-cell lifetests to estimate lifetimes of lasers using an empirical model. Since state-of-the-art laser diodes typically require a long period of latency before they degrade, a significant amount of stress is applied to the lasers to generate failures in relatively short test durations. A drawback of this approach is the lack of mean-time-to-failure data under intermediate and low stress conditions, leading to uncertainty in model parameters (especially the optical power and current exponents) and potential overestimation of lifetimes at usage conditions. This approach is a concern especially for satellite communication systems, where high reliability is required of lasers over long durations in the space environment. A number of groups have studied reliability and degradation processes in GaAs-based lasers, but none of these studies have yielded a reliability model based on the physics of failure. The lack of such a model is also a concern for space applications, where a complete understanding of degradation mechanisms is necessary. Our present study addresses the aforementioned issues by performing long-term lifetests under low stress conditions followed by failure mode analysis (FMA) and physics of failure investigation. We performed low-stress lifetests on both MBE- and MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode to study low-stress degradation mechanisms. Our lifetests have accumulated over 36,000 test hours, and FMA is performed on failures using our angle polishing technique followed by EL. This technique allows us to identify failure types by observing dark line defects through a window introduced in backside metal contacts. We also investigated degradation mechanisms in MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers using various FMA techniques. Since it is a challenge to control defect densities during the growth of laser structures, we chose to

  13. The failure of earthquake failure models

    USGS Publications Warehouse

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differs or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.

  14. Establishing the reliability and concurrent validity of physical performance tests using virtual reality equipment for community-dwelling healthy elders.

    PubMed

    Griswold, David; Rockwell, Kyle; Killa, Carri; Maurer, Michael; Landgraff, Nancy; Learman, Ken

    2015-01-01

    The aim of this study was to determine the reliability and concurrent validity of commonly used physical performance tests using the OmniVR Virtual Rehabilitation System for healthy community-dwelling elders. Participants (N = 40) were recruited by the authors and were screened for eligibility. The initial method of measurement was randomized to either virtual reality (VR) or clinically based measures (CM). Physical performance tests included the five times sit to stand (5 × STS), Timed Up and Go (TUG), Forward Functional Reach (FFR) and 30-s stand test. A random number generator determined the testing order. The test-retest reliability for the VR and CM was determined. Furthermore, concurrent validity was determined using a Pearson product moment correlation (Pearson r). The VR demonstrated excellent reliability, with intraclass correlation coefficients (ICC(3,1)) of 0.931 for the 5 × STS, 0.846 for the FFR and 0.944 for the TUG. The concurrent validity data for the VR and CM (ICC(3,k)) were moderate for the FFR (ICC = 0.682), excellent for the 5 × STS (ICC = 0.889) and excellent for the TUG (ICC = 0.878). The concurrent validity of the 30-s stand test was good (ICC(3,1) = 0.735). This study supports the use of VR equipment for measuring physical performance tests in the clinic for healthy community-dwelling elders. Virtual reality equipment is not only used to treat balance impairments but is also used to measure and determine physical impairments through the use of physical performance tests. Virtual reality equipment is a reliable and valid tool for collecting physical performance data for the 5 × STS, FFR, TUG and 30-s stand test for healthy community-dwelling elders.
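
    A minimal sketch of how an ICC(3,1) of the kind quoted above can be computed from a subjects-by-sessions matrix using the usual two-way ANOVA decomposition; the repeated measurements are hypothetical.

      # Hedged sketch: ICC(3,1), two-way mixed, single measurement, consistency.
      import numpy as np

      def icc_3_1(Y):
          """Y: n subjects x k repeated measurements (e.g., test and retest)."""
          n, k = Y.shape
          grand = Y.mean()
          ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
          ss_err = np.sum((Y - Y.mean(axis=1, keepdims=True)
                             - Y.mean(axis=0, keepdims=True) + grand) ** 2)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

      # Hypothetical five-times-sit-to-stand times (s) measured twice per subject.
      Y = np.array([[11.2, 11.5], [9.8, 10.1], [14.3, 13.9], [12.0, 12.4],
                    [10.5, 10.2], [13.1, 13.5], [9.1, 9.4], [15.0, 14.6]])
      print("ICC(3,1) =", round(icc_3_1(Y), 3))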

  15. Reliability culture at La Silla Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Gonzalez, Sergio

    2010-07-01

    The Maintenance Department at the La Silla - Paranal Observatory has been an important base for keeping the operations of the observatory at a good level of reliability and availability. Several strategies have been implemented and improved in order to cover these requirements and keep the systems and equipment working properly when required. For that reason, one of the latest improvements has been the introduction of the concept of reliability, which involves much more than simply speaking about reliability concepts. It involves the use of technologies, data collection, data analysis, decision making, committees concentrated on the analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data and people dedicated to reliability analysis. This has permitted us to optimize our processes, analyze where we can improve, avoid functional failures and reduce the range of failures in several systems and subsystems; all this has had a positive impact in terms of results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.

  16. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry.

    PubMed

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Broda, Anja; Scholz, Markus

    2016-05-26

    Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.
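
    Lin's concordance correlation coefficient, the pairwise building block behind overall agreement indices such as the OCCC used above, can be sketched in a few lines; the scan and manual measurements below are hypothetical.

      # Hedged sketch of Lin's concordance correlation coefficient (CCC).
      import numpy as np

      def concordance_ccc(x, y):
          """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.mean((x - x.mean()) * (y - y.mean()))
          return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      # Hypothetical waist circumferences (cm): body scan vs. manual anthropometry.
      scan   = [81.2, 95.4, 70.8, 102.3, 88.0, 76.5, 99.1]
      manual = [80.5, 94.0, 71.5, 101.0, 87.2, 77.4, 97.8]
      print("CCC =", round(concordance_ccc(scan, manual), 3))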

  17. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
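
    A stripped-down illustration of the second-moment idea mentioned above (not the PFEM implementation described in the paper): with a linear limit state g = R - S and the first two moments of an independent resistance R and load effect S, the reliability index and the corresponding probability of failure follow directly. The moments below are hypothetical.

      # Hedged first-order, second-moment (FOSM) sketch for a limit state g = R - S.
      from math import sqrt
      from statistics import NormalDist

      mu_R, sd_R = 250.0, 20.0     # hypothetical resistance moments (e.g., MPa)
      mu_S, sd_S = 180.0, 30.0     # hypothetical load-effect moments (independent)

      beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # reliability (safety) index
      pf = NormalDist().cdf(-beta)                     # probability of failure
      print(f"beta = {beta:.2f}, Pf ~ {pf:.2e}")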

  18. Reliability analysis of airship remote sensing system

    NASA Astrophysics Data System (ADS)

    Qin, Jun

    1998-08-01

    The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a mixed complex system. Its sensor platform is a remote-control airship. The achievement of a remote sensing mission depends on a series of factors. For this reason, it is very important to analyze the reliability of the ARSS. In the first place, the system model was simplified from a multi-state system to a two-state system on the basis of the results of the failure mode and effect analysis and the failure mode, effect and criticality analysis. The failure tree was created after analyzing all factors and their interrelations. This failure tree includes four branches, i.e. the engine subsystem, remote control subsystem, airship construction subsystem, and flight meteorology and climate subsystem. By way of failure tree analysis and classification of basic events, the weak links were discovered. The results of test runs showed no difference in comparison with the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. The system's reliability was raised from 89 percent to 92 percent with the reformation of the man-machine interactive interface and the augmentation of the secondary better-groupie and the secondary remote control equipment.

  19. Validity and reliability of the Fels physical activity questionnaire for children.

    PubMed

    Treuth, Margarita S; Hou, Ningqi; Young, Deborah R; Maynard, L Michele

    2005-03-01

    The aim was to evaluate the reliability and validity of the Fels physical activity questionnaire (PAQ) for children 7-19 yr of age. A cross-sectional study was conducted among 130 girls and 99 boys in elementary (N=70), middle (N=81), and high (N=78) schools in rural Maryland. Weight and height were measured on the initial school visit. All the children then wore an Actiwatch accelerometer for 6 d. The Fels PAQ for children was given on two separate occasions to evaluate reliability and was compared with accelerometry data to evaluate validity. The reliability of the Fels PAQ for the girls, boys, and the elementary, middle, and high school age groups range was r=0.48-0.76. For the elementary school children, the correlation coefficient examining validity between the Fels PAQ total score and Actiwatch (counts per minute) was 0.34 (P=0.004). The correlation coefficients were lower in middle school (r=0.11, P=0.31) and high school (r=0.21, P=0.006) adolescents. The sport index of the Fels PAQ for children had the highest validity in the high school participants (r=0.34, P=0.002). The Fels PAQ for children is moderately reliable for all age groups of children. Validity of the Fels PAQ for children is acceptable for elementary and high school students when the total activity score or the sport index is used. The sport index was similar to the total score for elementary students but was a better measure of physical activity among high school students.

  20. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
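
    As a small worked example of the dimensionless grouping reconstructed above (notation assumed from context), the sketch computes eta = (V/D^3)(k/sqrt(gD)) for a hypothetical lake and erosion rate and reports which asymptotic regime it falls in.

      # Hedged sketch: dimensionless parameter eta = (V / D**3) * (k / sqrt(g * D)).
      from math import sqrt

      g = 9.81          # m/s^2
      V = 5.0e6         # hypothetical lake volume, m^3
      D = 20.0          # hypothetical lake depth, m
      k = 10.0 / 3600   # hypothetical mean breach downcutting rate, m/s (10 m/h)

      eta = (V / D**3) * (k / sqrt(g * D))
      print(f"eta = {eta:.3f}")
      print("asymptotic regime:", "eta << 1" if eta < 0.1 else
            "eta >> 1" if eta > 10.0 else "intermediate (neither asymptote applies)")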

  1. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    . This work is an early attempt that uses a physical-statistical modeling approach to studying selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor that is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier for reliable circuit design in nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high permittivity (k) dielectrics as an alternative to widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
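
    The k-out-of-n framework mentioned at the end can be illustrated with a small Monte Carlo sketch: if the dielectric is treated as failed once k of n possible soft-breakdown paths have formed, the total time to failure is the k-th order statistic of the individual breakdown times. The Weibull parameters below are hypothetical, not fitted to any test data.

      # Hedged Monte Carlo sketch of a k-out-of-n failure-time model for soft
      # breakdowns; Weibull parameters below are hypothetical, not fitted values.
      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 5, 3                      # system fails after k of n soft breakdowns
      shape, scale = 1.5, 1000.0       # hypothetical Weibull shape / scale (hours)

      samples = scale * rng.weibull(shape, size=(100_000, n))   # per-path times
      t_fail = np.sort(samples, axis=1)[:, k - 1]               # k-th order statistic

      print("mean time to k-th breakdown: %.0f h" % t_fail.mean())
      print("10th/90th percentiles: %.0f / %.0f h" % tuple(np.percentile(t_fail, [10, 90])))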

  2. Development and reliability of an audit tool to assess the school physical activity environment across 12 countries

    PubMed Central

    Broyles, S T; Drazba, K T; Church, T S; Chaput, J-P; Fogelholm, M; Hu, G; Kuriyan, R; Kurpad, A; Lambert, E V; Maher, C; Maia, J; Matsudo, V; Olds, T; Onywera, V; Sarmiento, O L; Standage, M; Tremblay, M S; Tudor-Locke, C; Zhao, P; Katzmarzyk, P T

    2015-01-01

    Objectives: Schools are an important setting to enable and promote physical activity. Researchers have created a variety of tools to perform objective environmental assessments (or ‘audits') of other settings, such as neighborhoods and parks; yet, methods to assess the school physical activity environment are less common. The purpose of this study is to describe the approach used to objectively measure the school physical activity environment across 12 countries representing all inhabited continents, and to report on the reliability and feasibility of this methodology across these diverse settings. Methods: The International Study of Childhood Obesity, Lifestyle and the Environment (ISCOLE) school audit tool (ISAT) data collection required an in-depth training (including field practice and certification) and was facilitated by various supporting materials. Certified data collectors used the ISAT to assess the environment of all schools enrolled in ISCOLE. Sites completed a reliability audit (simultaneous audits by two independent, certified data collectors) for a minimum of two schools or at least 5% of their school sample. Item-level agreement between data collectors was assessed with both the kappa statistic and percent agreement. Inter-rater reliability of school summary scores was measured using the intraclass correlation coefficient. Results: Across the 12 sites, 256 schools participated in ISCOLE. Reliability audits were conducted at 53 schools (20.7% of the sample). For the assessed environmental features, inter-rater reliability (kappa) ranged from 0.37 to 0.96; 18 items (42%) were assessed with almost perfect reliability (κ=0.80–0.96), and a further 24 items (56%) were assessed with substantial reliability (κ=0.61–0.79). Likewise, scores that summarized a school's support for physical activity were highly reliable, with the exception of scores assessing aesthetics and perceived suitability of the school grounds for sport, informal games and general

  3. The intra- and inter-observer reliability of the physical examination methods used to assess patients with patellofemoral joint instability.

    PubMed

    Smith, Toby O; Clark, Allan; Neda, Sophia; Arendt, Elizabeth A; Post, William R; Grelsamer, Ronald P; Dejour, David; Almqvist, Karl Fredrik; Donell, Simon T

    2012-08-01

    An accurate physical examination of patients with patellar instability is an important aspect of diagnosis and treatment. While previous studies have assessed the diagnostic accuracy of such physical examination tests, little has been done to assess the inter- and intra-tester reliability of such techniques. The purpose of this study was to determine the inter- and intra-tester reliability of the physical examination tests used for patients with patellar instability. Five patients (10 knees) with bilateral recurrent patellar instability were assessed by five members of the International Patellofemoral Study Group. Each surgeon assessed each patient twice using 18 reported physical examination tests. The inter- and intra-observer reliability was assessed using weighted Kappa statistics with 95% confidence intervals. The findings of the study suggested that there was very poor inter-observer reliability for the majority of the physical tests, with only the assessments of patellofemoral crepitus, foot arch position and the J-sign presenting with fair to moderate agreement. The intra-observer reliability indicated largely moderate to substantial agreement between the first and second tests performed by each assessor, with the greatest agreement seen for the assessment of tibial torsion, popliteal angle and Bassett's sign. For the common physical examination tests used in the management of patients with patellar instability, inter-observer reliability is poor, while intra-observer reliability is moderate. Standardization of physical examination assessments and further study of these results among different clinicians and more divergent patient groups is indicated. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.

  5. Reliability and Validity of the Physical Education Activities Scale.

    PubMed

    Thomason, Diane L; Feng, Du

    2016-06-01

    Measuring adolescent perceptions of physical education (PE) activities is necessary for understanding determinants of school PE activity participation. This study assessed the reliability and validity of the Physical Education Activities Scale (PEAS), a 41-item visual analog scale measuring high school adolescents' perceptions of school PE activity participation. Adolescents (N = 529) from the Pacific Northwest aged 15-19 in grades 9-12 participated in the study. Construct validity was assessed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Measurement invariance across sex groups was tested by multiple-group CFA. Internal consistency reliability was analyzed using Cronbach's alpha. Inter-subscale correlations (Pearson's r) were calculated for latent factors and observed subscale scores. Exploratory factor analysis suggested a 3-factor solution explaining 43.4% of the total variance. Confirmatory factor analysis showed the 3-factor model fit the data adequately (comparative fit index [CFI] = 0.90, Tucker-Lewis index [TLI] = 0.89, root mean squared error of approximation [RMSEA] = 0.063). Factorial invariance was supported. Cronbach's alpha for the total PEAS was α = 0.92, and for the subscales α ranged from 0.65 to 0.92. Independent t-tests showed significantly higher mean scores for boys than girls on the total scale and all subscales. Findings provide psychometric support for using the PEAS to examine adolescents' psychosocial and environmental perceptions of participating in PE activities. © 2016, American School Health Association.
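
    Since the scale's internal consistency is summarized with Cronbach's alpha, a short sketch of the computation may be helpful; the item responses below are hypothetical.

      # Hedged sketch of Cronbach's alpha for a set of scale items (hypothetical data).
      import numpy as np

      def cronbach_alpha(items):
          """items: respondents x items matrix of scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      # Hypothetical responses of six adolescents to four visual-analog items (0-100).
      scores = np.array([[70, 65, 72, 68], [40, 45, 38, 42], [90, 85, 88, 92],
                         [55, 60, 58, 52], [30, 35, 28, 33], [75, 70, 78, 74]])
      print("Cronbach's alpha =", round(cronbach_alpha(scores), 2))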

  6. Probabilistic and structural reliability analysis of laminated composite structures based on the IPACS code

    NASA Technical Reports Server (NTRS)

    Sobel, Larry; Buttitta, Claudio; Suarez, James

    1993-01-01

    Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  7. Medial tibial stress syndrome can be diagnosed reliably using history and physical examination.

    PubMed

    Winters, M; Bakker, E W P; Moen, M H; Barten, C C; Teeuwen, R; Weir, A

    2017-02-08

    The majority of sporting injuries are clinically diagnosed using history and physical examination as the cornerstone. There are no studies supporting the reliability of making a clinical diagnosis of medial tibial stress syndrome (MTSS). Our aim was to assess if MTSS can be diagnosed reliably, using history and physical examination. We also investigated if clinicians were able to reliably identify concurrent lower leg injuries. A clinical reliability study was performed at multiple sports medicine sites in The Netherlands. Athletes with non-traumatic lower leg pain were assessed for having MTSS by two clinicians, who were blinded to each others' diagnoses. We calculated the prevalence, percentage of agreement, observed percentage of positive agreement (Ppos), observed percentage of negative agreement (Pneg) and Kappa-statistic with 95%CI. Forty-nine athletes participated in this study, of whom 46 completed both assessments. The prevalence of MTSS was 74%. The percentage of agreement was 96%, with Ppos and Pneg of 97% and 92%, respectively. The inter-rater reliability was almost perfect; k=0.89 (95% CI 0.74 to 1.00), p<0.000001. Of the 34 athletes with MTSS, 11 (32%) had a concurrent lower leg injury, which was reliably noted by our clinicians, k=0.73, 95% CI 0.48 to 0.98, p<0.0001. Our findings show that MTSS can be reliably diagnosed clinically using history and physical examination, in clinical practice and research settings. We also found that concurrent lower leg injuries are common in athletes with MTSS. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  8. Development of an adaptive failure detection and identification system for detecting aircraft control element failures

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1990-01-01

    A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.

  9. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space is composed of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability at a lower computational cost than traditional Monte Carlo-based methods. PMID:28231313

  10. A stochastic hybrid systems based framework for modeling dependent failure processes.

    PubMed

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space is composed of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability at a lower computational cost than traditional Monte Carlo-based methods.
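
    Once the conditional moments of the continuous degradation state are available, the FOSM estimate and the Markov-inequality lower bound on reliability are both one-line computations. The sketch below uses illustrative moment values and a single scalar degradation threshold; it does not reproduce the coupled SHS moment equations from the paper.

```python
from math import erf, sqrt

def fosm_reliability(mean, var, threshold):
    """FOSM estimate of P(X < threshold), treating X as approximately normal(mean, var)."""
    beta = (threshold - mean) / sqrt(var)         # reliability index
    return 0.5 * (1.0 + erf(beta / sqrt(2.0)))    # standard normal CDF at beta

def markov_lower_bound(mean, threshold):
    """Markov inequality for a non-negative degradation variable:
    P(X >= threshold) <= mean / threshold, hence R >= 1 - mean / threshold."""
    return max(0.0, 1.0 - mean / threshold)

# Illustrative conditional moments of a degradation state at one mission time
mean, var, threshold = 2.0, 0.25, 3.0
print("FOSM estimate:", fosm_reliability(mean, var, threshold))
print("Markov lower bound:", markov_lower_bound(mean, threshold))
```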

  11. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system has been analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, then the computed reliability parameters have a wide range of predictions. Therefore, the decision-maker cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique. With this technique, fuzzy set theory is utilized to quantify uncertainties, a fault tree is utilized for system modeling, the lambda-tau method is utilized to formulate mathematical expressions for failure/repair rates of the system, and a genetic algorithm is utilized to solve the established nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed, and the impact on system mean time between failures (MTBF) is addressed by varying other reliability parameters. Based on the analysis, some influential suggestions are given to improve the system performance.
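
    For context, the lambda-tau part of such a hybrid approach reduces fault-tree gates to closed-form expressions; for an OR gate the system failure rate is the sum of the component failure rates. The sketch below applies only that expression to triangular fuzzy failure rates via alpha-cut interval arithmetic; the +/-15% spreads are illustrative, and the fault-tree construction and genetic-algorithm optimization described in the abstract are not reproduced.

```python
def alpha_cut(tfn, a):
    """Interval [lo, hi] of a triangular fuzzy number (l, m, u) at membership level a."""
    l, m, u = tfn
    return (l + a * (m - l), u - a * (u - m))

def or_gate_failure_rate(tfns, a):
    """Lambda-tau OR-gate rule, lambda_sys = sum(lambda_i), applied to alpha-cut intervals."""
    cuts = [alpha_cut(t, a) for t in tfns]
    return (sum(lo for lo, _ in cuts), sum(hi for _, hi in cuts))

# Illustrative component failure rates (per hour) with +/-15% expert-assigned spreads
rates = [(0.85e-4, 1.0e-4, 1.15e-4), (1.7e-4, 2.0e-4, 2.3e-4)]
for a in (0.0, 0.5, 1.0):
    print(a, or_gate_failure_rate(rates, a))
```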

  12. Interrater reliability of the cervicothoracic and shoulder physical examination in patients with a primary complaint of shoulder pain.

    PubMed

    Burns, Scott A; Cleland, Joshua A; Carpenter, Kristin; Mintken, Paul E

    2016-03-01

    Examine the interrater reliability of cervicothoracic and shoulder physical examination in patients with a primary complaint of shoulder pain. Single-group repeated-measures design for interrater reliability. Orthopaedic physical therapy clinics. Twenty-one patients with a primary complaint of shoulder pain underwent a standardized examination by a physical therapist (PT). A PT conducted the first examination and one of two additional PTs conducted the 2nd examination. The Cohen κ and weighted κ were used to calculate the interrater reliability of ordinal level data. Intraclass correlation coefficients model 2,1 (ICC2,1) and the 95% confidence intervals were calculated to determine the interrater reliability. The kappa coefficients ranged from -.24 to .83 for the mobility assessment of the glenohumeral, acromioclavicular and sternoclavicular joints. The kappa coefficients ranged from -.20 to .58 for joint mobility assessment of the cervical and thoracic spine. The kappa coefficients ranged from .23 to 1.0 for special tests of the shoulder and cervical spine. The present study reported the reliability of a comprehensive upper quarter physical examination for a group of patients with a primary report of shoulder pain. The reliability varied considerably for the cervical and shoulder examination and was significantly higher for the examination of muscle length and cervical range of motion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (a series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
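
    A quick way to explore such stack-level reliability requirements is a Monte Carlo sketch over the 5 × 5 configuration. The Weibull parameters and the assumption that a set of 5 parallel stacks still functions with 4 survivors are illustrative choices, not values from the analysis described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def system_reliability(t_hours, shape=2.0, scale=40_000.0, k_required=4, n_sim=100_000):
    """Monte Carlo reliability at time t of 5 series sets, each of 5 parallel stacks.
    Assumes i.i.d. Weibull(shape, scale) stack lifetimes and that a set functions
    while at least k_required of its 5 stacks survive (both are assumptions)."""
    life = scale * rng.weibull(shape, size=(n_sim, 5, 5))   # lifetimes [sim, set, stack]
    stacks_alive = (life > t_hours).sum(axis=2)             # surviving stacks per set
    sets_ok = stacks_alive >= k_required                    # each set needs k of 5
    return sets_ok.all(axis=1).mean()                       # all 5 series sets must be ok

print(system_reliability(20_000.0))
```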

  14. Reliability Evaluation of Base-Metal-Electrode (BME) Multilayer Ceramic Capacitors for Space Applications

    NASA Technical Reports Server (NTRS)

    Liu, David (Donghang)

    2011-01-01

    This paper reports reliability evaluation of BME ceramic capacitors for possible high reliability space-level applications. The study is focused on the construction and microstructure of BME capacitors and their impacts on the capacitor life reliability. First, the examinations of the construction and microstructure of commercial-off-the-shelf (COTS) BME capacitors show great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and approximately 0.5 micrometers, which is much less than that of most PME capacitors. The primary reason that a BME capacitor can be fabricated with more internal electrode layers and a smaller dielectric layer thickness is that it has a fine-grained microstructure and does not shrink much during ceramic sintering. This gives BME capacitors a very high volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT) and regular life testing as per MIL-PRF-123. Most BME capacitors were found to fail with an early dielectric wearout, followed by a rapid wearout failure mode during the HALT test. When most of the early wearout failures were removed, BME capacitors exhibited a minimum mean time-to-failure of more than 10(exp 5) years. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically between 10 and 20. This may suggest that the number of grains per dielectric layer is more critical than the thickness itself for determining the rated voltage and the life

  15. Development, content validity and test-retest reliability of the Lifelong Physical Activity Skills Battery in adolescents.

    PubMed

    Hulteen, Ryan M; Barnett, Lisa M; Morgan, Philip J; Robinson, Leah E; Barton, Christian J; Wrotniak, Brian H; Lubans, David R

    2018-03-28

    Numerous skill batteries assess fundamental motor skill (e.g., kick, hop) competence. Few skill batteries examine lifelong physical activity skill competence (e.g., resistance training). This study aimed to develop and assess the content validity, test-retest and inter-rater reliability of the "Lifelong Physical Activity Skills Battery". Development of the skill battery occurred in three stages: i) systematic reviews of lifelong physical activity participation rates and existing motor skill assessment tools, ii) practitioner consultation and iii) research expert consultation. The final battery included eight skills: grapevine, golf swing, jog, push-up, squat, tennis forehand, upward dog and warrior I. Adolescents (28 boys, 29 girls; M = 15.8 years, SD = 0.4 years) completed the Lifelong Physical Activity Skills Battery on two occasions two weeks apart. The skill battery was highly reliable (ICC = 0.84, 95% CI = 0.72-0.90) with individual skill reliability scores ranging from moderate (warrior I; ICC = 0.56) to high (tennis forehand; ICC = 0.82). Typical error (4.0; 95% CI 3.4-5.0) and proportional bias (r = -0.21, p = .323) were low. This study has provided preliminary evidence for the content validity and reliability of the Lifelong Physical Activity Skills Battery in an adolescent population.

  16. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  17. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing a credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, as well as the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
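
    Markov reliability models of the kind described above are commonly evaluated by exponentiating the generator matrix of the state-transition process. The toy model below is a duplex system with imperfect fault coverage and assumed rates; it only illustrates the mechanics, not the large-state-space fault-handling models discussed in the paper.

```python
import numpy as np
from scipy.linalg import expm

lam, c = 1e-4, 0.99      # per-hour failure rate and fault-coverage probability (illustrative)

# Generator matrix over states [2 units good, 1 unit good, system failed]; failed is absorbing
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
    [0.0,      -lam,          lam              ],
    [0.0,       0.0,          0.0              ],
])

p0 = np.array([1.0, 0.0, 0.0])   # both units healthy at t = 0
t = 10_000.0                     # mission time in hours
p_t = p0 @ expm(Q * t)           # transient state probabilities
print("reliability:", p_t[0] + p_t[1])
```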

  18. Statistical Physics of Cascading Failures in Complex Networks

    NASA Astrophysics Data System (ADS)

    Panduranga, Nagendra Kumar

    Systems such as the power grid, world wide web (WWW), and internet are categorized as complex systems because of the presence of a large number of interacting elements. For example, the WWW is estimated to have a billion webpages and understanding the dynamics of such a large number of individual agents (whose individual interactions might not be fully known) is a challenging task. Complex network representations of these systems have proved to be of great utility. Statistical physics is the study of emergence of macroscopic properties of systems from the characteristics of the interactions between individual molecules. Hence, statistical physics of complex networks has been an effective approach to study these systems. In this dissertation, I have used statistical physics to study two distinct phenomena in complex systems: i) Cascading failures and ii) Shortest paths in complex networks. Understanding cascading failures is considered to be one of the "holy grails" in the study of complex systems such as the power grid, transportation networks, and economic systems. Studying failures of these systems as percolation on complex networks has proved to be insightful. Previously, cascading failures have been studied extensively using two different models: k-core percolation and interdependent networks. The first part of this work combines the two models into a general model, solves it analytically, and validates the theoretical predictions through extensive computer simulations. The phase diagram of the percolation transition has been systematically studied as one varies the average local k-core threshold and the coupling between networks. The phase diagram of the combined processes is very rich and includes novel features that do not appear in the models which study each of the processes separately. For example, the phase diagram consists of first- and second-order transition regions separated by two tricritical lines that merge together and enclose a two
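
    The k-core pruning step of the cascading-failure model is straightforward to reproduce numerically: occupy a random fraction of nodes, then iteratively remove nodes of degree below k. The sketch below does this on an Erdős-Rényi graph with illustrative parameters; it covers only the k-core part, not the interdependent-network coupling studied in the dissertation.

```python
import random
import networkx as nx

def surviving_k_core(n=10_000, avg_deg=6.0, k=3, p_keep=0.7, seed=1):
    """Fraction of nodes left in the k-core after random node removal (ER graph)."""
    random.seed(seed)
    g = nx.fast_gnp_random_graph(n, avg_deg / (n - 1), seed=seed)
    removed = [v for v in g.nodes if random.random() > p_keep]
    g.remove_nodes_from(removed)                # initial random damage
    core = nx.k_core(g, k)                      # iterative pruning of degree < k nodes
    return core.number_of_nodes() / n

for p in (0.5, 0.6, 0.7, 0.8):
    print(p, surviving_k_core(p_keep=p))        # abrupt appearance of the k-core
```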

  19. Probabilistic simulation of the human factor in structural reliability

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Chamis, Christos C.

    1991-01-01

    Structural failures have occasionally been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Factors such as societal, physical, professional, psychological, and many others introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte-Carlo simulation.
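
    A direct Monte Carlo treatment of such a multifactor model can be sketched by sampling each human-factor multiplier from an assumed distribution and propagating it through a nominal error probability. The multiplicative form and all distributions below are assumptions for illustration, not the model referenced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def human_error_samples(n=100_000, nominal=1e-3):
    """Direct Monte Carlo over illustrative human-factor multipliers of a nominal error rate."""
    fatigue = rng.lognormal(mean=0.0, sigma=0.3, size=n)      # assumed distribution
    training = rng.uniform(0.5, 1.0, size=n)                  # assumed distribution
    stress = rng.triangular(0.8, 1.0, 2.0, size=n)            # assumed distribution
    return np.clip(nominal * fatigue * training * stress, 0.0, 1.0)

p = human_error_samples()
print("mean:", p.mean(), "5th-95th percentile:", np.quantile(p, [0.05, 0.95]))
```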

  20. Assessing the environmental characteristics of cycling routes to school: a study on the reliability and validity of a Google Street View-based audit.

    PubMed

    Vanwolleghem, Griet; Van Dyck, Delfien; Ducheyne, Fabian; De Bourdeaudhuij, Ilse; Cardon, Greet

    2014-06-10

    Google Street View provides a valuable and efficient alternative to observe the physical environment compared to on-site fieldwork. However, studies on the use, reliability and validity of Google Street View in a cycling-to-school context are lacking. We aimed to study the intra-, inter-rater reliability and criterion validity of EGA-Cycling (Environmental Google Street View Based Audit - Cycling to school), a newly developed audit using Google Street View to assess the physical environment along cycling routes to school. Parents (n = 52) of 11-to-12-year old Flemish children, who mostly cycled to school, completed a questionnaire and identified their child's cycling route to school on a street map. Fifty cycling routes of 11-to-12-year olds were identified and physical environmental characteristics along the identified routes were rated with EGA-Cycling (5 subscales; 37 items), based on Google Street View. To assess reliability, two researchers performed the audit. Criterion validity of the audit was examined by comparing the ratings based on Google Street View with ratings through on-site assessments. Intra-rater reliability was high (kappa range 0.47-1.00). Large variations in the inter-rater reliability (kappa range -0.03-1.00) and criterion validity scores (kappa range -0.06-1.00) were reported, with acceptable inter-rater reliability values for 43% of all items and acceptable criterion validity for 54% of all items. EGA-Cycling can be used to assess physical environmental characteristics along cycling routes to school. However, to assess the micro-environment specifically related to cycling, on-site assessments have to be added.

  1. Physics-Based Methods of Failure Analysis and Diagnostics in Human Space Flight

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Luchinsky, Dmitry Georgievich; Hafiychuk, Vasyl Nmn; Osipov, Viatcheslav V.; Patterson-Hine, F. Ann

    2010-01-01

    Integrated Health Management (IHM) for future aerospace systems requires interfacing models of multiple subsystems in an efficient and accurate information environment at the early stages of system design. The complexity of modern aeronautic and aircraft systems (including, e.g., power distribution, flight control, and solid and liquid motors) dictates the employment of hybrid models and high-level reasoners for analysing mixed continuous and discrete information flows involving multiple modes of operation in uncertain environments, unknown state variables, and heterogeneous software and hardware components. To provide the information link between key design/performance parameters and high-level reasoners, we rely on the development of multi-physics performance models, distributed sensor networks, and fault diagnostic and prognostic (FD&P) technologies in close collaboration with system designers. The main challenges of our research are related to the in-flight assessment of structural stability, engine performance, and trajectory control. The main goal is to develop an intelligent IHM that not only enhances component and system reliability, but also provides post-flight feedback that helps to optimize the design of the next generation of aerospace systems. Our efforts are concentrated on several directions of research. One of the key components of our strategy is an innovative approach to diagnostics/prognostics based on real-time dynamical inference (DI) technologies extended to encompass hybrid systems with hidden state trajectories. The major investments are in multiphysics performance modelling, which gives the FD&P technologies access to the main performance parameters of, e.g., solid and liquid rocket motors and the composite materials of the nozzle and case. Some of the recent results of our research are discussed in this chapter. We begin by introducing the problem of dynamical inference of stochastic nonlinear models and reviewing earlier

  2. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.

  3. Field Programmable Gate Array Failure Rate Estimation Guidelines for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    The complex electronic and avionics systems of today's launch vehicles heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (ICs) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid valve actuations. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.

  4. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  5. Reliability and validity of a school recess physical activity recall in Spanish youth.

    PubMed

    Martínez-Gómez, David; Calabro, M Andres; Welk, Gregory J; Marcos, Ascension; Veiga, Oscar L

    2010-05-01

    Recess is a frequent target in school-based physical activity (PA) promotion research but there are challenges in assessing PA during this time period. The purpose of this study was to evaluate the reliability and validity of a recess PA recall (RPAR) instrument designed to assess total PA and time spent in moderate to vigorous PA (MVPA) during recess. One hundred twenty-five 7th and 8th-grade students (59 females), aged 12-14 years, participated in the study. Activity levels were objectively monitored on Mondays using different activity monitors (Yamax Digiwalker, Biotrainer and ActiGraph). On Tuesdays, 2 RPAR self-reports were administered within 1 hr. Test-retest reliability showed ICC = 0.87 and 0.88 for total PA and time spent in MVPA, respectively. The RPAR was correlated against Yamax (r = .35), Biotrainer (r = .40 and 0.54) and ActiGraph (r = .42) to assess total PA during recess. The RPAR was also correlated against ActiGraph (r = .54) to assess time spent in MVPA during recess. The mean difference between the RPAR and ActiGraph in assessing time spent in MVPA during recess was not significant (2.15 +/- 3.67 min, p = .313). The RPAR showed adequate reliability and reasonable validity for assessing PA during school recess in youth.

  6. Reliability and Validity of the Self- and Interviewer-Administered Versions of the Global Physical Activity Questionnaire (GPAQ)

    PubMed Central

    Chu, Anne H. Y.; Ng, Sheryl H. X.; Koh, David; Müller-Riemenschneider, Falk

    2015-01-01

    Objective The Global Physical Activity Questionnaire (GPAQ) was originally designed to be interviewer-administered by the World Health Organization in assessing physical activity. The main aim of this study was to compare the psychometric properties of a self-administered GPAQ with the original interviewer-administered approach. Additionally, this study explored whether using different accelerometry-based physical activity bout definitions might affect the questionnaire’s validity. Methods A total of 110 participants were recruited and randomly allocated to an interviewer- (n = 56) or a self-administered (n = 54) group for test-retest reliability, of which 108 participants who met the wear time criteria were included in the validity study. Reliability was assessed by administration of questionnaires twice with a one-week interval. Criterion validity was assessed by comparing against seven-day accelerometer measures. Two definitions for accelerometry-data scoring were employed: (1) total-min of activity, and (2) 10-min bout. Results Participants had similar baseline characteristics in both administration groups and no significant difference was found between the two formats in terms of validity (correlations between the GPAQ and accelerometer). For validity, the GPAQ demonstrated fair-to-moderate correlations for moderate-to-vigorous physical activity (MVPA) for self-administration (rs = 0.30) and interviewer-administration (rs = 0.46). Findings were similar when considering 10-min activity bouts in the accelerometer analysis for MVPA (rs = 0.29 vs. 0.42 for self vs. interviewer). Within each mode of administration, the strongest correlations were observed for vigorous-intensity activity. However, Bland-Altman plots illustrated bias toward overestimation for higher levels of MVPA, vigorous- and moderate-intensity activities, and underestimation for lower levels of these measures. Reliability for MVPA revealed moderate correlations (rs = 0.61 vs. 0.63 for self

  7. Stability, Nonlinearity and Reliability of Electrostatically Actuated MEMS Devices

    PubMed Central

    Zhang, Wen-Ming; Meng, Guang; Chen, Di

    2007-01-01

    Electrostatic micro-electro-mechanical system (MEMS) is a special branch with a wide range of applications in sensing and actuating devices in MEMS. This paper provides a survey and analysis of the electrostatic force of importance in MEMS, its physical model, scaling effect, stability, nonlinearity and reliability in detail. It is necessary to understand the effects of electrostatic forces in MEMS and then many phenomena of practical importance, such as pull-in instability and the effects of effective stiffness, dielectric charging, stress gradient, temperature on the pull-in voltage, nonlinear dynamic effects and reliability due to electrostatic forces occurred in MEMS can be explained scientifically, and consequently the great potential of MEMS technology could be explored effectively and utilized optimally. A simplified parallel-plate capacitor model is proposed to investigate the resonance response, inherent nonlinearity, stiffness softened effect and coupled nonlinear effect of the typical electrostatically actuated MEMS devices. Many failure modes and mechanisms and various methods and techniques, including materials selection, reasonable design and extending the controllable travel range used to analyze and reduce the failures are discussed in the electrostatically actuated MEMS devices. Numerical simulations and discussions indicate that the effects of instability, nonlinear characteristics and reliability subjected to electrostatic forces cannot be ignored and are in need of further investigation.
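
    The simplified parallel-plate capacitor model mentioned above has a closed-form pull-in voltage, which is a convenient sanity check in such analyses. The sketch below evaluates the classic one-degree-of-freedom expression with illustrative MEMS-scale numbers, not parameters from the paper.

```python
from math import sqrt

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Pull-in voltage of a 1-DOF parallel-plate electrostatic actuator:
    V_pi = sqrt(8*k*g0^3 / (27*eps0*A)); instability occurs at a deflection of g0/3."""
    return sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Illustrative values: spring constant 1 N/m, 2 um gap, 100 um x 100 um plate
print(pull_in_voltage(k=1.0, gap=2e-6, area=100e-6 * 100e-6), "V")
```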

  8. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
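
    The relationship between spares and reliability sketched above can be made concrete with a Poisson model: if a unit fails at a constant rate, the probability that a given number of spares covers the mission follows the Poisson cumulative distribution. The failure rate, mission duration and reliability goal below are illustrative, not values from the paper.

```python
from math import exp, factorial

def prob_spares_sufficient(failure_rate, mission_hours, spares):
    """Probability that the spares cover all failures of one unit, assuming a
    homogeneous Poisson failure process with the given constant rate."""
    mu = failure_rate * mission_hours
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(spares + 1))

def spares_needed(failure_rate, mission_hours, goal=0.99):
    """Smallest spare count meeting the reliability goal (single unit, no redundancy)."""
    s = 0
    while prob_spares_sufficient(failure_rate, mission_hours, s) < goal:
        s += 1
    return s

# Illustrative: one failure per 10,000 h, over an assumed 21,000-hour mission profile
print(spares_needed(1e-4, 21_000, goal=0.99))
```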

  9. Polish adaptation and reliability testing of the nine-item European Heart Failure Self-care Behaviour Scale (9-EHFScBS).

    PubMed

    Uchmanowicz, Izabella; Wleklik, Marta

    According to the guidelines of the European Society of Cardiology, education in heart failure (HF) should focus on preparing the patient for self-control and self-care. Only systematic assessment of the level of self-care in HF enables the optimisation and adaptation of education to meet the patient's needs. The research tool commonly used to assess self-care in HF patients is the nine-item European Heart Failure Self-care Behaviour Scale (9-EHFScBS). To test the reliability of the Polish version of the 9-EHFScBS. A standard guideline was used for the translation and cultural adaptation of the English version of the 9-EHFScBS into Polish. The study included 110 Polish patients (mean age 66.0 ± 11.4 years); 51 men and 59 women. Cronbach's alpha was used for the analysis of the internal consistency of the 9-EHFScBS. The mean overall level of self-care in the study group was 27.65 points (SD 7.13 points). Good or satisfactory levels of self-care were found in three out of nine analysed variables. The reliability of the self-care scale was alpha = 0.787. The value of Cronbach's alpha after the exclusion of individual statements ranged from 0.75 to 0.81. The 9-EHFScBS questionnaire is a reliable research tool in assessing the level of self-care among patients with HF in the Polish population.

  10. The Physical Education and School Sport Environment Inventory: Preliminary Validation and Reliability

    ERIC Educational Resources Information Center

    Fairclough, Stuart J.; Hilland, Toni A.; Vinson, Don; Stratton, Gareth

    2012-01-01

    The study purpose was to assess preliminary validity and reliability of the Physical Education and School Sport Environment Inventory (PESSEI), which was designed to audit physical education (PE) and school sport spaces and resources. PE teachers from eight English secondary schools completed the PESSEI. Criterion validity was assessed by…

  11. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend to even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
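
    One common way to combine engineering judgment (for example, from similarity analysis) with test data in the Bayesian manner described above is a conjugate Beta-binomial update. The prior parameters and test counts below are assumptions for illustration only.

```python
from scipy.stats import beta

# Prior belief from similarity/engineering analysis, expressed as pseudo-counts (assumed)
a0, b0 = 9.0, 1.0            # roughly "9 successes, 1 failure" worth of prior evidence

# Test evidence: n trials with f failures (assumed)
n, f = 20, 1
a, b = a0 + (n - f), b0 + f  # conjugate Beta posterior

print("posterior mean reliability:", a / (a + b))
print("one-sided 90% lower credible bound:", beta.ppf(0.10, a, b))
```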

  12. Feasibility and reliability of pocket-size ultrasound examinations of the pleural cavities and vena cava inferior performed by nurses in an outpatient heart failure clinic.

    PubMed

    Dalen, Havard; Gundersen, Guri H; Skjetne, Kyrre; Haug, Hilde H; Kleinau, Jens O; Norekval, Tone M; Graven, Torbjorn

    2015-08-01

    Routine assessment of volume state by ultrasound may improve follow-up of heart failure patients. We aimed to study the feasibility and reliability of focused pocket-size ultrasound examinations of the pleural cavities and the inferior vena cava performed by nurses to assess volume state at an outpatient heart failure clinic. Ultrasound examinations were performed in 62 included heart failure patients by specialized nurses with a pocket-size imaging device (PSID). Patients were then re-examined by a cardiologist with a high-end scanner for reference within 1 h. Specialized nurses were able to obtain and interpret images from both pleural cavities and the inferior vena cava and estimate the volume status in all patients. Time consumption for focused ultrasound examination was median 5 min. In total 26 patients had any kind of pleural effusion (in 39 pleural cavities) by reference. The sensitivity, specificity, positive and negative predictive values were high, all ≥ 92%. The correlations with reference were high for all measurements, all r ≥ 0.79. Coefficients of variation for end-expiratory dimension of inferior vena cava and quantification of pleural effusion were 10.8% and 12.7%, respectively. Specialized nurses were, after a dedicated training protocol, able to obtain reliable recordings of both pleural cavities and the inferior vena cava by PSID and interpret the images in a reliable way. Implementing focused ultrasound examinations to assess volume status by nurses in an outpatient heart failure clinic may improve diagnostics, and thus improve therapy. © The European Society of Cardiology 2014.

  13. [Reliability theory based on quality risk network analysis for Chinese medicine injection].

    PubMed

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A new risk analysis method based upon reliability theory was introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. The risk events, including both cause and effect ones, were derived in the framework as nodes with a Bayesian network analysis approach. It thus transforms the risk analysis results from failure mode and effect analysis (FMEA) into a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate the system reliability quantitatively with probabilistic analytical approaches. Using network analysis tools such as GeNie and AgenaRisk, we are able to find the nodes that most critically influence the system reliability. The importance of each node to the system can be quantitatively evaluated by calculating the effect of the node on the overall risk, and a minimization plan can be determined accordingly to reduce their influence and improve the system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a case study, we analyzed the quality risk with both static FMEA analysis and dynamic Bayesian Network analysis. The potential risk factors for the quality of Shengmai injection manufacturing were identified with the network analysis platform. Quality assurance actions were further defined to reduce the risk and improve the product quality.
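
    The node-importance idea described above can be illustrated with a toy two-cause Bayesian network evaluated by brute-force enumeration: compare the marginal failure probability with and without each root cause eliminated. All probabilities below are invented for illustration; the actual Shengmai network, and tools such as GeNie or AgenaRisk, are not reproduced.

```python
from itertools import product

# Toy network: two independent root causes feeding one quality-failure node (numbers invented)
P_STERIL_FAULT = 0.02
P_MATERIAL_FAULT = 0.05
P_FAIL = {(0, 0): 0.001, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.60}  # P(failure | s, m)

def p_failure(fix_steril=None, fix_material=None):
    """Marginal P(quality failure), optionally forcing a root cause to a fixed state."""
    total = 0.0
    for s, m in product((0, 1), repeat=2):
        if fix_steril is not None and s != fix_steril:
            continue
        if fix_material is not None and m != fix_material:
            continue
        w_s = 1.0 if fix_steril is not None else (P_STERIL_FAULT if s else 1 - P_STERIL_FAULT)
        w_m = 1.0 if fix_material is not None else (P_MATERIAL_FAULT if m else 1 - P_MATERIAL_FAULT)
        total += w_s * w_m * P_FAIL[(s, m)]
    return total

print("baseline risk:", p_failure())
print("risk with sterilization fault eliminated:", p_failure(fix_steril=0))
print("risk with raw-material fault eliminated:", p_failure(fix_material=0))
```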

  14. Analysis of reliability for multi-ring interconnection of RPR networks

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Jin, Depeng; Zeng, Lieguang; Li, Yong

    2008-11-01

    In this paper, the reliability and MTTF (Mean Time to Failure) for multi-ring RPR (Resilient Packet Ring) are calculated under the conditions of single-link failures, double-link failures and no failure, respectively. Parameters such as the total number of stations N, the number of sub-rings R, and the distribution of Ni, the number of stations in the i-th sub-ring (1<=i<=R), are contained in the formulas. The relationship between the reliability/MTTF and the parameters N, R and Ni is analyzed. The result shows that the reliability/MTTF of the RPR multi-ring increases as the variance of Ni decreases. It is also proved, using the Lagrange multiplier method, that the reliability/MTTF is maximal when Ni = Nj (i ≠ j and 1<=i, j<=R), i.e., the optimal reliability of the multi-ring RPR is attained when var(Ni) = 0.

  15. Validity and reliability of the Physical Activity Scale for the Elderly (PASE) in Japanese elderly people.

    PubMed

    Hagiwara, Akiko; Ito, Naomi; Sawai, Kazuhiko; Kazuma, Keiko

    2008-09-01

    In Japan, there are no valid and reliable physical activity questionnaires for elderly people. In this study, we translated the Physical Activity Scale for the Elderly (PASE) into Japanese and assessed its validity and reliability. Three hundred and twenty-five healthy elderly subjects aged over 65 years were enrolled. Concurrent validity was evaluated by Spearman's rank correlation coefficient between PASE scores and an accelerometer (walking steps and energy expenditure), a physical activity questionnaire for adults in general (the Japan Arteriosclerosis Longitudinal Study Physical Activity Questionnaire, JALSPAQ), grip strength, mid-thigh muscle area per bodyweight, static balance and body fat percentage. Reliability was evaluated by the test-retest method over a period of 3-4 weeks. The mean PASE score in this study was 114.9. The PASE score was significantly correlated with walking steps (rho = 0.17, P = 0.014), energy expenditure (rho = 0.16, P = 0.024), activity measured with the JALSPAQ (rho = 0.48, P < 0.001), mid-thigh muscle area per bodyweight (rho = 0.15, P = 0.006) and static balance (rho = 0.19, P = 0.001). The proportion of consistency in the response between the first and second surveys was adequately high. The intraclass correlation coefficient for the PASE score was 0.65. The Japanese version of PASE was shown to have acceptable validity and reliability. The PASE is useful to measure the physical activity of elderly people in Japan.

  16. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project
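
    At its core, the PDA approach evaluates a physics-based limit state with sampled inputs and counts violations. The sketch below is a generic capacity-versus-demand example with assumed distributions; it is not the Ares I model or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(n_sim=1_000_000):
    """Monte Carlo estimate of P(g < 0) for an illustrative limit state g = capacity - demand."""
    capacity = rng.normal(100.0, 8.0, n_sim)            # assumed capacity distribution
    demand = rng.lognormal(np.log(70.0), 0.12, n_sim)   # assumed demand distribution
    g = capacity - demand
    pf = (g < 0).mean()                                 # fraction of sampled failures
    se = np.sqrt(pf * (1 - pf) / n_sim)                 # sampling standard error
    return pf, se

print(failure_probability())
```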

  17. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
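
    For reference, the plain vertex method that the OVM improves on evaluates the response at every corner of each alpha-cut box and takes the minimum and maximum. The sketch below applies it to an illustrative strain-energy-style function of a fuzzy load and stiffness; the OVM's reduced evaluation scheme and the possibility-to-probability transformation are not reproduced.

```python
from itertools import product

def alpha_cut(tfn, a):
    """Interval of a triangular fuzzy number (l, m, u) at membership level a."""
    l, m, u = tfn
    return (l + a * (m - l), u - a * (u - m))

def vertex_method(f, fuzzy_inputs, alphas=(0.0, 0.5, 1.0)):
    """Brute-force vertex method: evaluate f at every corner of each alpha-cut box.
    Valid when f is monotonic in each variable over the cuts."""
    out = {}
    for a in alphas:
        corners = product(*(alpha_cut(t, a) for t in fuzzy_inputs))
        vals = [f(*c) for c in corners]
        out[a] = (min(vals), max(vals))
    return out

# Illustrative fuzzy load P and stiffness k; response ~ P^2 / (2k) (strain-energy-like)
response = lambda P, k: P**2 / (2.0 * k)
print(vertex_method(response, [(900.0, 1000.0, 1100.0), (45.0, 50.0, 55.0)]))
```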

  18. Microcircuit Device Reliability. Digital Failure Rate Data

    DTIC Science & Technology

    1981-01-01

    IIT Research Institute, under contract to Rome Air Development Center, Griffiss AFB, NY 13441 (Ordering No. MDR-17). MDR-17 presents comparisons between actual field-experienced failure rates and MIL-HDBK-217C, Notice 1, predicted failure rates.

  19. Validity and Reliability of the Persian Version of Baecke Habitual Physical Activity Questionnaire in Healthy Subjects.

    PubMed

    Sadeghisani, Meissam; Dehghan Manshadi, Farideh; Azimi, Hadi; Montazeri, Ali

    2016-09-01

    Baecke Habitual Physical Activity Questionnaire (BHPAQ) has widely been employed in clinical and laboratorial studies as a tool for measuring subjects' physical activities. But, the reliability and validity of this questionnaire have not been investigated among Persian speakers. Therefore, the aim of the current study was examining the reliability and validity of the Persian version of the BHPAQ in healthy Persian adults. After following the process of forward-backward translation, 32 subjects were invited to fill out the Persian version of the questionnaire in two independent sessions (3 - 7 days after the first session) in order to determine the reliability index. Also, the validity of the questionnaire was assessed through concurrent validity by 126 subjects (66 males and 60 females) answering both the Baecke and the International Physical Activity Questionnaire (IPAQ). An acceptable level of intraclass correlation coefficient (ICC of work score = 0.95, sport score = 0.93, and leisure score = 0.77) was achieved for the Persian Baecke questionnaire. Correlations between Persian Baecke and IPAQ with and without the score for sitting position were found to be 0.19 and 0.36, respectively. The Persian version of the BHPAQ is a reliable and valid instrument that can be used to measure the level of habitual functional activities in Persian-speaking subjects.

  20. Assessing the validity and reliability of family factors on physical activity: A case study in Turkey.

    PubMed

    Steenson, Sharalyn; Özcebe, Hilal; Arslan, Umut; Konşuk Ünlü, Hande; Araz, Özgür M; Yardim, Mahmut; Üner, Sarp; Bilir, Nazmi; Huang, Terry T-K

    2018-01-01

    Childhood obesity rates have been rising rapidly in developing countries. A better understanding of the risk factors and social context is necessary to inform public health interventions and policies. This paper describes the validation of several measurement scales for use in Turkey, which relate to child and parent perceptions of physical activity (PA) and enablers and barriers of physical activity in the home environment. The aim of this study was to assess the validity and reliability of several measurement scales in Turkey using a population sample across three socio-economic strata in the Turkish capital, Ankara. Surveys were conducted in Grade 4 children (mean age = 9.7 years for boys; 9.9 years for girls), and their parents, across 6 randomly selected schools, stratified by SES (n = 641 students, 483 parents). Construct validity of the scales was evaluated through exploratory and confirmatory factor analysis. Internal consistency of scales and test-retest reliability were assessed by Cronbach's alpha and intra-class correlation. The scales as a whole were found to have acceptable-to-good model fit statistics (PA Barriers: RMSEA = 0.076, SRMR = 0.0577, AGFI = 0.901; PA Outcome Expectancies: RMSEA = 0.054, SRMR = 0.0545, AGFI = 0.916, and PA Home Environment: RMSEA = 0.038, SRMR = 0.0233, AGFI = 0.976). The PA Barriers subscales showed good internal consistency and poor to fair test-retest reliability (personal α = 0.79, ICC = 0.29, environmental α = 0.73, ICC = 0.59). The PA Outcome Expectancies subscales showed good internal consistency and test-retest reliability (negative α = 0.77, ICC = 0.56; positive α = 0.74, ICC = 0.49). Only the PA Home Environment subscale on support for PA was validated in the final confirmatory model; it showed moderate internal consistency and test-retest reliability (α = 0.61, ICC = 0.48). This study is the first to validate measures of perceptions of physical activity and the physical activity home environment in Turkey

  1. Reliable actuators for twin rotor MIMO system

    NASA Astrophysics Data System (ADS)

    Rao, Vidya S.; V. I, George; Kamath, Surekha; Shreesha, C.

    2017-11-01

    The Twin Rotor MIMO System (TRMS) is a benchmark system for testing flight control algorithms. One of the perturbations on the TRMS that is likely to affect the control system is actuator failure. Therefore, there is a need for a reliable control system, which includes an H-infinity controller along with redundant actuators. Reliable control refers to the design of a control system to tolerate failures of a certain set of actuators or sensors while retaining desired control system properties. The output of the reliable controller has to be transferred to the redundant actuator effectively to make the TRMS reliable even under an actual actuator failure.

  2. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE PAGES

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; ...

    2017-01-24

    We report that many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Lastly, although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  3. Reliability of specific physical examination tests for the diagnosis of shoulder pathologies: a systematic review and meta-analysis.

    PubMed

    Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian

    2017-03-01

    Shoulder pain in the general population is common and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, Allied and Complementary Medicine Database (AMED) and Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used as specific physical examination tests for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between included studies hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set plotted by differential equations and verified. Through referring forward, the reliability value of the control unit is determined in different kinds of modes. Finally, weak nodes are noted in the control unit. PMID:29765629
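
    As an illustration of the Markov-process half of the approach described above (a minimal sketch, not the authors' DBN implementation; the three states and transition rates below are hypothetical), the state probabilities of a non-repairable multi-state element can be propagated directly from a generator matrix:

      # Minimal sketch: state probabilities of a 3-state element
      # (0 = good, 1 = degraded, 2 = failed) under a continuous-time
      # Markov chain in which the failed state is absorbing.
      import numpy as np
      from scipy.linalg import expm

      # Hypothetical transition rates (per hour); illustrative values only.
      lam_01 = 1e-3   # good -> degraded
      lam_02 = 2e-4   # good -> failed
      lam_12 = 5e-4   # degraded -> failed

      # Generator matrix Q: each row sums to zero; the failed state is absorbing.
      Q = np.array([
          [-(lam_01 + lam_02), lam_01,  lam_02],
          [0.0,               -lam_12,  lam_12],
          [0.0,                0.0,     0.0   ],
      ])

      p0 = np.array([1.0, 0.0, 0.0])        # start in the good state
      for t in (100.0, 1000.0, 5000.0):     # hours
          p_t = p0 @ expm(Q * t)            # p(t) = p(0) exp(Q t)
          reliability = p_t[0] + p_t[1]     # any non-failed state counts
          print(f"t = {t:6.0f} h  p(t) = {np.round(p_t, 4)}  R(t) = {reliability:.4f}")

    Repair, imperfect repair and condition-based maintenance would enter as additional transition terms in Q; the DBN described in the paper encodes the corresponding one-step transition probabilities in its conditional probability tables.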

  5. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set plotted by differential equations and verified. Through referring forward, the reliability value of the control unit is determined in different kinds of modes. Finally, weak nodes are noted in the control unit.

  6. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew D.; Grabaskas, David; Brunett, Acacia J.

    2016-01-01

    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended due to deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Centering on an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive reactor cavity cooling system following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. While this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability for the reactor cavity cooling system (and the reactor system in general) to the postulated transient event.

  7. Physically Based Failure Criteria for Transverse Matrix Cracking

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2003-01-01

    A criterion for matrix failure of laminated composite plies in transverse tension and in-plane shear is developed by examining the mechanics of transverse matrix crack growth. Matrix cracks are assumed to initiate from manufacturing defects and can propagate within planes parallel to the fiber direction and normal to the ply mid-plane. Fracture mechanics models of cracks in unidirectional laminates, embedded plies and outer plies are developed to determine the onset and direction of propagation for unstable crack growth. The models for each ply configuration relate ply thickness and ply toughness to the corresponding in-situ ply strength. Calculated results for several materials are shown to correlate well with experimental results.

  8. Real-time forecasting and predictability of catastrophic failure events: from rock failure to volcanoes and earthquakes

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Naylor, M.; Atkinson, M.; Filguera, R.; Meredith, P. G.; Brantut, N.

    2012-12-01

    Accurate prediction of catastrophic brittle failure in rocks and in the Earth presents a significant challenge on theoretical and practical grounds. The governing equations are not known precisely, but are known to produce highly non-linear behavior similar to that of near-critical dynamical systems, with a large and irreducible stochastic component due to material heterogeneity. In a laboratory setting, mechanical, hydraulic and rock physical properties are known to change in systematic ways prior to catastrophic failure, often with significant non-Gaussian fluctuations about the mean signal at a given time, for example in the rate of remotely-sensed acoustic emissions. The effectiveness of such signals in real-time forecasting has never been tested before in a controlled laboratory setting, and previous work has often been qualitative in nature and subject to retrospective selection bias, though it has often been invoked as a basis for forecasting natural hazard events such as volcanoes and earthquakes. Here we describe a collaborative experiment in real-time data assimilation to explore the limits of predictability of rock failure in a best-case scenario. Data are streamed from a remote rock deformation laboratory to a user-friendly portal, where several proposed physical/stochastic models can be analysed in parallel in real time, using a variety of statistical fitting techniques, including least squares regression, maximum likelihood fitting, Markov-chain Monte-Carlo and Bayesian analysis. The results are posted and regularly updated on the web site prior to catastrophic failure, to ensure a true and verifiable prospective test of forecasting power. Preliminary tests on synthetic data with known non-Gaussian statistics show how forecasting power is likely to evolve in the live experiments. In general the predicted failure time does converge on the real failure time, illustrating the bias associated with the 'benefit of hindsight' in retrospective analyses

  9. The Development, Validation, and Reliability of SAM: A Tool for Measurement of Moderate to Vigorous Physical Activity in School Physical Education

    ERIC Educational Resources Information Center

    Surapiboonchai, Kampol

    2010-01-01

    There is a lack of valid and reliable low cost observational instruments to measure moderate to vigorous physical activity (MVPA) in school physical education (PE). The participants in this study were third to tenth grade boys and girls from a south Texas school district. The SAM (Simple Activity Measurement) activity levels were compared with…

  10. "Reliability Of Fiber Optic Lans"

    NASA Astrophysics Data System (ADS)

    Coden, Michael; Scholl, Frederick; Hatfield, W. Bryan

    1987-02-01

    Fiber optic Local Area Network Systems are being used to interconnect increasing numbers of nodes. These nodes may include office computer peripherals and terminals, PBX switches, process control equipment and sensors, automated machine tools and robots, and military telemetry and communications equipment. The extensive shared base of capital resources in each system requires that the fiber optic LAN meet stringent reliability and maintainability requirements. These requirements are met by proper system design and by suitable manufacturing and quality procedures at all levels of a vertically integrated manufacturing operation. We will describe the reliability and maintainability of Codenoll's passive star based systems. These include LAN systems compatible with Ethernet (IEEE 802.3) and MAP (IEEE 802.4), and software compatible with IBM Token Ring (IEEE 802.5). No single point of failure exists in this system architecture.

  11. Reliability and Validity of the PAQ-C Questionnaire to Assess Physical Activity in Children

    ERIC Educational Resources Information Center

    Benítez-Porres, Javier; López-Fernández, Iván; Raya, Juan Francisco; Álvarez Carnero, Sabrina; Alvero-Cruz, José Ramón; Álvarez Carnero, Elvis

    2016-01-01

    Background: Physical activity (PA) assessment by questionnaire is a cornerstone in the field of sport epidemiology studies. The Physical Activity Questionnaire for Children (PAQ-C) has been used widely to assess PA in healthy school populations. The aim of this study was to evaluate the reliability and validity of the PAQ-C questionnaire in…

  12. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  13. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.

  14. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    PubMed

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

    The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE that were similar to those of the previously validated SWA during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.

  15. Techniques to evaluate the importance of common cause degradation on reliability and safety of nuclear weapons.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    2011-05-01

    As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
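
    A worked sketch of the screening rules quoted above; the population size, required reliability and number of susceptible items below are hypothetical, chosen only to show how the thresholds are applied:

      # Screening rules from the abstract, applied to hypothetical numbers.
      population = 1000               # items (the abstract considers 100-5000)
      required_reliability = 0.98     # x, from the Military Characteristics

      susceptible = 30                # items sharing the observed common degradation

      # Reliability screen: concern if more than 100*(1 - x)% are susceptible.
      reliability_threshold = (1.0 - required_reliability) * population   # 20 items
      # Subsystem safety screen: concern if more than 0.1% of the population.
      safety_threshold = 0.001 * population                               # 1 item

      print("reliability concern:", susceptible > reliability_threshold)  # True (30 > 20)
      print("subsystem safety concern:", susceptible > safety_threshold)  # True (30 > 1)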

  16. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.

  17. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  18. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    NASA Astrophysics Data System (ADS)

    William, R. K.; Stillwell, A. S.

    2015-12-01

    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
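
    A hedged sketch of how a fragility curve of this kind can be tabulated (this is not the MIKE SHE/FERUM workflow; the capacity model, its parameters and the failure threshold below are assumed purely for illustration): for each storm intensity, the probability of failure is the fraction of simulated roofs whose runoff-reduction efficiency falls below a target value.

      # Illustrative fragility-curve tabulation (not the MIKE SHE / FERUM workflow).
      import numpy as np

      rng = np.random.default_rng(0)
      storm_depths_mm = np.array([10, 25, 50, 75, 100])   # demand levels
      target_efficiency = 0.50                             # "failure" if below this
      n_samples = 20000

      def simulated_efficiency(depth_mm, rng):
          # Stand-in capacity model: efficiency decays with storm depth, with
          # lognormal variability representing substrate/antecedent-moisture scatter.
          median = np.clip(0.9 - 0.004 * depth_mm, 0.05, None)
          return median * rng.lognormal(mean=0.0, sigma=0.25, size=n_samples)

      for depth in storm_depths_mm:
          eff = simulated_efficiency(depth, rng)
          p_fail = np.mean(eff < target_efficiency)
          print(f"storm depth {depth:3d} mm  P(efficiency < {target_efficiency}) = {p_fail:.3f}")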

  19. Relating design and environmental variables to reliability

    NASA Astrophysics Data System (ADS)

    Kolarik, William J.; Landers, Thomas L.

    The combination of space application and nuclear power source demands high-reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experiences on the ground have led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear-grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.

  20. Reliability and Failure Modes of Solid-State Lighting Electrical Drivers Subjected to Accelerated Aging

    DOE PAGES

    Lall, Pradeep; Sakalaukus, Peter; Davis, Lynn

    2015-02-19

    An investigation of an off-the-shelf solid-state lighting device with the primary focus on the accompanying light-emitting diode (LED) electrical driver (ED) has been conducted. A set of 10 EDs were exposed to temperature-humidity life testing at 85% RH and 85°C (85/85) without an electrical bias, per the JEDEC standard JESD22-A101C, in order to accelerate the ingress of moisture into the aluminum electrolytic capacitor (AEC) and the EDs and thereby assess the reliability of the LED drivers for harsh-environment applications. The capacitance and equivalent series resistance for each AEC inside the ED were measured using a handheld LCR meter as possible leading indications of failure. The photometric quantities of a single pristine light engine were monitored in order to investigate the interaction between the light engine and the EDs. These parameters were used in assessing the overall reliability of the EDs. In addition, a comparative analysis has been conducted between the 85/85 accelerated test data and a previously published high-temperature storage life accelerated test at 135°C. The results of the 85/85 acceleration test and the comparative analysis are presented in this paper.

  1. Reliability and feasibility of physical fitness tests in female fibromyalgia patients.

    PubMed

    Carbonell-Baeza, A; Álvarez-Gallardo, I C; Segura-Jiménez, V; Castro-Piñero, J; Ruiz, J R; Delgado-Fernández, M; Aparicio, V A

    2015-02-01

    The aim of the present study was to determine the reliability and feasibility of physical fitness tests in female fibromyalgia patients. A total of 100 female fibromyalgia patients (aged 50.6±8.6 years) performed the following tests twice (test-retest, 7-day interval): chair sit and reach, back scratch, handgrip strength, arm curl, chair stand, 8 feet up and go, and 6-min walk. Significant differences between test and retest were found in the arm curl (mean difference: 1.25±2.16 repetitions, Cohen d=0.251), chair stand (0.99±1.7 repetitions, Cohen d=0.254) and 8 feet up and go (-0.38±1.09 s, Cohen d=0.111) tests. Intraclass correlation coefficients (ICC) ranged from 0.92 in the arm curl test to 0.96 in the back scratch test. The feasibility of the tests (proportion of patients able to complete the test) ranged from 89% in the arm curl test to 100% in the handgrip strength test. Therefore, the reliability and feasibility of the physical fitness tests examined are acceptable for female fibromyalgia patients. © Georg Thieme Verlag KG Stuttgart · New York.
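
    For reference, the two statistics reported above can be computed from paired test-retest data as follows (a minimal sketch with synthetic scores; ICC(2,1) is the two-way random-effects, single-measures form, and the paired Cohen's d shown is the mean difference divided by the standard deviation of the differences, one of several common variants):

      import numpy as np

      def icc_2_1(data):
          # data: n_subjects x k_sessions matrix of scores.
          n, k = data.shape
          grand = data.mean()
          row_means = data.mean(axis=1)
          col_means = data.mean(axis=0)
          ss_rows = k * np.sum((row_means - grand) ** 2)
          ss_cols = n * np.sum((col_means - grand) ** 2)
          ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
          ms_rows = ss_rows / (n - 1)
          ms_cols = ss_cols / (k - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

      def cohens_d_paired(test, retest):
          diff = retest - test
          return diff.mean() / diff.std(ddof=1)

      rng = np.random.default_rng(1)
      test = rng.normal(15, 3, size=30)              # synthetic chair-stand counts
      retest = test + rng.normal(0.5, 1.5, size=30)  # small systematic shift plus noise
      scores = np.column_stack([test, retest])
      print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
      print(f"Cohen's d (paired) = {cohens_d_paired(test, retest):.2f}")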

  2. Fatigue Reliability of Gas Turbine Engine Structures

    NASA Technical Reports Server (NTRS)

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

    1997-01-01

    The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
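
    A generic illustration of the response-surface step described above (not the engine model itself; the inputs and polynomial coefficients are assumed): once a performance function g has been fitted, the probability of failure P(g < 0) can be estimated directly by Monte Carlo, together with a crude second-moment reliability index.

      # Generic illustration (not the engine model): estimate the probability of
      # failure P(g < 0) by Monte Carlo for an assumed response-surface
      # performance function g of a few standardized random inputs.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 200_000

      # Hypothetical standardized inputs: hot-gas temperature, cooling flow, material scatter.
      t_gas  = rng.normal(0.0, 1.0, n)
      m_cool = rng.normal(0.0, 1.0, n)
      mat    = rng.normal(0.0, 1.0, n)

      # Assumed fitted response surface for the fatigue-life margin (illustrative only).
      g = 2.5 - 0.8 * t_gas + 0.5 * m_cool + 0.6 * mat - 0.1 * t_gas ** 2

      p_f = np.mean(g < 0.0)
      beta_cornell = g.mean() / g.std(ddof=1)   # crude second-moment reliability index
      print(f"estimated P_f = {p_f:.2e}, second-moment reliability index = {beta_cornell:.2f}")

    FORM, as used in the study, would instead locate the most probable failure point of g in standard normal space; the Monte Carlo estimate here serves only as an independent check.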

  3. Making real-time reactive systems reliable

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Wood, Mark

    1990-01-01

    A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general. Not only are most embedded real time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. The research tries to understand the principal issues of fault tolerance in real time reactive systems and to build tools that allow a programmer to design reliable, real time reactive systems. In order to make real time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators? To achieve this, a methodology was developed for transforming a control program that references physical values into one that tolerates sensors that can fail and can return inaccurate values; (2) How can the real time reactive system be built to tolerate failures of the control program? Towards this goal, whether the techniques presented can be extended to real time reactive systems is investigated; and (3) How can the environment be specified in a way that is useful for writing a control program? Towards this goal, whether a system with real time constraints can be expressed as an equivalent system without such constraints is also investigated.

  4. Prognostic value of the physical examination in patients with heart failure and atrial fibrillation: insights from the AF-CHF trial (atrial fibrillation and chronic heart failure).

    PubMed

    Caldentey, Guillem; Khairy, Paul; Roy, Denis; Leduc, Hugues; Talajic, Mario; Racine, Normand; White, Michel; O'Meara, Eileen; Guertin, Marie-Claude; Rouleau, Jean L; Ducharme, Anique

    2014-02-01

    This study sought to assess the prognostic value of physical examination in a modern treated heart failure population. The physical examination is the cornerstone of the evaluation and monitoring of patients with heart failure. Yet, the prognostic value of congestive signs (i.e., peripheral edema, jugular venous distension, a third heart sound, and pulmonary rales) has not been assessed in the current era. A post-hoc analysis was conducted on all 1,376 patients, 81% male, mean age 67 ± 11 years, with symptomatic left ventricular systolic dysfunction enrolled in the AF-CHF (Atrial Fibrillation and Congestive Heart Failure) trial. The prognostic value of baseline physical examination findings was assessed in univariate and multivariate Cox regression analyses. Peripheral edema was observed in 425 (30.9%), jugular venous distension in 297 (21.6%), a third heart sound in 207 (15.0%), and pulmonary rales in 178 (12.9%) patients. Death from cardiovascular causes occurred in 357 (25.9%) patients over a mean follow-up of 37 ± 19 months. All 4 physical examination findings were associated with cardiovascular mortality in univariate analyses (all p values <0.01). In multivariate analyses, taking all 4 signs as potential covariates, only rales (hazard ratio 1.41; 95% confidence interval: 1.07 to 1.86; p = 0.013) and peripheral edema (hazard ratio: 1.25; 95% confidence interval: 1.00 to 1.57; p = 0.048) were associated with cardiovascular mortality, independent of other variables. In the modern era, congestive signs on the physical examination (i.e., peripheral edema, jugular venous distension, a third heart sound, and pulmonary rales) continue to provide important prognostic information in patients with congestive heart failure. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  5. Reliability analysis for the smart grid : from cyber control and communication to physical manifestations of failure.

    DOT National Transportation Integrated Search

    2010-01-01

    The Smart Grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a : network of embedded systems deployed for their cyber control. Our objective is to qualitatively and quantitatively analyze ...

  6. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η > 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
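
    A short worked example of the dimensionless parameter defined above (the lake geometry and downcutting rate are hypothetical; only η is evaluated, not the full breach hydrograph):

      # Worked example of eta = (V / D**3) * (k / sqrt(g * D)) for a hypothetical
      # breached dam; the numbers are illustrative, not taken from the paper.
      import math

      V = 5.0e6           # lake volume, m^3
      D = 20.0            # lake depth at the dam, m
      k = 10.0 / 3600.0   # mean breach downcutting rate, m/s (10 m/h assumed)
      g = 9.81            # gravitational acceleration, m/s^2

      eta = (V / D**3) * (k / math.sqrt(g * D))
      print(f"eta = {eta:.3f}")   # small and large eta give different asymptotic Qp forms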

  7. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has become a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on estimating probability involve uncertainties, cannot reflect the reliability status of the RPS dynamically, and provide little support for maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital RPS (safety-critical), by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method is capable of estimating the RPS reliability effectively and provides support for maintenance and troubleshooting of the digital RPS.

  8. Validity and reliability of smartphone magnetometer-based goniometer evaluation of shoulder abduction--A pilot study.

    PubMed

    Johnson, Linda B; Sumner, Sean; Duong, Tina; Yan, Posu; Bajcsy, Ruzena; Abresch, R Ted; de Bie, Evan; Han, Jay J

    2015-12-01

    Goniometers are commonly used by physical therapists to measure range-of-motion (ROM) in the musculoskeletal system. These measurements are used to assist in diagnosis and to help monitor treatment efficacy. With newly emerging technologies, smartphone-based applications are being explored for measuring joint angles and movement. This pilot study investigates the intra- and inter-rater reliability as well as concurrent validity of a newly-developed smartphone magnetometer-based goniometer (MG) application for measuring passive shoulder abduction in both sitting and supine positions, and compares it against the traditional universal goniometer (UG). This is a comparative study with a repeated-measures design. Three physical therapists utilized both the smartphone MG and a traditional UG to measure various angles of passive shoulder abduction in a healthy subject, whose shoulder was positioned in eight different positions with pre-determined degrees of abduction while seated or supine. Each therapist was blinded to the measured angles. Concordance correlation coefficients (CCCs), Bland-Altman plotting methods, and Analysis of Variance (ANOVA) were used for statistical analyses. Both the traditional UG and the smartphone MG were reliable in repeated measures of standardized joint angle positions (average CCC > 0.997), with similar variability in both measurement tools (standard deviation (SD) ± 4°). Agreement between the UG and MG measurements was greater than 0.99 in all positions. Our results show that the smartphone MG has equivalent reliability compared to the traditional UG when measuring passive shoulder abduction ROM. With concordant measures and comparable reliability to the UG, the newly developed MG application shows potential as a useful tool to assess joint angles. Published by Elsevier Ltd.

  9. Reliability and cost analysis methods

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.

    1991-01-01

    In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
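
    A hypothetical worked example of the trade-off described above (the costs, reliabilities and failure consequence below are illustrative and are not taken from CARRAC):

      # Hypothetical example: total expected cost = subsystem cost
      #                                           + (1 - reliability) * cost of failure.
      candidates = {
          "subsystem A": {"reliability": 0.990, "cost": 1.0e6},
          "subsystem B": {"reliability": 0.995, "cost": 1.6e6},
      }
      failure_consequence = 8.0e7   # assumed expected cost incurred if the subsystem fails

      for name, c in candidates.items():
          expected_failure_cost = (1.0 - c["reliability"]) * failure_consequence
          total = c["cost"] + expected_failure_cost
          print(f"{name}: total expected cost = ${total:,.0f}")

    With these numbers the cheaper, less reliable subsystem minimizes total expected cost ($1.8M versus $2.0M); the comparison flips once the failure consequence exceeds the break-even value of $0.6M / 0.005 = $120M.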

  10. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  11. Validity and reliability of an adapted arabic version of the long international physical activity questionnaire.

    PubMed

    Helou, Khalil; El Helou, Nour; Mahfouz, Maya; Mahfouz, Yara; Salameh, Pascale; Harmouche-Karaki, Mireille

    2017-07-24

    The International Physical Activity Questionnaire (IPAQ) is a validated tool for physical activity assessment used in many countries; however, no Arabic version of the long form of this questionnaire exists to date. Hence, the aim of this study was to cross-culturally adapt and validate an Arabic version of the long International Physical Activity Questionnaire (A-IPAQ) equivalent to the French version (F-IPAQ) in a Lebanese population. The guidelines for cross-cultural adaptation provided by the World Health Organization and the International Physical Activity Questionnaire committee were followed. One hundred fifty-nine students and staff members from Saint Joseph University of Beirut were randomly recruited to participate in the study. Items of the A-IPAQ were compared to those from the F-IPAQ for concurrent validity using Spearman's correlation coefficient. Content validity of the questionnaire was assessed using factor analysis for the A-IPAQ's items. The physical activity indicators derived from the A-IPAQ were compared with the body mass index (BMI) of the participants for construct validity. The instrument was also evaluated for internal consistency reliability using Cronbach's alpha and the Intraclass Correlation Coefficient (ICC). Finally, thirty-one participants were asked to complete the A-IPAQ on two occasions three weeks apart to examine its test-retest reliability. Bland-Altman analyses were performed to evaluate the extent of agreement between the two versions of the questionnaire and its repeated administrations. A high correlation was observed between answers of the F-IPAQ and those of the A-IPAQ, with Spearman's correlation coefficients ranging from 0.91 to 1.00 (p < 0.05). Bland-Altman analysis showed a high level of agreement between the two versions, with all values scattered around the mean for total physical activity (mean difference = 5.3 min/week, 95% limits of agreement = -145.2 to 155.8). Negative correlations were observed between

  12. Micromechanics Based Failure Analysis of Heterogeneous Materials

    NASA Astrophysics Data System (ADS)

    Sertse, Hamsasew M.

    In recent decades, heterogeneous materials have been extensively used in various industries such as aerospace, defense, automotive and others due to their desirable specific properties and excellent capability of accumulating damage. Despite their wide use, there are numerous challenges associated with the application of these materials. One of the main challenges is the lack of accurate tools to predict the initiation, progression and final failure of these materials under various thermomechanical loading conditions. Although failure is usually treated at the macro- and meso-scale level, the initiation and growth of failure are complex phenomena across multiple scales. The objective of this work is to enable the mechanics of structure genome (MSG) and its companion code SwiftComp to analyze the initial failure (also called static failure), progressive failure, and fatigue failure of heterogeneous materials using a micromechanics approach. The initial failure is evaluated at each numerical integration point using pointwise and nonlocal approaches for each constituent of the heterogeneous materials. The effects of imperfect interfaces among constituents of heterogeneous materials are also investigated using a linear traction-displacement model. Moreover, the progressive and fatigue damage analyses are conducted using a continuum damage mechanics (CDM) approach. The various failure criteria are also applied at a material point to analyze progressive damage in each constituent. The constitutive equation of a damaged material is formulated based on a consistent irreversible thermodynamics approach. The overall tangent modulus of uncoupled elastoplastic damage for negligible back stress effect is derived. The initiation of plasticity and damage in each constituent is evaluated at each numerical integration point using a nonlocal approach. The accumulated plastic strain and anisotropic damage evolution variables are iteratively solved using an incremental algorithm. The damage analyses

  13. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
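
    A minimal sketch of the reliability-ordering step that both ingredients named above share (illustrative only, not the authors' hybrid decoder; the code length and the number of least reliable positions are arbitrary): positions are ranked by |LLR|, the most reliable ones become candidates for most-reliable-basis reprocessing, and Chase-type test patterns are formed by flipping subsets of the least reliable ones.

      # Generic illustration of reliability ordering and Chase-style test patterns.
      import itertools
      import numpy as np

      rng = np.random.default_rng(3)
      n = 15                                    # code length (illustrative)
      llr = rng.normal(0.0, 2.0, n)             # soft channel outputs (LLRs)
      hard = (llr < 0).astype(int)              # hard decisions

      order = np.argsort(np.abs(llr))           # least reliable first
      least_reliable = order[:4]                # positions handed to the Chase stage
      most_reliable = order[::-1][:11]          # candidate most-reliable-basis positions

      # Chase-II style test patterns: flip every subset of the least reliable positions.
      test_patterns = []
      for r in range(len(least_reliable) + 1):
          for subset in itertools.combinations(least_reliable, r):
              candidate = hard.copy()
              candidate[list(subset)] ^= 1
              test_patterns.append(candidate)

      print("most reliable basis candidates:", sorted(most_reliable.tolist()))
      print("least reliable positions:", sorted(least_reliable.tolist()))
      print("number of test patterns:", len(test_patterns))   # 2**4 = 16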

  14. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  15. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  16. Reliability and validity of two multidimensional self-reported physical activity questionnaires in people with chronic low back pain.

    PubMed

    Carvalho, Flávia A; Morelhão, Priscila K; Franco, Marcia R; Maher, Chris G; Smeets, Rob J E M; Oliveira, Crystian B; Freitas Júnior, Ismael F; Pinto, Rafael Z

    2017-02-01

    Although there is some evidence for reliability and validity of self-report physical activity (PA) questionnaires in the general adult population, it is unclear whether we can assume similar measurement properties in people with chronic low back pain (LBP). To determine the test-retest reliability of the International Physical Activity Questionnaire (IPAQ) long-version and the Baecke Physical Activity Questionnaire (BPAQ) and their criterion-related validity against data derived from accelerometers in patients with chronic LBP. Cross-sectional study. Patients with non-specific chronic LBP were recruited. Each participant attended the clinic twice (one week interval) and completed the self-report PA questionnaires. Accelerometer measures over >7 days included time spent in moderate-and-vigorous physical activity, steps/day, counts/minute, and vector magnitude counts/minute. Intraclass correlation coefficients (ICC) and the Bland-Altman method were used to determine reliability, and Spearman's rho correlations were used for criterion-related validity. A total of 73 patients were included in our analyses. The reliability analyses revealed that the BPAQ and its subscales have moderate to excellent reliability (ICC(2,1): 0.61 to 0.81), whereas the IPAQ and most IPAQ domains (except walking) showed poor reliability (ICC(2,1): 0.20 to 0.40). The Bland-Altman method revealed larger discrepancies for the IPAQ. For the validity analysis, questionnaire and accelerometer measures showed at best fair correlation (rho < 0.37). Although the BPAQ showed better reliability than the IPAQ long-version, neither questionnaire demonstrated acceptable validity against accelerometer data. These findings suggest that questionnaire and accelerometer PA measures should not be used interchangeably in this population. Copyright © 2016 Elsevier Ltd. All rights reserved.
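
    For reference, the Bland-Altman quantities used above reduce to the mean difference (bias) and bias ± 1.96 × SD of the differences; a minimal sketch with synthetic data (the numbers below are not from the study):

      # Minimal Bland-Altman sketch for two physical-activity measures (synthetic data).
      import numpy as np

      rng = np.random.default_rng(4)
      questionnaire = rng.normal(300, 120, size=73)                 # e.g., min/week of MVPA
      accelerometer = questionnaire * 0.6 + rng.normal(0, 80, 73)   # weakly related criterion

      diff = questionnaire - accelerometer
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
      print(f"bias = {bias:.1f} min/week, 95% limits of agreement = "
            f"[{bias - loa:.1f}, {bias + loa:.1f}] min/week")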

  17. [Reliability of the PRISCUS-PAQ. Questionnaire to assess physical activity of persons aged 70 years and older].

    PubMed

    Trampisch, U; Platen, P; Burghaus, I; Moschny, A; Wilm, S; Thiem, U; Hinrichs, T

    2010-12-01

    A questionnaire (Q) to measure physical activity (PA) of persons ≥70 years for epidemiological research is lacking. The aim was to develop the PRISCUS-PAQ and test the reliability in community-dwelling people (≥70 years). Validated PA questionnaires were translated and adapted to design the PRISCUS-PAQ. Its test-retest reliability for 91 randomly selected people (36% men) aged 70-98 (76±5) years ranged from 0.47 (walking) to 0.82 (riding a bicycle). The overall activity score was 0.59 as determined by the intraclass correlation coefficient (ICC). Recording of general activities, e.g., housework (ICC=0.59), was in general less reliable than athletic activities, e.g., gymnastics (ICC=0.76). The PRISCUS-PAQ, which is a short instrument with acceptable reliability to collect the physical activity of the elderly in a telephone interview, will be used to collect data in a large cohort of older people in the German research consortium PRISCUS.

  18. Reliability and Validity of the International Physical Activity Questionnaire for Assessing Walking

    ERIC Educational Resources Information Center

    van der Ploeg, Hidde P.; Tudor-Locke, Catrine; Marshall, Alison L.; Craig, Cora; Hagstromer, Maria; Sjostrom, Michael; Bauman, Adrian

    2010-01-01

    The single most commonly reported physical activity in public health surveys is walking. As evidence accumulates that walking is important for preventing weight gain and reducing the risk of diabetes, there is increased need to capture this behavior in a valid and reliable manner. Although the disadvantages of a self-report methodology are well…

  19. Reliability of Physical Activity Measures During Free-Living Activities in People After Total Knee Arthroplasty.

    PubMed

    Almeida, Gustavo J; Irrgang, James J; Fitzgerald, G Kelley; Jakicic, John M; Piva, Sara R

    2016-06-01

    Few instruments that measure physical activity (PA) can accurately quantify PA performed at light and moderate intensities, which is particularly relevant in older adults. The evidence of their reliability in free-living conditions is limited. The study objectives were: (1) to determine the test-retest reliability of the Actigraph (ACT), SenseWear Armband (SWA), and Community Healthy Activities Model Program for Seniors (CHAMPS) questionnaire in assessing free-living PA at light and moderate intensities in people after total knee arthroplasty; (2) to compare the reliability of the 3 instruments relative to each other; and (3) to determine the reliability of commonly used monitoring time frames (24 hours, waking hours, and 10 hours from awakening). A one-group, repeated-measures design was used. Participants wore the activity monitors for 2 weeks, and the CHAMPS questionnaire was completed at the end of each week. Test-retest reliability was determined by using the intraclass correlation coefficient (ICC [2,k]) to compare PA measures from one week with those from the other week. Data from 28 participants who reported similar PA during the 2 weeks were included in the analysis. The mean age of these participants was 69 years (SD=8), and 75% of them were women. Reliability ranged from moderate to excellent for the ACT (ICC=.75-.86) and was excellent for the SWA (ICC=.93-.95) and the CHAMPS questionnaire (ICC=.86-.92). The 95% confidence intervals (95% CI) of the ICCs from the SWA were the only ones within the excellent reliability range (.85-.98). The CHAMPS questionnaire showed systematic bias, with less PA being reported in week 2. The reliability of PA measures in the waking-hour time frame was comparable to that in the 24-hour time frame and reflected most PA performed during this period. Reliability may be lower for time intervals longer than 1 week. All PA measures showed good reliability. The reliability of the ACT was lower than those of the SWA and the CHAMPS

  20. Failure Mode Classification for Life Prediction Modeling of Solid-State Lighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakalaukus, Peter Joseph

    2015-08-01

    light power” of the SSL luminaire. The use of the Arrhenius equation necessitates two different temperature conditions (25°C and 45°C are suggested by TM28) to determine the SSL lamp specific activation energy. One principal issue with TM28 is the lack of additional stresses or parameters needed to characterize non-temperature dependent failure mechanisms. Another principal issue with TM28 is the assumption that lumen maintenance or lumen depreciation gives an adequate comparison between SSL luminaires. Additionally, TM28 has no process for the determination of acceleration factors or lifetime estimations. Currently, a literature gap exists for established accelerated test methods for SSL devices to assess quality, reliability and durability before being introduced into the marketplace. Furthermore, there is a need for Physics-of-Failure based approaches to understand the processes and mechanisms that induce failure for the assessment of SSL reliability in order to develop generalized acceleration factors that better represent SSL product lifetime. This and the deficiencies in TM28 validate the need behind the development of acceleration techniques to quantify SSL reliability under a variety of environmental conditions. The ability to assess damage accrual and investigate reliability of SSL components and systems is essential to understanding the lifetime of the SSL device itself. The methodologies developed in this work increase the understanding of SSL devices through the investigation of component and device reliability under a variety of accelerated test conditions. The approaches for suitable lifetime predictions through the development of novel generalized acceleration factors, as well as a prognostics and health management framework, will greatly reduce the time and effort needed to produce SSL acceleration factors for the development of lifetime predictions.
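
    A short worked example of the two-temperature Arrhenius step mentioned above (the lumen-decay rate constants are hypothetical, not measured values):

      # Worked Arrhenius sketch for the two-temperature step; the rate constants
      # below are assumed purely for illustration.
      import math

      K_B = 8.617e-5                              # Boltzmann constant, eV/K
      T1, T2 = 25.0 + 273.15, 45.0 + 273.15       # case temperatures, K
      alpha1, alpha2 = 2.0e-5, 6.0e-5             # lumen-decay rate constants at T1, T2 (1/h)

      # rate = A * exp(-Ea / (kB * T))  =>  Ea = kB * ln(alpha2/alpha1) / (1/T1 - 1/T2)
      Ea = K_B * math.log(alpha2 / alpha1) / (1.0 / T1 - 1.0 / T2)

      # Acceleration factor of the 45°C test relative to 25°C use conditions.
      AF = math.exp((Ea / K_B) * (1.0 / T1 - 1.0 / T2))
      print(f"activation energy Ea ≈ {Ea:.2f} eV, acceleration factor ≈ {AF:.1f}")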

  1. Reliability evaluation of CMOS RAMs

    NASA Astrophysics Data System (ADS)

    Salvo, C. J.; Sasaki, A. T.

    The results of an evaluation of the reliability of a 1K x 1 bit CMOS RAM and a 4K x 1 bit CMOS RAM for the USAF are reported. The tests consisted of temperature cycling, thermal shock, electrical overstress-static discharge and accelerated life test cells. The study indicates that the devices have high reliability potential for military applications. Use-temperature failure rates at 100°C were 0.54 × 10⁻⁵ failures/hour for the 1K RAM and 0.21 × 10⁻⁵ failures/hour for the 4K RAM. Only minimal electrostatic discharge damage was noted in the devices when they were subjected to multiple pulses at 1000 Vdc, and redesign of the 7 Vdc quiescent parameter of the 4K RAM is expected to raise its field threshold voltage.

  2. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures for the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
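
    A much-simplified sketch of the reliability index in question (not the minimal-path/recursive-sum-of-disjoint-products calculation, and arc states are sampled independently here rather than with the correlated binomial model): for a tiny two-path network, R_d = P(max s-t flow ≥ d) can be estimated by Monte Carlo.

      # Simplified Monte Carlo estimate of stochastic-flow network reliability
      # R_d for a tiny network of two parallel two-arc paths from s to t.
      # Capacity distributions are hypothetical; arcs are sampled independently.
      import numpy as np

      rng = np.random.default_rng(5)
      states = np.array([0, 1, 2, 3])                  # possible arc capacities
      probs = np.array([0.05, 0.10, 0.25, 0.60])       # per-arc state probabilities
      demand = 4
      n_trials = 100_000

      caps = rng.choice(states, size=(n_trials, 4), p=probs)   # arcs a1..a4
      # Path 1 uses arcs a1,a2 in series; path 2 uses a3,a4; path flows add.
      max_flow = np.minimum(caps[:, 0], caps[:, 1]) + np.minimum(caps[:, 2], caps[:, 3])
      print(f"estimated R_{demand} = {np.mean(max_flow >= demand):.4f}")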

  3. Assessment of reliability of CAD-CAM tooth-colored implant custom abutments.

    PubMed

    Guilherme, Nuno Marques; Chung, Kwok-Hung; Flinn, Brian D; Zheng, Cheng; Raigrodski, Ariel J

    2016-08-01

    Information is lacking about the fatigue resistance of computer-aided design and computer-aided manufacturing (CAD-CAM) tooth-colored implant custom abutment materials. The purpose of this in vitro study was to investigate the reliability of different types of CAD-CAM tooth-colored implant custom abutments. Zirconia (Lava Plus), lithium disilicate (IPS e.max CAD), and resin-based composite (Lava Ultimate) abutments were fabricated using CAD-CAM technology and bonded to machined titanium-6 aluminum-4 vanadium (Ti-6Al-4V) alloy inserts for conical connection implants (NobelReplace Conical Connection RP 4.3×10 mm; Nobel Biocare). Three groups (n=19) were assessed: group ZR, CAD-CAM zirconia/Ti-6Al-4V bonded abutments; group RC, CAD-CAM resin-based composite/Ti-6Al-4V bonded abutments; and group LD, CAD-CAM lithium disilicate/Ti-6Al-4V bonded abutments. Fifty-seven implant abutments were secured to implants and embedded in autopolymerizing acrylic resin according to ISO standard 14801. Static failure load (n=5) and fatigue failure load (n=14) were tested. Weibull cumulative damage analysis was used to calculate step-stress reliability at 150-N and 200-N loads with 2-sided 90% confidence limits. Representative fractured specimens were examined using stereomicroscopy and scanning electron microscopy to observe fracture patterns. Weibull plots revealed β values of 2.59 for group ZR, 0.30 for group RC, and 0.58 for group LD, indicating a wear-out or cumulative fatigue pattern for group ZR and load as the failure accelerating factor for groups RC and LD. Fractographic observation disclosed that failures initiated in the interproximal area where the lingual tensile stresses meet the compressive facial stresses for the early failure specimens. Plastic deformation of titanium inserts with fracture was observed for zirconia abutments in fatigue resistance testing. Significantly higher reliability was found in group ZR, and no significant differences in reliability were
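
    The β values quoted in the abstract are two-parameter Weibull shape factors. As a rough illustration of how such shape factors are read and used, the sketch below evaluates the Weibull cumulative failure probability; the characteristic-life (η) values are hypothetical, since the abstract does not report them.

        import math

        def weibull_unreliability(t, beta, eta):
            """Two-parameter Weibull cumulative failure probability F(t)."""
            return 1.0 - math.exp(-((t / eta) ** beta))

        def failure_mode(beta):
            """Rough interpretation of the Weibull shape parameter."""
            if beta < 1.0:
                return "early/overload-dominated failures (decreasing hazard)"
            if beta > 1.0:
                return "wear-out / cumulative fatigue (increasing hazard)"
            return "random failures (constant hazard)"

        # Shape parameters reported in the abstract; scale (eta) values are hypothetical.
        for group, beta, eta_cycles in [("ZR", 2.59, 1.5e5), ("RC", 0.30, 8.0e4), ("LD", 0.58, 9.0e4)]:
            f = weibull_unreliability(5.0e4, beta, eta_cycles)
            print(f"group {group}: beta={beta}, {failure_mode(beta)}, F(50k cycles)={f:.2f}")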

  4. Reliability and cost: A sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    In the design phase of a system, how a design engineer or manager chooses between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved; the subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs, i.e., the cost of the subsystem plus the expected cost due to subsystem failure, should be minimized.
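
    A small worked example of the total-cost argument, using the .990 and .995 reliabilities from the abstract and hypothetical cost figures, makes the trade-off concrete:

        def total_expected_cost(subsystem_cost, reliability, cost_of_failure):
            """Subsystem cost plus expected cost due to subsystem failure."""
            return subsystem_cost + (1.0 - reliability) * cost_of_failure

        # Hypothetical costs (arbitrary units); reliabilities from the abstract.
        options = {
            "R = 0.990": total_expected_cost(100.0, 0.990, 20_000.0),
            "R = 0.995": total_expected_cost(180.0, 0.995, 20_000.0),
        }
        best = min(options, key=options.get)
        for name, cost in options.items():
            print(f"{name}: total expected cost = {cost:.1f}")
        print(f"Choose the option that minimizes total expected cost: {best}")

    With a cheaper consequence of failure, the same arithmetic can favor the less reliable, less expensive subsystem, which is exactly the sensitivity the paper examines.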

  5. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  6. Reliability assessment of serviceability performance of braced retaining walls using a neural network approach

    NASA Astrophysics Data System (ADS)

    Goh, A. T. C.; Kulhawy, F. H.

    2005-05-01

    In urban environments, one major concern with deep excavations in soft clay is the potentially large ground deformations in and around the excavation. Excessive movements can damage adjacent buildings and utilities. There are many uncertainties associated with the calculation of the ultimate or serviceability performance of a braced excavation system. These include the variabilities of the loadings, geotechnical soil properties, and engineering and geometrical properties of the wall. A risk-based approach to serviceability performance failure is necessary to incorporate systematically the uncertainties associated with the various design parameters. This paper demonstrates the use of an integrated neural network-reliability method to assess the risk of serviceability failure through the calculation of the reliability index. By first performing a series of parametric studies using the finite element method and then approximating the non-linear limit state surface (the boundary separating the safe and failure domains) through a neural network model, the reliability index can be determined with the aid of a spreadsheet. Two illustrative examples are presented to show how the serviceability performance for braced excavation problems can be assessed using the reliability index.

  7. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
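
    The arithmetic in the abstract (three units, each with a 1-in-10 failure probability, giving roughly 1-in-1000) assumes independent failures. The sketch below adds a simple beta-factor common-cause term to show how even a small correlated fraction dominates; this is an illustrative model, not the analysis used in the paper.

        def redundant_failure_prob(p_unit, n, beta_cc=0.0):
            """Mission failure probability for n identical redundant units.

            beta_cc is the fraction of each unit's failure probability assumed
            to act as a common cause that defeats all units simultaneously
            (simple beta-factor model, illustrative only).
            """
            p_independent = (1.0 - beta_cc) * p_unit
            p_common = beta_cc * p_unit
            # All units fail independently, OR a common-cause event occurs.
            return p_independent ** n + p_common - p_independent ** n * p_common

        print(redundant_failure_prob(0.1, 3))                # ~0.001, independent failures only
        print(redundant_failure_prob(0.1, 3, beta_cc=0.05))  # common cause dominates (~0.006)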

  8. Reliability of solid-state lighting electrical drivers subjected to WHTOL accelerated aging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Sakalauku, Peter; Davis, Lynn

    An investigation of a solid-state lighting (SSL) luminaire, with a focus on the electronic driver, has been conducted after exposure to a standard wet hot temperature operating life (WHTOL) test of 85% RH and 85°C in order to assess the reliability of prolonged exposure to a harsh environment. SSL luminaires are beginning to be introduced as headlamps in some of today's luxury automobiles and may also fulfill a variety of important outdoor applications such as overhead street lamps, traffic signals and landscape lighting. SSL luminaires in these environments are almost certain to encounter excessive moisture from humidity and high temperatures for a persistent period of time. The lack of accelerated test methods for LEDs to assess long-term reliability prior to introduction into the marketplace, the need for SSL physics-based PHM modeling indicators for assessment and prediction of LED life, as well as the U.S. Department of Energy's R&D roadmap to replace today's lighting with SSL luminaires make it important to increase the understanding of the reliability of SSL devices, specifically in harsh-environment applications. In this work, a set of SSL electrical drivers was investigated to determine the failure mechanisms that occur during prolonged harsh-environment applications. Each driver contains four aluminum electrolytic capacitors (AECs) of three different types; the AECs were considered the weakest components inside the SSL electrical driver. The reliability of the electrical driver was assessed by monitoring the change in capacitance and the change in equivalent series resistance for each AEC, as well as by monitoring the luminous flux of the SSL luminaire, i.e., the output of the electrical driver. The luminous flux of a pristine SSL electrical driver was also monitored in order to detect minute changes in the electrical driver's output and to aid in the investigation of the SSL luminaire's reliability. The failure mechanisms of the electrical drivers have been

  9. Work-related measures of physical and behavioral health function: Test-retest reliability.

    PubMed

    Marino, Molly Elizabeth; Meterko, Mark; Marfeo, Elizabeth E; McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K; Brandt, Diane E; Chan, Leighton

    2015-10-01

    The Work Disability Functional Assessment Battery (WD-FAB), developed for potential use by the US Social Security Administration to assess work-related function, currently consists of five multi-item scales assessing physical function and four multi-item scales assessing behavioral health function; the WD-FAB scales are administered as Computerized Adaptive Tests (CATs). The goal of this study was to evaluate the test-retest reliability of the WD-FAB Physical Function and Behavioral Health CATs. We administered the WD-FAB scales twice, 7-10 days apart, to a sample of 376 working age adults and 316 adults with work-disability. Intraclass correlation coefficients were calculated to measure the consistency of the scores between the two administrations. Standard error of measurement (SEM) and minimal detectable change (MDC90) were also calculated to measure the scales precision and sensitivity. For the Physical Function CAT scales, the ICCs ranged from 0.76 to 0.89 in the working age adult sample, and 0.77-0.86 in the sample of adults with work-disability. ICCs for the Behavioral Health CAT scales ranged from 0.66 to 0.70 in the working age adult sample, and 0.77-0.80 in the adults with work-disability. The SEM ranged from 3.25 to 4.55 for the Physical Function scales and 5.27-6.97 for the Behavioral Health function scales. For all scales in both samples, the MDC90 ranged from 7.58 to 16.27. Both the Physical Function and Behavioral Health CATs of the WD-FAB demonstrated good test-retest reliability in adults with work-disability and general adult samples, a critical requirement for assessing work related functioning in disability applicants and in other contexts. Copyright © 2015 Elsevier Inc. All rights reserved.
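
    The SEM and MDC90 values reported above follow from the usual test-retest formulas. The sketch below applies those standard relations with a hypothetical scale standard deviation of 10 points; it is meant only to show how ICC, SEM and MDC90 are connected, not to reproduce the study's numbers.

        import math

        def sem(sd, icc):
            """Standard error of measurement from score SD and test-retest ICC."""
            return sd * math.sqrt(1.0 - icc)

        def mdc90(sem_value):
            """Minimal detectable change at the 90% confidence level."""
            return 1.645 * math.sqrt(2.0) * sem_value

        # Hypothetical scale SD of 10 points, with the ICC range reported for the
        # Physical Function CATs (0.76-0.89).
        for icc in (0.76, 0.89):
            s = sem(10.0, icc)
            print(f"ICC={icc}: SEM={s:.2f}, MDC90={mdc90(s):.2f}")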

  10. Work-related measures of Physical and Behavioral Health Function: Test-Retest Reliability

    PubMed Central

    Marino, Molly Elizabeth; Meterko, Mark; Marfeo, Elizabeth E.; McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K.; Brandt, Diane E.; Chan, Leighton

    2015-01-01

    Background The Work Disability Functional Assessment Battery (WD-FAB), developed for potential use by the US Social Security Administration to assess work-related function, currently consists of five multi-item scales assessing physical function and four multi-item scales assessing behavioral health function; the WD-FAB scales are administered as Computerized Adaptive Tests (CATs). Objective The goal of this study was to evaluate the test-retest reliability of the WD-FAB Physical Function and Behavioral Health CATs. Methods We administered the WD-FAB scales twice, 7–10 days apart, to a sample of 376 working age adults and 316 adults with work-disability. Intraclass correlation coefficients were calculated to measure the consistency of the scores between the two administrations. Standard error of measurement (SEM) and minimal detectable change (MDC90) were also calculated to measure the scales precision and sensitivity. Results For the Physical Function CAT scales, the ICCs ranged from 0.76–0.89 in the working age adult sample, and 0.77–0.86 in the sample of adults with work-disability. ICCs for the Behavioral Health CAT scales ranged from 0.66–0.70 in the working age adult sample, and 0.77–0.80 in the adults with work-disability. The SEM ranged from 3.25–4.55 for the Physical Function scales and 5.27–6.97 for the Behavioral Health function scales. For all scales in both samples, the MDC90 ranged from 7.58–16.27. Conclusion Both the Physical Function and Behavioral Health CATs of the WD-FAB demonstrated good test-retest reliability in adults with work-disability and general adult samples, a critical requirement for assessing work related functioning in disability applicants and in other contexts. PMID:25991419

  11. WEAMR-a weighted energy aware multipath reliable routing mechanism for hotline-based WSNs.

    PubMed

    Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung

    2013-05-13

    Reliable source-to-sink communication is the most important factor for an efficient routing protocol, especially in the domains of military, healthcare and disaster recovery applications. We present weighted energy aware multipath reliable routing (WEAMR), a novel energy aware multipath routing protocol which utilizes hotline-assisted routing to meet such requirements for mission critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on weighted cost calculation and intelligently selects the best possible paths for data transmissions. The path cost calculation considers the end-to-end number of hops, latency and the minimum energy node value in the path. In case of path failure, path recalculation is done efficiently with minimum latency and control-packet overhead. Our evaluation shows that our proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath routing also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver.

  12. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper is to address the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), Multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre and post flight check outs and inspection, extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).

  13. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.

  14. Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.

    PubMed

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements.

  15. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    PubMed Central

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497

  16. A Sensor Failure Simulator for Control System Reliability Studies

    NASA Technical Reports Server (NTRS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    1986-01-01

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.

  17. A sensor failure simulator for control system reliability studies

    NASA Astrophysics Data System (ADS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.

  18. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
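
    The paper's Girsanov-based controls are specific to randomly excited dynamical systems, but the underlying variance-reduction idea is that of importance sampling: shift the sampling density toward the failure region and re-weight by the likelihood ratio. The generic sketch below estimates a small exceedance probability for a standard normal variable to illustrate that idea; it is not the authors' scheme.

        import numpy as np

        rng = np.random.default_rng(0)
        threshold = 4.0        # failure: standard normal response exceeds 4
        n = 100_000

        # Crude Monte Carlo: very few samples land in the failure region.
        x = rng.standard_normal(n)
        p_crude = np.mean(x > threshold)

        # Importance sampling: draw from a proposal centered on the failure region
        # and re-weight by the likelihood ratio (original pdf / proposal pdf).
        y = rng.normal(loc=threshold, scale=1.0, size=n)
        log_w = -0.5 * y**2 + 0.5 * (y - threshold) ** 2   # log of N(0,1)/N(4,1)
        p_is = np.mean((y > threshold) * np.exp(log_w))

        print(f"crude MC estimate: {p_crude:.2e}")
        print(f"importance sampling estimate: {p_is:.2e}  (exact ~3.17e-5)")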

  19. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem-solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…

  20. A vector-based failure detection and isolation algorithm for a dual fail-operational redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R.; Bailey, Melvin L.

    1987-01-01

    A vector-based failure detection and isolation technique for a skewed array of two degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over, and null type failures were detected in all environments without incident. The algorithm functioned without false alarm or isolation over all turbulence environments for the runs tested.
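
    The parity-equation comparison described above is specific to a skewed array of two-degree-of-freedom sensors. As a much-simplified illustration of the same detection-and-isolation idea, the sketch below forms pairwise parity residuals for redundant sensors measuring a single quantity and isolates the sensor implicated by the residuals that exceed a threshold; the geometry-specific parity equations of the paper are not reproduced.

        import numpy as np

        def parity_check(measurements, threshold):
            """Pairwise parity residuals for redundant sensors measuring the same
            quantity; flags the sensor implicated by all residuals exceeding the
            threshold (simple majority-style isolation)."""
            m = np.asarray(measurements, dtype=float)
            n = len(m)
            votes = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(i + 1, n):
                    if abs(m[i] - m[j]) > threshold:
                        votes[i] += 1
                        votes[j] += 1
            failed = [i for i in range(n) if votes[i] == n - 1]
            return votes, failed

        # Sensor 2 has a bias-jump failure; the others agree within the threshold.
        votes, failed = parity_check([10.02, 9.98, 12.5, 10.01], threshold=0.5)
        print(votes, "-> failed sensor indices:", failed)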

  1. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.

  2. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise
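
    The N+1 parity encoding highlighted in the tutorial is a bytewise XOR across the data disks, which allows any single failed disk to be rebuilt from the survivors plus the parity disk. A minimal sketch:

        from functools import reduce

        def parity_block(data_blocks):
            """N+1 parity: bytewise XOR of the data blocks."""
            return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

        def reconstruct(surviving_blocks, parity):
            """Recover a single failed block from the survivors plus parity."""
            return parity_block(list(surviving_blocks) + [parity])

        data = [b"disk0AAA", b"disk1BBB", b"disk2CCC"]   # blocks on three data disks
        p = parity_block(data)                            # stored on the parity disk

        # Disk 1 fails; its block is recovered from the other disks and the parity.
        recovered = reconstruct([data[0], data[2]], p)
        assert recovered == data[1]
        print(recovered)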

  3. Solid Rocket Booster Large Main and Drogue Parachute Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Clifford, Courtenay B.; Hengel, John E.

    2009-01-01

    observed drogue chute failures, Jeffreys prior was used to calculate a reliability of R = .998. Based on these results, it is concluded that the LMP and drogue parachutes on the Shuttle SRB are suited to their mission and changes made over their life have improved the reliability of the parachute.
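
    The R = .998 figure comes from a Bayesian treatment with a Jeffreys prior. The sketch below shows the standard Beta-posterior arithmetic for a binomial failure count under a Jeffreys Beta(0.5, 0.5) prior; the trial and failure counts used here are hypothetical, since the record above does not list them.

        from scipy import stats

        def jeffreys_reliability(n_trials, n_failures, lower_tail=0.05):
            """Posterior mean reliability and one-sided lower bound under a
            Jeffreys Beta(0.5, 0.5) prior on the failure probability."""
            a = n_failures + 0.5
            b = n_trials - n_failures + 0.5
            p_fail_mean = a / (a + b)
            p_fail_upper = stats.beta.ppf(1.0 - lower_tail, a, b)
            return 1.0 - p_fail_mean, 1.0 - p_fail_upper

        # Hypothetical counts for illustration only; the abstract reports R = .998
        # for the drogue chute without listing the underlying trial counts here.
        r_mean, r_lower = jeffreys_reliability(n_trials=500, n_failures=0)
        print(f"posterior mean R = {r_mean:.4f}, 95% lower bound = {r_lower:.4f}")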

  4. Validation and reliability of the Physical Activity Scale for the Elderly in Chinese population.

    PubMed

    Ngai, Shirley P C; Cheung, Roy T H; Lam, Priscillia L; Chiu, Joseph K W; Fung, Eric Y H

    2012-05-01

    Physical Activity Scale for the Elderly (PASE) is a widely used questionnaire in epidemiological studies for assessing the physical activity level of elderly. This study aims to translate and validate PASE in Chinese population. Cross-sectional study. Chinese elderly aged 65 or above. The original English version of PASE was translated into Chinese (PASE-C) following standardized translation procedures. Ninety Chinese elderly aged 65 or above were recruited in the community. Test-retest reliability was determined by comparing the scores obtained from two separate administrations by the intraclass correlation coefficient. Validity was evaluated by Spearman's rank correlation coefficients between PASE and Medical Outcome Survey 36-Item Short Form Health Survey (SF-36), grip strength, single-leg-stance, 5 times sit-to-stand and 10-m walk. PASE-C demonstrated good test-retest reliability (intraclass correlation coefficient  = 0.81). Fair to moderate association were found between PASE-C and most of the subscales of SF-36 (rs = 0.285 to 0.578, p < 0.01), grip strength (rs = 0.405 to 0.426, p < 0.001), single-leg-stance (rs = 0.470 to 0.548, p < 0.001), 5 times sit-to-stand (rs = -0.33, p = 0.001) and 10-m walk (rs = -0.281, p = 0.007). PASE-C is a reliable and valid instrument for assessing the physical activity level of elderly in Chinese population.

  5. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  6. Autonomic dysfunction predicts poor physical improvement after cardiac rehabilitation in patients with heart failure.

    PubMed

    Compostella, Leonida; Nicola, Russo; Tiziana, Setzu; Caterina, Compostella; Fabio, Bellotto

    2014-11-01

    Cardiac autonomic dysfunction, clinically expressed by reduced heart rate variability (HRV), is present in patients with congestive heart failure (CHF) and is related to the degree of left ventricular dysfunction. In athletes, HRV is an indicator of ability to improve performance. No similar data are available for CHF. The aim of this study was to assess whether HRV could predict the capability of CHF patients to improve physical fitness after a short period of exercise-based cardiac rehabilitation (CR). This was an observational, non-randomized study, conducted on 57 patients with advanced CHF, admitted to a residential cardiac rehabilitation unit 32 ± 22 days after an episode of acute heart failure. Inclusion criteria were sinus rhythm, stable clinical conditions, no diabetes and ejection fraction ≤ 35%. HRV (time-domain) and mean and minimum heart rate (HR) were evaluated using 24-h Holter at admission. Patients' physical fitness was evaluated at admission by 6-minute walking test (6MWT) and reassessed after two weeks of intensive exercise-based CR. Exercise capacity was evaluated by a symptom-limited cardiopulmonary exercise test (CPET). Patients with very depressed HRV (SDNN 55.8 ± 10.0 ms) had no improvement in their walking capacity after short CR, walked shorter absolute distances at final 6MWT (348 ± 118 vs. 470 ± 109 m; P = 0.027) and developed a peak-VO2 at CPET significantly lower than patients with greater HRV parameters (11.4 ± 3.7 vs. an average > 16 ± 4 mL/kg/min). Minimum HR, but not mean HR, showed a negative correlation (ρ = -0.319) with CPET performance. In patients with advanced CHF, depressed HRV and higher minimum HR were predictors of poor working capacity after a short period of exercise-based CR. An individualized and intensive rehabilitative intervention should be considered for these patients.

  7. Effect of system workload on operating system reliability - A study on IBM 3081

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Rossetti, D. J.

    1985-01-01

    This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.

  8. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
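
    The abstract formulates reliability as the probability that a function of random variables falls below a constant. A crude Monte Carlo sketch for a simple stress-strength limit state (with hypothetical strength and stress distributions) illustrates that formulation; the FPI and importance-sampling variants compared in the report are not shown.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000

        # Hypothetical lamina strength (lognormal) and applied stress (normal), MPa.
        strength = rng.lognormal(mean=np.log(600.0), sigma=0.08, size=n)
        stress = rng.normal(loc=450.0, scale=40.0, size=n)

        # Limit state g = strength - stress; failure when g < 0.
        g = strength - stress
        p_f = np.mean(g < 0.0)
        se = np.sqrt(p_f * (1.0 - p_f) / n)      # standard error of the estimate
        print(f"P(failure) ~= {p_f:.4f} +/- {1.96 * se:.4f} (95% CI half-width)")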

  9. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real time to ensure that the power grid data network does not become overloaded and/or fail.

  10. Interobserver Reliability of the Respiratory Physical Examination in Premature Infants: A Multicenter Study

    PubMed Central

    Jensen, Erik A.; Panitch, Howard; Feng, Rui; Moore, Paul E.; Schmidt, Barbara

    2017-01-01

    Objective To measure the inter-rater reliability of 7 visual and 3 auscultatory respiratory physical examination findings at 36–40 weeks’ postmenstrual age in infants born less than 29 weeks’ gestation. Physicians also estimated the probability that each infant would remain hospitalized for 3 months after the examination or be readmitted for a respiratory illness during that time. Study design Prospective, multicenter, inter-rater reliability study using standardized audio-video recordings of respiratory physical examinations. Results We recorded the respiratory physical examination of 30 infants at 2 centers and invited 32 physicians from 9 centers to review the examinations. The intraclass correlation values for physician agreement ranged from 0.73 (95% CI 0.57–0.85) for subcostal retractions to 0.22 (95% CI 0.11–0.41) for expiratory abdominal muscle use. Eight (27%) infants remained hospitalized or were readmitted within 3 months after the examination. The area under the receiver operating characteristic curve for prediction of this outcome was 0.82 (95% CI 0.78–0.86). Physician predictive accuracy was greater for infants receiving supplemental oxygen (0.90, 95% CI 0.86–0.95) compared with those breathing in room air (0.71, 95% CI 0.66–0.75). Conclusions Physicians often do not agree on respiratory physical examination findings in premature infants. Physician prediction of short-term respiratory morbidity was more accurate for infants receiving supplemental oxygen compared with those breathing in room air. PMID:27567413

  11. Interobserver Reliability of the Respiratory Physical Examination in Premature Infants: A Multicenter Study.

    PubMed

    Jensen, Erik A; Panitch, Howard; Feng, Rui; Moore, Paul E; Schmidt, Barbara

    2016-11-01

    To measure the inter-rater reliability of 7 visual and 3 auscultatory respiratory physical examination findings at 36-40 weeks' postmenstrual age in infants born less than 29 weeks' gestation. Physicians also estimated the probability that each infant would remain hospitalized for 3 months after the examination or be readmitted for a respiratory illness during that time. Prospective, multicenter, inter-rater reliability study using standardized audio-video recordings of respiratory physical examinations. We recorded the respiratory physical examination of 30 infants at 2 centers and invited 32 physicians from 9 centers to review the examinations. The intraclass correlation values for physician agreement ranged from 0.73 (95% CI 0.57-0.85) for subcostal retractions to 0.22 (95% CI 0.11-0.41) for expiratory abdominal muscle use. Eight (27%) infants remained hospitalized or were readmitted within 3 months after the examination. The area under the receiver operating characteristic curve for prediction of this outcome was 0.82 (95% CI 0.78-0.86). Physician predictive accuracy was greater for infants receiving supplemental oxygen (0.90, 95% CI 0.86-0.95) compared with those breathing in room air (0.71, 95% CI 0.66-0.75). Physicians often do not agree on respiratory physical examination findings in premature infants. Physician prediction of short-term respiratory morbidity was more accurate for infants receiving supplemental oxygen compared with those breathing in room air. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL).

    PubMed

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Rickards, Luke; Turner, Robin; Bogduk, Nikolai

    2013-09-09

    The aim of this project was to investigate the reliability of a new 11-item quality appraisal tool for studies of diagnostic reliability (QAREL). The tool was tested on studies reporting the reliability of any physical examination procedure. The reliability of physical examination is a challenging area to study given the complex testing procedures, the range of tests, and lack of procedural standardisation. Three reviewers used QAREL to independently rate 29 articles, comprising 30 studies, published during 2007. The articles were identified from a search of relevant databases using the following string: "Reproducibility of results (MeSH) OR reliability (t.w.) AND Physical examination (MeSH) OR physical examination (t.w.)." A total of 415 articles were retrieved and screened for inclusion. The reviewers undertook an independent trial assessment prior to data collection, followed by a general discussion about how to score each item. At no time did the reviewers discuss individual papers. Reliability was assessed for each item using multi-rater kappa (κ). Multi-rater reliability estimates ranged from κ = 0.27 to 0.92 across all items. Six items were recorded with good reliability (κ > 0.60), three with moderate reliability (κ = 0.41 - 0.60), and two with fair reliability (κ = 0.21 - 0.40). Raters found it difficult to agree about the spectrum of patients included in a study (Item 1) and the correct application and interpretation of the test (Item 10). In this study, we found that QAREL was a reliable assessment tool for studies of diagnostic reliability when raters agreed upon criteria for the interpretation of each item. Nine out of 11 items had good or moderate reliability, and two items achieved fair reliability. The heterogeneity in the tests included in this study may have resulted in an underestimation of the reliability of these two items. We discuss these and other factors that could affect our results and make recommendations for the use of QAREL.

  13. Predicted reliability of aerospace electronics: Application of two advanced probabilistic concepts

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    Two advanced probabilistic design-for-reliability (PDfR) concepts are addressed and discussed in application to the prediction, quantification and assurance of the aerospace electronics reliability: 1) Boltzmann-Arrhenius-Zhurkov (BAZ) model, which is an extension of the currently widely used Arrhenius model and, in combination with the exponential law of reliability, enables one to obtain a simple, easy-to-use and physically meaningful formula for the evaluation of the probability of failure (PoF) of a material or a device after the given time in operation at the given temperature and under the given stress (not necessarily mechanical), and 2) Extreme Value Distribution (EVD) technique that can be used to assess the number of repetitive loadings that result in the material/device degradation and eventually lead to its failure by closing, in a step-wise fashion, the gap between the bearing capacity (stress-free activation energy) of the material or the device and the demand (loading). It is shown that the material degradation (aging, damage accumulation, flaw propagation, etc.) can be viewed, when BAZ model is considered, as a Markovian process, and that the BAZ model can be obtained as the ultimate steady-state solution to the well-known Fokker-Planck equation in the theory of Markovian processes. It is shown also that the BAZ model addresses the worst, but a reasonably conservative, situation. It is suggested therefore that the transient period preceding the condition addressed by the steady-state BAZ model need not be accounted for in engineering evaluations. However, when there is an interest in understanding the transient degradation process, the obtained solution to the Fokker-Planck equation can be used for this purpose. As to the EVD concept, it attributes the degradation process to the accumulation of damages caused by a train of repetitive high-level loadings, while loadings of levels that are considerably lower than their extreme values do not contribute
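
    One common reading of the BAZ model combined with the exponential law of reliability is a mean time to failure of the form tau = tau0 * exp[(U0 - gamma*sigma)/(kT)], from which the probability of failure after time t follows. The sketch below uses that simplified form with hypothetical parameter values; it illustrates the structure of the model rather than the author's full formulation.

        import math

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def baz_probability_of_failure(t_hours, temp_c, stress, u0_ev, gamma_ev_per_unit, tau0_hours):
            """Probability of failure by time t under a simplified BAZ formulation:
            mean time to failure tau = tau0 * exp((U0 - gamma*stress) / (k*T)),
            combined with the exponential law of reliability."""
            temp_k = temp_c + 273.15
            tau = tau0_hours * math.exp((u0_ev - gamma_ev_per_unit * stress) / (K_B * temp_k))
            return 1.0 - math.exp(-t_hours / tau)

        # Hypothetical parameters: 0.7 eV stress-free activation energy, a loading
        # term that lowers the effective barrier, 100 C operation for 10,000 hours.
        pof = baz_probability_of_failure(t_hours=1.0e4, temp_c=100.0, stress=50.0,
                                         u0_ev=0.7, gamma_ev_per_unit=0.004, tau0_hours=1.0e-2)
        print(f"probability of failure = {pof:.3f}")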

  14. Physics-Based Compact Model for CIGS and CdTe Solar Cells: From Voltage-Dependent Carrier Collection to Light-Enhanced Reverse Breakdown: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xingshu; Alam, Muhammad Ashraful; Raguse, John

    2015-10-15

    In this paper, we develop a physics-based compact model for copper indium gallium diselenide (CIGS) and cadmium telluride (CdTe) heterojunction solar cells that attributes the failure of superposition to voltage-dependent carrier collection in the absorber layer, and interprets light-enhanced reverse breakdown as a consequence of tunneling-assisted Poole-Frenkel conduction. The temperature dependence of the model is validated against both simulation and experimental data for the entire range of bias conditions. The model can be used to characterize device parameters, optimize new designs, and most importantly, predict the performance and reliability of solar panels, including the effects of self-heating and reverse breakdown due to partial-shading degradation.

  15. Perceived success/failure and attributions associated with self-regulatory efficacy to meet physical activity recommendations for women with arthritis.

    PubMed

    Spink, Kevin S; Brawley, Lawrence R; Gyurcsik, Nancy C

    2016-10-01

    The relationship between attributional dimensions women assign to the cause of their perceived success or failure at meeting the recommended physical activity dose and self-regulatory efficacy for future physical activity was examined among women with arthritis. Women (N = 117) aged 18-84 years, with self-reported medically-diagnosed arthritis, completed on-line questions in the fall of 2013 assessing endurance physical activity, perceived outcome for meeting the recommended levels of endurance activity, attributions for one's success or failure in meeting the recommendations, and self-regulatory efficacy to schedule/plan endurance activity over the next month. The main theoretically-driven finding revealed that the interaction of the stability dimension with perceived success/failure was significantly related to self-regulatory efficacy for scheduling and planning future physical activity (β = 0.35, p = .002). Outcomes attributed to more versus less stable factors accentuated differences in self-regulatory efficacy beliefs following perceived success and failure at being active. It appears that attributional dimensions were associated with self-regulatory efficacy in women with arthritis. This suggests that rather than objectively observed past mastery experience, women's subjective perceptions and explanations of their past experiences were related to efficacy beliefs, especially following a failure experience.

  16. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  17. Mass and Reliability Source (MaRS) Database

    NASA Technical Reports Server (NTRS)

    Valdenegro, Wladimir

    2017-01-01

    The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.

  18. Reliability and Validity of a Measure of Sexual and Physical Abuse Histories among Women with Serious Mental Illness.

    ERIC Educational Resources Information Center

    Meyer, Ilan H.; And Others

    1996-01-01

    Structured clinical interviews concerning childhood histories of physical and sexual abuse with 70 mentally ill women at 2 times found test-retest reliability of .63 for physical abuse and .82 for sexual abuse. Validity, assessed as consistency with an independent clinical assessment, showed 75% agreement for physical abuse and 93% agreement for…

  19. Performance and Reliability of Bonded Interfaces for High-Temperature Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paret, Paul P

    2017-08-02

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (>200 degrees C). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. Mechanical characterization tests that result in stress-strain curves and accelerated tests that produce cycles-to-failure results will be conducted. Also, we present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed.

  20. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity-use behaviour and curtail load actively. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer’s comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.

  1. Landslide early warning based on failure forecast models: the example of the Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-07-01

    We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time over different temporal windows, and we apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of a landslide emergency.
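
    A minimal sketch of the inverse-velocity forecast underlying this approach: fit a straight line to 1/velocity over an observation window and extrapolate to 1/v = 0 to estimate the time of failure. The displacement data below are synthetic, and the confidence intervals described in the abstract (for example, from bootstrapping the fit over different windows) are omitted.

        # Fukuzono-style inverse-velocity forecast on synthetic creep data.
        import numpy as np

        rng = np.random.default_rng(0)
        t_fail_true = 100.0                        # synthetic "true" failure time (days)
        t = np.linspace(60.0, 95.0, 36)            # observation window
        velocity = 1.0 / (t_fail_true - t)         # accelerating creep, v ~ 1/(tf - t)
        velocity *= rng.normal(1.0, 0.05, t.size)  # measurement noise

        inv_v = 1.0 / velocity
        slope, intercept = np.polyfit(t, inv_v, 1)  # linear model: 1/v = a*t + b
        t_fail_est = -intercept / slope             # 1/v reaches 0 at the forecast failure time

        print("forecast time of failure: %.1f days (true: %.1f)" % (t_fail_est, t_fail_true))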

  2. Failure modes and effects analysis automation

    NASA Technical Reports Server (NTRS)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

    A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single user workstations and, although not designed to replace conventional FMEA, it is expected to decrease by many man years the time required to perform the analysis.

  3. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  4. Constructing the "Best" Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.; Kleinhammer, R. K.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis and, hopefully, a better final decision.
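
    One simple way to form a composite analog, assuming similarity scores have already been judged for each generic source (an illustration of the idea, not the authors' specific procedure), is a similarity-weighted average of the source failure rates, as sketched below with made-up numbers.

        # Similarity-weighted composite analog failure rate (illustrative values only).
        generic_sources = [
            # (source description, failure rate [per 1e6 h], similarity weight 0..1)
            ("Generic handbook pump, industrial", 12.0, 0.4),
            ("Offshore database pump",             8.5, 0.7),
            ("In-house test data, lab",            5.0, 0.9),
        ]

        total_weight = sum(w for _, _, w in generic_sources)
        composite_rate = sum(rate * w for _, rate, w in generic_sources) / total_weight

        print("composite analog failure rate: %.2f failures per 1e6 hours" % composite_rate)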

  5. Constructing the Best Reliability Data for the Job

    NASA Technical Reports Server (NTRS)

    Kleinhammer, R. K.; Kahn, J. C.

    2014-01-01

    Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method used incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. Data that is more representative of reality and more project specific would provide more accurate analysis and, hopefully, a better final decision.

  6. Performance and reliability enhancement of linear coolers

    NASA Astrophysics Data System (ADS)

    Mai, M.; Rühlich, I.; Schreiter, A.; Zehner, S.

    2010-04-01

    Highest efficiency is a crucial requirement for modern tactical IR cryocooling systems. To enhance overall efficiency, AIM cryocooler designs were reassessed considering all relevant loss mechanisms and associated components. The investigation was based on state-of-the-art simulation software featuring magnet circuitry analysis as well as computational fluid dynamics (CFD) to realistically replicate thermodynamic interactions. As a result, an improved design for AIM linear coolers could be derived. This paper gives an overview of the performance enhancement activities and major results. An additional key requirement for cryocoolers is reliability. AIM has recently introduced linear coolers with full Flexure Bearing suspension on both ends of the driving mechanism, incorporating a Moving Magnet piston drive. In conjunction with a Pulse-Tube coldfinger, these coolers are capable of meeting MTTFs (Mean Time To Failure) in excess of 50,000 hours, offering superior reliability for space applications. Ongoing development also focuses on reliability enhancement, deriving tactical solutions from space technology and combining excellent specific performance with space-like reliability. This paper summarizes the progress of this reliability program and gives a further outlook.
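
    As a back-of-the-envelope illustration of what an MTTF above 50,000 hours implies, assuming a constant failure rate (an exponential life model, which the abstract does not state), the survival probability of a single, non-redundant cooler over five years of continuous operation is:

        # Exponential-life survival probability implied by a given MTTF (assumed model).
        import math

        mttf_hours = 50_000.0
        mission_hours = 5 * 8760.0          # five years of continuous operation

        reliability = math.exp(-mission_hours / mttf_hours)
        print("R(5 yr) = %.3f" % reliability)   # roughly 0.42 under these assumptions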

  7. An integrated approach coupling physically based models and probabilistic method to assess quantitatively landslide susceptibility at different scale: application to different geomorphological environments

    NASA Astrophysics Data System (ADS)

    Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine

    2016-04-01

    Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists in assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions. For some models it is possible to integrate land use and climatic change. On the other hand, major drawbacks are the large amounts of reliable and detailed data required (especially material types, their thickness and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) material heterogeneity, (ii) spatial variation of physical parameters and (iii) different landslide types, the French Geological Survey (BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach.

  8. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component-level reliability or the probability of failure. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper establishes a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach provides a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  9. CardioGuard: A Brassiere-Based Reliable ECG Monitoring Sensor System for Supporting Daily Smartphone Healthcare Applications

    PubMed Central

    Kwon, Sungjun; Kim, Jeehoon; Kang, Seungwoo; Lee, Youngki; Baek, Hyunjae

    2014-01-01

    We propose CardioGuard, a brassiere-based reliable electrocardiogram (ECG) monitoring sensor system for supporting daily smartphone healthcare applications. It is designed to satisfy two key requirements for user-unobtrusive daily ECG monitoring: reliability of ECG sensing and usability of the sensor. The system was validated through extensive evaluations. The evaluation results showed that the CardioGuard sensor reliably measures the ECG during 12 representative daily activities involving diverse movement levels; 89.53% of QRS peaks were detected on average. A questionnaire-based user study with 15 participants showed that the CardioGuard sensor was comfortable and unobtrusive. Additionally, a signal-to-noise ratio test and a washing durability test were conducted to show the high-quality sensing of the proposed sensor and its physical durability in practical use, respectively. PMID:25405527

  10. Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.

    PubMed

    Lee, Joa-Hyoung; Jung, In-Bum

    2010-01-01

    Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes, and upon failure detection, it decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF can recover from large-area failures quickly with fewer packets and less energy consumption than previous protocols.

  11. Development of confidence limits by pivotal functions for estimating software reliability

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
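
    The Moranda geometric de-eutrophication model assumed above takes the program failure rate after the (i-1)-th fix to be lambda_i = D * k**(i-1), with exponentially distributed interfailure times. The sketch below simulates one such realization; D and k are arbitrary illustration values.

        # Simulate interfailure times under the Moranda geometric model (illustrative).
        import random

        random.seed(42)
        D, k = 0.05, 0.8          # initial failure rate (per hour) and improvement factor
        n_failures = 10

        for i in range(1, n_failures + 1):
            rate = D * k ** (i - 1)            # failure rate after i-1 bugs have been fixed
            dt = random.expovariate(rate)      # time between failure i-1 and failure i
            print("interfailure time %2d: %8.1f h (rate %.4f /h)" % (i, dt, rate))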

  12. Spaceflight tracking and data network operational reliability assessment for Skylab

    NASA Technical Reports Server (NTRS)

    Seneca, V. I.; Mlynarczyk, R. H.

    1974-01-01

    Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes accomplished prior to the beginning of the Skylab missions. Appropriate adjustments were made to fit the data to the models. The availabilities are based on the failure events resulting in a station's inability to support a function or functions, and the MTBFs are based on all events, including 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.

  13. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; reliable and timely data dissemination is therefore the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for this reliable and timely data dissemination service. In particular, vehicles travelling in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.

  14. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability of commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
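
    As a generic sketch (synthetic data, not the SDRAM measurements), the snippet below shows how a Weibull slope is read off a set of failure times by median-rank regression, the quantity used above to separate the randomly distributed weak-bit population (beta near 1) from the main population with an increasing failure rate (beta greater than 1).

        # Estimate the Weibull slope (beta) by median-rank regression on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        beta_true, eta_true = 2.5, 1000.0                    # shape and scale, arbitrary
        t = np.sort(eta_true * rng.weibull(beta_true, 50))   # synthetic failure times

        n = t.size
        ranks = np.arange(1, n + 1)
        F = (ranks - 0.3) / (n + 0.4)        # Bernard's median-rank approximation

        x = np.log(t)
        y = np.log(-np.log(1.0 - F))         # Weibull plot ordinate
        beta_hat, _ = np.polyfit(x, y, 1)    # fitted slope estimates beta

        print("estimated Weibull slope beta = %.2f (true %.1f)" % (beta_hat, beta_true))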

  15. WEAMR — A Weighted Energy Aware Multipath Reliable Routing Mechanism for Hotline-Based WSNs

    PubMed Central

    Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung

    2013-01-01

    Reliable source-to-sink communication is the most important factor for an efficient routing protocol, especially in domains such as military, healthcare and disaster recovery applications. We present weighted energy aware multipath reliable routing (WEAMR), a novel energy-aware multipath routing protocol which utilizes hotline-assisted routing to meet such requirements for mission-critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on weighted cost calculation and intelligently selects the best possible paths for data transmissions. The path cost calculation considers the end-to-end number of hops, latency and the minimum energy node value in the path. In case of path failure, path recalculation is done efficiently with minimum latency and control packet overhead. Our evaluation shows that our proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver. PMID:23669714

  16. Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth

    ERIC Educational Resources Information Center

    Wang, Chao; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M[subscript age] = 14.26 ±…

  17. Durability, value, and reliability of selected electric powered wheelchairs.

    PubMed

    Fass, Megan V; Cooper, Rory A; Fitzgerald, Shirley G; Schmeler, Mark; Boninger, Michael L; Algood, S David; Ammer, William A; Rentschler, Andrew J; Duncan, John

    2004-05-01

    To compare the durability, value, and reliability of selected electric powered wheelchairs (EPWs) purchased in 1998. Engineering standards tests of quality and performance. A rehabilitation engineering center. Fifteen EPWs: 3 each of the Jazzy, Quickie, Lancer, Arrow, and Chairman models. Not applicable. Wheelchairs were evaluated for durability (lifespan), value (durability, cost), and reliability (rate of repairs) using 2-drum and curb-drop machines in accordance with the standards of the American National Standards Institute and the Rehabilitation Engineering and Assistive Technology Society of North America. The 5 brands differed significantly in durability, value, and reliability, except in terms of reliability of supplier repairs. The Arrow had the highest durability, value, and reliability in terms of the number of consumer failures, supplier failures, repairs, failures, consumer repairs and failures, and supplier repairs and failures. The Lancer had the poorest durability and reliability, and the Chairman had the lowest value. K0014 wheelchairs (Arrow, Permobil) were significantly more durable than K0011 wheelchairs (Jazzy, Quickie, Lancer). No significant differences in durability with respect to rear-wheel-drive (Arrow, Lancer, Quickie), mid-wheel-drive (Jazzy), or front-wheel-drive (Chairman) wheelchairs were found. The Arrow consistently outperformed the other wheelchairs in nearly every area studied, and K0014 wheelchairs were more durable than K0011 wheelchairs. These results can be used as an objective comparison guide for clinicians and consumers, as long as they are used in conjunction with other important selection criteria. Manufacturers can use these results as a guide for continued efforts to produce higher quality wheelchairs. Care should be taken when making comparisons, however, because the 5 brands had different features. Purchased in 1998, these models may be used for several more years. In addition, problem areas in these models

  18. Test-retest reliability of Physical Activity Neighborhood Environment Scale among urban men and women in Nanjing, China.

    PubMed

    Zhao, L; Wang, Z; Qin, Z; Leslie, E; He, J; Xiong, Y; Xu, F

    2018-03-01

    The identification of physical-activity-friendly built environment (BE) constructs is highly useful for physical activity promotion and maintenance. The Physical Activity Neighborhood Environment Scale (PANES) was developed for assessing BE correlates. However, PANES reliability has not been investigated among adults in China. A cross-sectional study. With multistage sampling approaches, 1568 urban adults (aged 35-74 years) were recruited for the initial survey on all 17 items of PANES Chinese version (PANES-CHN), with the survey repeated 7 days later for each participant. Intraclass correlation coefficient (ICC) was used to assess the test-retest reliability of PANES-CHN for each item. Totally, 1551 participants completed both surveys (follow-up rate = 98.9%). Among participants (mean age: 54.7 ± 11.1 years), 47.8% were men, 22.1% were elders, and 22.7% had ≥13 years of education. Overall, the PANES-CHN demonstrated at least substantial reliability with ICCs ranging from 0.66 to 0.95 (core items), from 0.75 to 0.95 (recommended items), and from 0.78 to 0.87 (optional items). Similar outcomes were observed when data were analyzed by gender or age groups. The PANES-CHN has excellent test-retest reliability and thus has valuable utility for assessing urban BE attributes among Chinese adults. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. Reliability Analysis of the Electrical Control System of Subsea Blowout Preventers Using Markov Models

    PubMed Central

    Liu, Zengkai; Liu, Yonghong; Cai, Baoping

    2014-01-01

    Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of failure rates of the modules and repair time on system reliability indices are also investigated. PMID:25409010
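
    The mechanics of a Markov reliability evaluation can be shown on a much smaller example than the subsea BOP control system: the sketch below builds the generator matrix for a redundant pair with a single repair crew and solves pi Q = 0 for the steady-state availability. The failure and repair rates are illustrative.

        # Steady-state availability of a redundant pair via a continuous-time Markov chain.
        import numpy as np

        lam, mu = 1e-4, 1e-2        # per-hour failure and repair rates (illustrative)

        # Generator matrix Q, states ordered as [2 units up, 1 up, 0 up]
        Q = np.array([
            [-2 * lam,      2 * lam,  0.0],
            [      mu, -(mu + lam),   lam],
            [     0.0,          mu,   -mu],   # single repair crew assumed
        ])

        # Solve pi Q = 0 together with the normalization sum(pi) = 1
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        availability = pi[0] + pi[1]    # system works with at least one unit up
        print("steady-state availability: %.6f" % availability)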

  20. Reliability improvement of wire bonds subjected to fatigue stresses.

    NASA Technical Reports Server (NTRS)

    Ravi, K. V.; Philofsky, E. M.

    1972-01-01

    The failure of wire bonds due to repeated flexure when semiconductor devices are operated in an on-off mode has been investigated. An accelerated fatigue testing apparatus was constructed, and the major fatigue variables, aluminum alloy composition and bonding mechanism, were tested. The data showed Al-1% Mg wires to exhibit superior fatigue characteristics compared to Al-1% Cu or Al-1% Si, and ultrasonic bonding to be better than thermocompression bonding for fatigue resistance. Based on these results, highly reliable devices were fabricated using Al-1% Mg wire with ultrasonic bonding; these withstood 120,000 power cycles with no failures.

  1. A validity and reliability study of the Turkish Multidimensional Assessment of Fatigue (MAF) scale in chronic musculoskeletal physical therapy patients.

    PubMed

    Yildirim, Yücel; Ergin, Gülbin

    2013-01-01

    Fatigue is primarily a subjective experience, and self-report is the most common approach used to measure it. Numerous self-report instruments have been developed to measure fatigue; unfortunately, each of these measures was tailored for the situation in which fatigue was studied. Therefore, the aim of this study was to determine the reliability and validity of the Turkish language version of the Multidimensional Assessment of Fatigue Scale (MAF-T) in chronic musculoskeletal physical therapy patients. The MAF-T was supplied by the MAPI Research Institute, and 69 chronic musculoskeletal physical therapy patients were evaluated. To validate the MAF-T, all participants completed the MAF-T and the Short Form-36 (SF-36). The MAF-T was administered again one week later to assess test-retest reliability. The internal consistency reliability (Cronbach's α) of the MAF-T was 0.90, and the intraclass correlation coefficient (ICC) for test-retest reliability was 0.96. Item-discriminant validity was calculated between r=0.14 and r=0.82. The correlations between the total scores of the MAF-T scale and the subscale scores of the SF-36 were negative and significant (p< 0.01). The MAF-T is a valid and reliable scale for assessing fatigue in chronic musculoskeletal physical therapy patients.
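
    A small sketch of the internal-consistency statistic reported above, Cronbach's alpha, computed from a subjects-by-items score matrix; the data are simulated and the item count is arbitrary.

        # Cronbach's alpha from a subjects-by-items matrix of simulated scores.
        import numpy as np

        rng = np.random.default_rng(7)
        n_subjects, n_items = 69, 16
        common = rng.normal(0, 1, n_subjects)                   # shared "fatigue" factor
        scores = common[:, None] + rng.normal(0, 0.8, (n_subjects, n_items))

        k = n_items
        item_vars = scores.var(axis=0, ddof=1)        # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
        alpha = (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        print("Cronbach's alpha = %.2f" % alpha)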

  2. Investigation of accelerated stress factors and failure/degradation mechanisms in terrestrial solar cells

    NASA Technical Reports Server (NTRS)

    Lathrop, J. W.

    1984-01-01

    Research on the reliability of terrestrial solar cells was performed to identify failure/degradation modes affecting solar cells and to relate these to basic physical, chemical, and metallurgical phenomena. Particular concerns addressed were the reliability attributes of individual single crystalline, polycrystalline, and amorphous thin film silicon cells. Results of subjecting different types of crystalline cells to the Clemson accelerated test schedule are given. Preliminary step stress results on one type of thin film amorphous silicon (a:Si) cell indicated that extraneous degradation modes were introduced above 140 C. Also described is development of measurement procedures which are applicable to the reliability testing of a:Si solar cells as well as an approach to achieving the necessary repeatability of fabricating a simulated a:Si reference cell from crystalline silicon photodiodes.

  3. The reliability and validity of Chinese version of SF36 v2 in aging patients with chronic heart failure.

    PubMed

    Dong, Aishu; Chen, Sisi; Zhu, Lianlian; Shi, Lingmin; Cai, Yueli; Zeng, Jingni; Guo, Wenjian

    2017-08-01

    Chronic heart failure (CHF), a major public health problem worldwide, seriously limits health-related quality of life (HRQOL). How to evaluate HRQOL in older patients with CHF remains a problem. This study evaluated the reliability and validity of the Chinese version of the Medical Outcomes Study Short Form version 2 (SF-36v2) in CHF patients. From September 2012 to June 2014, we assessed HRQOL using the SF-36v2 in 171 aging participants with CHF in four cardiology departments. Convergent and discriminant validity, factorial validity, sensitivity among different NYHA classes and between different age groups, and reliability were determined using standard measurement methods. A total of 150 participants completed a structured questionnaire including general information and the Chinese SF-36v2; 132 questionnaires were considered valid, while 21 patients refused to take part. Twenty-five of the 50 participants invited to complete the 2-week test-retest questionnaires returned completed questionnaires. The internal consistency reliability (Cronbach's α) of the total SF-36v2 was 0.92 (range 0.74-0.93). All hypothesized item-subscale correlations showed satisfactory convergent and discriminant validity. Sensitivity was measured in different NYHA classes and age groups: comparison of different NYHA classes showed statistical significance, but there was no significant difference between age groups. We confirmed the SF-36v2 as a valid instrument for evaluating HRQOL in Chinese CHF patients. Both reliability and validity were strongly satisfactory, but there was divergence in understanding subscales such as "social functioning" because of differing cultural backgrounds. The reliability, validity, and sensitivity of the SF-36v2 in aging patients with CHF were acceptable.

  4. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including for GPGPU-based HPC systems; and HPC resilience runtime and tools.

  5. Periodically Self Restoring Redundant Systems for VLSI Based Highly Reliable Design,

    DTIC Science & Technology

    1984-01-01

    fault tolerance technique for realizing highly reliable computer systems for critical control applications. However, VLSI technology has imposed a...operating correctly; failed critical real time control applications. n modules are discarded from the vote. the classical "static" voted redundancy...redundant modules are failure number of interconnections. This results in f aree. However, for applications requiring high modular complexity because

  6. Reliability and Validity of the PAQ-C Questionnaire to Assess Physical Activity in Children.

    PubMed

    Benítez-Porres, Javier; López-Fernández, Iván; Raya, Juan Francisco; Álvarez Carnero, Sabrina; Alvero-Cruz, José Ramón; Álvarez Carnero, Elvis

    2016-09-01

    Physical activity (PA) assessment by questionnaire is a cornerstone in the field of sport epidemiology studies. The Physical Activity Questionnaire for Children (PAQ-C) has been used widely to assess PA in healthy school populations. The aim of this study was to evaluate the reliability and validity of the PAQ-C questionnaire in Spanish children using triaxial accelerometry as the criterion. Eighty-three healthy children (46 boys, 37 girls; age 10.98 ± 1.17 years, body mass index 19.48 ± 3.51 kg/m²) volunteered, completed the PAQ-C twice, and wore an accelerometer for 8 consecutive days. Reliability was analyzed by the intraclass correlation coefficient (ICC) and internal consistency by Cronbach's α coefficient. The PAQ-C was compared against total PA and moderate to vigorous PA (MVPA) obtained by accelerometry. Test-retest reliability showed an ICC = 0.96 for the final score of the PAQ-C. Small differences between the first and second questionnaire administrations were detected. Few and low correlations (rho = 0.228-0.278, all ps < .05) were observed between the PAQ-C and accelerometry. The highest correlation was observed for item 9 (rho = 0.311, p < .01). The PAQ-C had high reliability but questionable validity for assessing total PA and MVPA in Spanish children. Therefore, PA measurement in children should not be limited only to self-report measurements. © 2016, American School Health Association.

  7. Reliability of burst superimposed technique to assess central activation failure during fatiguing contraction.

    PubMed

    Dousset, Erick; Jammes, Yves

    2003-04-01

    Recording a superimposed electrically-induced contraction at the limit of endurance during voluntary contraction is used as an indicator of failure of muscle activation by the central nervous system and to rule out peripheral muscle fatigue. We questioned the reliability of this method by using other means to explore peripheral muscle failure. Fifteen normal subjects sustained a handgrip at 60% of maximal voluntary contraction (MVC) until exhaustion. During the sustained contraction, power spectrum analysis of the flexor digitorum surface electromyogram allowed us to calculate the leftward shift of the median frequency (MF). A superimposed 60 Hz, 3 s pulse train (burst superimposition) was delivered to the muscle when force levelled off close to the preset value. Immediately after the fatigue trial had ended, the subject was asked to perform a 5 s 60% MVC and we measured the peak contractile response to a 60 Hz, 3 s burst stimulation. Recordings of the compound evoked muscle action potential (M-wave) allowed us to explore any impairment of neuromuscular propagation. A superimposed contraction was measured in 7 subjects in both forearms, whereas it was absent in the other 8. Despite these discrepancies, all subjects were able to reproduce a 3 s 60% MVC immediately after the fatigue trial ended, and there was no post-fatigue decrease of the contraction elicited by the 60 Hz, 3 s burst stimulation, nor any decrease in M-wave amplitude or conduction time. Thus, there was no indication of peripheral muscle fatigue. The MF decrease was present in all individuals throughout the fatiguing contraction and was not correlated with the magnitude of the superimposed force. These observations indicate that an absence of superimposed electrically-induced muscle contraction does not allow us to conclude that fatigue is solely peripheral in these circumstances.
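
    A sketch of the median-frequency computation used to track the EMG spectral shift: estimate the power spectral density with Welch's method and take the frequency that splits the total power in half. The signal below is band-limited noise standing in for a surface EMG record; the band edges and sampling rate are illustrative.

        # Median frequency of a synthetic surface-EMG-like signal via Welch's PSD.
        import numpy as np
        from scipy.signal import butter, filtfilt, welch

        fs = 1000.0                                    # sampling rate, Hz
        rng = np.random.default_rng(0)
        raw = rng.normal(0, 1, int(5 * fs))            # 5 s of white noise
        b, a = butter(4, [20, 150], btype="bandpass", fs=fs)
        emg = filtfilt(b, a, raw)                      # band-limited stand-in for EMG

        f, pxx = welch(emg, fs=fs, nperseg=1024)
        cum_power = np.cumsum(pxx)
        median_freq = f[np.searchsorted(cum_power, cum_power[-1] / 2.0)]

        print("EMG median frequency: %.1f Hz" % median_freq)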

  8. Diagnosing Recent Failures In Hodoscope Photomultiplier Tube Bases For FNAL E906

    NASA Astrophysics Data System (ADS)

    Stien, Haley; SeaQuest Collaboration

    2017-09-01

    The E906/SeaQuest experiment at Fermi National Accelerator Laboratory is studying the nucleon quark sea in order to provide an accurate determination of the quark and anti-quark distributions within the nucleon. By colliding a 120 GeV proton beam with a set of fixed targets and tracking the dimuons that hit the detectors, it is possible to study the quark/anti-quark interactions that produce dimuons through the Drell-Yan process. However, E906 recently began to experience a number of failures in the hodoscope photomultiplier tube bases in the first two detector stations, which are used in the trigger. The two most likely causes were radiation damage and overheating. Radiation damage was ruled out when no increase in the number of base failures was found in high-rate areas. It became clear that heat generated on the custom high-rate bases caused several components on the daughter cards to slowly overheat until failure. Using thermal imaging and two temperature probes, it was observed that the components on the daughter cards reached temperatures over 100 degrees Celsius very quickly during our tests. This presentation will discuss the diagnostic process and summarize how this issue will be prevented in the future. Supported by U.S. D.O.E. Medium Energy Nuclear Physics under Grant DE-FG02-03ER41243.

  9. Report on Wind Turbine Subsystem Reliability - A Survey of Various Databases (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, S.

    2013-07-01

    The wind industry has been challenged by premature subsystem/component failures. Various reliability data collection efforts have demonstrated their value in supporting wind turbine reliability and availability research & development and industrial activities. However, most information on these data collection efforts is scattered and not in a centralized place. With the objective of getting updated reliability statistics of wind turbines and/or subsystems so as to benefit future wind reliability and availability activities, this report was put together based on a survey of various reliability databases that are accessible directly or indirectly by NREL. For each database, whenever feasible, a brief description summarizing database population, life span, and data collected is given along with its features and status. Selected results deemed beneficial to the industry and generated from the database are then highlighted. This report concludes with several observations obtained throughout the survey and several reliability data collection opportunities for the future.

  10. Survey on the implementation and reliability of CubeSat electrical bus interfaces

    NASA Astrophysics Data System (ADS)

    Bouwmeester, Jasper; Langer, Martin; Gill, Eberhard

    2017-06-01

    This paper provides results and conclusions from a survey on the implementation and reliability aspects of CubeSat bus interfaces, with an emphasis on the data bus and power distribution, and it provides recommendations for a future CubeSat bus standard. The survey is based on a literature study and a questionnaire covering 60 launched CubeSats and 44 CubeSats yet to be launched. It is found that the bus interfaces are not the main driver for mission failures. However, it is concluded that the Inter-Integrated Circuit (I2C) data bus, as implemented in the great majority of CubeSats, caused some catastrophic satellite failures and a large number of bus lockups. The power distribution may lead to catastrophic failures if the power lines are not protected against overcurrent. A connector and wiring standard widely implemented in CubeSats is based on the PC/104 standard; most participants find the 104-pin connector of this standard too large. For a future CubeSat bus interface standard, it is recommended to implement a reliable data bus, power distribution with overcurrent protection, and a wiring harness with smaller connectors than PC/104.

  11. Comparing the Psychometric Properties of Two Physical Activity Self-Efficacy Instruments in Urban, Adolescent Girls: Validity, Measurement Invariance, and Reliability

    PubMed Central

    Voskuil, Vicki R.; Pierce, Steven J.; Robbins, Lorraine B.

    2017-01-01

    Aims: This study compared the psychometric properties of two self-efficacy instruments related to physical activity. Factorial validity, cross-group and longitudinal invariance, and composite reliability were examined. Methods: Secondary analysis was conducted on data from a group randomized controlled trial investigating the effect of a 17-week intervention on increasing moderate to vigorous physical activity among 5th–8th grade girls (N = 1,012). Participants completed a 6-item Physical Activity Self-Efficacy Scale (PASE) and a 7-item Self-Efficacy for Exercise Behaviors Scale (SEEB) at baseline and post-intervention. Confirmatory factor analyses for intervention and control groups were conducted with Mplus Version 7.4 using robust weighted least squares estimation. Model fit was evaluated with the chi-square index, comparative fit index, and root mean square error of approximation. Composite reliability for latent factors with ordinal indicators was computed from Mplus output using SAS 9.3. Results: Mean age of the girls was 12.2 years (SD = 0.96). One-third of the girls were obese. Girls represented a diverse sample with over 50% indicating black race and an additional 19% identifying as mixed or other race. Both instruments demonstrated configural invariance for simultaneous analysis of cross-group and longitudinal invariance based on alternative fit indices. However, simultaneous metric invariance was not met for the PASE or the SEEB instruments. Partial metric invariance for the simultaneous analysis was achieved for the PASE with one factor loading identified as non-invariant. Partial metric invariance was not met for the SEEB. Longitudinal scalar invariance was achieved for both instruments in the control group but not the intervention group. Composite reliability for the PASE ranged from 0.772 to 0.842. Reliability for the SEEB ranged from 0.719 to 0.800 indicating higher reliability for the PASE. Reliability was more stable over time in the control

  12. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are subdivided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume-flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
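
    A stripped-down sketch of the volume-flaw, fast-fracture calculation that CARES automates: each element contributes a Weibull risk-of-rupture term based on its stress and volume, and element survival probabilities multiply under the weakest-link assumption. The stresses, volumes, and Weibull parameters below are illustrative, and the unit-volume normalization is folded into the characteristic strength.

        # Weakest-link (Weibull volume-flaw) failure probability from element data.
        import math

        m, sigma_0 = 10.0, 300.0          # Weibull modulus and characteristic strength (MPa)
        # (element stress [MPa], element volume [mm^3]) pairs, e.g. from an FE post-processor
        elements = [(180.0, 2.0), (220.0, 1.5), (250.0, 0.8), (150.0, 3.0)]

        survival = 1.0
        for sigma, volume in elements:
            risk = volume * (sigma / sigma_0) ** m     # risk-of-rupture contribution
            survival *= math.exp(-risk)

        print("component failure probability: %.4f" % (1.0 - survival))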

  13. Diagnosis of Fanconi anemia in patients with bone marrow failure

    PubMed Central

    Pinto, Fernando O.; Leblanc, Thierry; Chamousset, Delphine; Le Roux, Gwenaelle; Brethon, Benoit; Cassinat, Bruno; Larghero, Jérôme; de Villartay, Jean-Pierre; Stoppa-Lyonnet, Dominique; Baruchel, André; Socié, Gérard; Gluckman, Eliane; Soulier, Jean

    2009-01-01

    Background Patients with bone marrow failure and undiagnosed underlying Fanconi anemia may experience major toxicity if given standard-dose conditioning regimens for hematopoietic stem cell transplant. Due to clinical variability and/or potential emergence of genetic reversion with hematopoietic somatic mosaicism, a straightforward Fanconi anemia diagnosis can be difficult to make, and diagnostic strategies combining different assays in addition to classical breakage tests in blood may be needed. Design and Methods We evaluated Fanconi anemia diagnosis on blood lymphocytes and skin fibroblasts from a cohort of 87 bone marrow failure patients (55 children and 32 adults) with no obvious full clinical picture of Fanconi anemia, by performing a combination of chromosomal breakage tests, FANCD2-monoubiquitination assays, a new flow cytometry-based mitomycin C sensitivity test in fibroblasts, and, when Fanconi anemia was diagnosed, complementation group and mutation analyses. The mitomycin C sensitivity test in fibroblasts was validated on control Fanconi anemia and non-Fanconi anemia samples, including other chromosomal instability disorders. Results When this diagnosis strategy was applied to the cohort of bone marrow failure patients, 7 Fanconi anemia patients were found (3 children and 4 adults). Classical chromosomal breakage tests in blood detected 4, but analyses on fibroblasts were necessary to diagnose 3 more patients with hematopoietic somatic mosaicism. Importantly, Fanconi anemia was excluded in all the other patients who were fully evaluated. Conclusions In this large cohort of patients with bone marrow failure our results confirmed that when any clinical/biological suspicion of Fanconi anemia remains after chromosome breakage tests in blood, based on physical examination, history or inconclusive results, then further evaluation including fibroblast analysis should be made. For that purpose, the flow-based mitomycin C sensitivity test here described proved

  14. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.

  15. Modelling Wind Turbine Failures based on Weather Conditions

    NASA Astrophysics Data System (ADS)

    Reder, Maik; Melero, Julio J.

    2017-11-01

    A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches, these costs can be decreased significantly. Here, wind turbine (WT) failure models in particular can help to understand the components' degradation processes and enable operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, recent research shows that on-site weather conditions also affect turbine failure behaviour significantly. This study presents a novel approach to modelling WT failures based on the environmental conditions to which the turbines are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
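
    The following sketch shows one form such a penalised-likelihood failure model can take, an L2-penalised Poisson regression of monthly failure counts on correlated weather covariates. The data are simulated and the covariate set is only illustrative, not the study's.

        # L2-penalised Poisson regression of failure counts on weather covariates.
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(1)
        n = 240                                        # e.g. 20 turbine-years of monthly records
        wind_speed = rng.normal(8.0, 2.0, n)
        temperature = rng.normal(10.0, 8.0, n)
        humidity = 0.6 * temperature + rng.normal(0, 5.0, n)   # deliberately correlated

        X = np.column_stack([wind_speed, temperature, humidity])
        true_rate = np.exp(-2.5 + 0.15 * wind_speed + 0.02 * humidity)
        failures = rng.poisson(true_rate)

        model = PoissonRegressor(alpha=1.0)            # alpha sets the L2 penalty strength
        model.fit(X, failures)
        for name, coef in zip(["wind_speed", "temperature", "humidity"], model.coef_):
            print("%-12s %+.3f" % (name, coef))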

  16. The development and reliability of a simple field based screening tool to assess core stability in athletes.

    PubMed

    O'Connor, S; McCaffrey, N; Whyte, E; Moran, K

    2016-07-01

    To adapt the trunk stability test to facilitate further sub-classification of higher levels of core stability in athletes, for use as a screening tool, and to establish the inter-tester and intra-tester reliability of this adapted core stability test. Reliability study. Collegiate athletic therapy facilities. Fifteen physically active male subjects (19.46 ± 0.63 years) free from any orthopaedic or neurological disorders were recruited from a convenience sample of collegiate students. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were computed to establish inter-tester and intra-tester reliability. Excellent ICC values were observed in the adapted core stability test for inter-tester reliability (0.97) and good to excellent intra-tester reliability (0.73-0.90). While the 95% CIs were narrow for inter-tester reliability, Testers A and C had widely distributed 95% CIs compared to Tester B. The adapted core stability test developed in this study is a quick and simple field-based test to administer that can further subdivide athletes with high levels of core stability. The test demonstrated high inter-tester and intra-tester reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Effect of UV curing time on physical and electrical properties and reliability of low dielectric constant materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, Kai-Chieh; Cheng, Yi-Lung, E-mail: yjcheng@ncnu.edu.tw; Chang, Wei-Yuan

    2014-11-01

    This study comprehensively investigates the effect of ultraviolet (UV) curing time on the physical, electrical, and reliability characteristics of porous low-k materials. Following UV irradiation for various periods, the depth profiles of the chemical composition in the low-k dielectrics were homogeneous. Initially, the UV curing process preferentially removed porogen-related CHx groups and then modified Si-CH3 and cage Si-O bonds to form network Si-O bonds. The lowest dielectric constant (k value) was thus obtained at a UV curing time of 300 s. Additionally, UV irradiation made porogen-based low-k materials hydrophobic, to an extent that increased with UV curing time. With a short curing time (<300 s), porogen was not completely removed and the residues degraded reliability performance. A long curing time (>300 s) was associated with improved mechanical strength, electrical performance, and reliability of the low-k materials, but none of these improved linearly with UV curing time. Therefore, UV curing is necessary, but the process time must be optimized for porous low-k materials in back-end-of-line integration at 45 nm and below technology nodes.

  18. Is perceived failure in school performance a trigger of physical injury? A case-crossover study of children in Stockholm County

    PubMed Central

    Laflamme, L; Engstrom, K; Moller, J; Hallqvist, J

    2004-01-01

    Objectives: To investigate whether perceived failure in school performance increases the potential for children to be physically injured. Subjects: Children aged 10–15 years residing in Stockholm County and hospitalised or called back for a medical check-up because of a physical injury during the school years 2000–2001 and 2001–2002 (n = 592). Methods: A case-crossover design was used and information on potential injury triggers was gathered by interview. Information about family socioeconomic circumstances was gathered by a questionnaire filled in by parents during the child interview (response rate 87%). Results: Perceived failure in school performance has the potential to trigger injury within up to 10 hours of exposure (relative risk = 2.70; 95% confidence interval = 1.2 to 5.8). The risk is significantly higher among pre-adolescents and among children from families with a higher education level. Conclusions: Experiencing feelings of failure may affect children's physical safety, in particular among pre-adolescents. Possible mechanisms are perceptual deficits and response changes occasioned by the stress experienced after exposure. PMID:15082740

  19. Reliability and minimal detectable change of physical performance measures in individuals with pre-manifest and manifest Huntington disease.

    PubMed

    Quinn, Lori; Khalil, Hanan; Dawes, Helen; Fritz, Nora E; Kegelmeyer, Deb; Kloos, Anne D; Gillard, Jonathan W; Busse, Monica

    2013-07-01

    Clinical intervention trials in people with Huntington disease (HD) have been limited by a lack of reliable and appropriate outcome measures. The purpose of this study was to determine the reliability and minimal detectable change (MDC) of various outcome measures that are potentially suitable for evaluating physical functioning in individuals with HD. This was a multicenter, prospective, observational study. Participants with pre-manifest and manifest HD (early, middle, and late stages) were recruited from 8 international sites to complete a battery of physical performance and functional measures at 2 assessments, separated by 1 week. Test-retest reliability (using intraclass correlation coefficients) and MDC values were calculated for all measures. Seventy-five individuals with HD (mean age=52.12 years, SD=11.82) participated in the study. Test-retest reliability was very high (>.90) for participants with manifest HD for the Six-Minute Walk Test (6MWT), 10-Meter Walk Test, Timed "Up & Go" Test (TUG), Berg Balance Scale (BBS), Physical Performance Test (PPT), Barthel Index, Rivermead Mobility Index, and Tinetti Mobility Test (TMT). Many MDC values suggested a relatively high degree of inherent variability, particularly in the middle stage of HD. Minimum detectable change values for participants with manifest HD that were relatively low across disease stages were found for the BBS (5), PPT (5), and TUG (2.98). For individuals with pre-manifest HD (n=11), the 6MWT and Four Square Step Test had high reliability and low MDC values. The sample size for the pre-manifest HD group was small. The BBS, PPT, and TUG appear most appropriate for clinical trials aimed at improving physical functioning in people with manifest HD. Further research in people with pre-manifest HD is necessary.

  20. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
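
    For the simplest case the report discusses, a linear limit state g = R - S with independent normal resistance and load, the first-order approximation is exact: the safety index follows in closed form and the failure probability is Phi(-beta). The numbers below are illustrative; for nonlinear limit states the design point has to be found iteratively (for example with the HL-RF algorithm), and SORM adds a curvature correction at that point.

        # FORM for a linear limit state g = R - S with independent normal R and S.
        import math
        from scipy.stats import norm

        mu_R, sd_R = 500.0, 50.0     # resistance: mean and standard deviation
        mu_S, sd_S = 350.0, 40.0     # load effect: mean and standard deviation

        beta = (mu_R - mu_S) / math.sqrt(sd_R**2 + sd_S**2)   # Hasofer-Lind safety index
        p_fail = norm.cdf(-beta)                              # first-order failure probability

        print("beta = %.2f, Pf = %.2e" % (beta, p_fail))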

  1. Reliability Assessment Approach for Stirling Convertors and Generators

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy

    2004-01-01

    Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion about some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability is also discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all the levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency are maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.

  2. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
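
    For groups with unequal component probabilities, an exact k-out-of-n success probability can be computed with a simple dynamic-programming recursion over the distribution of the number of working components. The sketch below is a generic illustration in that spirit, not RELAV's actual implementation of the cited Barlow & Heidtmann algorithm.

    ```python
    def k_out_of_n_reliability(p, k):
        """Probability that at least k of the n components (success probabilities p) are working."""
        f = [1.0] + [0.0] * len(p)        # f[j] = P(exactly j of the processed components work)
        for pi in p:
            for j in range(len(f) - 1, 0, -1):
                f[j] = f[j] * (1.0 - pi) + f[j - 1] * pi
            f[0] *= (1.0 - pi)
        return sum(f[k:])

    # Example: a 2-out-of-3 group with unequal component reliabilities (assumed values)
    print(k_out_of_n_reliability([0.95, 0.90, 0.85], 2))
    ```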

  3. Linkage design effect on the reliability of surface-micromachined microengines driving a load

    NASA Astrophysics Data System (ADS)

    Tanner, Danelle M.; Peterson, Kenneth A.; Irwin, Lloyd W.; Tangyunyong, Paiboon; Miller, William M.; Eaton, William P.; Smith, Norman F.; Rodgers, M. Steven

    1998-09-01

    The reliability of microengines is a function of the design of the mechanical linkage used to connect the electrostatic actuator to the drive. We have completed a series of reliability stress tests on surface micromachined microengines driving an inertial load. In these experiments, we used microengines that had pin mechanisms with guides connecting the drive arms to the electrostatic actuators. Comparing these data to previous results using flexure linkages revealed that the pin linkage design was less reliable. The devices were stressed to failure at eight frequencies, both above and below the measured resonance frequency of the microengine. Significant amounts of wear debris were observed both around the hub and pin joint of the drive gear. Additionally, wear tracks were observed in the area where the moving shuttle rubbed against the guides of the pin linkage. At each frequency, we analyzed the statistical data, yielding a lifetime (t50), the median cycles to failure, and sigma, the shape parameter of the distribution. A model was developed to describe the failure data based on fundamental wear mechanisms and forces exhibited in mechanical resonant systems. The comparison to the model will be discussed.

  4. Reliability Analysis of Uniaxially Ground Brittle Materials

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.

    1995-01-01

    The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
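
    The Weibull size effect mentioned above can be illustrated with the usual two-parameter weakest-link scaling, in which the characteristic strength falls as the stressed area (or volume) grows. The modulus and size ratio below are assumed for illustration only and are not the paper's fitted values.

    ```python
    import numpy as np

    def failure_probability(stress, sigma0, m, size_ratio=1.0):
        """Two-parameter Weibull weakest-link model; size_ratio = stressed area (or volume) / reference."""
        return 1.0 - np.exp(-size_ratio * (stress / sigma0) ** m)

    def scaled_characteristic_strength(sigma0_ref, m, size_ratio):
        """Characteristic strength of a larger specimen under weakest-link scaling."""
        return sigma0_ref * size_ratio ** (-1.0 / m)

    # Assumed values: Weibull modulus m = 10, reference characteristic strength 400 MPa,
    # and a plate specimen with 5x the effective area of the flexure bar
    print(scaled_characteristic_strength(400.0, 10.0, 5.0))
    print(failure_probability(300.0, 400.0, 10.0, size_ratio=5.0))
    ```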

  5. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined through a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms make it possible to find the reliability functions for the alternating, Poisson, and Furry-Yule failure rate processes.
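
    To make the construction concrete, the reliability function induced by a stochastic failure rate is R(t) = E[exp(-(integral of lambda(s) ds over [0, t]))]. The sketch below estimates it by Monte Carlo for an alternating (two-state) failure rate with exponential sojourn times; the rates and sojourn means are assumed for illustration and this is not the paper's Laplace-transform solution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def reliability_alternating(t, lam=(0.01, 0.05), mean_sojourn=(50.0, 20.0), n_sim=20000):
        """Monte Carlo estimate of R(t) = E[exp(-integral of lambda(s) ds)] for an
        alternating failure rate switching between two levels (assumed values)."""
        vals = np.empty(n_sim)
        for i in range(n_sim):
            s, integral, state = 0.0, 0.0, 0
            while s < t:
                stay = rng.exponential(mean_sojourn[state])
                dt = min(stay, t - s)
                integral += lam[state] * dt
                s += dt
                state ^= 1                      # switch to the other rate level
            vals[i] = np.exp(-integral)
        return vals.mean()

    print(reliability_alternating(100.0))
    ```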

  6. A Decentralized Compositional Framework for Dependable Decision Process in Self-Managed Cyber Physical Systems

    PubMed Central

    Hou, Kun-Mean; Zhang, Zhan

    2017-01-01

    Cyber Physical Systems (CPSs) need to interact with the changeable environment under various interferences. To provide continuous and high quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee the consistent observation of event orders for causal reasoning, this work first proposes a relative time-based method to improve the composability and compositionality of the timing property of events. Based on the relative time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (subsystems of CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and we compare the reliability with the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against the complexity of decision and random failure. This paper also shows a way to simplify the evaluation for dynamic system by improving the composability and compositionality of the subsystem. PMID:29120357

  7. A Decentralized Compositional Framework for Dependable Decision Process in Self-Managed Cyber Physical Systems.

    PubMed

    Zhou, Peng; Zuo, Decheng; Hou, Kun-Mean; Zhang, Zhan

    2017-11-09

    Cyber Physical Systems (CPSs) need to interact with the changeable environment under various interferences. To provide continuous and high quality services, a self-managed CPS should automatically reconstruct itself to adapt to these changes and recover from failures. Such dynamic adaptation behavior introduces systemic challenges for CPS design, advice evaluation and decision process arrangement. In this paper, a formal compositional framework is proposed to systematically improve the dependability of the decision process. To guarantee the consistent observation of event orders for causal reasoning, this work first proposes a relative time-based method to improve the composability and compositionality of the timing property of events. Based on the relative time solution, a formal reference framework is introduced for self-managed CPSs, which includes a compositional FSM-based actor model (subsystems of CPS), actor-based advice and runtime decomposable decisions. To simplify self-management, a self-similar recursive actor interface is proposed for decision (actor) composition. We provide constraints and seven patterns for the composition of reliability and process time requirements. Further, two decentralized decision process strategies are proposed based on our framework, and we compare the reliability with the static strategy and the centralized processing strategy. The simulation results show that the one-order feedback strategy has high reliability, scalability and stability against the complexity of decision and random failure. This paper also shows a way to simplify the evaluation for dynamic system by improving the composability and compositionality of the subsystem.

  8. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  9. Long-term effectiveness of telephone-based health coaching for heart failure patients: A post-only randomised controlled trial.

    PubMed

    Tiede, Michel; Dwinger, Sarah; Herbarth, Lutz; Härter, Martin; Dirmaier, Jörg

    2017-09-01

    Introduction The health status of heart failure patients can be improved to some extent by disease self-management. One method of developing such skills is telephone-based health coaching. However, the effects of telephone-based health coaching remain inconclusive. The aim of this study was to evaluate the effects of telephone-based health coaching for people with heart failure. Methods A total sample of 7186 patients with various chronic diseases was randomly assigned to either the coaching or the control group. Then 184 patients with heart failure were selected by International Classification of Diseases (ICD)-10 code for subgroup analysis. Data were collected at 24 and 48 months after the beginning of the coaching. The primary outcome was change in quality of life. Secondary outcomes were changes in depression and anxiety, health-related control beliefs, control preference, health risk behaviour and health-related behaviours. Statistical analyses included a per-protocol evaluation, employing analysis of variance and analysis of covariance (ANCOVA) as well as Mann-Whitney U tests. Results Participants' average age was 73 years (standard deviation (SD) = 9) and the majority were women (52.8%). In ANCOVA analyses there were no significant differences between groups for the change in quality of life (QoL). However, the coaching group reported a significantly higher level of physical activity (p = 0.03), lower intake of non-prescribed drugs (p = 0.04) and lower levels of stress (p = 0.02) than the control group. Mann-Whitney U tests showed a different external locus of control (p = 0.014), and higher reduction in unhealthy nutrition (p = 0.019), physical inactivity (p = 0.004) and stress (p = 0.028). Discussion Our results suggest that telephone-based health coaching has no effect on QoL, anxiety and depression of heart failure patients, but helps in improving certain risk behaviours and changes the locus

  10. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.

    1988-01-01

    The use of analytical redundancy to improve gas turbine engine control system reliability through sensor failure detection, isolation, and accommodation is surveyed. Both the theoretical and application papers that form the technology base of turbine engine analytical redundancy research are discussed. Also, several important application efforts are reviewed. An assessment of the state-of-the-art in analytical redundancy technology is given.

  11. Study of complete interconnect reliability for a GaAs MMIC power amplifier

    NASA Astrophysics Data System (ADS)

    Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao

    2018-05-01

    By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions is achieved effectively in this article. As an example, an MMIC PA is modelled to study the electromigration failure of its interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operation simultaneously. By training on the data from the FEA, a high-accuracy ANN model for PA reliability is constructed. Then, based on the reliability database obtained from the ANN model, important guidance can be given for improving the reliability design of the IC.
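
    For context, interconnect electromigration life is commonly summarized with Black's equation, MTTF = A * J^(-n) * exp(Ea / kT). The sketch below uses it to compare an assumed stress condition with an assumed operating condition; Black's equation is a standard model for this failure mechanism, not necessarily the exact formulation used in the article, and all parameter values are placeholders.

    ```python
    import numpy as np

    K_BOLTZMANN = 8.617e-5          # eV/K

    def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.9):
        """Black's equation for electromigration MTTF (arbitrary time units; parameters assumed)."""
        return a * j ** (-n) * np.exp(ea / (K_BOLTZMANN * temp_k))

    # Acceleration factor between an assumed operating point and an assumed stress point
    mttf_use = black_mttf(j=1.0e6, temp_k=358.0)      # 1 MA/cm^2 at 85 C
    mttf_stress = black_mttf(j=2.5e6, temp_k=423.0)   # 2.5 MA/cm^2 at 150 C
    print("acceleration factor:", mttf_use / mttf_stress)
    ```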

  12. Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2008-01-01

    Tantalum capacitors manufactured per military specifications are established reliability components and have less than 0.001% failures per 1000 hours for grades D or S, thus positioning these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they might have catastrophic consequences for the system. To reduce this risk, further development of a screening and qualification system, with special attention to the possible deficiencies in the existing procedures, is necessary. The purpose of this work is to evaluate the effect of surge current stress testing on the reliability of the parts at both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed over a range of voltage and temperature conditions exceeding the specified limits. A model to estimate the probability of post-surge-current-testing screening failures and measures to improve the effectiveness of the screening process have been suggested.

  13. Using web-based video to enhance physical examination skills in medical students.

    PubMed

    Orientale, Eugene; Kosowicz, Lynn; Alerte, Anton; Pfeiffer, Carol; Harrington, Karen; Palley, Jane; Brown, Stacey; Sapieha-Yanchak, Teresa

    2008-01-01

    Physical examination (PE) skills among U.S. medical students have been shown to be deficient. This study examines the effect of a Web-based physical examination curriculum on first-year medical student PE skills. Web-based video clips, consisting of instruction in 77 elements of the physical examination, were created using Microsoft Windows Moviemaker software. Medical students' PE skills were evaluated by standardized patients before and after implementation of the Internet-based video. Following implementation of this curriculum, there was a higher level of competency (from 87% in 2002-2003 to 91% in 2004-2005), and poor performances on standardized patient PE exams substantially diminished (from a 14%-22% failure rate in 2002-2003 to 4% in 2004-2005). A significant improvement in first-year medical student performance on the adult PE occurred after implementing Web-based instructional video.

  14. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  15. Reliability of High-Voltage Tantalum Capacitors (Parts 3 and 4)

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2010-01-01

    The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model that treats failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that the parameters of the model and the acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
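
    The general log-linear relationship for the characteristic life with voltage and temperature stresses can be written as ln(eta) = b0 + b1*V + b2/T, so the acceleration factor between a use condition and a test condition follows directly from the fitted coefficients. The sketch below evaluates such a factor with made-up coefficients, purely to illustrate the form of the model; it is not the paper's fitted model.

    ```python
    import numpy as np

    def characteristic_life(voltage, temp_k, b0, b1, b2):
        """General log-linear life-stress model: ln(eta) = b0 + b1*V + b2/T."""
        return np.exp(b0 + b1 * voltage + b2 / temp_k)

    # Made-up coefficients; in practice b0, b1, b2 are fitted from HALT data at two or more stress levels
    b0, b1, b2 = -20.0, -0.15, 11000.0
    eta_use = characteristic_life(voltage=35.0, temp_k=318.0, b0=b0, b1=b1, b2=b2)    # 35 V at 45 C
    eta_test = characteristic_life(voltage=63.0, temp_k=358.0, b0=b0, b1=b1, b2=b2)   # 63 V at 85 C
    print("acceleration factor eta_use / eta_test:", eta_use / eta_test)
    ```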

  16. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.

  17. The Failure Models of Lead Free Sn-3.0Ag-0.5Cu Solder Joint Reliability Under Low-G and High-G Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2017-02-01

    The reliability of Sn-3.0Ag-0.5Cu (SAC 305) solder joints under a broad range of drop impact levels was studied. The failure performance of the solder joints, the failure probability and the failure position were analyzed under two shock test conditions, i.e., 1000 g for 1 ms and 300 g for 2 ms. The stress distribution in the solder joint was calculated with ABAQUS. The results revealed that the dominant cause of failure was the tension due to the difference in stiffness between the printed circuit board and the ball grid array; the maximum tensions of 121.1 MPa and 31.1 MPa under the 1000 g and 300 g drop impacts, respectively, were concentrated at the corner of the solder joint located in the outermost corner of the solder ball row. The failure modes were summarized into the following four modes: initiation and propagation through (1) the intermetallic compound layer, (2) the Ni layer, (3) the Cu pad, or (4) the Sn matrix. The outermost corner of the solder ball row had a high failure probability under both the 1000 g and 300 g drop impacts. The number of solder ball failures under the 300 g drop impact was higher than that under the 1000 g drop impact. According to the statistics, the characteristic drop numbers at failure were 41 and 15,199, respectively.

  18. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occur in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.

  19. Self-efficacy: a useful construct to promote physical activity in people with stable chronic heart failure.

    PubMed

    Du, HuiYun; Everett, Bronwyn; Newton, Phillip J; Salamonson, Yenna; Davidson, Patricia M

    2012-02-01

    To explore the conceptual underpinnings of self-efficacy to address the barriers to participating in physical activity and propose a model of intervention. The benefits of physical activity in reducing cardiovascular risk have led to evidence-based recommendations for patients with heart disease, including those with chronic heart failure. However, adherence to best practice recommendations is often suboptimal, particularly in those individuals who experience high symptom burden and feel less confident to undertake physical activity. Self-efficacy is the degree of confidence an individual has in his/her ability to perform behaviour under several specific circumstances. Four factors influence an individual's level of self-efficacy: (1) past performance, (2) vicarious experience, (3) verbal persuasion and (4) physiological arousal. Discursive. Using the method of a discursive paper, this article seeks to explore the conceptual underpinnings of self-efficacy to address the barriers to participating in physical activity and proposes a model of intervention, the Home-Heart-Walk, to promote physical activity and monitor functional status. Implementing effective interventions to promote physical activities require appreciation of factors impacting on behaviour change. Addressing concepts relating to self-efficacy in physical activity interventions may promote participation and adherence in the longer term. The increasing burden of chronic disease and the emphasis on self-management strategies underscore the importance of promoting adherence to recommendations, such as physical activity. © 2011 Blackwell Publishing Ltd.

  20. Characterization and reliability of aluminum gallium nitride/gallium nitride high electron mobility transistors

    NASA Astrophysics Data System (ADS)

    Douglas, Erica Ann

    Compound semiconductor devices, particularly those based on GaN, have found significant use in military and civilian systems for both microwave and optoelectronic applications. Future uses in ultra-high power radar systems will require the use of GaN transistors operated at very high voltages, currents and temperatures. GaN-based high electron mobility transistors (HEMTs) have proven power handling capability that overshadows all other wide band gap semiconductor devices for high frequency and high-power applications. Little conclusive research has been reported in order to determine the dominating degradation mechanisms of the devices that result in failure under standard operating conditions in the field. Therefore, it is imperative that further reliability testing be carried out to determine the failure mechanisms present in GaN HEMTs in order to improve device performance, and thus further the ability for future technologies to be developed. In order to obtain a better understanding of the true reliability of AlGaN/GaN HEMTs and determine the MTTF under standard operating conditions, it is crucial to investigate the interaction effects between thermal and electrical degradation. This research spans device characterization, device reliability, and device simulation in order to obtain an all-encompassing picture of the device physics. Initially, finite element thermal simulations were performed to investigate the effect of device design on self-heating under high power operation. This was then followed by a study of reliability of HEMTs and other tests structures during high power dc operation. Test structures without Schottky contacts showed high stability as compared to HEMTs, indicating that degradation of the gate is the reason for permanent device degradation. High reverse bias of the gate has been shown to induce the inverse piezoelectric effect, resulting in a sharp increase in gate leakage current due to crack formation. The introduction of elevated

  1. Factors that Affect Operational Reliability of Turbojet Engines

    NASA Technical Reports Server (NTRS)

    1956-01-01

    The problem of improving operational reliability of turbojet engines is studied in a series of papers. Failure statistics for this engine are presented, the theory and experimental evidence on how engine failures occur are described, and the methods available for avoiding failure in operation are discussed. The individual papers of the series are Objectives, Failure Statistics, Foreign-Object Damage, Compressor Blades, Combustor Assembly, Nozzle Diaphragms, Turbine Buckets, Turbine Disks, Rolling Contact Bearings, Engine Fuel Controls, and Summary Discussion.

  2. Shock reliability analysis and improvement of MEMS electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Renaud, M.; Fujita, T.; Goedbloed, M.; de Nooijer, C.; van Schaijk, R.

    2015-10-01

    Vibration energy harvesters can serve as a replacement solution to batteries for powering tire pressure monitoring systems (TPMS). Autonomous wireless TPMS powered by microelectromechanical system (MEMS) electret-based vibration energy harvester have been demonstrated. The mechanical reliability of the MEMS harvester still has to be assessed in order to bring the harvester to the requirements of the consumer market. It should survive the mechanical shocks occurring in the tire environment. A testing procedure to quantify the shock resilience of harvesters is described in this article. Our first generation of harvesters has a shock resilience of 400 g, which is far from being sufficient for the targeted application. In order to improve this aspect, the first important aspect is to understand the failure mechanism. Failure is found to occur in the form of fracture of the device’s springs. It results from impacts between the anchors of the springs when the harvester undergoes a shock. The shock resilience of the harvesters can be improved by redirecting these impacts to nonvital parts of the device. With this philosophy in mind, we design three types of shock absorbing structures and test their effect on the shock resilience of our MEMS harvesters. The solution leading to the best results consists of rigid silicon stoppers covered by a layer of Parylene. The shock resilience of the harvesters is brought above 2500 g. Results in the same range are also obtained with flexible silicon bumpers, which are simpler to manufacture.

  3. Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.

    2003-01-01

    The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.

  4. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
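
    As a toy illustration of the sparse Markov machinery such tools rely on, the sketch below builds the generator of a two-component parallel system with repair of the degraded state and computes the reliability at time t as the probability of not having reached the absorbing system-failure state. The rates and the model are assumed for illustration and are unrelated to the flight control example in the paper.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import expm

    lam = 1.0e-3    # per-component failure rate, 1/h (assumed)
    mu = 1.0e-1     # repair rate of the degraded state, 1/h (assumed)

    # States: 0 = both components up, 1 = one up, 2 = system failed (absorbing).
    # QT[i, j] is the transition rate from state j to state i, so p'(t) = QT @ p(t).
    QT = csc_matrix(np.array([
        [-2.0 * lam,  mu,          0.0],
        [ 2.0 * lam, -(mu + lam),  0.0],
        [ 0.0,        lam,         0.0],
    ]))

    t = 1000.0                                   # mission time, h
    p0 = np.array([1.0, 0.0, 0.0])               # start with both components up
    pt = expm(QT * t) @ p0                       # transient state probabilities
    print("R(t) =", 1.0 - pt[2])
    ```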

  5. Chronic low back pain in older adults: prevalence, reliability, and validity of physical examination findings.

    PubMed

    Weiner, Debra K; Sakamoto, Sara; Perera, Subashan; Breuer, Paula

    2006-01-01

    To develop a structured physical examination protocol that identifies common biomechanical and soft-tissue abnormalities for older adults with chronic low back pain (CLBP) that can be used as a triage tool for healthcare providers and to test the interobserver reliability and discriminant validity of this protocol. Cross-sectional survey and examination. Older adult pain clinic. One hundred eleven community-dwelling adults aged 60 and older with CLBP and 20 who were pain-free. Clinical history for demographics, pain duration, previous lumbar surgery or advanced imaging, neurogenic claudication, and imaging clinically serious symptoms. Physical examination for scoliosis, functional leg length discrepancy, pain with lumbar movement, myofascial pain (paralumbar, piriformis, tensor fasciae latae (TFL)), regional bone pain (sacroiliac joint (SIJ), hip, vertebral body), and fibromyalgia. Scoliosis was prevalent in those with (77.5%) and without pain (60.0%), but prevalence of SIJ pain (84% vs 5%), fibromyalgia tender points (19% vs 0%), myofascial pain (96% vs 10%), and hip pain (48% vs 0%) was significantly different between groups (P < .001). Interrater reliability was excellent for SIJ pain (0.81), number of fibromyalgia tender points (0.84), and TFL pain (0.81); good for scoliosis (0.43), kyphosis (0.66), lumbar movement pain (0.75), piriformis pain (0.71), and hip disease by internal rotation (0.56); and marginal for leg length (0.00) and paravertebral pain (0.39). Biomechanical and soft tissue pathologies are common in older adults with CLBP, and many can be assessed reliably using a brief physical examination. Their recognition may save unnecessary healthcare expenditure and patient suffering.

  6. An Educational Intervention to Evaluate Nurses' Knowledge of Heart Failure.

    PubMed

    Sundel, Siobhan; Ea, Emerson E

    2018-07-01

    Nurses are the main providers of patient education in inpatient and outpatient settings. Unfortunately, nurses may lack knowledge of chronic medical conditions, such as heart failure. The purpose of this one-group pretest-posttest intervention was to determine the effectiveness of teaching intervention on nurses' knowledge of heart failure self-care principles in an ambulatory care setting. The sample consisted of 40 staff nurses in ambulatory care. Nurse participants received a focused education intervention based on knowledge deficits revealed in the pretest and were then resurveyed within 30 days. Nurses were evaluated using the valid and reliable 20-item Nurses Knowledge of Heart Failure Education Principles Survey tool. The results of this project demonstrated that an education intervention on heart failure self-care principles improved nurses' knowledge of heart failure in an ambulatory care setting, which was statistically significant (p < .05). Results suggest that a teaching intervention could improve knowledge of heart failure, which could lead to better patient education and could reduce patient readmission for heart failure. J Contin Educ Nurs. 2018;49(7):315-321. Copyright 2018, SLACK Incorporated.

  7. Reliability analysis of a phaser measurement unit using a generalized fuzzy lambda-tau(GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Power consumption is increasing day by day. To meet the requirement for failure-free power, planning and implementation of an effective and reliable power management system are essential. The phasor measurement unit (PMU) is one of the key devices in wide area measurement and control systems. The reliable performance of the PMU assures a failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for controllability and observability of power systems, utilizing the available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In the GFLT, the uncertain failure and repair rates of system components are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, the opinions of system experts have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut-based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, in this study, ranking of the critical components of the system using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
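
    A minimal sketch of the alpha-cut idea behind such techniques is shown below for the OR-gate (series) failure-rate expression of the lambda-tau method, lambda_sys = lambda_1 + lambda_2, with assumed triangular fuzzy failure rates; since addition is monotone, interval endpoints map to endpoints at each alpha level. This is only an illustration of alpha-cut interval arithmetic, not the paper's full GFLT procedure.

    ```python
    import numpy as np

    def tri_alpha_cut(tfn, alpha):
        """Alpha-cut interval [lo, hi] of a triangular fuzzy number (a, b, c)."""
        a, b, c = tfn
        return a + alpha * (b - a), c - alpha * (c - b)

    # Assumed triangular fuzzy failure rates (per hour) for two components feeding an OR gate
    lam1 = (1.0e-4, 1.2e-4, 1.4e-4)
    lam2 = (2.0e-4, 2.5e-4, 3.0e-4)

    for alpha in np.linspace(0.0, 1.0, 5):
        l1 = tri_alpha_cut(lam1, alpha)
        l2 = tri_alpha_cut(lam2, alpha)
        lam_sys = (l1[0] + l2[0], l1[1] + l2[1])    # interval sum for the OR-gate failure rate
        print(f"alpha={alpha:.2f}  lambda_sys in [{lam_sys[0]:.2e}, {lam_sys[1]:.2e}]")
    ```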

  8. Classification of comorbidity in trauma: the reliability of pre-injury ASA physical status classification.

    PubMed

    Ringdal, Kjetil G; Skaga, Nils Oddvar; Steen, Petter Andreas; Hestnes, Morten; Laake, Petter; Jones, J Mary; Lossius, Hans Morten

    2013-01-01

    Pre-injury comorbidities can influence the outcomes of severely injured patients. Pre-injury comorbidity status, graded according to the American Society of Anesthesiologists Physical Status (ASA-PS) classification system, is an independent predictor of survival in trauma patients and is recommended as a comorbidity score in the Utstein Trauma Template for Uniform Reporting of Data. Little is known about the reliability of pre-injury ASA-PS scores. The objective of this study was to examine whether the pre-injury ASA-PS system was a reliable scale for grading comorbidity in trauma patients. Nineteen Norwegian trauma registry coders were invited to participate in a reliability study in which 50 real but anonymised patient medical records were distributed. Reliability was analysed using quadratic weighted kappa (κ(w)) analysis with 95% CI as the primary outcome measure and unweighted kappa (κ) analysis, which included unknown values, as a secondary outcome measure. Fifteen of the invitees responded to the invitation, and ten participated. We found moderate (κ(w)=0.77 [95% CI: 0.64-0.87]) to substantial (κ(w)=0.95 [95% CI: 0.89-0.99]) rater-against-reference standard reliability using κ(w) and fair (κ=0.46 [95% CI: 0.29-0.64]) to substantial (κ=0.83 [95% CI: 0.68-0.94]) reliability using κ. The inter-rater reliability ranged from moderate (κ(w)=0.66 [95% CI: 0.45-0.81]) to substantial (κ(w)=0.96 [95% CI: 0.88-1.00]) for κ(w) and from slight (κ=0.36 [95% CI: 0.21-0.54]) to moderate (κ=0.75 [95% CI: 0.62-0.89]) for κ. The rater-against-reference standard reliability varied from moderate to substantial for the primary outcome measure and from fair to substantial for the secondary outcome measure. The study findings indicate that the pre-injury ASA-PS scale is a reliable score for classifying comorbidity in trauma patients. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Children's Physical Activity While Gardening: Development of a Valid and Reliable Direct Observation Tool.

    PubMed

    Myers, Beth M; Wells, Nancy M

    2015-04-01

    Gardens are a promising intervention to promote physical activity (PA) and foster health. However, because of the unique characteristics of gardening, no extant tool can capture PA, postures, and motions that take place in a garden. The Physical Activity Research and Assessment tool for Garden Observation (PARAGON) was developed to assess children's PA levels, tasks, postures, and motions, associations, and interactions while gardening. PARAGON uses momentary time sampling in which a trained observer watches a focal child for 15 seconds and then records behavior for 15 seconds. Sixty-five children (38 girls, 27 boys) at 4 elementary schools in New York State were observed over 8 days. During the observation, children simultaneously wore Actigraph GT3X+ accelerometers. The overall interrater reliability was 88% agreement, and Ebel was .97. Percent agreement values for activity level (93%), garden tasks (93%), motions (80%), associations (95%), and interactions (91%) also met acceptable criteria. Validity was established by previously validated PA codes and by expected convergent validity with accelerometry. PARAGON is a valid and reliable observation tool for assessing children's PA in the context of gardening.

  10. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
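
    One common way to encode generic-data applicability when building such priors is to keep the generic point estimate as the median of a lognormal distribution and let an error factor express how applicable the generic source is believed to be. The sketch below shows that construction with assumed numbers; it is only one heuristic of the kind the presentation discusses, not its specific guidelines.

    ```python
    import numpy as np
    from scipy import stats

    generic_median = 1.0e-5      # generic failure-rate point estimate, 1/h (assumed)
    error_factor = 10.0          # a wider EF expresses weaker applicability of the generic source (assumed)

    # Lognormal prior: the error factor is the ratio of the 95th percentile to the median,
    # so sigma = ln(EF) / z_0.95
    sigma = np.log(error_factor) / stats.norm.ppf(0.95)
    prior = stats.lognorm(s=sigma, scale=generic_median)
    print(prior.ppf([0.05, 0.5, 0.95]))   # 5th percentile, median, 95th percentile of the prior
    ```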

  11. Predictors of physical activity in patients with heart failure: a questionnaire study.

    PubMed

    Chien, Hui-Chin; Chen, Hsing-Mei; Garet, Martin; Wang, Ruey-Hsia

    2014-07-01

    Adequate physical activity is believed to help decrease readmission and improve quality of life for patients with heart failure (HF). The aim of this study was to explore the predictors of physical activity level 1 month after discharge from hospital in Taiwanese patients with HF. A prospective research design was used. Overall, 111 patients with HF from a medical center in Southern Taiwan were recruited. Symptomatic distress, self-efficacy for physical activity, physical activity knowledge, and demographic and disease characteristics of patients with HF were collected at their discharge. One month later, patients' total daily energy expenditure (DEE), DEE for low-intensity physical activities (PA(low) DEE; strictly <3 metabolic equivalents [METs]), DEE for high-intensity physical activities (PA(high) DEE; 3-5 METs), and DEE for intensive-intensity physical activities (PA(intensive) DEE; strictly >5 METs) were collected. The mean total DEE was 8175.85 ± 2595.12 kJ/24 h, of which 19.12% was for PA(low) DEE, 7.20% was for PA(high) DEE, and only 1.42% was for PA(intensive) DEE. Body mass index (BMI), age, self-efficacy for instrumental activities of daily living, and educational level were predictors of total DEE of patients with HF 1 month after discharge. Self-efficacy for instrumental activities of daily living, gender, and BMI were predictors of PA(high) DEE. Age, BMI, and symptom distress were predictors of PA(intensive) DEE. Taiwanese patients with HF practiced lower-intensity physical activities. Factors related to physical activity of patients with HF in Taiwan were similar to those of Western countries. Nurses should emphasize the importance of physical activity to patients with HF who are male, of older age, with lower educational level, or with lower BMI. Improving self-efficacy for instrumental activities and decreasing symptom distress should be incorporated into discharge planning programs for patients with HF.

  12. Reliability and Validity of a New Physical Activity Self-Report Measure for Younger Children

    ERIC Educational Resources Information Center

    Belton, Sarahjane; Mac Donncha, Ciaran

    2010-01-01

    The purpose of this study was to assess the test-retest reliability and validity of a new Youth Physical Activity Self-Report measure. Heart rate and direct observation were employed as criterion measures with a sample of 79 children (aged 7-9 years). Spearman's rho correlation between self reported activity intensity and heart rate was 0.87 for…

  13. Failure Scenarios and Mitigations for the BABAR Superconducting Solenoid

    NASA Astrophysics Data System (ADS)

    Thompson, EunJoo; Candia, A.; Craddock, W. W.; Racine, M.; Weisend, J. G.

    2006-04-01

    The cryogenic department at the Stanford Linear Accelerator Center is responsible for the operation, troubleshooting, and upgrade of the 1.5 Tesla superconducting solenoid detector for the BABAR B-factory experiment. Events that disable the detector are rare but significantly impact the availability of the detector for physics research. As a result, a number of systems and procedures have been developed over time to minimize the downtime of the detector, for example improved control systems, improved and automatic backup systems, and spares for all major components. Together they can prevent or mitigate many of the failures experienced by the utilities, mechanical systems, controls and instrumentation. In this paper we describe various failure scenarios, their effect on the detector, and the modifications made to mitigate the effects of the failure. As a result of these modifications the reliability of the detector has increased significantly with only 3 shutdowns of the detector due to cryogenics systems over the last 2 years.

  14. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model.
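
    For reference, the traditional RPN baseline that the D-numbers approach improves on is simply the product of the severity, occurrence and detection ratings; the failure modes and ratings below are hypothetical and serve only to show the conventional ranking that the fuzzy method refines.

    ```python
    # Traditional RPN for a few hypothetical failure modes (Severity, Occurrence, Detection on 1-10 scales)
    failure_modes = {
        "seal leak":      (8, 4, 3),
        "sensor drift":   (5, 6, 5),
        "connector open": (7, 3, 7),
    }
    rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
    for name, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:15s} RPN = {value}")
    ```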

  15. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model. PMID:28895905

  16. Reliability evaluation of oil pipelines operating in aggressive environment

    NASA Astrophysics Data System (ADS)

    Magomedov, R. M.; Paizulaev, M. M.; Gebel, E. S.

    2017-08-01

    In connection with today's increased requirements for ecology and safety, the development of a complex of diagnostic services is necessary to ensure the reliable operation of the gas transportation infrastructure. Estimation of the technical condition of oil pipelines should be carried out not only to establish the current values of the technological parameters of the equipment in operation, but also to predict the dynamics of changes in the physical and mechanical characteristics of the material, the appearance of defects, etc., to ensure reliable and safe operation. In the paper, existing Russian and foreign methods for evaluating the reliability of oil pipelines are considered, taking into account corrosion, one of the main factors leading to the appearance of crevices in the pipeline material, i.e., to changes in the shape of its cross-section. Without compromising the generality of the reasoning, uniform corrosion wear of the initial rectangular cross-section is assumed. As a result, a formula for calculating the probability of failure-free operation is derived. The proposed mathematical model makes it possible to predict emergency situations, as well as to determine optimal operating conditions for oil pipelines.

  17. Reliability of self-reported childhood physical abuse by adults and factors predictive of inconsistent reporting.

    PubMed

    McKinney, Christy M; Harris, T Robert; Caetano, Raul

    2009-01-01

    Little is known about the reliability of self-reported child physical abuse (CPA) or CPA reporting practices. We estimated reliability and prevalence of self-reported CPA and identified factors predictive of inconsistent CPA reporting among 2,256 participants using surveys administered in 1995 and 2000. Reliability of CPA was fair to moderate (kappa = 0.41). Using a positive report from either survey, the prevalence of moderate (61.8%) and severe (12.0%) CPA was higher than at either survey alone. Compared to consistent reporters of having experienced CPA, inconsistent reporters were less likely to be > or = 30 years old (vs. 18-29) or Black (vs. White) and more likely to have < 12 years of education (vs. 12), have no alcohol-related problems (vs. having problems), or report one type (vs. > or = 2) of CPA. These findings may assist researchers conducting and interpreting studies of CPA.

  18. The Short International Physical Activity Questionnaire: cross-cultural adaptation, validation and reliability of the Hausa language version in Nigeria.

    PubMed

    Oyeyemi, Adewale L; Oyeyemi, Adetoyeje Y; Adegoke, Babatunde O; Oyetoke, Fatima O; Aliyu, Habeeb N; Aliyu, Salamatu U; Rufai, Adamu A

    2011-11-22

    Accurate assessment of physical activity is important in determining the risk for chronic diseases such as cardiovascular disease, stroke, type 2 diabetes, cancer and obesity. The absence of culturally relevant measures in indigenous languages could pose challenges to epidemiological studies on physical activity in developing countries. The purpose of this study was to translate and cross-culturally adapt the Short International Physical Activity Questionnaire (IPAQ-SF) to the Hausa language, and to evaluate the validity and reliability of the Hausa version of IPAQ-SF in Nigeria. The English IPAQ-SF was translated into the Hausa language, synthesized, back translated, and subsequently subjected to expert committee review and pre-testing. The final product (Hausa IPAQ-SF) was tested in a cross-sectional study for concurrent (correlation with the English version) and construct validity, and test-retest reliability in a sample of 102 apparently healthy adults. The Hausa IPAQ-SF has good concurrent validity with Spearman correlation coefficients (ρ) ranging from 0.78 for vigorous activity (Min Week-1) to 0.92 for total physical activity (Metabolic Equivalent of Task [MET]-Min Week-1), but poor construct validity, with cardiorespiratory fitness (ρ = 0.21, p = 0.01) and body mass index (ρ = 0.22, p = 0.04) significantly correlated with only moderate activity and sitting time (Min Week-1), respectively. Reliability was good for vigorous (ICC = 0.73, 95% C.I = 0.55-0.84) and total physical activity (ICC = 0.61, 95% C.I = 0.47-0.72), but fair for moderate activity (ICC = 0.33, 95% C.I = 0.12-0.51), and few meaningful differences were found in the gender and socioeconomic status specific analyses. The Hausa IPAQ-SF has acceptable concurrent validity and test-retest reliability for vigorous-intensity activity, walking, sitting and total physical activity, but demonstrated only fair construct validity for moderate and sitting activities. The Hausa IPAQ-SF can be used for

  19. Identification of the human factors contributing to maintenance failures in a petroleum operation.

    PubMed

    Antonovsky, Ari; Pollock, Clare; Straker, Leon

    2014-03-01

    This research aimed to identify the most frequently occurring human factors contributing to maintenance-related failures within a petroleum industry organization. Commonality between failures will assist in understanding reliability in maintenance processes, thereby preventing accidents in high-hazard domains. Methods exist for understanding the human factors contributing to accidents. Their application in a maintenance context mainly has been advanced in aviation and nuclear power. Maintenance in the petroleum industry provides a different context for investigating the role that human factors play in influencing outcomes. It is therefore worth investigating the contributing human factors to improve our understanding of both human factors in reliability and the factors specific to this domain. Detailed analyses were conducted of maintenance-related failures (N = 38) in a petroleum company using structured interviews with maintenance technicians. The interview structure was based on the Human Factor Investigation Tool (HFIT), which in turn was based on Rasmussen's model of human malfunction. A mean of 9.5 factors per incident was identified across the cases investigated.The three most frequent human factors contributing to the maintenance failures were found to be assumption (79% of cases), design and maintenance (71%), and communication (66%). HFIT proved to be a useful instrument for identifying the pattern of human factors that recurred most frequently in maintenance-related failures. The high frequency of failures attributed to assumptions and communication demonstrated the importance of problem-solving abilities and organizational communication in a domain where maintenance personnel have a high degree of autonomy and a wide geographical distribution.

  20. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and high risks associated with decisions in redundancy management due to multiple sources of uncertainties and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of superior robustness, an estimator of higher resolution, and a control performance requirement of lesser stringency.

  1. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
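
    The kind of conjugate Bayesian update such a tool performs for a constant failure rate can be sketched in a few lines. The Gamma prior parameters, observed failure count, and exposure time below are illustrative assumptions, not values from the DATMAN code itself.

      def update_failure_rate(a_prior, b_prior, n_failures, exposure_time):
          """Conjugate Gamma-Poisson update of a constant failure rate.

          Prior:     lambda ~ Gamma(a_prior, b_prior)  (rate parameterization)
          Data:      n_failures observed over exposure_time hours
          Posterior: Gamma(a_prior + n_failures, b_prior + exposure_time)
          """
          return a_prior + n_failures, b_prior + exposure_time

      def posterior_reliability(a_post, b_post, mission_time):
          """Posterior predictive reliability E[exp(-lambda * t)] under the Gamma posterior."""
          return (b_post / (b_post + mission_time)) ** a_post

      # Illustrative numbers only: a weak Gamma(1, 1000 h) prior updated with
      # 3 failures observed over 20,000 component-hours.
      a_post, b_post = update_failure_rate(1.0, 1000.0, 3, 20000.0)
      print("posterior mean failure rate:", a_post / b_post, "per hour")
      print("reliability over a 5000 h mission:", posterior_reliability(a_post, b_post, 5000.0))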

  2. Reliability and Validity of a Physical Capacity Evaluation Used to Assess Individuals with Intellectual Disabilities and Mental Illness

    ERIC Educational Resources Information Center

    Jang, Yuh; Chang, Tzyh-Chyang; Lin, Keh-Chung

    2009-01-01

    Physical capacity evaluations (PCEs) are important and frequently offered services in work practice. This study investigates the reliability and validity of the National Taiwan University Hospital Physical Capacity Evaluation (NTUH PCE) on a sample of 149 participants consisting of three groups: 45 intellectual disability (ID), 56 mental illness…

  3. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.
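
    A small numerical sketch of the expected-performance criterion described above: each possible failure state contributes its control cost weighted by its probability of occurrence. The failure states, probabilities, and costs here are hypothetical placeholders, not values from the grid study.

      # Hypothetical failure states for a 3-actuator layout: each entry gives the
      # probability of that state over the mission and the integral-square modal
      # amplitude cost achieved after gains are readjusted for that state.
      failure_states = [
          {"failed": (),           "prob": 0.90, "cost": 1.0},
          {"failed": ("A1",),      "prob": 0.05, "cost": 2.4},
          {"failed": ("A2",),      "prob": 0.03, "cost": 1.8},
          {"failed": ("A1", "A2"), "prob": 0.02, "cost": 6.5},
      ]

      # Expected performance = sum over failure states of probability * cost.
      expected_cost = sum(s["prob"] * s["cost"] for s in failure_states)
      print("expected integral-square cost:", expected_cost)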

  4. Improvement of the Reliability of Dielectrics for MLCC

    NASA Astrophysics Data System (ADS)

    Nakamura, Tomoyuki; Yao, Takayuki; Ikeda, Jun; Kubodera, Noriyuki; Takagi, Hiroshi

    2011-10-01

    To achieve sufficient reliability of monolithic ceramic capacitors, it is important to know the contributions of the grain boundary and the grain interior to reliability and insulation resistance. As the number of grain boundaries per layer increased, mean time to failure (MTTF) increased. In addition, as the number of grain boundaries per layer increased, samples showed lower current leakage in the measured electric field range. Using these data, the grain boundary E-J curves were determined by simulation. As a result, the temperature and electric field dependence of the grain boundary insulation resistance was found to be very low. The insulation characteristics of one BaTiO3 grain per layer were examined. The resistance and reliability of the grain interior were very low. To improve the degradation resistance of the grain interior, Ca-doped BaTiO3-based dielectrics were developed. The influence of Ca substitution on MTTF was investigated and it was found that MTTF increased with increasing Ca substitution.

  5. Reliability of objects in aerospace technologies and beyond: Holistic risk management approach

    NASA Astrophysics Data System (ADS)

    Shai, Yair; Ingman, D.; Suhir, E.

    “Species” of military aircraft, commercial aircraft and private cars have been chosen in our analysis as illustrations of the fruitfulness of the “holistic” approach. The obtained data show that both commercial “species” exhibit similar “survival dynamics” compared with those of the military species of aircraft: lifetime distributions were found to be Weibull distributions for all “species”; however, for commercial vehicles, the shape parameters were a little higher than 2, and scale parameters were 19.8 years (aircraft) and 21.7 years (cars), whereas for military aircraft, the shape parameters were much higher and the mean time to failure much longer. The difference between the lifetime characteristics of the “species” can be attributed to the differences in the social, operational, economic and safety-and-reliability requirements and constraints. The obtained information can be used to make tentative predictions for the most likely trends in the given field of vehicular technology. The following major conclusions can be drawn from our analysis: 1) The suggested concept based on the use of HLPFs reflects the current state and the general perceptions in the given field of engineering, including aerospace technologies, and allows for all the inherent and induced factors to be taken into account: any type of failures, usage profiles, economic factors, environmental conditions, etc. The concept requires only very general input data for the entire population. There is no need for the less available information about individual articles. 2) Failure modes are not restricted to the physical type of failures and include economic, cultural or social effects. All possible causes, which might lead to making a decision to terminate the use of a particular type

  6. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang

    2015-01-01

    PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of LOM, LOV and LOC for launch vehicles. PRA is a system scenario-based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict the overall launch vehicle risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA conducted a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the integrated system risk due to ET foam loss in flight. Most recently, a PRA for the Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, for meeting product reliability requirements or goals. It is a very broad design-support discipline. It has important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e. the metric) is the probability that an item will

  7. Reliability and validity of play-based assessments of motor and cognitive skills for infants and young children: a systematic review.

    PubMed

    O'Grady, Michael G; Dusing, Stacey C

    2015-01-01

    Play is vital for development. Infants and children learn through play. Traditional standardized developmental tests measure whether a child performs individual skills within controlled environments. Play-based assessments can measure skill performance during natural, child-driven play. The purpose of this study was to systematically review reliability, validity, and responsiveness of all play-based assessments that quantify motor and cognitive skills in children from birth to 36 months of age. Studies were identified from a literature search using PubMed, ERIC, CINAHL, and PsycINFO databases and the reference lists of included papers. Included studies investigated reliability, validity, or responsiveness of play-based assessments that measured motor and cognitive skills for children to 36 months of age. Two reviewers independently screened 40 studies for eligibility and inclusion. The reviewers independently extracted reliability, validity, and responsiveness data. They examined measurement properties and methodological quality of the included studies. Four current play-based assessment tools were identified in 8 included studies. Each play-based assessment tool measured motor and cognitive skills in a different way during play. Interrater reliability correlations ranged from .86 to .98 for motor development and from .23 to .90 for cognitive development. Test-retest reliability correlations ranged from .88 to .95 for motor development and from .45 to .91 for cognitive development. Structural validity correlations ranged from .62 to .90 for motor development and from .42 to .93 for cognitive development. One study assessed responsiveness to change in motor development. Most studies had small and poorly described samples. Lack of transparency in data management and statistical analysis was common. Play-based assessments have potential to be reliable and valid tools to assess cognitive and motor skills, but higher-quality research is needed. Psychometric properties

  8. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
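
    A hedged sketch of the constant-failure-rate spares calculation implied above: with failures arriving at rate 1/MTBF, the probability that a given number of spares covers a mission of duration t is a Poisson tail, and carrying more spares adds mass. The MTBF and mission time below are made-up inputs, not values from the ISS database used in the study.

      from math import exp, factorial

      def prob_spares_sufficient(mtbf_hours, mission_hours, n_spares):
          """P(no more than n_spares failures) for a constant failure rate = 1/MTBF."""
          lam = mission_hours / mtbf_hours          # expected number of failures
          return sum(exp(-lam) * lam**k / factorial(k) for k in range(n_spares + 1))

      # Illustrative unit: MTBF of 8,000 h on a 1-year (8,760 h) mission.
      for n in range(6):
          p = prob_spares_sufficient(8000.0, 8760.0, n)
          print(f"{n} spares -> probability the unit stays functional: {p:.4f}")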

  9. Sarma-based key-group method for rock slope reliability analyses

    NASA Astrophysics Data System (ADS)

    Yarahmadi Bafghi, A. R.; Verdel, T.

    2005-08-01

    The methods used in conducting static stability analyses have remained pertinent to this day for reasons of both simplicity and speed of execution. The most well-known of these methods for purposes of stability analysis of fractured rock masses is the key-block method (KBM). This paper proposes an extension to the KBM, called the key-group method (KGM), which combines not only individual key-blocks but also groups of collapsible blocks into an iterative and progressive analysis of the stability of discontinuous rock slopes. To take intra-group forces into account, the Sarma method has been implemented within the KGM in order to generate a Sarma-based KGM, abbreviated SKGM. We will discuss herein the hypothesis behind this new method, details regarding its implementation, and validation through comparison with results obtained from the distinct element method. Furthermore, as an alternative to deterministic methods, reliability analyses or probabilistic analyses have been proposed to take account of the uncertainty in analytical parameters and models. The FOSM and ASM probabilistic methods could be implemented within the KGM and SKGM framework in order to take account of the uncertainty due to physical and mechanical data (density, cohesion and angle of friction). We will then show how such reliability analyses can be introduced into SKGM to give rise to the probabilistic SKGM (PSKGM) and how it can be used for rock slope reliability analyses.
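
    In its simplest mean-value form, the FOSM approach mentioned above reduces to a reliability index computed from the mean and standard deviation of a safety margin. The sketch below applies it to a generic margin M = R - S with assumed statistics; it is not the PSKGM implementation, only an illustration of the idea.

      from math import sqrt, erf

      def fosm_reliability_index(mu_resistance, sd_resistance, mu_load, sd_load):
          """Mean-value FOSM index for an uncorrelated linear margin M = R - S."""
          return (mu_resistance - mu_load) / sqrt(sd_resistance**2 + sd_load**2)

      def failure_probability(beta):
          """Normal approximation: Pf = Phi(-beta)."""
          return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

      # Hypothetical resisting and driving force statistics (kN) for a single block group.
      beta = fosm_reliability_index(mu_resistance=520.0, sd_resistance=80.0,
                                    mu_load=350.0, sd_load=60.0)
      print("beta =", round(beta, 2), " Pf ~", failure_probability(beta))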

  10. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.

  11. Fortran programs for reliability analysis

    Treesearch

    John J. Zahn

    1992-01-01

    This report contains a set of FORTRAN subroutines written to calculate the Hasofer-Lind reliability index. Nonlinear failure criteria and correlated basic variables are permitted. Users may incorporate these routines into their own calling program (an example program, RELANAL, is included) and must provide a failure criterion subroutine (two example subroutines,...
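
    The original routines are in FORTRAN; a compact sketch of the Hasofer-Lind index for a nonlinear limit state, found with the standard HL-RF iteration in uncorrelated standard-normal space, is given below in Python for illustration. The limit-state function and variable statistics are invented, and the sketch omits the correlated-variable handling that the report's subroutines provide.

      import numpy as np

      def hlrf_beta(g, grad_g, mu, sigma, iters=50, tol=1e-8):
          """Hasofer-Lind index via HL-RF iteration for independent normal variables.

          g and grad_g take physical-space variables x; the transformation is x = mu + sigma * u.
          """
          u = np.zeros_like(mu)
          for _ in range(iters):
              x = mu + sigma * u
              grad_u = grad_g(x) * sigma          # chain rule into standard-normal space
              gu = g(x)
              u_new = (grad_u @ u - gu) * grad_u / (grad_u @ grad_u)
              if np.linalg.norm(u_new - u) < tol:
                  u = u_new
                  break
              u = u_new
          return np.linalg.norm(u)

      # Example nonlinear failure criterion: g = R - S^2/1000, R ~ N(250, 25), S ~ N(300, 30).
      g = lambda x: x[0] - x[1]**2 / 1000.0
      grad_g = lambda x: np.array([1.0, -2.0 * x[1] / 1000.0])
      beta = hlrf_beta(g, grad_g, mu=np.array([250.0, 300.0]), sigma=np.array([25.0, 30.0]))
      print("Hasofer-Lind beta ~", round(beta, 3))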

  12. Physics-based Entry, Descent and Landing Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed. A comparison of the direct computation and response surface approaches was undertaken.
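
    A toy Monte Carlo sketch in the spirit of the direct-computation approach described above: sample uncertain inputs, run a thermal response surrogate (here trivially simplified), and count cases that exceed a bondline temperature limit. All distributions, the surrogate function, and the 560 K limit are invented placeholders, not the tools or values from the study.

      import random

      def bondline_temperature(heat_load, tps_thickness, conductivity):
          """Stand-in surrogate for a 1-D thermal response tool (not a real model)."""
          return 300.0 + heat_load * conductivity / tps_thickness

      def monte_carlo_failure_probability(n_samples=100_000, temp_limit=560.0):
          failures = 0
          for _ in range(n_samples):
              heat_load = random.gauss(1.0, 0.15)      # normalized entry heat load
              thickness = random.gauss(0.05, 0.004)    # m, scatter includes MMOD-type damage
              conductivity = random.gauss(12.0, 1.5)   # arbitrary surrogate coefficient
              if bondline_temperature(heat_load, thickness, conductivity) > temp_limit:
                  failures += 1
          return failures / n_samples

      print("estimated probability of exceeding the bondline limit:",
            monte_carlo_failure_probability())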

  13. System reliability of randomly vibrating structures: Computational modeling and laboratory testing

    NASA Astrophysics Data System (ADS)

    Sundar, V. S.; Ammanagi, S.; Manohar, C. S.

    2015-09-01

    The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with significantly fewer samples than are needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with a reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
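
    The variance-reduction idea above (changing the sampling measure so failures occur more often, then reweighting by the likelihood ratio) can be illustrated with a plain importance-sampling estimate of a static exceedance probability; the Girsanov construction in the paper applies the same reweighting to stochastic differential equations, which is beyond this toy. The threshold, shift, and distributions are assumptions.

      import random
      from math import exp, sqrt, pi

      def normal_pdf(x, mu=0.0, sd=1.0):
          return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

      def importance_sampling_pf(threshold=4.0, shift=4.0, n=20_000):
          """Estimate P(X > threshold) for X ~ N(0,1) by sampling from N(shift,1)."""
          total = 0.0
          for _ in range(n):
              x = random.gauss(shift, 1.0)
              if x > threshold:
                  # Likelihood ratio (Radon-Nikodym weight) between nominal and shifted laws.
                  total += normal_pdf(x, 0.0, 1.0) / normal_pdf(x, shift, 1.0)
          return total / n

      # A 4-sigma exceedance (~3e-5) would need millions of direct samples;
      # the shifted estimator resolves it with a few thousand.
      print("importance-sampling estimate of P(X > 4):", importance_sampling_pf())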

  14. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent where the failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in the interdependency studies, and no previous models have captured enough practical features in the critical infrastructure networks. In this paper, we study the interdependencies between communication network and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.

  15. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
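
    For readers who want to reproduce the hazard behavior quoted above, the BPT (inverse Gaussian) density and its hazard rate can be evaluated directly from μ and α. The values below are illustrative only, with μ normalized to 1 and α = 0.5 as in the provisional generic value.

      from math import sqrt, exp, pi, erf

      def Phi(z):
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))

      def bpt_pdf(t, mu, alpha):
          """Brownian passage time (inverse Gaussian) density with mean mu, aperiodicity alpha."""
          return sqrt(mu / (2.0 * pi * alpha**2 * t**3)) * exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

      def bpt_cdf(t, mu, alpha):
          a = sqrt(mu / (alpha**2 * t))
          return Phi(a * (t / mu - 1.0)) + exp(2.0 / alpha**2) * Phi(-a * (t / mu + 1.0))

      def bpt_hazard(t, mu, alpha):
          return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

      # With mu = 1 and alpha = 0.5 the hazard settles near 2/mu at large t.
      for t in (0.5, 1.0, 2.0, 5.0):
          print(t, round(bpt_hazard(t, 1.0, 0.5), 3))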

  16. Interobserver Reliability of the Berlin ARDS Definition and Strategies to Improve the Reliability of ARDS Diagnosis.

    PubMed

    Sjoding, Michael W; Hofer, Timothy P; Co, Ivan; Courey, Anthony; Cooke, Colin R; Iwashyna, Theodore J

    2018-02-01

    Failure to reliably diagnose ARDS may be a major driver of negative clinical trials and underrecognition and undertreatment in clinical practice. We sought to examine the interobserver reliability of the Berlin ARDS definition and examine strategies for improving the reliability of ARDS diagnosis. Two hundred five patients with hypoxic respiratory failure from four ICUs were reviewed independently by three clinicians, who evaluated whether patients had ARDS, the diagnostic confidence of the reviewers, whether patients met individual ARDS criteria, and the time when criteria were met. Interobserver reliability of an ARDS diagnosis was "moderate" (kappa = 0.50; 95% CI, 0.40-0.59). Sixty-seven percent of diagnostic disagreements between clinicians reviewing the same patient were explained by differences in how chest imaging studies were interpreted, with other ARDS criteria contributing less (identification of ARDS risk factor, 15%; cardiac edema/volume overload exclusion, 7%). Combining the independent reviews of three clinicians can increase reliability to "substantial" (kappa = 0.75; 95% CI, 0.68-0.80). When a clinician diagnosed ARDS with "high confidence," all other clinicians agreed with the diagnosis in 72% of reviews. There was close agreement between clinicians about the time when a patient met all ARDS criteria if ARDS developed within the first 48 hours of hospitalization (median difference, 5 hours). The reliability of the Berlin ARDS definition is moderate, driven primarily by differences in chest imaging interpretation. Combining independent reviews by multiple clinicians or improving methods to identify bilateral infiltrates on chest imaging are important strategies for improving the reliability of ARDS diagnosis. Copyright © 2017 American College of Chest Physicians. All rights reserved.
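
    For reference, the "moderate" and "substantial" labels above follow the usual chance-corrected agreement statistic. A minimal pairwise Cohen's kappa computation is sketched below with a made-up 2x2 agreement table; the study itself pools three reviewers, which calls for a multi-rater variant such as Fleiss' kappa.

      def cohens_kappa(table):
          """Cohen's kappa for a square agreement table; table[i][j] counts rater-A class i, rater-B class j."""
          n = sum(sum(row) for row in table)
          observed = sum(table[i][i] for i in range(len(table))) / n
          row_marg = [sum(row) / n for row in table]
          col_marg = [sum(table[i][j] for i in range(len(table))) / n for j in range(len(table))]
          expected = sum(r * c for r, c in zip(row_marg, col_marg))
          return (observed - expected) / (1.0 - expected)

      # Hypothetical counts: rows = reviewer 1 (ARDS yes/no), columns = reviewer 2.
      table = [[70, 25],
               [20, 90]]
      print("kappa =", round(cohens_kappa(table), 2))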

  17. Teacher Perceptions of High School Student Failure in the Classroom: Identifying Preventive Practices of Failure Using Critical Incident Technique

    ERIC Educational Resources Information Center

    Kalahar, Kory G.

    2011-01-01

    Student failure is a prominent issue in many comprehensive secondary schools nationwide. Researchers studying error, reliability, and performance in organizations have developed and employed a method known as critical incident technique (CIT) for investigating failure. Adopting an action research model, this study involved gathering and analyzing…

  18. Reliability of Wearable Inertial Measurement Units to Measure Physical Activity in Team Handball.

    PubMed

    Luteberget, Live S; Holme, Benjamin R; Spencer, Matt

    2018-04-01

    To assess the reliability and sensitivity of commercially available inertial measurement units to measure physical activity in team handball. Twenty-two handball players were instrumented with 2 inertial measurement units (OptimEye S5; Catapult Sports, Melbourne, Australia) taped together. They participated in either a laboratory assessment (n = 10) consisting of 7 team handball-specific tasks or field assessment (n = 12) conducted in 12 training sessions. Variables, including PlayerLoad™ and inertial movement analysis (IMA) magnitude and counts, were extracted from the manufacturers' software. IMA counts were divided into intensity bands of low (1.5-2.5 m·s⁻¹), medium (2.5-3.5 m·s⁻¹), high (>3.5 m·s⁻¹), medium/high (>2.5 m·s⁻¹), and total (>1.5 m·s⁻¹). Reliability between devices and sensitivity was established using coefficient of variation (CV) and smallest worthwhile difference (SWD). Laboratory assessment: IMA magnitude showed a good reliability (CV = 3.1%) in well-controlled tasks. CV increased (4.4-6.7%) in more-complex tasks. Field assessment: Total IMA counts (CV = 1.8% and SWD = 2.5%), PlayerLoad (CV = 0.9% and SWD = 2.1%), and their associated variables (CV = 0.4-1.7%) showed a good reliability, well below the SWD. However, the CV of IMA increased when categorized into intensity bands (2.9-5.6%). The reliability of IMA counts was good when data were displayed as total, high, or medium/high counts. A good reliability for PlayerLoad and associated variables was evident. The CV of the previously mentioned variables was well below the SWD, suggesting that OptimEye's inertial measurement unit and its software are sensitive for use in team handball.
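
    A brief sketch of how a between-device CV and the smallest worthwhile difference are commonly derived from paired-unit data of this kind. The arrays are invented example values, and the SWD is taken as 0.2 of the between-player standard deviation, one common convention; neither reflects the study's actual data handling.

      import statistics as stats

      def typical_error_cv(device_a, device_b):
          """Between-device coefficient of variation (%) from paired trials."""
          # Typical error = SD of the pairwise differences / sqrt(2)
          diffs = [a - b for a, b in zip(device_a, device_b)]
          typical_error = stats.stdev(diffs) / 2**0.5
          grand_mean = stats.mean(device_a + device_b)
          return 100.0 * typical_error / grand_mean

      def smallest_worthwhile_difference(values, factor=0.2):
          """SWD as a fraction of the between-subject SD (Cohen's d = 0.2 convention)."""
          return 100.0 * factor * stats.stdev(values) / stats.mean(values)

      # Hypothetical total IMA counts from two taped-together units across sessions.
      unit_1 = [152, 168, 140, 175, 160, 149]
      unit_2 = [150, 171, 138, 177, 157, 151]
      print("CV% :", round(typical_error_cv(unit_1, unit_2), 1))
      print("SWD%:", round(smallest_worthwhile_difference(unit_1 + unit_2), 1))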

  19. Fabrication, testing and reliability modeling of copper/titanium-metallized GaAs MESFETs and HEMTs for low-noise applications

    NASA Astrophysics Data System (ADS)

    Feng, Ting

    Today, GaAs based field effect transistors (FETs) have been used in a broad range of high-speed electronic military and commercial applications. However, their reliability still needs to be improved. Particularly the hydrogen induced degradation is a large remaining issue in the reliability of GaAs FETs, because hydrogen can easily be incorporated into devices during the crystal growth and virtually every device processing step. The main objective of this research work is to develop a new gate metallization system in order to reduce the hydrogen induced degradation from the gate region for GaAs based MESFETs and HEMTs. Cu/Ti gate metallization has been introduced into the GaAs MESFETs and HEMTs in our work in order to solve the hydrogen problem. The purpose of the use of copper is to tie up the hydrogen atoms and prevent hydrogen penetration into the device active region as well as to keep a low gate resistance for low noise applications. In this work, the fabrication technology of GaAs MESFETs and AlGaAs/GaAs HEMTs with Cu/Ti metallized gates has been successfully developed and the fabricated Cu/Ti FETs have shown comparable DC performance to similar Au-based GaAs FETs. The Cu/Ti FETs were subjected to temperature accelerated testing at NOT under 5% hydrogen forming gas and the experimental results show the hydrogen induced degradation has been reduced for the Cu/Ti FETs compared to commonly used AuPtTi based GaAs FETs. Long-term reliability testing of Cu/Ti FETs has also been carried out at 200°C for up to 1000 hours, and the testing results show that the Cu/Ti FETs performed with adequate reliability. The failure modes were found to consist of a decrease in drain saturation current and pinch-off voltage and an increase in source ohmic contact resistance. Material characterization tools including Rutherford backscattering spectroscopy and a back etching technique were used in Cu/Ti GaAs FETs, and pronounced gate metal copper in-diffusion and intermixing compounds at the

  20. Dynamically induced cascading failures in power grids.

    PubMed

    Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito

    2018-05-17

    Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.

  1. Role of failure-mechanism identification in accelerated testing

    NASA Technical Reports Server (NTRS)

    Hu, J. M.; Barker, D.; Dasgupta, A.; Arora, A.

    1993-01-01

    Accelerated life testing techniques provide a short-cut method to investigate the reliability of electronic devices with respect to certain dominant failure mechanisms that occur under normal operating conditions. However, accelerated tests have often been conducted without knowledge of the failure mechanisms and without ensuring that the test accelerated the same mechanism as that observed under normal operating conditions. This paper summarizes common failure mechanisms in electronic devices and packages and investigates possible failure mechanism shifting during accelerated testing.
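
    One standard quantity behind the temperature-accelerated tests discussed above is the Arrhenius acceleration factor; the sketch below evaluates it for an assumed activation energy, which is exactly the kind of mechanism-specific input the paper warns must correspond to the failure mechanism observed in the field.

      from math import exp

      BOLTZMANN_EV = 8.617e-5  # eV/K

      def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
          """AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures given in Celsius."""
          t_use = t_use_c + 273.15
          t_stress = t_stress_c + 273.15
          return exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

      # Assumed 0.7 eV activation energy, 55 C use vs 125 C stress conditions.
      af = arrhenius_acceleration_factor(0.7, 55.0, 125.0)
      print("acceleration factor ~", round(af, 1))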

  2. Space transportation architecture: Reliability sensitivities

    NASA Technical Reports Server (NTRS)

    Williams, A. M.

    1992-01-01

    A sensitivity analysis is given of the benefits and drawbacks associated with a proposed Earth-to-orbit vehicle architecture. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment where the fleet experiences failures, downtimes, setbacks, etc., the analysis involves determining the resiliency and costs associated with the fleet for specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for proposed vehicles, and the development of a sensitivity analysis program.

  3. Validity and reliability of instruments aimed at measuring Evidence-Based Practice in Physical Therapy: a systematic review of the literature.

    PubMed

    Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest

    2014-12-01

    Our goal is to compile and analyse the characteristics - especially validity and reliability - of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. A systematic review was conducted with data from exclusively quantitative-type studies synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and also journals with summarized evidence; followed by a residual-directed search in the bibliographical references of the main articles found in the primary search procedure. The studies included were assigned to members of the research team who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, by following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, in all cases, they were found to be limited as regards the 'constructs' included. Besides, they can all be seen to be lacking as regards the comprehensiveness associated with the validation process of the psychometric tests used. It seems that what constitutes a rigorously developed assessment instrument for EBP in physical therapy continues to be a challenge. © 2014 John Wiley & Sons, Ltd.

  4. Adaptation and reliability of neighborhood environment walkability scale (NEWS) for Iran: A questionnaire for assessing environmental correlates of physical activity.

    PubMed

    Hakimian, Pantea; Lak, Azadeh

    2016-01-01

    Background: In spite of the increased prevalence of inactivity and obesity among Iranian adults, insufficient research has been done on environmental factors influencing physical activity. As a result, adapting a subjective (self-report) measurement tool for assessment of the physical environment in Iran is critical. Accordingly, in this study the Neighborhood Environment Walkability Scale (NEWS) was adapted for Iran and its reliability was evaluated. Methods: This study was conducted using a systematic adaptation method consisting of 3 steps: translate-back translation procedures, revision by a multidisciplinary panel of local experts and a cognitive study. Then NEWS-Iran was completed by adults aged 18 to 65 years (N=19), with an interval of 15 days. The intraclass correlation coefficient (ICC) was used to evaluate the reliability of the adapted questionnaire. Results: NEWS-Iran is an adapted version of NEWS-A (abbreviated) and in the adaptation process five items were added from other versions of NEWS, two subscales were significantly modified for a shorter and more effective questionnaire, and five new items were added about climate factors and site-specific uses. NEWS-Iran showed almost perfect reliability (ICCs: more than 0.8) for all subscales, with items having moderate to almost perfect reliability scores (ICCs: 0.56-0.96). Conclusion: This study introduced NEWS-Iran, which is a reliable version of NEWS for measuring environmental perceptions related to physical activity behavior adapted for Iran. It is the first adapted version of NEWS which demonstrates a systematic adaptation process used by earlier studies. It can be used for other developing countries with similar environmental, social and cultural contexts.

  5. Reliability and Validity of a Chinese-Translated Version of a Pregnancy Physical Activity Questionnaire.

    PubMed

    Xiang, Mi; Konishi, Massayuki; Hu, Huanhuan; Takahashi, Masaki; Fan, Wenbi; Nishimaki, Mio; Ando, Karina; Kim, Hyeon-Ki; Tabata, Hiroki; Arao, Takashi; Sakamoto, Shizuo

    2016-09-01

    Objectives The objectives of the present study were to translate the English version of the Pregnancy Physical Activity Questionnaire into Chinese (PPAQ-C) and to determine its reliability and validity for use by pregnant Chinese women. Methods The study included 224 pregnant women during their first, second, or third trimesters of pregnancy who completed the PPAQ-C on their first visit and wore a uniaxial accelerometer (Lifecorder; Suzuken Co. Ltd) for 7 days. One week after the first visit, we collected the data from the uniaxial accelerometer records, and the women were asked to complete the PPAQ-C again. Results We used intraclass correlation coefficients to determine the reliability of the PPAQ-C. The intraclass correlation coefficients were 0.77 for total activity (light and above), 0.76 for sedentary activity, 0.75 for light activity, 0.59 for moderate activity, and 0.28 for vigorous activity. The intraclass correlation coefficients were 0.74 for "household and caregiving", 0.75 for "occupational" activities, and 0.34 for "sports/exercise". Validity between the PPAQ-C and accelerometer data was determined by Spearman correlation coefficients. Although there were no significant correlations for moderate activity (r = 0.19, P > 0.05) or vigorous activity (r = 0.15, P > 0.05), there were significant correlations for total activity (light and above; r = 0.35, P < 0.01) and for light activity (r = 0.33, P < 0.01). Conclusions for Practice The PPAQ-C is reliable and moderately accurate for measuring physical activity in pregnant Chinese women.

  6. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  7. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

    Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

  8. Physical modelling of rainfall-induced flow failures in loose granular soils

    NASA Astrophysics Data System (ADS)

    Take, W. A.; Beddoe, R. A.

    2015-09-01

    The tragic consequences of the March 2014 Oso landslide in Washington, USA were particularly high due to the mobility of the landslide debris. Confusingly, a landslide occurred at that exact same location a number of years earlier, but simply slumped into the river at the toe of the slope. Why did these two events differ so drastically in their mobility? Considerable questions remain regarding the conditions required to generate flow failures in loose soils. Geotechnical centrifuge testing, in combination with high-speed cameras and advanced image analysis has now provided the landslides research community with a powerful new tool to experimentally investigate the complex mechanics leading to high mobility landslides. This paper highlights recent advances in our understanding of the process of static liquefaction in loose granular soil slopes achieved through observations of highly-instrumented physical models. In particular, the paper summarises experimental results aimed to identify the point of initiation of the chain-reaction required to trigger liquefaction flow failures, to assess the effect of slope inclination on the likelihood of a flowslide being triggered, and to quantify the effect of antecedent groundwater levels on the distal reach of landslide debris with the objective of beginning to explain why neighbouring slopes can exhibit such a wide variation in landslide travel distance upon rainfall-triggering.

  9. Reliability study of refractory gate gallium arsenide MESFETS

    NASA Technical Reports Server (NTRS)

    Yin, J. C. W.; Portnoy, W. M.

    1981-01-01

    Refractory gate MESFET's were fabricated as an alternative to aluminum gate devices, which have been found to be unreliable as RF power amplifiers. In order to determine the reliability of the new structures, statistics of failure and information about mechanisms of failure in refractory gate MESFET's are given. Test transistors were stressed under conditions of high temperature and forward gate current to enhance failure. Results of work at 150 C and 275 C are reported.

  10. Reliability study of refractory gate gallium arsenide MESFETS

    NASA Astrophysics Data System (ADS)

    Yin, J. C. W.; Portnoy, W. M.

    Refractory gate MESFET's were fabricated as an alternative to aluminum gate devices, which have been found to be unreliable as RF power amplifiers. In order to determine the reliability of the new structures, statistics of failure and information about mechanisms of failure in refractory gate MESFET's are given. Test transistors were stressed under conditions of high temperature and forward gate current to enhance failure. Results of work at 150 C and 275 C are reported.

  11. Heterogeneity: The key to failure forecasting

    PubMed Central

    Vasseur, Jérémie; Wadsworth, Fabian B.; Lavallée, Yan; Bell, Andrew F.; Main, Ian G.; Dingwell, Donald B.

    2015-01-01

    Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecast quantified significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power. PMID:26307196

  12. Heterogeneity: The key to failure forecasting.

    PubMed

    Vasseur, Jérémie; Wadsworth, Fabian B; Lavallée, Yan; Bell, Andrew F; Main, Ian G; Dingwell, Donald B

    2015-08-26

    Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecast quantified significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power.

  13. Heterogeneity: The key to failure forecasting

    NASA Astrophysics Data System (ADS)

    Vasseur, Jérémie; Wadsworth, Fabian B.; Lavallée, Yan; Bell, Andrew F.; Main, Ian G.; Dingwell, Donald B.

    2015-08-01

    Elastic waves are generated when brittle materials are subjected to increasing strain. Their number and energy increase non-linearly, ending in a system-sized catastrophic failure event. Accelerating rates of geophysical signals (e.g., seismicity and deformation) preceding large-scale dynamic failure can serve as proxies for damage accumulation in the Failure Forecast Method (FFM). Here we test the hypothesis that the style and mechanisms of deformation, and the accuracy of the FFM, are both tightly controlled by the degree of microstructural heterogeneity of the material under stress. We generate a suite of synthetic samples with variable heterogeneity, controlled by the gas volume fraction. We experimentally demonstrate that the accuracy of failure prediction increases drastically with the degree of material heterogeneity. These results have significant implications in a broad range of material-based disciplines for which failure forecasting is of central importance. In particular, the FFM has been used with only variable success to forecast failure scenarios both in the field (volcanic eruptions and landslides) and in the laboratory (rock and magma failure). Our results show that this variability may be explained, and the reliability and accuracy of forecast quantified significantly improved, by accounting for material heterogeneity as a first-order control on forecasting power.

  14. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on ... these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when ... assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
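
    The decreasing-failure-rate property of the mixed exponential noted above is easy to verify numerically: mixing two constant-hazard populations yields a population hazard that falls over time as the weaker units fail first. The mixture weights and rates below are illustrative, not fitted values from the report.

      from math import exp

      def mixed_exponential_hazard(t, weights, rates):
          """Population hazard h(t) = f(t)/S(t) for a finite mixture of exponentials."""
          density = sum(w * r * exp(-r * t) for w, r in zip(weights, rates))
          survival = sum(w * exp(-r * t) for w, r in zip(weights, rates))
          return density / survival

      # A weak subpopulation (20%, rate 1.0 per year) mixed with a strong one (80%, rate 0.1).
      weights, rates = (0.2, 0.8), (1.0, 0.1)
      for t in (0.0, 1.0, 5.0, 20.0):
          print(f"t = {t:4.1f} yr  hazard = {mixed_exponential_hazard(t, weights, rates):.3f}")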

  15. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
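
    A small simulation of the failure-time model described above: with N initial faults, each fault has its own exponential detection time, and the observed failure times are the order statistics of those times. For simplicity this sketch uses identical per-fault rates (the Jelinski-Moranda special case), whereas the model above allows non-identical rates; N and the per-fault rate are assumed values.

      import random

      def simulate_debugging(n_faults=20, per_fault_rate=0.05, seed=1):
          """Failure times = sorted exponential detection times of the initial faults."""
          rng = random.Random(seed)
          detection_times = sorted(rng.expovariate(per_fault_rate) for _ in range(n_faults))
          inter_failure = [t2 - t1 for t1, t2 in zip([0.0] + detection_times, detection_times)]
          return detection_times, inter_failure

      times, gaps = simulate_debugging()
      # Gaps tend to lengthen as debugging removes faults, i.e. reliability grows.
      print("first three gaps:", [round(g, 1) for g in gaps[:3]])
      print("last three gaps: ", [round(g, 1) for g in gaps[-3:]])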

  16. Development, scoring, and reliability of the Microscale Audit of Pedestrian Streetscapes (MAPS)

    PubMed Central

    2013-01-01

    Background Streetscape (microscale) features of the built environment can influence people’s perceptions of their neighborhoods’ suitability for physical activity. Many microscale audit tools have been developed, but few have published systematic scoring methods. We present the development, scoring, and reliability of the Microscale Audit of Pedestrian Streetscapes (MAPS) tool and its theoretically-based subscales. Methods MAPS was based on prior instruments and was developed to assess details of streetscapes considered relevant for physical activity. MAPS sections (route, segments, crossings, and cul-de-sacs) were scored by two independent raters for reliability analyses. There were 290 route pairs, 516 segment pairs, 319 crossing pairs, and 53 cul-de-sac pairs in the reliability sample. Individual inter-rater item reliability analyses were computed using Kappa, intra-class correlation coefficient (ICC), and percent agreement. A conceptual framework for subscale creation was developed using theory, expert consensus, and policy relevance. Items were grouped into subscales, and subscales were analyzed for inter-rater reliability at tiered levels of aggregation. Results There were 160 items included in the subscales (out of 201 items total). Of those included in the subscales, 80 items (50.0%) had good/excellent reliability, 41 items (25.6%) had moderate reliability, and 18 items (11.3%) had low reliability, with limited variability in the remaining 21 items (13.1%). Seventeen of the 20 route section subscales, valence (positive/negative) scores, and overall scores (85.0%) demonstrated good/excellent reliability and 3 demonstrated moderate reliability. Of the 16 segment subscales, valence scores, and overall scores, 12 (75.0%) demonstrated good/excellent reliability, three demonstrated moderate reliability, and one demonstrated poor reliability. Of the 8 crossing subscales, valence scores, and overall scores, 6 (75.0%) demonstrated good/excellent reliability, and

  17. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value was found to be in non-routine components. Maintenance components are components procured based on maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by reevaluating the components required by maintenance activities. The roller mill system was chosen as the case because it has the highest unscheduled downtime record. The components required for each maintenance activity are determined from their failure distributions, so the number of components needed can be predicted. Moreover, those components are reclassified from routine to non-routine components, so that procurement can be carried out regularly. Based on the analysis, the failures addressed by almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used for maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components. The reliability and demand for those components were then calculated for a one-year operation period. Based on these findings, it is suggested to replace all of the relevant components during the overhaul activity to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by each maintenance activity, so that procurement cost decreases and system reliability increases.

  18. A Physical Heart Failure Simulation System Utilizing the Total Artificial Heart and Modified Donovan Mock Circulation.

    PubMed

    Crosby, Jessica R; DeCook, Katrina J; Tran, Phat L; Betterton, Edward; Smith, Richard G; Larson, Douglas F; Khalpey, Zain I; Burkhoff, Daniel; Slepian, Marvin J

    2017-07-01

    With the growth and diversity of mechanical circulatory support (MCS) systems entering clinical use, a need exists for a robust mock circulation system capable of reliably emulating and reproducing physiologic as well as pathophysiologic states for use in MCS training and inter-device comparison. We report on the development of such a platform utilizing the SynCardia Total Artificial Heart and a modified Donovan Mock Circulation System, capable of being driven at normal and reduced output. With this platform, clinically relevant heart failure hemodynamics could be reliably reproduced as evidenced by elevated left atrial pressure (+112%), reduced aortic flow (-12.6%), blunted Starling-like behavior, and increased afterload sensitivity when compared with normal function. Similarly, pressure-volume relationships demonstrated enhanced sensitivity to afterload and decreased Starling-like behavior in the heart failure model. Lastly, the platform was configured to allow the easy addition of a left ventricular assist device (HeartMate II at 9600 RPM), which upon insertion resulted in improvement of hemodynamics. The present configuration has the potential to serve as a viable system for training and research, aimed at fostering safe and effective MCS device use. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  19. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  20. Evidence-based medicine and patient choice: the case of heart failure care.

    PubMed

    Sanders, Tom; Harrison, Stephen; Checkland, Kath

    2008-04-01

    The implementation of evidence-based medicine and policies aimed at increasing user involvement in health care decisions are central planks of contemporary English health policy. Yet they are potentially in conflict. Our aim was to explore how clinicians working in the field of heart failure resolve this conflict. Qualitative semi-structured interviews were carried out with health professionals who were currently caring for patients with heart failure, and observations were conducted at one dedicated heart failure clinic in northern England. While clinicians acknowledged that patients' ideas and preferences should be an important part of treatment decisions, the widespread acceptance of an evidence-based clinical protocol for heart failure among the clinic doctors significantly influenced the content and style of the consultation. Evidence-based medicine was used to buttress professional authority and seemed to provide an additional barrier to the adoption of patient-centred clinical practice.