Sample records for failure model based

  1. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but the original model describes them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated the MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
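
    As an illustrative aside, the weakest-link statistics underlying a failure unit model can be sketched with a few lines of Monte Carlo code. This is a hedged sketch, not the authors' model: it assumes hypothetical lognormal unit lifetimes and treats a line as failure units in series, so the line fails when its weakest unit fails.

      import numpy as np

      rng = np.random.default_rng(0)

      def line_ttf(n_units=50, mu=np.log(100.0), sigma=0.5, n_lines=10_000):
          """Times to failure for lines modeled as n_units failure units in
          series (assumed lognormal unit lifetimes; parameters hypothetical)."""
          unit_ttf = rng.lognormal(mean=mu, sigma=sigma, size=(n_lines, n_units))
          return unit_ttf.min(axis=1)  # series system: weakest link governs

      ttf = line_ttf()
      mttf = np.median(ttf)        # median time to failure (MTTF)
      dttf = np.std(np.log(ttf))   # spread of the failure-time distribution (DTTF)
      print(f"MTTF ~ {mttf:.1f}, sigma(log t) ~ {dttf:.2f}")

    Increasing n_units pulls the MTTF downward, reproducing the qualitative line-length dependence discussed in the abstract.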

  2. Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model

    NASA Technical Reports Server (NTRS)

    Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.

    2006-01-01

    An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics in the engine differential mode. Based on this model, an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation: rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.

  3. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable-fidelity, multiscale, physics-based finite element procedure for predicting progressive damage and failure of laminated continuous fiber-reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina level using thermodynamically based Schapery theory. Separate failure criteria are applied at either the global scale or the microscale in two different finite element models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the microscale. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  4. Is it beneficial to approximate pre-failure topography to predict landslide susceptibility with empirical models?

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Schmaltz, Elmar; Glade, Thomas

    2017-04-01

    Empirical landslide susceptibility maps spatially depict the areas where future slope failures are likely due to specific environmental conditions. The underlying statistical models are based on the assumption that future landsliding is likely to occur under circumstances similar to those of past slope failures (e.g. topographic conditions, lithology, land cover). This principle is operationalized by applying a supervised classification approach (e.g. a regression model with a binary response: landslide presence/absence) that enables discrimination between conditions that favored past landslide occurrences and the circumstances typical for landslide absences. The derived empirical relation is then transferred to each spatial unit of an area. The literature reveals that the specific topographic conditions representative of landslide presences are frequently extracted from derivatives of digital terrain models at locations where past landslides were mapped. This morphology-based landslide identification is possible because the topography at a specific locality usually changes after landslide occurrence (e.g. hummocky surface, concave and steep scarp). In a strict sense, this implies that topographic predictors used within conventional statistical landslide susceptibility models relate to post-failure topographic conditions, and not to the required pre-failure situation. This study examines the assumption that models calibrated on the basis of post-failure topographies may not be appropriate to predict future landslide locations, because (i) post-failure and pre-failure topographic conditions may differ and (ii) areas where future landslides will occur do not yet exhibit such a distinct post-failure morphology. The study was conducted for an area located in the Walgau region (Vorarlberg, western Austria), where a detailed inventory of shallow landslides was available. The methodology comprised multiple systematic comparisons of models generated on the basis of post-failure conditions (i.e. the standard approach) with models based on an approximated pre-failure topography. Pre-failure topography was approximated by (i) erasing the area of mapped landslide polygons within a digital terrain model and (ii) filling these "empty" areas by interpolating elevation points located outside the mapped landslides. Landslide presence information was extracted from the respective landslide scarp locations, while an equal number of randomly sampled points represented landslide absences. After an initial exploratory data analysis, mixed-effects logistic regression was applied to model landslide susceptibility on the basis of two predictor sets (post-failure versus pre-failure predictors). Furthermore, all analyses were conducted separately for five different modelling resolutions to examine the suspicion that the degree of generalization of topographic parameters may also play a role in how the respective models differ. Model evaluation was conducted by means of multiple procedures (i.e. odds ratios, k-fold cross validation, permutation-based variable importance, difference maps of predictions). The results revealed that models based on the highest resolutions (e.g. 1 m, 2.5 m) and post-failure topography performed best from a purely quantitative perspective. 
A comparison of models (post-failure versus pre-failure based models) at an identical modelling resolution showed that validation results, modelled relationships, and prediction patterns tended to converge with decreasing raster resolution. Based on the results, we concluded that an approximation of pre-failure topography does not significantly contribute to improved landslide susceptibility models when (i) the underlying inventory consists of small landslide features and (ii) the models are based on coarse raster resolutions (e.g. 25 m). However, when modelling at high raster resolutions is envisaged (e.g. 1 m, 2.5 m) or the inventory mainly consists of larger events, a reconstruction of pre-failure conditions might be highly expedient, even though conventional validation results might indicate an opposite tendency. Finally, we recommend considering that topographic predictors highly useful for detecting past slope movements (e.g. roughness) are not necessarily valuable for predicting future slope instabilities.
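
    The pre-failure approximation steps (i)-(ii) can be sketched as follows. This is a minimal illustration, assuming a gridded DTM held in a NumPy array and using SciPy's griddata for the interpolation; the array names and the toy tile are hypothetical.

      import numpy as np
      from scipy.interpolate import griddata

      def approximate_pre_failure_dem(dem, landslide_mask):
          """Erase mapped landslide cells from a DTM and refill them by
          interpolating elevations from the surrounding, unaffected terrain."""
          rows, cols = np.indices(dem.shape)
          known = ~landslide_mask                    # cells outside mapped landslides
          filled = dem.copy()
          filled[landslide_mask] = griddata(
              points=np.column_stack([rows[known], cols[known]]),
              values=dem[known],
              xi=np.column_stack([rows[landslide_mask], cols[landslide_mask]]),
              method="linear",                       # linear fill of the erased area
          )
          return filled

      # Toy example: a small tile with one mapped landslide polygon erased.
      dem = np.add.outer(np.linspace(500, 520, 50), np.linspace(0, 5, 50))
      mask = np.zeros_like(dem, dtype=bool)
      mask[20:30, 20:30] = True
      pre_failure = approximate_pre_failure_dem(dem, mask)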

  5. Implementation of a Tabulated Failure Model Into a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
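
    To make the tabulated idea concrete, the sketch below interpolates a failure surface stored as a table of direction-dependent limits in a two-dimensional stress plane. The table values and the polar parameterization are invented for illustration only; the actual model tabulates invariants over the full stress space.

      import numpy as np

      # Hypothetical tabulated failure surface in the (sigma_11, sigma_22)
      # plane: for each direction in stress space the table stores the
      # equivalent-stress "radius" at which failure initiates.
      table_angles = np.linspace(-np.pi, np.pi, 37)            # tabulated directions
      table_radii = 400.0 + 150.0 * np.cos(2 * table_angles)   # MPa, arbitrary shape

      def failure_index(sigma11, sigma22):
          """Ratio of the current stress radius to the interpolated failure
          radius in the same direction; failure initiates when index >= 1."""
          angle = np.arctan2(sigma22, sigma11)
          radius = np.hypot(sigma11, sigma22)
          limit = np.interp(angle, table_angles, table_radii)
          return radius / limit

      print(failure_index(300.0, -100.0))  # < 1: no failure initiation yet

    Because the surface is a lookup table, its shape is set entirely by the tabulated data, which is the arbitrary-shape property the abstract emphasizes.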

  6. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure.

  7. Modelling Wind Turbine Failures based on Weather Conditions

    NASA Astrophysics Data System (ADS)

    Reder, Maik; Melero, Julio J.

    2017-11-01

    A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches, these costs can be decreased significantly. Wind turbine (WT) failure models in particular can help operators understand the components’ degradation processes and anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, recent research shows that the on-site weather conditions also significantly affect turbine failure behaviour. This study presents a novel approach to model WT failures based on the environmental conditions to which they are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
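
    A minimal sketch of weather-driven failure modelling with a penalised likelihood is given below. The monthly covariates and Poisson-distributed failure counts are hypothetical, and scikit-learn's ridge-penalised PoissonRegressor stands in for whatever estimator the authors actually used.

      import numpy as np
      from sklearn.linear_model import PoissonRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Hypothetical turbine-months: weather covariates and failure counts.
      n = 600
      X = rng.normal(size=(n, 4))               # wind, gusts, temperature, humidity
      X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]   # gusts strongly correlated with wind
      rate = np.exp(-2.0 + 0.6 * X[:, 0] + 0.3 * X[:, 2])
      y = rng.poisson(rate)                     # failures per turbine-month

      # The L2 penalty keeps coefficients of correlated covariates stable.
      model = PoissonRegressor(alpha=1.0)
      model.fit(StandardScaler().fit_transform(X), y)
      print(dict(zip(["wind", "gust", "temp", "humidity"], model.coef_.round(3))))

    Comparing coefficient magnitudes on standardized covariates gives a simple stand-in for the relative-importance assessment mentioned in the abstract.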

  8. Real-time diagnostics of the reusable rocket engine using on-line system identification

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1990-01-01

    A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can be used effectively for failure diagnosis. It is a direct approach, since it reduces the detection, isolation, and estimation of the extent of a failure to a comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.

  9. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    NASA Astrophysics Data System (ADS)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and the type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros, the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm, and finally the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
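
    The binarization and rule-creation steps can be sketched in a few lines. The example below uses Python with the mlxtend implementation of Apriori rather than the authors' R workflow; the column names and thresholds are hypothetical.

      import pandas as pd
      from mlxtend.frequent_patterns import apriori, association_rules

      # Hypothetical binarized failure records: each row is an event, each
      # column a one/zero indicator for a cause or failure type.
      data = pd.DataFrame(
          [[1, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 1]],
          columns=["overheat", "vibration", "spindle_fault", "tool_breakage"],
      ).astype(bool)

      # Rule creation: frequent itemsets via Apriori, then IF-THEN rules.
      frequent = apriori(data, min_support=0.4, use_colnames=True)
      rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
      print(rules[["antecedents", "consequents", "support", "confidence"]])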

  10. Incorporation of Failure Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.

  11. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated in artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within a dynamic engineering system. The autonomy property of intelligent agents is defined as the agents' ability to self-activate, deactivate, or completely redefine their role in the analysis. This property, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  12. On the use and the performance of software reliability growth models

    NASA Technical Reports Server (NTRS)

    Keiller, Peter A.; Miller, Douglas R.

    1991-01-01

    We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage, using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over finite future intervals, relative to the number of failures eventually observed during those intervals. Six models of the former kind and eight methods of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
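
    As a concrete illustration of the first method (straightforward use of an individual model), the sketch below fits one classical growth model, the Goel-Okumoto NHPP, by maximum likelihood and predicts failures in a future interval. The failure times are invented, and the model choice is ours; the paper evaluates several models and selection schemes.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical cumulative failure times (hours) observed up to time T.
      t = np.array([3., 8., 15., 30., 55., 90., 140., 210., 300., 420.])
      T = 450.0

      def neg_log_lik(params):
          """NHPP likelihood for the Goel-Okumoto mean value function
          m(t) = a * (1 - exp(-b t)); a, b kept positive via exp()."""
          a, b = np.exp(params)
          ll = len(t) * np.log(a * b) - b * t.sum() - a * (1 - np.exp(-b * T))
          return -ll

      fit = minimize(neg_log_lik, x0=np.log([20.0, 0.01]))
      a, b = np.exp(fit.x)
      m = lambda s: a * (1.0 - np.exp(-b * s))
      # Predicted number of failures in the next 200 hours of usage:
      print(f"predicted failures in (T, T+200]: {m(T + 200) - m(T):.1f}")

    Comparing such predictions with the failures eventually observed gives the relative-error performance measure used in the paper.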

  13. Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    2008-01-01

    Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.

  14. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle; those other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base it draws from, and it is therefore of utmost importance to develop a knowledge base suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the detail necessary for an applicable knowledge base that can be used by designers in both new designs and redesigns. High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.

  15. Requirements for energy based constitutive modeling in tire mechanics

    NASA Technical Reports Server (NTRS)

    Luchini, John R.; Peters, Jim M.; Mars, Will V.

    1995-01-01

    The history, requirements, and theoretical basis of a new energy-based constitutive model for (rubber) material elasticity, hysteresis, and failure are presented. Energy-based elasticity is handled by many constitutive models, both in one dimension and in three dimensions. Conversion of mechanical energy to heat can be modeled with viscoelasticity or as structural hysteresis. We are seeking a unification of elasticity, hysteresis, and failure mechanisms such as fatigue and wear. An energy-state characterization for failure criteria of (rubber) materials may provide this unification and also help explain the interaction of temperature effects with failure mechanisms, which are described as the creation or growth of internal crack surfaces. Improved structural modeling of tires with FEM should result from such a unified constitutive theory. The theory will also guide experimental work and should enable better interpretation of the results of computational stress analyses.

  16. User-Defined Material Model for Progressive Failure Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)

    2006-01-01

    An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
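
    A minimal sketch of the ply-discounting idea, pairing one of the named initiation criteria (Tsai-Wu) with a stiffness knockdown, is shown below in plain Python rather than as a UMAT; the strengths, stresses, and knockdown factor are hypothetical.

      import numpy as np

      def tsai_wu_index(s1, s2, s12, Xt=2000., Xc=1500., Yt=50., Yc=200., S=80.):
          """2-D Tsai-Wu failure polynomial for a unidirectional ply (MPa)."""
          F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
          F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
          F12 = -0.5 * np.sqrt(F11 * F22)
          return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*s12**2 + 2*F12*s1*s2

      def degrade_ply(E1, E2, G12, stresses, knockdown=1e-6):
          """Ply discounting: once initiation is met, the ply's in-plane
          stiffness coefficients are knocked down to near zero."""
          if tsai_wu_index(*stresses) >= 1.0:
              return E1 * knockdown, E2 * knockdown, G12 * knockdown
          return E1, E2, G12

      print(degrade_ply(130e3, 10e3, 5e3, stresses=(1800., 60., 20.)))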

  17. Nurses' decision making in heart failure management based on heart failure certification status.

    PubMed

    Albert, Nancy M; Bena, James F; Buxbaum, Denise; Martensen, Linda; Morrison, Shannon L; Prasun, Marilyn A; Stamp, Kelly D

    Research findings on the value of nurse certification have been based on subjective perceptions or biased by correlations of certification status with global clinical factors; in heart failure, the value of certification is unknown. The aim of this study was to examine the value of certification based on nurses' decision making. A cross-sectional study was conducted of nurses who completed heart failure clinical vignettes that reflected decision making in clinical heart failure scenarios. Statistical tests included multivariable linear, logistic, and proportional odds logistic regression models. Of the nurses (N = 605), 29.1% were heart failure certified, 35.0% were certified in another specialty/job role, and 35.9% were not certified. In multivariable modeling, nurses certified in heart failure (versus not heart failure certified) had higher clinical vignette scores (p = 0.002), reflecting more evidence-based decision making; nurses with another specialty/role certification (versus no certification) did not (p = 0.62). Heart failure certification, but not certification in other specialty/job roles, was associated with decisions that reflected delivery of high-quality care.

  18. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background: Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods: We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results: The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion: The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
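
    The paired-cohort simulation logic can be illustrated in a few lines: draw matched virtual cohorts for usual care and the disease management program, then compute the incremental cost-effectiveness ratio. All distributions and numbers below are invented placeholders, not TEAM-HF parameters.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 10_000  # pairs of virtual cohorts, as in the abstract

      # Hypothetical lifetime discounted costs (USD) and QALYs per cohort pair.
      cost_usual = rng.normal(48_000, 4_000, n)
      cost_dm = cost_usual + rng.normal(3_500, 1_000, n)   # program adds cost
      qaly_usual = rng.normal(4.2, 0.3, n)
      qaly_dm = qaly_usual + rng.normal(0.15, 0.05, n)     # and adds benefit

      icer = (cost_dm - cost_usual).mean() / (qaly_dm - qaly_usual).mean()
      # Probability of being cost-effective at a $50,000/QALY threshold.
      prob_ce = np.mean((cost_dm - cost_usual) < 50_000 * (qaly_dm - qaly_usual))
      print(f"ICER ~ ${icer:,.0f}/QALY; P(cost-effective) = {prob_ce:.2f}")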

  19. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
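
    The reward-accumulation idea can be sketched by simulating a small semi-Markov model with non-exponential holding times. Everything below (states, Weibull holding times, reward rates, transition probabilities) is a hypothetical stand-in for the measured model, not the paper's parameters.

      import numpy as np

      rng = np.random.default_rng(2)

      states = ["normal", "degraded", "failed"]
      reward_rate = {"normal": 1.0, "degraded": 0.5, "failed": 0.0}
      # Embedded Markov chain transition probabilities (hypothetical).
      P = {"normal": [0.0, 0.9, 0.1],
           "degraded": [0.7, 0.0, 0.3],
           "failed": [1.0, 0.0, 0.0]}
      # Non-exponential (Weibull) holding times motivate a semi-Markov model.
      hold = {"normal": lambda: rng.weibull(1.8) * 100.0,
              "degraded": lambda: rng.weibull(0.9) * 10.0,
              "failed": lambda: rng.weibull(1.2) * 5.0}

      def accumulated_reward(horizon=1000.0):
          """One realization of the reward accrued up to the horizon."""
          state, clock, reward = "normal", 0.0, 0.0
          while clock < horizon:
              dwell = min(hold[state](), horizon - clock)
              reward += reward_rate[state] * dwell
              clock += dwell
              state = states[rng.choice(3, p=P[state])]
          return reward

      samples = [accumulated_reward() for _ in range(2000)]
      print(f"mean performability over [0, 1000]: {np.mean(samples):.1f}")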

  20. Simulation Assisted Risk Assessment: Blast Overpressure Modeling

    NASA Technical Reports Server (NTRS)

    Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael

    2006-01-01

    A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or on Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwinds effects.

  1. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured and compared to results in which renal failure status was determined using surrogate measures: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates.
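
    The third surrogate measure can be sketched as follows: impute each patient's disease status by a Bernoulli draw from the model-based probability, repeated over bootstrap replicates so imputation uncertainty is propagated. The probability distribution below is invented for illustration, not the validated model from the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical model-based probabilities of severe renal failure
      # (one per patient, from a previously validated prediction model).
      p_disease = rng.beta(0.5, 8.0, size=50_074)

      def bootstrap_imputed_prevalence(p, n_boot=200):
          """Bernoulli-impute disease status from model probabilities within
          each bootstrap replicate, then summarize across replicates."""
          estimates = []
          for _ in range(n_boot):
              resample = rng.choice(p, size=p.size, replace=True)  # bootstrap
              status = rng.random(p.size) < resample               # imputation
              estimates.append(status.mean())
          return np.mean(estimates), np.percentile(estimates, [2.5, 97.5])

      mean, ci = bootstrap_imputed_prevalence(p_disease)
      print(f"prevalence ~ {mean:.3%}, 95% interval {ci.round(4)}")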

  2. Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Warner, D. H.

    2015-01-01

    Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second-phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, and void and particle volume fractions). In this study, the stress-state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D finite element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many orientations until failure. Material failure is assumed to occur in the weakest orientation, so the orientation in which failure occurs first is considered the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to overpredict ductility. By excluding smaller nucleation events and load-path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure, emphasizing that incorporating such information into micromechanical models is critical to developing quantitatively accurate, physics-based ductile failure models.

  3. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the time intervals covering the early-life failure region and the expected losses given failure characterizing the corresponding intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and allocating reliability that maximizes profit by minimizing total cost has been developed. Reliability allocation that maximizes profit for a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and expected losses from failures. A Monte Carlo simulation-based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike those models, the proposed model can reveal the variation of the NPV due to the different numbers of failures occurring during a specified time interval (e.g., one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
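
    The linear-combination statement for mutually exclusive failure modes can be written compactly. In the notation below (ours, not the paper's), F_k denotes failure initiated by mode k and p_k the conditional probability that mode k initiates the failure:

      \mathbb{E}[L \mid F] \;=\; \sum_{k=1}^{M} p_k \, \mathbb{E}[L \mid F_k],
      \qquad \sum_{k=1}^{M} p_k = 1 .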

  4. Dynamics of functional failures and recovery in complex road networks

    NASA Astrophysics Data System (ADS)

    Zhan, Xianyuan; Ukkusuri, Satish V.; Rao, P. Suresh C.

    2017-11-01

    We propose a new framework for modeling the evolution of functional failures and recoveries in complex networks, with traffic congestion on road networks as the case study. Unlike conventional approaches, we transform the evolution of functional states into an equivalent dynamic structural process: dual-vertex splitting and coalescing embedded within the original network structure. The proposed model successfully explains traffic congestion and recovery patterns at the city scale based on high-resolution data from two megacities. Numerical analysis shows that certain network structural attributes can amplify or suppress cascading functional failures. Our approach represents a new general framework to model functional failures and recoveries in flow-based networks and allows understanding of the interplay between structure and function in flow-induced failure propagation and recovery.

  5. Numerical Predictions of Damage and Failure in Carbon Fiber Reinforced Laminates Using a Thermodynamically-Based Work Potential Theory

    NASA Technical Reports Server (NTRS)

    Pineda, Evan Jorge; Waas, Anthony M.

    2013-01-01

    A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, referred to as enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Consistent characteristic lengths are introduced into the formulation to govern the evolution of the failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs are derived. The theory is implemented into a commercial finite element code. The model is verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared against the experimental results.

  6. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.

  7. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  8. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and assessment of the failure-influenced degree is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, which shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
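
    The PageRank step can be sketched directly on a cascading-failure digraph. This is a generic illustration with a made-up five-component graph, not the paper's machine center data.

      import numpy as np

      def pagerank(A, d=0.85, tol=1e-10):
          """Rank components by failure influence: PageRank on the directed
          cascading-failure graph, where A[i, j] = 1 means a failure of
          component i can propagate to component j."""
          n = A.shape[0]
          rowsum = A.sum(axis=1, keepdims=True)
          # Row-stochastic transition matrix; zero-out-degree rows get 1/n.
          M = np.divide(A, rowsum, out=np.full_like(A, 1.0 / n), where=rowsum > 0)
          r = np.full(n, 1.0 / n)
          while True:
              r_next = (1.0 - d) / n + d * (M.T @ r)
              if np.abs(r_next - r).sum() < tol:
                  return r_next
              r = r_next

      # Hypothetical cascading-failure digraph for five machine-center parts.
      A = np.array([[0, 1, 1, 0, 0],
                    [0, 0, 1, 0, 0],
                    [0, 0, 0, 1, 1],
                    [0, 0, 0, 0, 1],
                    [1, 0, 0, 0, 0]], dtype=float)
      print(pagerank(A).round(3))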

  9. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  10. Failure analysis of energy storage spring in automobile composite brake chamber

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Wei, Qing; Hu, Xiaofeng

    2015-02-01

    This paper takes the energy storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of energy-storage-spring-induced parking brake failure was constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, the working load and push rod stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.
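
    Evaluating such a fault tree reduces to combining basic-event probabilities through AND/OR gates. The sketch below is generic, with made-up events and probabilities rather than the paper's actual tree, and it assumes independent basic events.

      import numpy as np

      def or_gate(*p):   # event occurs if any input event occurs
          return 1.0 - np.prod([1.0 - x for x in p])

      def and_gate(*p):  # event occurs only if all input events occur
          return float(np.prod(p))

      # Hypothetical basic events for spring-induced parking brake failure.
      p_fatigue_crack = 0.004
      p_corrosion = 0.002
      p_overload = 0.001
      p_seat_wear = 0.003

      spring_fracture = or_gate(p_fatigue_crack, p_corrosion, p_overload)
      top_event = or_gate(spring_fracture, and_gate(p_seat_wear, p_overload))
      print(f"P(parking brake failure) ~ {top_event:.4f}")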

  11. Failure Models and Criteria for FRP Under In-Plane or Three-Dimensional Stress States Including Shear Non-Linearity

    NASA Technical Reports Server (NTRS)

    Pinho, Silvestre T.; Davila, C. G.; Camanho, P. P.; Iannucci, L.; Robinson, P.

    2005-01-01

    A set of three-dimensional failure criteria for laminated fiber-reinforced composites, denoted LaRC04, is proposed. The criteria are based on physical models for each failure mode and take into consideration non-linear matrix shear behaviour. The model for matrix compressive failure is based on the Mohr-Coulomb criterion and it predicts the fracture angle. Fiber kinking is triggered by an initial fiber misalignment angle and by the rotation of the fibers during compressive loading. The plane of fiber kinking is predicted by the model. LaRC04 consists of 6 expressions that can be used directly for design purposes. Several applications involving a broad range of load combinations are presented and compared to experimental data and other existing criteria. Predictions using LaRC04 correlate well with the experimental data, arguably better than most existing criteria. The good correlation seems to be attributable to the physical soundness of the underlying failure models.
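
    For orientation, the Mohr-Coulomb idea behind the matrix compression model can be written in the generic form used by the LaRC family of criteria; the exact LaRC04 expressions differ in detail. With sigma_n, tau^T, and tau^L the normal and the transverse and longitudinal shear tractions on a candidate fracture plane:

      \mathrm{FI}_{\mathrm{mat}}(\alpha) =
      \left( \frac{\tau^{T}}{S^{T} - \eta^{T}\sigma_n} \right)^{2} +
      \left( \frac{\tau^{L}}{S^{L} - \eta^{L}\sigma_n} \right)^{2} \;\ge\; 1 ,

    where the tractions depend on the plane angle alpha, S^T and S^L are shear strengths, and the eta terms are friction-like coefficients. Failure is predicted on the plane that maximizes the index over alpha, which is how the fracture angle mentioned in the abstract is obtained.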

  12. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  13. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  14. The human subject: an integrative animal model for 21st century heart failure research

    PubMed Central

    Chandrasekera, P Charukeshi; Pippin, John J

    2015-01-01

    Heart failure remains a leading cause of death and it is a major cause of morbidity and mortality affecting tens of millions of people worldwide. Despite decades of extensive research conducted at enormous expense, only a handful of interventions have significantly impacted survival in heart failure. Even the most widely prescribed treatments act primarily to slow disease progression, do not provide sustained survival advantage, and have adverse side effects. Since mortality remains about 50% within five years of diagnosis, the need to increase our understanding of heart failure disease mechanisms and development of preventive and reparative therapies remains critical. Currently, the vast majority of basic science heart failure research is conducted using animal models ranging from fruit flies to primates; however, insights gleaned from decades of animal-based research efforts have not been proportional to research success in terms of deciphering human heart failure and developing effective therapeutics for human patients. Here we discuss the reasons for this translational discrepancy which can be equally attributed to the use of erroneous animal models and the lack of widespread use of human-based research methodologies and address why and how we must position our own species at center stage as the quintessential animal model for 21st century heart failure research. If the ultimate goal of the scientific community is to tackle the epidemic status of heart failure, the best way to achieve that goal is through prioritizing human-based, human-relevant research. PMID:26550463

  15. Rocket engine diagnostics using qualitative modeling techniques

    NASA Technical Reports Server (NTRS)

    Binder, Michael; Maul, William; Meyer, Claudia; Sovie, Amy

    1992-01-01

    Researchers at NASA Lewis Research Center are presently developing qualitative modeling techniques for automated rocket engine diagnostics. A qualitative model of a turbopump interpropellant seal system has been created. The qualitative model describes the effects of seal failures on the system steady-state behavior. This model is able to diagnose the failure of particular seals in the system based on anomalous temperature and pressure values. The anomalous values input to the qualitative model are generated using numerical simulations. Diagnostic test cases include both single and multiple seal failures.

  17. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of every one of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared-risk-link-group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
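
    To make the idea concrete, the sketch below scores a small hypothetical light-tree under the independent failure model: the plain tree failure probability treats all links alike, while an importance-weighted measure lets links that serve more destinations contribute more. The tree, the per-link failure probabilities, and this particular importance weighting are illustrative assumptions, not the paper's exact LIFPMS formulation.

    ```python
    # Light-tree as parent -> child links rooted at source 's';
    # values are hypothetical per-link failure probabilities.
    tree = {('s', 'a'): 0.01, ('s', 'b'): 0.02,
            ('a', 'd1'): 0.015, ('a', 'd2'): 0.01, ('b', 'd3'): 0.02}
    destinations = {'d1', 'd2', 'd3'}

    children = {}
    for (u, v) in tree:
        children.setdefault(u, []).append(v)

    def dests_below(node):
        """Destinations cut off when the link feeding `node` fails."""
        out = {node} & destinations
        for c in children.get(node, []):
            out |= dests_below(c)
        return out

    # Unweighted: probability that at least one link of the tree fails
    p_ok = 1.0
    for p in tree.values():
        p_ok *= (1.0 - p)
    print("P(any link fails):", round(1.0 - p_ok, 4))

    # Importance-weighted: expected fraction of destinations disconnected
    loss = sum(p * len(dests_below(v)) / len(destinations)
               for (u, v), p in tree.items())
    print("expected fraction of destinations lost:", round(loss, 4))
    ```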

  18. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities so that the combined sewer overflows (CSO) managed by these companies conform to the regulations. These utility companies face the risk of penalties or negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models, so any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); unlike the failure probability, however, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability instead of the failure probability is tested by comparing the solutions obtained from mean-pbf and mean-pf optimizations.
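
    The buffered failure probability has a simple sample-based estimator: it is the largest fraction of the Monte Carlo output whose upper-tail mean still reaches the threshold, which is why it is conservative relative to the plain exceedance frequency. The sketch below assumes a hypothetical Poisson-distributed count of annual threshold-exceeding spill events; it illustrates the bPOE idea, not the study's actual simulator output.

    ```python
    import numpy as np

    def buffered_failure_probability(samples, threshold):
        """bPOE estimate: largest fraction of the sample whose
        upper-tail mean meets or exceeds the threshold."""
        x = np.sort(np.asarray(samples, dtype=float))[::-1]   # descending
        tail_means = np.cumsum(x) / np.arange(1, x.size + 1)  # non-increasing
        hits = np.nonzero(tail_means >= threshold)[0]
        return (hits[-1] + 1) / x.size if hits.size else 0.0

    rng = np.random.default_rng(1)
    events = rng.poisson(3.0, 10_000)   # annual exceedance-event counts (hypothetical)

    z = 6                               # hypothetical allowed events per year
    pf = np.mean(events > z)
    pbf = buffered_failure_probability(events, z)
    print(f"pf = {pf:.3f}, pbf = {pbf:.3f}  (pbf >= pf by construction)")
    ```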

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    Previously, the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined utilizing 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chao; Xu, Jun; Cao, Lei

    The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.

  1. Investigation of advanced fault insertion and simulator methods

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.; Cottrell, D.

    1986-01-01

    The cooperative agreement partly supported research leading to the open-literature publication cited. Additional efforts under the agreement included research into fault modeling of semiconductor devices. Results of this research are presented in this report and summarized in the following paragraphs. As a result of the cited research, it appears that semiconductor failure mechanism data is abundant but of little use in developing pin-level device models. Failure mode data, on the other hand, does exist but is too sparse to be of any statistical use in developing fault models. What is significant in the failure mode data is that, unlike classical logic, MSI and LSI devices exhibit more than 'stuck-at' and open/short failure modes. Specifically, they are dominated by parametric failures and functional anomalies that can include intermittent faults and multiple-pin failures. The report discusses methods of developing composite pin-level models based on extrapolation of semiconductor device failure mechanisms, failure modes, results of temperature stress testing, and functional modeling. Limitations of this model, particularly with regard to determination of fault detection coverage and latency time measurement, are discussed. Directions for further research are presented.
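
    The 'stuck-at' modeling that the report finds insufficient for MSI/LSI parts is easy to state in code, which also makes the limitation concrete: the sketch below injects stuck-at faults into a hypothetical two-gate pin-level model and lists the input vectors that detect each fault. Parametric drifts, intermittents, and multiple-pin failures simply have no representation in this scheme.

    ```python
    from itertools import product

    def run(a, b, c, fault=None):
        """Tiny pin-level model y = (a AND b) OR c with an optional
        stuck-at fault, given as (net_name, stuck_value)."""
        nets = {'a': a, 'b': b, 'c': c}
        def v(name):                  # read a net through the fault model
            return fault[1] if fault and fault[0] == name else nets[name]
        nets['n1'] = v('a') & v('b')
        nets['y'] = v('n1') | v('c')
        return v('y')

    # Enumerate all single stuck-at faults and the vectors that expose them
    for net, sv in product(('a', 'b', 'c', 'n1', 'y'), (0, 1)):
        tests = [abc for abc in product((0, 1), repeat=3)
                 if run(*abc) != run(*abc, fault=(net, sv))]
        print(f"stuck-at-{sv} on {net}: detected by {tests}")
    ```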

  2. Torque Limits for Fasteners in Composites

    NASA Technical Reports Server (NTRS)

    Zhao, Yi

    2002-01-01

    The two major classes of laminate joints are bonded and bolted; often the two are combined as bonded-bolted joints. Several characteristics of fiber reinforced composite materials render them more susceptible to joint problems than conventional metals. These characteristics include weakness in in-plane shear, transverse tension/compression, interlaminar shear, and bearing strength relative to the strength and stiffness in the fiber direction. Studies on bolted joints of composite materials have focused on joint assemblies subjected to in-plane loads. Modes of failure under these loading conditions are net-tension failure, cleavage tension failure, shear-out failure, bearing failure, etc. Although studies of torque loading can be found in the literature, they mainly discuss the effect of the torque load on in-plane strength. Existing methods for calculating the torque limit for a mechanical fastener do not consider the connected members. The concern that a composite member could be crushed by a preload inspired this study. The purpose is to develop a fundamental knowledge base on how to determine a torque limit when a composite member is taken into account. Two simplified analytical models were used: a stress failure analysis model based on the maximum stress criterion, and a strain failure analysis model based on the maximum strain criterion.
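
    One plausible reading of the resulting procedure: cap the preload at what the composite can bear under the washer footprint, then convert that preload to a torque with the standard short-form relation T = K·F·d. All numbers below are hypothetical placeholders, not values from the study.

    ```python
    import math

    # Hypothetical joint parameters (illustrative only)
    d = 6.35e-3               # fastener nominal diameter [m]
    K = 0.20                  # nut factor, common rule of thumb for dry assembly
    do, di = 14.0e-3, 6.6e-3  # washer outer diameter, clearance hole [m]
    sigma_c = 250e6           # composite through-thickness crush allowable [Pa]
    FS = 2.0                  # factor of safety on crushing

    # Preload limited by bearing (crush) stress under the washer footprint
    bearing_area = math.pi / 4.0 * (do**2 - di**2)
    F_max = sigma_c * bearing_area / FS

    # Short-form torque-preload relation: T = K * F * d
    T_max = K * F_max * d
    print(f"preload limit {F_max/1e3:.1f} kN -> torque limit {T_max:.1f} N*m")
    ```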

  3. Development of an Electronic Medical Record Based Alert for Risk of HIV Treatment Failure in a Low-Resource Setting

    PubMed Central

    Puttkammer, Nancy; Zeliadt, Steven; Balan, Jean Gabriel; Baseman, Janet; Destiné, Rodney; Domerçant, Jean Wysler; France, Garilus; Hyppolite, Nathaelf; Pelletier, Valérie; Raphael, Nernst Atwood; Sherr, Kenneth; Yuhas, Krista; Barnhart, Scott

    2014-01-01

    Background The adoption of electronic medical record systems in resource-limited settings can help clinicians monitor patients' adherence to HIV antiretroviral therapy (ART) and identify patients at risk of future ART failure, allowing resources to be targeted to those most at risk. Methods Among adult patients enrolled on ART from 2005–2013 at two large, public-sector hospitals in Haiti, ART failure was assessed after 6–12 months on treatment, based on the World Health Organization's immunologic and clinical criteria. We identified models for predicting ART failure based on ART adherence measures and other patient characteristics. We assessed performance of candidate models using area under the receiver operating curve, and validated results using a randomly-split data sample. The selected prediction model was used to generate a risk score, and its ability to differentiate ART failure risk over a 42-month follow-up period was tested using stratified Kaplan Meier survival curves. Results Among 923 patients with CD4 results available during the period 6–12 months after ART initiation, 196 (21.2%) met ART failure criteria. The pharmacy-based proportion of days covered (PDC) measure performed best among five possible ART adherence measures at predicting ART failure. Average PDC during the first 6 months on ART was 79.0% among cases of ART failure and 88.6% among cases of non-failure (p<0.01). When additional information including sex, baseline CD4, and duration of enrollment in HIV care prior to ART initiation were added to PDC, the risk score differentiated between those who did and did not meet failure criteria over 42 months following ART initiation. Conclusions Pharmacy data are most useful for new ART adherence alerts within iSanté. Such alerts offer potential to help clinicians identify patients at high risk of ART failure so that they can be targeted with adherence support interventions, before ART failure occurs. PMID:25390044
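
    The pharmacy-based PDC measure that performed best is straightforward to compute: mark each day in a fixed window as covered if a dispensed supply spans it, without double-counting overlapping fills. The refill history below is hypothetical.

    ```python
    import numpy as np

    def proportion_of_days_covered(fills, window_days=180):
        """PDC: fraction of days in the window with medication on hand.
        `fills` is a list of (start_day, days_supplied) pairs; overlapping
        fills are not double-counted."""
        covered = np.zeros(window_days, dtype=bool)
        for start, supply in fills:
            covered[start:start + supply] = True   # slice clips at window edge
        return covered.mean()

    # Hypothetical refill record for the first 6 months on ART
    fills = [(0, 30), (32, 30), (70, 30), (110, 30), (150, 30)]
    print(f"PDC = {proportion_of_days_covered(fills):.1%}")
    ```

    This example yields 83.3%, which sits between the study's reported means for the failure (79.0%) and non-failure (88.6%) groups.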

  4. Development of an electronic medical record based alert for risk of HIV treatment failure in a low-resource setting.

    PubMed

    Puttkammer, Nancy; Zeliadt, Steven; Balan, Jean Gabriel; Baseman, Janet; Destiné, Rodney; Domerçant, Jean Wysler; France, Garilus; Hyppolite, Nathaelf; Pelletier, Valérie; Raphael, Nernst Atwood; Sherr, Kenneth; Yuhas, Krista; Barnhart, Scott

    2014-01-01

    The adoption of electronic medical record systems in resource-limited settings can help clinicians monitor patients' adherence to HIV antiretroviral therapy (ART) and identify patients at risk of future ART failure, allowing resources to be targeted to those most at risk. Among adult patients enrolled on ART from 2005-2013 at two large, public-sector hospitals in Haiti, ART failure was assessed after 6-12 months on treatment, based on the World Health Organization's immunologic and clinical criteria. We identified models for predicting ART failure based on ART adherence measures and other patient characteristics. We assessed performance of candidate models using area under the receiver operating curve, and validated results using a randomly-split data sample. The selected prediction model was used to generate a risk score, and its ability to differentiate ART failure risk over a 42-month follow-up period was tested using stratified Kaplan Meier survival curves. Among 923 patients with CD4 results available during the period 6-12 months after ART initiation, 196 (21.2%) met ART failure criteria. The pharmacy-based proportion of days covered (PDC) measure performed best among five possible ART adherence measures at predicting ART failure. Average PDC during the first 6 months on ART was 79.0% among cases of ART failure and 88.6% among cases of non-failure (p<0.01). When additional information including sex, baseline CD4, and duration of enrollment in HIV care prior to ART initiation were added to PDC, the risk score differentiated between those who did and did not meet failure criteria over 42 months following ART initiation. Pharmacy data are most useful for new ART adherence alerts within iSanté. Such alerts offer potential to help clinicians identify patients at high risk of ART failure so that they can be targeted with adherence support interventions, before ART failure occurs.

  5. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be identified. Examinations of the degradation have been carried out using tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell. The distribution of the Weibull stress at failure was assigned to the integration points using a Monte Carlo simulation. It was shown that this stochastic approach allowed more realistic failure simulations, avoiding the idealised symmetry produced by the deterministic modelling. In particular, the stochastic simulations exhibited variations in the stress and strain at failure and in the failure modes of the yarn.
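
    The stochastic step is compact: draw a Weibull stress at failure for every integration point, so a uniform applied yarn stress breaks a scattered subset of points rather than a symmetric one. Scale and modulus below are hypothetical, not the paper's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    sigma_0 = 2200.0   # hypothetical Weibull scale [MPa]
    m_w = 18.0         # hypothetical Weibull modulus

    n_ip = 10_000                                # integration points in the cell
    strength = sigma_0 * rng.weibull(m_w, n_ip)  # per-point stress at failure

    # A uniform applied yarn stress breaks a stochastic subset of points,
    # seeding asymmetric failure paths instead of idealised symmetric ones.
    applied = 1900.0   # uniform applied yarn stress [MPa]
    print(f"fraction of points failed: {np.mean(applied > strength):.2%}")
    ```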

  6. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model.

  7. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing method to show the effectiveness of the proposed model. PMID:28895905

  8. Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Xu, Jun; Cao, Lei; Wu, Zenan; Santhanagopalan, Shriram

    2017-07-01

    The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. The test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.

  9. Constitutive behavior and progressive mechanical failure of electrodes in lithium-ion batteries

    DOE PAGES

    Zhang, Chao; Xu, Jun; Cao, Lei; ...

    2017-05-05

    The electrodes of lithium-ion batteries (LIB) are known to be brittle and to fail earlier than the separators during an external crush event. Thus, the understanding of mechanical failure mechanism for LIB electrodes (anode and cathode) is critical for the safety design of LIB cells. In this paper, we present experimental and numerical studies on the constitutive behavior and progression of failure in LIB electrodes. Mechanical tests were designed and conducted to evaluate the constitutive properties of porous electrodes. Constitutive models were developed to describe the stress-strain response of electrodes under uniaxial tensile and compressive loads. The failure criterion and a damage model were introduced to model their unique tensile and compressive failure behavior. The failure mechanism of LIB electrodes was studied using the blunt rod test on dry electrodes, and numerical models were built to simulate progressive failure. The different failure processes were examined and analyzed in detail numerically, and correlated with experimentally observed failure phenomena. Finally, the test results and models improve our understanding of failure behavior in LIB electrodes, and provide constructive insights on future development of physics-based safety design tools for battery structures under mechanical abuse.

  10. Numerical simulation of failure behavior of granular debris flows based on flume model tests.

    PubMed

    Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na

    2013-01-01

    In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and by numerical simulations (PFC3D). Model tests revealed that grain size distribution had profound effects on the failure mode: failure of the medium-sand slope started with cracks at the crest and took the form of retrogressive toe sliding, and with an increasing fraction of fine particles in the soil, the failure mode changed to fluidized flow. The discrete element method in PFC3D overcomes the continuum assumption of traditional mechanics and accounts for the characteristics of individual particles. Thus, a numerical simulation model using a liquid-solid coupling method was developed to simulate the debris flow. Compared with the experimental results, the numerical simulations indicated that the failure mode of the medium-sand slope was retrogressive toe sliding, while that of the fine-sand slope was fluidized sliding. The simulation results are consistent with the model tests and theoretical analysis, confirming that grain size distribution causes different failure behaviors of granular debris flows. This research should serve as a guide for exploring the theory of debris flow and improving the prevention and mitigation of debris flows.

  11. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems and studying the underlying network model, the interactions and relationships between components, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. Based on percolation theory, we also study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
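
    The percolation mechanics behind the threshold result can be sketched in a few lines for a single network layer: remove a random fraction of nodes and watch the giant component shrink. For one Erdős-Rényi layer the threshold is f_c = 1 - 1/<k>; the paper's interdependent layers collapse earlier and more abruptly. Network size and degree below are arbitrary choices.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(3)
    n, mean_degree = 2000, 4.0
    G0 = nx.gnp_random_graph(n, mean_degree / n, seed=3)

    # Remove a random fraction f of nodes and track the giant component;
    # for this single ER layer the threshold is f_c = 1 - 1/<k> = 0.75.
    for f in np.linspace(0.0, 0.9, 10):
        G = G0.copy()
        removed = rng.choice(n, size=int(f * n), replace=False)
        G.remove_nodes_from(removed.tolist())
        giant = max((len(c) for c in nx.connected_components(G)), default=0)
        print(f"f = {f:.2f}   giant fraction = {giant / n:.3f}")
    ```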

  12. Savannah River Site generic data base development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, C.H.; Eide, S.A.

    This report describes the results of a project to improve the generic component failure data base for the Savannah River Site (SRS). A representative list of components and failure modes for SRS risk models was generated by reviewing existing safety analyses and component failure data bases and from suggestions from SRS safety analysts. Then sources of data or failure rate estimates were identified and reviewed for applicability. A major source of information was the Nuclear Computerized Library for Assessing Reactor Reliability, or NUCLARR. This source includes an extensive collection of failure data and failure rate estimates for commercial nuclear power plants. A recent Idaho National Engineering Laboratory report on failure data from the Idaho Chemical Processing Plant was also reviewed. From these and other recent sources, failure data and failure rate estimates were collected for the components and failure modes of interest. This information was aggregated to obtain a recommended generic failure rate distribution (mean and error factor) for each component failure mode.
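
    A (mean, error factor) pair of this kind is conventionally read as a lognormal distribution with EF = 95th percentile / median = exp(1.645·σ); recovering the underlying parameters takes two lines. The data entry below is hypothetical.

    ```python
    import math

    def lognormal_from_mean_and_ef(mean, ef):
        """Convert a PRA-style (mean, error factor) pair to lognormal
        parameters, using the convention EF = 95th/median = exp(1.645*sigma)."""
        sigma = math.log(ef) / 1.645
        median = mean * math.exp(-0.5 * sigma**2)
        return median, sigma

    # Hypothetical entry: pump fails to start, mean 3e-3 per demand, EF = 5
    median, sigma = lognormal_from_mean_and_ef(3e-3, 5.0)
    print(f"median = {median:.2e}, sigma = {sigma:.3f}")
    print(f"5th = {median/5.0:.2e}, 95th = {median*5.0:.2e}")
    ```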

  13. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the mean predicted failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The exponential distribution, which has a constant hazard function, is used as the base model for the random failure time. We discuss an example of survival model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the mean failure time for this type of lamp. The data are grouped into several intervals, the mean failure value is obtained for each interval, and the mean failure time of the model is calculated on each interval; the p-value obtained from the goodness-of-fit test is 0.3296.
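
    The exponential base model is the simplest case to fit: the maximum-likelihood hazard is the number of failures divided by the total observed time, and its reciprocal is the mean failure time. The failure times below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical lamp failure times [hours]
    t = np.array([1200.0, 340.0, 2600.0, 880.0, 1500.0, 410.0, 1900.0, 760.0])

    lam = t.size / t.sum()            # MLE of the constant hazard rate
    mttf = 1.0 / lam                  # mean time to failure
    survival = lambda x: np.exp(-lam * x)

    print(f"hazard = {lam:.2e} per hour, MTTF = {mttf:.0f} h")
    print(f"P(survive 1000 h) = {survival(1000.0):.2f}")
    ```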

  14. A Micromechanics-Based Elastoplastic Damage Model for Rocks with a Brittle-Ductile Transition in Mechanical Response

    NASA Astrophysics Data System (ADS)

    Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian

    2018-06-01

    As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, it is found that there is a quite satisfactory linearity between the axial inelastic strain at failure and the confining pressure prescribed. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.

  15. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
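
    The digraph idea reduces to reachability: a node is a candidate root cause only if every observed fault indication lies downstream of it. The propagation digraph and symptom names below are hypothetical.

    ```python
    # Hypothetical failure-propagation digraph: node -> effects
    adj = {
        'pump_A_fail':    ['low_flow', 'low_press_B'],
        'valve_V1_stuck': ['low_press_B', 'high_temp_C'],
        'sensor_T_bias':  ['high_temp_C'],
        'low_flow':       ['high_temp_C'],
        'low_press_B':    [],
        'high_temp_C':    [],
    }

    def reachable(src):
        """All nodes reachable from `src`, including itself."""
        seen, stack = {src}, [src]
        while stack:
            for nxt in adj.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    symptoms = {'low_press_B', 'high_temp_C'}   # observed fault indications
    candidates = [n for n in adj if symptoms <= reachable(n)]
    print("candidate root causes:", candidates)  # pump_A_fail, valve_V1_stuck
    ```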

  16. A probabilistic-based failure model for components fabricated from anisotropic graphite

    NASA Astrophysics Data System (ADS)

    Xiao, Chengfeng

    The nuclear moderators for high temperature nuclear reactors are fabricated from graphite. During reactor operations graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data, and aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and, moreover, was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classical plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states; states of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables, a notion supported by a wealth of failure data in the literature. The mathematical integration of a joint probability density function that depends on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method. Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, and a brief comparison is made to reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.

  17. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based and entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
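
    The signal-based core of such a scheme is a persistence-filtered sigma test: declare a failure when a measurement stays more than k standard deviations from its predetermined mean for several consecutive samples. Channel statistics and thresholds below are hypothetical, not SAFD's tuned values.

    ```python
    import numpy as np

    def sigma_test(signal, mean, std, k=4.0, persist=3):
        """Return the sample index where |x - mean| > k*std has held for
        `persist` consecutive samples, or None. Persistence suppresses
        single-sample noise spikes."""
        run = 0
        for i, x in enumerate(signal):
            run = run + 1 if abs(x - mean) > k * std else 0
            if run >= persist:
                return i
        return None

    rng = np.random.default_rng(4)
    x = rng.normal(3200.0, 15.0, 500)   # hypothetical engine measurement channel
    x[350:] -= 120.0                    # injected anomaly: sustained shift

    print("failure declared at sample:", sigma_test(x, mean=3200.0, std=15.0))
    ```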

  18. Landslide early warning based on failure forecast models: the example of Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-02-01

    We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements over different temporal windows and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method and show an example of its application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on this case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of a landslide emergency.
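
    The inverse velocity method itself fits a line to 1/v against time and takes the zero crossing as the failure time; repeating the fit over windows of different lengths yields the spread that the confidence intervals summarize. A minimal sketch on synthetic data (true failure at t = 35):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0.0, 30.0)                      # days
    v = 1.0 / (0.5 * (35.0 - t))                  # accelerating rate [mm/day]
    v *= rng.normal(1.0, 0.05, t.size)            # measurement noise

    def forecast_failure(t, v):
        """Fukuzono-style fit: extrapolate 1/v linearly to zero."""
        slope, intercept = np.polyfit(t, 1.0 / v, 1)
        return -intercept / slope

    # Fits over trailing windows of different lengths give a spread of
    # estimates, the raw material for a confidence interval.
    estimates = [forecast_failure(t[-w:], v[-w:]) for w in (10, 15, 20, 25)]
    print("failure-time estimates [day]:", np.round(estimates, 2))
    ```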

  19. A Review of Statistical Failure Time Models with Application of a Discrete Hazard Based Model to 1Cr1Mo-0.25V Steel for Turbine Rotors and Shafts

    PubMed Central

    2017-01-01

    Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (that only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard based statistical model—the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter being chosen because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chances of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773

  20. Numerical Implementation of a Multiple-ISV Thermodynamically-Based Work Potential Theory for Modeling Progressive Damage and Failure in Fiber-Reinforced Laminates

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.

    2011-01-01

    A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Damage is considered to be the effect of any structural changes in a material that manifest as pre-peak non-linearity in the stress versus strain response. Conversely, failure is taken to be the effect of the evolution of any mechanisms that results in post-peak strain softening. It is assumed that matrix microdamage is the dominant damage mechanism in continuous fiber-reinforced polymer matrix laminates, and its evolution is controlled with a single ISV. Three additional ISVs are introduced to account for failure due to mode I transverse cracking, mode II transverse cracking, and mode I axial failure. Typically, failure evolution (i.e., post-peak strain softening) results in pathologically mesh dependent solutions within a finite element method (FEM) setting. Therefore, consistent characteristic element lengths are introduced into the formulation of the evolution of the three failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into commercial FEM software. Objectivity of the total energy dissipated during the failure process, with regard to refinements in the FEM mesh, is demonstrated. The model is also verified against experimental results from two laminated, T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared to the experiments.

  1. Stochastic damage evolution in textile laminates

    NASA Technical Reports Server (NTRS)

    Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.

    1993-01-01

    A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximal strain failure criterion. Three modes of failure, i.e., fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter of fiber orientation on cell properties is discussed. Weave influence on damage accumulation is illustrated with the example of a Kevlar/epoxy laminate.

  2. Regression analysis of clustered failure time data with informative cluster size under the additive transformation models.

    PubMed

    Chen, Ling; Feng, Yanqin; Sun, Jianguo

    2017-10-01

    This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.

  3. An elastic failure model of indentation damage. [of brittle structural ceramics

    NASA Technical Reports Server (NTRS)

    Liaw, B. M.; Kobayashi, A. S.; Emery, A. F.

    1984-01-01

    A mechanistically consistent model for indentation damage based on elastic failure at tensile or shear overloads, is proposed. The model accommodates arbitrary crack orientation, stress relaxation, reduction and recovery of stiffness due to crack opening and closure, and interfacial friction due to backward sliding of closed cracks. This elastic failure model was implemented by an axisymmetric finite element program which was used to simulate progressive damage in a silicon nitride plate indented by a tungsten carbide sphere. The predicted damage patterns and the permanent impression matched those observed experimentally. The validation of this elastic failure model shows that the plastic deformation postulated by others is not necessary to replicate the indentation damage of brittle structural ceramics.

  4. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    The model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-centered preventive maintenance. The usual Weibull distribution is, however, not capable of modeling the complete life cycle of a system with a bathtub-shaped failure rate function. In this paper, failure rate and reliability analyses of the KOMATSU hydraulic excavator/shovel in a surface mine are presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.

  5. Electromechanical actuators affected by multiple failures: Prognostic method based on spectral analysis techniques

    NASA Astrophysics Data System (ADS)

    Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.

    2017-06-01

    Prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMA) are beneficial for anticipating the incoming failure: an early and correct interpretation of the failure degradation pattern can trigger an early alert to the maintenance crew, who can properly schedule the servomechanism replacement. An innovative prognostic model-based approach, able to recognize progressive EMA degradations before anomalous behaviors become critical, is proposed: the Fault Detection and Identification (FDI) of the considered incipient failures is performed by analyzing proper system operational parameters, able to reveal the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters are correlated with the actual EMA health condition by means of failure maps created by a reference monitoring model-based algorithm. In this work, the proposed method has been tested on an EMA affected by combined progressive failures: in particular, partial stator single-phase turn-to-turn short-circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test bench has been developed. Results show that the method exhibits adequate robustness and a high degree of confidence in its ability to identify a malfunction early, minimizing the risk of false alarms or missed failures.
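
    The spectral-analysis step can be illustrated with a synthetic phase current: track the amplitude of a fault-related spectral line against its healthy baseline. The third harmonic is used here purely as a stand-in signature; real turn-to-turn shorts and eccentricity appear at specific sideband frequencies, and all parameters are hypothetical.

    ```python
    import numpy as np

    fs, f0, dur = 5000.0, 50.0, 2.0
    t = np.arange(0.0, dur, 1.0 / fs)

    def phase_current(fault_amp, seed=6):
        """Synthetic current; the fault injects a 3rd-harmonic component."""
        noise = np.random.default_rng(seed).normal(scale=0.01, size=t.size)
        return (np.sin(2 * np.pi * f0 * t)
                + fault_amp * np.sin(2 * np.pi * 3 * f0 * t) + noise)

    def line_amplitude(x, f):
        """Single-sided FFT amplitude at the bin closest to frequency f."""
        spec = 2.0 * np.abs(np.fft.rfft(x)) / x.size
        freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
        return spec[np.argmin(np.abs(freqs - f))]

    healthy = line_amplitude(phase_current(0.00), 3 * f0)
    faulty = line_amplitude(phase_current(0.05), 3 * f0)
    print(f"3*f0 line: healthy {healthy:.4f} -> faulty {faulty:.4f}")
    ```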

  6. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (called DCFRM) is proposed, integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design approach with DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that the DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  7. Safety evaluation of driver cognitive failures and driving errors on right-turn filtering movement at signalized road intersections based on Fuzzy Cellular Automata (FCA) model.

    PubMed

    Chai, Chen; Wong, Yiik Diew; Wang, Xuesong

    2017-07-01

    This paper proposes a simulation-based approach to estimate the safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which involves linguistic terms and uncertainty, is incorporated with a Cellular Automata model to simulate the decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study provides a novel approach to linguistically assess cognitions and replicate the decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures and of the impact of cognitions on driving errors and safety performance.

  8. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
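
    The BPT distribution is the inverse Gaussian with mean μ and shape μ/α², so the conditional probability of an event in the coming year, given the time already elapsed, follows directly from its cdf. The sketch below uses the paper's generic α = 0.5 with a hypothetical mean recurrence time, not the Parkfield calculation itself.

    ```python
    from scipy.stats import invgauss

    mu, alpha = 25.0, 0.5    # hypothetical mean recurrence [yr]; generic aperiodicity
    shape = mu / alpha**2    # inverse-Gaussian shape parameter
    # scipy's invgauss(m, scale=s) has mean m*s and shape s
    dist = invgauss(mu / shape, scale=shape)

    # P(event in (T, T + dT] | no event up to time T)
    T, dT = 20.0, 1.0
    p = (dist.cdf(T + dT) - dist.cdf(T)) / dist.sf(T)
    print(f"P(event in next {dT:.0f} yr | quiet {T:.0f} yr) = {p:.3f}")
    ```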

  9. GenSo-EWS: a novel neural-fuzzy based early warning system for predicting bank failures.

    PubMed

    Tung, W L; Quek, C; Cheng, P

    2004-05-01

    Bank failure prediction is an important issue for the regulators of the banking industries. The collapse and failure of a bank could trigger adverse financial repercussions and generate negative impacts such as a massive bail-out cost for the failing bank and loss of confidence from investors and depositors. Very often, bank failures are due to financial distress. Hence, it is desirable to have an early warning system (EWS) that identifies potential bank failure or high-risk banks through the traits of financial distress. Various traditional statistical models have been employed to study bank failures [J Finance 1 (1975) 21; J Banking Finance 1 (1977) 249; J Banking Finance 10 (1986) 511; J Banking Finance 19 (1995) 1073]. However, these models do not have the capability to identify the characteristics of financial distress and thus function as black boxes. This paper proposes the use of a new neural fuzzy system [Foundations of neuro-fuzzy systems, 1997], namely the Generic Self-organising Fuzzy Neural Network (GenSoFNN) [IEEE Trans Neural Networks 13 (2002c) 1075] based on the compositional rule of inference (CRI) [Commun ACM 37 (1975) 77], as an alternative to predict banking failure. The CRI-based GenSoFNN neural fuzzy network, henceforth denoted as GenSoFNN-CRI(S), functions as an EWS and is able to identify the inherent traits of financial distress based on financial covariates (features) derived from publicly available financial statements. The interaction between the selected features is captured in the form of highly intuitive IF-THEN fuzzy rules. Such easily comprehensible rules provide insights into the possible characteristics of financial distress and form the knowledge base for a highly desired EWS that aids bank regulation. The performance of the GenSoFNN-CRI(S) network is subsequently benchmarked against that of the Cox's proportional hazards model [J Banking Finance 10 (1986) 511; J Banking Finance 19 (1995) 1073], the multi-layered perceptron (MLP) and the modified cerebellar model articulation controller (MCMAC) [IEEE Trans Syst Man Cybern: Part B 30 (2000) 491] in predicting bank failures based on a population of 3635 US banks observed over a 21-year period. Three sets of experiments are performed: bank failure classification based on the last available financial record, and prediction using financial records one and two years prior to the last available financial statements. The performance of the GenSoFNN-CRI(S) network as a bank failure classifier and EWS is encouraging.

  10. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
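
    The core loop of such a reliability-performance model is a renewal simulation: sample a time to failure, accumulate repair downtime, and repeat until the simulation horizon. The distributions and parameters below are hypothetical placeholders, not SNL's calibrated Arizona-plant values.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sim_years, n_components = 25.0, 10
    mtbf = 12.0                   # hypothetical mean time between failures [yr]
    repair_median = 14.0 / 365.0  # hypothetical median repair time [yr]

    def component_failures():
        """Failure count for one component over the simulation period."""
        t, count = 0.0, 0
        while True:
            t += rng.exponential(mtbf)                     # run to next failure
            if t > sim_years:
                return count
            count += 1
            t += repair_median * rng.lognormal(0.0, 0.5)   # repair downtime

    total = sum(component_failures() for _ in range(n_components))
    print(f"simulated failures over {sim_years:.0f} yr: {total}")
    ```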

  11. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  12. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  13. Failure detection and correction for turbofan engines

    NASA Technical Reports Server (NTRS)

    Corley, R. C.; Spang, H. A., III

    1977-01-01

    In this paper, a failure detection and correction strategy for turbofan engines is discussed. This strategy allows continued control of the engine in the event of a sensor failure. An extended Kalman filter is used to provide the best estimate of the state of the engine based on currently available sensor outputs. Should a sensor failure occur, the control is based on the best estimate rather than the sensor output. The extended Kalman filter consists essentially of two parts: a nonlinear model of the engine and update logic that causes the model to track the actual engine. Details on the model and update logic are presented. To allow implementation, approximations are made to the feedback gain matrix, resulting in a single feedback matrix suitable for use over the entire flight envelope. The effect of these approximations on stability and response is discussed. Results from a detailed nonlinear simulation indicate that good control can be maintained even under multiple failures.
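
    As an illustration of the accommodation logic described above, the sketch below runs a constant-gain (steady-state) filter on a small linear surrogate of the engine. The two-state dynamics, gain matrix, and residual threshold are illustrative assumptions rather than the paper's engine model; the actual extended Kalman filter would relinearize a nonlinear engine model at each step.

    ```python
    import numpy as np

    # Assumed discrete-time surrogate "engine": x = [spool speed, fuel state]
    A = np.array([[0.95, 0.10],
                  [0.00, 0.90]])
    B = np.array([[0.0], [0.1]])
    C = np.eye(2)                      # one sensor per state
    L = np.array([[0.4, 0.0],          # single fixed filter gain, standing in
                  [0.0, 0.4]])         # for the envelope-wide feedback matrix
    THRESH = 0.5                       # residual threshold (assumed)

    rng = np.random.default_rng(0)
    x_true, x_hat = np.zeros(2), np.zeros(2)
    failed = np.zeros(2, dtype=bool)

    for k in range(120):
        u = np.array([1.0])
        x_true = A @ x_true + B @ u
        y = C @ x_true + rng.normal(0.0, 0.05, 2)
        if k > 60:
            y[0] = 0.0                 # simulated hard-over failure of sensor 0

        x_pred = A @ x_hat + B @ u     # model prediction
        r = y - C @ x_pred             # innovations (residuals)
        failed |= np.abs(r) > THRESH   # flag sensors with excessive residuals
        r[failed] = 0.0                # failed sensors no longer update the model
        x_hat = x_pred + L @ r

        # the control law uses the estimate wherever a sensor is failed:
        y_for_control = np.where(failed, C @ x_hat, y)
    ```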

  14. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    NASA Astrophysics Data System (ADS)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failures of asynchronous motors and their effects on motor current spectra. A fuzzy logic approach, i.e., a model-based technique, can help detect asynchronous motor failures. Fuzzy logic resembles human reasoning in that it enables inferences from vague data through linguistic rules. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failures and open-phase failures. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.
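
    The toy classifier below illustrates the flavor of such a fuzzy-logic detector. The two input features (normalized negative-sequence current and phase-current unbalance) and all membership breakpoints are assumptions made for illustration; the paper's classifier is built from the measured motor current spectrum.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def classify(neg_seq, unbalance):
        # memberships for "low" and "high" on each (normalized) feature
        low_ns, high_ns = tri(neg_seq, -0.2, 0.0, 0.3), tri(neg_seq, 0.2, 1.0, 1.8)
        low_ub, high_ub = tri(unbalance, -0.2, 0.0, 0.3), tri(unbalance, 0.2, 1.0, 1.8)
        # Mamdani-style rules, min() as the fuzzy AND
        scores = {
            "healthy":    min(low_ns, low_ub),
            "inter-turn": min(high_ns, low_ub),   # inter-turn fault raises neg.-seq. current
            "open-phase": min(high_ns, high_ub),  # open phase also unbalances line currents
        }
        return max(scores, key=scores.get), scores

    label, scores = classify(neg_seq=0.9, unbalance=0.1)
    print(label, scores)   # -> "inter-turn" for high neg.-seq., low unbalance
    ```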

  15. Lyapunov-Based Sensor Failure Detection And Recovery For The Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2001-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  16. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  17. Improvement of Progressive Damage Model to Predicting Crashworthy Composite Corrugated Plate

    NASA Astrophysics Data System (ADS)

    Ren, Yiru; Jiang, Hongyong; Ji, Wenyuan; Zhang, Hanyu; Xiang, Jinwu; Yuan, Fuh-Gwo

    2018-02-01

    To predict the crashworthiness of composite corrugated plates, different single and stacked shell models are evaluated and compared, and a stacked shell progressive damage model combined with continuum damage mechanics is proposed and investigated. To simulate and predict the failure behavior, both intra- and inter-laminar failure mechanisms are considered. The tiebreak contact method, 1D spot weld elements, and cohesive elements are adopted in the stacked shell models, and a surface-based cohesive behavior is used to capture delamination in the proposed model. The impact loads and failure behaviors of the proposed and conventional progressive damage models are demonstrated. Results show that the single shell model can reproduce the impact load curve but cannot simulate delamination. The general stacked shell model can simulate the interlaminar failure behavior. The improved stacked shell model with continuum damage mechanics and cohesive elements not only agrees well with the measured impact load, but also captures the fiber failure, matrix debonding, and interlaminar failure of the composite structure.

  18. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Interaction

    NASA Technical Reports Server (NTRS)

    DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  19. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    NASA Technical Reports Server (NTRS)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  20. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper-layer applications with lower detection overhead. In P2P systems, the complexity of the network and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed, and on this basis an adaptive failure detector (B-AFD) is proposed that can meet quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and to adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
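
    For orientation, the sketch below implements a Chen-style adaptive heartbeat detector (the baseline family that B-AFD is compared against, not B-AFD itself): the next heartbeat's expected arrival time is estimated from a sliding window of past arrivals, and a safety margin alpha trades detection time against accuracy. The window size and margin are illustrative assumptions.

    ```python
    from collections import deque

    class AdaptiveFailureDetector:
        def __init__(self, period, alpha, window=100):
            self.period = period          # nominal heartbeat sending period
            self.alpha = alpha            # safety margin added to expected arrival
            self.arrivals = deque(maxlen=window)

        def heartbeat(self, arrival_time):
            self.arrivals.append(arrival_time)

        def expected_next(self):
            # Chen-style estimate: mean arrival offset plus one more period
            n = len(self.arrivals)
            mean = sum(t - i * self.period for i, t in enumerate(self.arrivals)) / n
            return mean + n * self.period

        def suspects(self, now):
            """True if the peer should currently be suspected as failed."""
            return now > self.expected_next() + self.alpha

    fd = AdaptiveFailureDetector(period=1.0, alpha=0.5)
    for k in range(10):
        fd.heartbeat(k * 1.0 + 0.02 * k)   # slightly drifting arrivals
    print(fd.suspects(now=10.1))           # False: still within the margin
    print(fd.suspects(now=12.0))           # True: freshness point exceeded
    ```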

  1. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of existing approaches. PMID:25722717

  2. Functional correlation approach to operational risk in banking organizations

    NASA Astrophysics Data System (ADS)

    Kühn, Reimer; Neu, Peter

    2003-05-01

    A Value-at-Risk-based model is proposed to compute the adequate equity capital necessary to cover potential losses due to operational risks, such as human and system process failures, in banking organizations. Exploring the analogy to a lattice gas model from physics, correlations between sequential failures are modeled as functionally defined, heterogeneous couplings between mutually supportive processes. In contrast to traditional risk models for market and credit risk, where correlations are described as equal-time correlations by a covariance matrix, the dynamics of the model shows collective phenomena such as bursts and avalanches of process failures.
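
    A toy Monte Carlo version of this mechanism is sketched below: each process fails with a probability that grows with the number of failed processes supporting it, so failures arrive in bursts and avalanches. The network size, coupling strength, and failure/repair rates are illustrative assumptions, not the paper's calibrated model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    J = 0.4 * (rng.random((N, N)) < 0.1)    # sparse support couplings J_ij >= 0
    base = 0.01                             # intrinsic per-step failure probability
    repair = 0.5                            # per-step repair probability

    state = np.zeros(N, dtype=bool)         # True = process is down
    sizes = []
    for t in range(10_000):
        load = J @ state                    # stress imported from failed suppliers
        p_fail = 1 - (1 - base) * np.exp(-load)   # failure prob grows with load
        new_fail = (~state) & (rng.random(N) < p_fail)
        recovered = state & (rng.random(N) < repair)
        state = (state | new_fail) & ~recovered
        sizes.append(int(state.sum()))

    # bursty behavior shows up as a heavy upper tail in the loss distribution:
    print(np.percentile(sizes, [50, 95, 99.9]))
    ```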

  3. Reliable Communication Models in Interdependent Critical Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun

    Modern critical infrastructure networks are becoming increasingly interdependent, where the failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, the communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in interdependency studies, and no previous models have captured enough practical features of critical infrastructure networks. In this paper, we study the interdependencies between a communication network and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.

  4. A Thermodynamically-Based Mesh Objective Work Potential Theory for Predicting Intralaminar Progressive Damage and Failure in Fiber-Reinforced Laminates

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.

    2012-01-01

    A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Damage is considered to be the effect of any structural changes in a material that manifest as pre-peak non-linearity in the stress versus strain response. Conversely, failure is taken to be the effect of the evolution of any mechanisms that result in post-peak strain softening. It is assumed that matrix microdamage is the dominant damage mechanism in continuous fiber-reinforced polymer matrix laminates, and its evolution is controlled with a single ISV. Three additional ISVs are introduced to account for failure due to mode I transverse cracking, mode II transverse cracking, and mode I axial failure. Typically, failure evolution (i.e., post-peak strain softening) results in pathologically mesh-dependent solutions within a finite element method (FEM) setting. Therefore, consistent characteristic element lengths are introduced into the formulation of the evolution of the three failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into commercial FEM software. Objectivity of the total energy dissipated during the failure process, with regard to refinements in the FEM mesh, is demonstrated. The model is also verified against experimental results from two laminated T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared to the experiments.

  5. Implementation of a Helicopter Flight Simulator with Individual Blade Control

    NASA Astrophysics Data System (ADS)

    Zinchiak, Andrew G.

    2011-12-01

    Nearly all modern helicopters are designed with a swashplate-based system for control of the main rotor blades. However, the swashplate-based approach does not provide the level of redundancy necessary to cope with abnormal actuator conditions. For example, if an actuator fails (becomes locked) on the main rotor, the cyclic inputs are consequently fixed and the helicopter may become stuck in a flight maneuver. This can obviously be seen as a catastrophic failure, and would likely lead to a crash. These types of failures can be overcome with the application of individual blade control (IBC). IBC is achieved using the blade pitch control method, which provides complete authority of the aerodynamic characteristics of each rotor blade at any given time by replacing the normally rigid pitch links between the swashplate and the pitch horn of the blade with hydraulic or electronic actuators. Thus, IBC can provide the redundancy necessary for subsystem failure accommodation. In this research effort, a simulation environment is developed to investigate the potential of the IBC main rotor configuration for fault-tolerant control. To examine the applications of IBC to failure scenarios and fault-tolerant controls, a conventional, swashplate-based linear model is first developed for hover and forward flight scenarios based on the UH-60 Black Hawk helicopter. The linear modeling techniques for the swashplate-based helicopter are then adapted and expanded to include IBC. Using these modified techniques, an IBC based mathematical model of the UH-60 helicopter is developed for the purposes of simulation and analysis. The methodology can be used to model and implement a different aircraft if geometric, gravimetric, and general aerodynamic data are available. Without the kinetic restrictions of the swashplate, the IBC model effectively decouples the cyclic control inputs between different blades. Simulations of the IBC model prove that the primary control functions can be manually reconfigured after local actuator failures are initiated, thus preventing a catastrophic failure or crash. Furthermore, this simulator promises to be a useful tool for the design, testing, and analysis of fault-tolerant control laws.

  6. Developing a CD-CBM Anticipatory Approach for Cavitation - Defining a Model-Based Descriptor Consistent Across Processes, Phase 1 Final Report Context-Dependent Prognostics and Health Assessment: A New Paradigm for Condition-based Maintenance SBIR Topic No. N98-114

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B.; Kercel, S.W.

    1999-06-01

    The objective of this research, and subsequent testing, was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor is based on the physics of the phenomena, capturing the salient features of the process dynamics. The test methodology and approach were developed to make the cavitation features the dominant effect in the process and the collected signatures. This would allow the accurate characterization of the salient cavitation features at different operational states. By developing such an abstraction, these attributes can be used as a general diagnostic for a system or any of its components. In this study, the particular focus will be pumps. As many as 90% of pump failures are catastrophic: pumps seem to be operating normally and fail abruptly without warning. This is true whether the failure is sudden hardware damage requiring repair, such as a gasket failure, or a transition into an undesired operating mode, such as cavitation. This means that conventional diagnostic methods fail to predict 90% of incipient failures and that, in addressing this problem, model-based methods can add value where it is actually needed.

  7. Analysis and Characterization of Damage and Failure Utilizing a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Khaled, Bilal; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in state-of-the-art composite impact models is under development. In particular, a next generation composite impact material model, jointly developed by the FAA and NASA, is being implemented into the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage, and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure, as opposed to specifying discrete input parameters (such as modulus and strength). The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in the various coordinate directions. Because the plasticity and damage models are uncoupled, test procedures and methods to both characterize the damage model and to convert the material stress-strain curves from the true (damaged) stress space to the effective (undamaged) stress space have been developed. A methodology has been developed to input the experimentally determined composite failure surface in a tabulated manner. An analytical approach is then utilized to track how close the current stress state is to the failure surface.

  8. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C0 plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.

  9. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work systematically investigates traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity, and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and the discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches, in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and to identify its consequences for the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions between the traction- and stress-based approaches are classified. Closed-form results under the plane stress condition are also given. A generic failure criterion of either elliptic, parabolic or hyperbolic type is analyzed in a unified manner, with the classical von Mises (J2), Drucker-Prager, Mohr-Coulomb and many other frequently employed criteria recovered as its particular cases.

  10. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

    Theoretical research on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems, spanning independently evolving research directions based on physics-based models of mechanical, electromechanical, and electronic devices and their operational constraints.

  11. The use of subjective expert opinions in cost optimum design of aerospace structures. [probabilistic failure models

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1975-01-01

    The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.

  12. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  13. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, the lifetime characteristics of equipment in a system differ, and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method that combines multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logical correlations. This method is applied to the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930

  14. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, the lifetime characteristics of equipment in a system differ, and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method that combines multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logical correlations. This method is applied to the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.
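
    A minimal discrete sketch of the core idea described in the two records above: accidental (shock) failures and degradation failures are parent nodes of a component in a small Bayesian network, their coupling is expressed in the component's conditional probability table (CPT), and components combine through the system's logical structure. All probabilities below are illustrative assumptions.

    ```python
    from itertools import product

    P_acc = {True: 0.02, False: 0.98}   # accidental (shock) failure
    P_deg = {True: 0.10, False: 0.90}   # degradation beyond threshold
    # CPT: P(component fails | accidental, degradation); the coupling makes the
    # combination worse than independence alone would predict.
    P_comp = {(True, True): 0.99, (True, False): 0.60,
              (False, True): 0.30, (False, False): 0.01}

    def p_component_fails():
        """Marginal failure probability by enumerating the parent nodes."""
        return sum(P_acc[a] * P_deg[d] * P_comp[(a, d)]
                   for a, d in product([True, False], repeat=2))

    # a two-component series system fails if either component fails
    # (components assumed independent here for brevity)
    p1 = p_component_fails()
    p_system = 1 - (1 - p1) ** 2
    print(f"P(component) = {p1:.4f}, P(series system) = {p_system:.4f}")
    ```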

  15. Semiparametric modeling and estimation of the terminal behavior of recurrent marker processes before failure events.

    PubMed

    Chan, Kwun Chuen Gary; Wang, Mei-Cheng

    2017-01-01

    Recurrent event processes with marker measurements are largely studied with forward-time models starting from an initial event. Interestingly, the processes can exhibit important terminal behavior during a time period before the occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior with a backward-time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large-sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV-infected individuals in the last six months of life.
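
    A plausible formalization of the three levels is sketched below in LaTeX; the notation (T for the failure time, N(u) for the backward recurrent event count at backward time u, Y(u) for the marker, Z for covariates) is assumed here and may differ from the authors' own.

    ```latex
    \begin{align*}
      \text{(i) failure time:} \quad
        & \lambda(t \mid Z) = \lambda_0(t)\, e^{\beta^\top Z} \\
      \text{(ii) backward recurrent events:} \quad
        & E[\mathrm{d}N(u) \mid Z] = e^{\gamma^\top Z}\, \mathrm{d}R_0(u) \\
      \text{(iii) marker at a backward event:} \quad
        & E[Y(u) \mid \mathrm{d}N(u) = 1,\, Z] = \mu_0(u)\, e^{\alpha^\top Z}
    \end{align*}
    ```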

  16. A Competing Risk Model of First Failure Site after Definitive Chemoradiation Therapy for Locally Advanced Non-Small Cell Lung Cancer.

    PubMed

    Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M

    2018-04-01

    The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess the risk of each lesion being the first site of failure. The two models were used in combination to predict the probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48, p < 0.001). Multivariable logistic regression of individual lesions at the time of first failure showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17, p < 0.001). Increasing peak standardized uptake value was significantly associated with lesion failure (OR = 1.26 per unit increase, 95% CI: 1.12-1.40, p < 0.001). The electronic model is available at http://bit.ly/LungModelFDG. We developed a failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models.

  17. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
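
    A minimal sketch of the single-model test: compare observations with draws from the model's predictive distribution through their empirical distribution functions and declare failure when the divergence is significant. Here a two-sample Kolmogorov-Smirnov test stands in for the paper's statistic, and the data are simulated for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    predicted = rng.normal(loc=0.0, scale=1.0, size=5000)   # model ensemble draws
    observed = rng.normal(loc=0.6, scale=1.0, size=40)      # shifted "reality"

    # KS distance between the two empirical distribution functions
    stat, p_value = stats.ks_2samp(observed, predicted)
    if p_value < 0.05:
        print(f"prediction failure detected (D={stat:.2f}, p={p_value:.3g})")
    ```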

  18. Software For Fault-Tree Diagnosis Of A System

    NASA Technical Reports Server (NTRS)

    Iverson, Dave; Patterson-Hine, Ann; Liao, Jack

    1993-01-01

    The Fault Tree Diagnosis System (FTDS) computer program is an automated diagnostic program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan's and Viswanadham's methodology for knowledge acquisition and reasoning in analyzing failures of systems. The knowledge base of if/then rules is replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of the causes of failures and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.

  19. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  20. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  1. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
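
    The k-out-of-n contrast quoted above can be sketched with a binomial tail, shown below. Machines are assumed independent with exponential time-to-failure and, unlike the paper's reward models, no repair, so the numbers are purely illustrative of why a 3-out-of-7 criterion degrades far more slowly than 7-out-of-7; the MTBF is an assumed value, not the measured VAXcluster figure.

    ```python
    from math import comb, exp

    def k_out_of_n(k, n, r):
        """P(at least k of n machines up), each up independently w.p. r."""
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    MTBF = 180.0  # hours, assumed per-machine mean time to failure
    for hours in (18.0, 72.0, 240.0):
        r = exp(-hours / MTBF)       # exponential survival, no repair
        print(f"t = {hours:5.0f} h   7-of-7: {k_out_of_n(7, 7, r):.3f}   "
              f"3-of-7: {k_out_of_n(3, 7, r):.3f}")
    ```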

  2. Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method

    PubMed Central

    Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan

    2018-01-01

    Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting loss of life. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined from previous studies to be suitable for Chinese dams. The proposed method and model are applied to a reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. Risk analysis of dam failure is essential for reducing the probability of dam failure and improving the level of dam risk management. PMID:29710824

  3. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high-quality capacitors is much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure is suggested. The effect of defect size is analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
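
    For orientation, the standard voltage-temperature acceleration form commonly used to extrapolate MLCC HALT results (the Prokopowicz-Vaskas relation) is sketched below. The exponent n and activation energy Ea are assumed textbook values that, as the paper argues, hold only for intrinsic degradation and can break down for parts with defects.

    ```python
    from math import exp

    K_BOLTZ = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(v_test, v_use, t_test, t_use, n=3.0, ea=1.1):
        """TTF(use) / TTF(test); n and ea are assumed, not measured, values."""
        voltage = (v_test / v_use) ** n
        thermal = exp((ea / K_BOLTZ) * (1.0 / t_use - 1.0 / t_test))
        return voltage * thermal

    af = acceleration_factor(v_test=2 * 6.3, v_use=6.3,      # 2x rated voltage
                             t_test=165 + 273.15, t_use=85 + 273.15)
    print(f"AF = {af:.0f}: 100 h at HALT ~ {100 * af / 8760:.1f} years at use")
    ```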

  4. Heart failure patients' attitudes, beliefs, expectations and experiences of self-management strategies: a qualitative synthesis.

    PubMed

    Wingham, Jennifer; Harding, Geoff; Britten, Nicky; Dalal, Hayes

    2014-06-01

    To develop a model of heart failure patients' attitudes, beliefs, expectations, and experiences based on published qualitative research that could influence the development of self-management strategies. A synthesis of 19 qualitative research studies using the method of meta-ethnography. This synthesis offers a conceptual model of the attitudes, beliefs, and expectations of patients with heart failure. Patients experienced a sense of disruption before developing a mental model of heart failure. Patients' reactions included becoming a strategic avoider, a selective denier, a well-intentioned manager, or an advanced self-manager. Patients responded by forming self-management strategies and finally assimilated the strategies into everyday life seeking to feel safe. This conceptual model suggests that there are a range of interplaying factors that facilitate the process of developing self-management strategies. Interventions should take into account patients' concepts of heart failure and their subsequent reactions.

  5. Vocal fold tissue failure: preliminary data and constitutive modeling.

    PubMed

    Chan, Roger W; Siegmund, Thomas

    2004-08-01

    In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of the tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated nonlinear stress-strain and rate-dependent failure response, with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.

  6. Uncertainty and Intelligence in Computational Stochastic Mechanics

    NASA Technical Reports Server (NTRS)

    Ayyub, Bilal M.

    1996-01-01

    Classical structural reliability assessment techniques are based on precise and crisp (sharp) definitions of failure and non-failure (survival) of a structure in meeting a set of strength, function and serviceability criteria. These definitions are provided in the form of performance functions and limit state equations. Thus, the criteria provide a dichotomous definition of what real physical situations represent, in the form of abrupt change from structural survival to failure. However, based on observing the failure and survival of real structures according to the serviceability and strength criteria, the transition from a survival state to a failure state and from serviceability criteria to strength criteria are continuous and gradual rather than crisp and abrupt. That is, an entire spectrum of damage or failure levels (grades) is observed during the transition to total collapse. In the process, serviceability criteria are gradually violated with monotonically increasing level of violation, and progressively lead into the strength criteria violation. Classical structural reliability methods correctly and adequately include the ambiguity sources of uncertainty (physical randomness, statistical and modeling uncertainty) by varying amounts. However, they are unable to adequately incorporate the presence of a damage spectrum, and do not consider in their mathematical framework any sources of uncertainty of the vagueness type. Vagueness can be attributed to sources of fuzziness, unclearness, indistinctiveness, sharplessness and grayness; whereas ambiguity can be attributed to nonspecificity, one-to-many relations, variety, generality, diversity and divergence. Using the nomenclature of structural reliability, vagueness and ambiguity can be accounted for in the form of realistic delineation of structural damage based on subjective judgment of engineers. For situations that require decisions under uncertainty with cost/benefit objectives, the risk of failure should depend on the underlying level of damage and the uncertainties associated with its definition. A mathematical model for structural reliability assessment that includes both ambiguity and vagueness types of uncertainty was suggested to result in the likelihood of failure over a damage spectrum. The resulting structural reliability estimates properly represent the continuous transition from serviceability to strength limit states over the ultimate time exposure of the structure. In this section, a structural reliability assessment method based on a fuzzy definition of failure is suggested to meet these practical needs. A failure definition can be developed to indicate the relationship between failure level and structural response. In this fuzzy model, a subjective index is introduced to represent all levels of damage (or failure). This index can be interpreted as either a measure of failure level or a measure of a degree of belief in the occurrence of some performance condition (e.g., failure). The index allows expressing the transition state between complete survival and complete failure for some structural response based on subjective evaluation and judgment.
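
    A minimal numerical sketch of the fuzzy failure definition described above: a membership function maps a structural response (here a demand-to-capacity ratio) to a continuous degree of failure between complete survival (0) and complete failure (1), and the expected failure level follows by integrating over response uncertainty. The breakpoints and the response distribution are illustrative assumptions, not values from the text.

    ```python
    import random

    def failure_degree(response_ratio, serviceability=0.6, collapse=1.2):
        """Degree of failure for a response/capacity ratio.

        0 below the serviceability limit, 1 above the collapse limit, and a
        linear transition in between (any monotone S-curve would also do).
        """
        if response_ratio <= serviceability:
            return 0.0
        if response_ratio >= collapse:
            return 1.0
        return (response_ratio - serviceability) / (collapse - serviceability)

    # expected failure level under response uncertainty (Monte Carlo)
    random.seed(0)
    n = 100_000
    mean_level = sum(failure_degree(random.gauss(0.8, 0.15)) for _ in range(n)) / n
    print(f"expected failure level ~ {mean_level:.2f}")
    ```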

  7. Modeling Quasi-Static and Fatigue-Driven Delamination Migration

    NASA Technical Reports Server (NTRS)

    De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.

    2014-01-01

    An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework. The methodology proposed was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.

  8. Dam break analysis and flood inundation map of Krisak dam for emergency action plan

    NASA Astrophysics Data System (ADS)

    Juliastuti, Setyandito, Oki

    2017-11-01

    Indonesian regulations, which follow the ICOLD (International Commission on Large Dams) guidelines, require Emergency Action Plan (EAP) guidelines because dams carry a potential for failure. The EAP guidelines include evacuation management, in which inundation maps are determined from flood modeling. The purpose of the EAP is to minimize the risk of loss of life and property downstream caused by dam failure. This paper describes the development of flood modeling and inundation mapping for the Krisak dam using numerical methods, through dam break analysis (DBA) with the Zhong Xing HY-21 hydraulic model. Dam failure is simulated for overtopping and piping. Overtopping simulations are based on quadrangular, triangular, and trapezoidal breaches; piping simulations are based on orifice-type cracks. Based on the DBA results, the hazard classification of the Krisak dam is very high. The nearest village affected by dam failure is Singodutan village (1.45 kilometers from the dam), with an inundation depth of 1.85 meters. These results can be used by stakeholders such as emergency responders and the at-risk community in formulating evacuation procedures.

  9. Using diagnostic experiences in experience-based innovative design

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sattiraju; Goel, Ashok K.

    1992-03-01

    Designing a novel class of devices requires innovation. Often, the design knowledge of these devices does not identify and address the constraints that are required for their performance in the real-world operating environment, so any new design adapted from these devices tends to be similarly sketchy. To address this problem, we propose a case-based reasoning method called performance-driven innovation (PDI). We model the design as a dynamic process, arrive at a design by adaptation from known designs, generate failures for this design under some new constraints, and then use this failure knowledge to generate the required design knowledge for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of failure knowledge into design knowledge for a constraint. Each case in PDI has two components: design knowledge and failure knowledge. Both are represented using a substance-behavior-function model. Failure knowledge comprises internal device failure behaviors and external environmental behaviors. The environmental behavior for a constraint, interacting with the design behaviors, results in the internal failure behavior. The failure adaptation strategy generates functions from the failure knowledge, which can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.

  10. Strength Evaluation and Failure Prediction of Short Carbon Fiber Reinforced Nylon Spur Gears by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Hu, Zhong; Hossan, Mohammad Robiul

    2013-06-01

    In this paper, short carbon fiber reinforced nylon spur gear pairs, and steel and unreinforced nylon spur gear pairs, were selected for study and comparison. A 3D finite element model was developed to simulate the multi-axial stress-strain behavior of the gear teeth. Failure prediction was conducted based on different failure criteria, including the Tsai-Wu criterion. The tooth roots, where stress concentrations and the potential for failure exist, were carefully investigated. The modeling results show that short carbon fiber reinforced nylon gears fabricated by properly controlled injection molding processes can provide higher strength and better performance.
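
    As a pointer to how such a check works in practice, the sketch below evaluates the plane-stress Tsai-Wu failure index, which reaches 1.0 at predicted failure. The strength values are illustrative stand-ins for a short-fiber reinforced nylon, not the paper's measured data, and the interaction term F12 uses a common default.

    ```python
    from math import sqrt

    def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
        """Tsai-Wu index for in-plane stresses (material axes, strengths > 0)."""
        F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
        F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
        F12 = -0.5 * sqrt(F11 * F22)          # common default interaction term
        return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                + F66*t12**2 + 2*F12*s1*s2)

    # assumed strengths in MPa (Xt/Xc: fiber dir., Yt/Yc: transverse, S: shear)
    idx = tsai_wu_index(s1=80.0, s2=20.0, t12=15.0,
                        Xt=140.0, Xc=120.0, Yt=60.0, Yc=90.0, S=50.0)
    print(f"Tsai-Wu index = {idx:.2f} ({'fails' if idx >= 1 else 'safe'})")
    ```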

  11. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.

  12. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
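
    As a rough illustration of the roll-up just described, the hedged sketch below combines per-bin failure likelihoods with a conditional loss term that shrinks as warning time approaches the time needed to escape the failure environment; all names and numbers are hypothetical, not the study's models.

        # Toy crew-risk roll-up over failure bins. The severity and warning-time
        # treatment is a stand-in for the paper's physics-based environment models.

        def crew_risk(bins):
            total = 0.0
            for b in bins:
                # Conditional loss falls as warning time buys abort distance.
                p_loss = b["severity"] * max(0.0, 1.0 - b["warning_s"] / b["escape_s"])
                total += b["p_fail"] * p_loss
            return total

        failure_bins = [
            {"p_fail": 1e-3, "severity": 0.9, "warning_s": 1.0, "escape_s": 4.0},  # blast
            {"p_fail": 5e-4, "severity": 0.6, "warning_s": 3.0, "escape_s": 4.0},  # fireball
        ]
        print(f"P(loss of crew) ~ {crew_risk(failure_bins):.2e}")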

  13. An adaptive actuator failure compensation scheme for two linked 2WD mobile robots

    NASA Astrophysics Data System (ADS)

    Ma, Yajie; Al-Dujaili, Ayad; Cocquempot, Vincent; El Badaoui El Najjar, Maan

    2017-01-01

    This paper develops a new adaptive compensation control scheme for two linked mobile robots with actuator failures. A configuration with two linked two-wheel drive (2WD) mobile robots is proposed, and the modelling of its kinematics and dynamics is given. An adaptive failure compensation scheme is developed to compensate for actuator failures, consisting of a kinematic controller and a multi-design-integration-based dynamic controller. The kinematic controller is virtual; based on it, multiple adaptive dynamic control signals are designed to cover all possible failure cases. By combining these dynamic control signals, the dynamic controller is constructed, ensuring system stability and asymptotic tracking. Simulation results verify the effectiveness of the proposed adaptive failure compensation scheme.

  14. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is crucial in the design of large-scale space systems. Existing AI-based fault diagnostic techniques such as expert systems and qualitative modelling are frequently ill-suited for this purpose: expert systems are often inadequately structured, difficult to validate, and suffer from knowledge-acquisition bottlenecks, while qualitative modelling techniques can generate a large number of failure source alternatives, hampering speedy diagnosis. In this paper we present a graph-based technique that is well suited to real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy there exist fault propagation digraphs denoting causal relations between failure modes of subsystems, with edges weighted by fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.

  15. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used: the first trains networks on data representing both normal and abnormal modes of process behavior; the second is based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but process simulation can help compensate. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, producing real-time estimates of missing or failed sensor values from the correlations codified in the neural network.

  16. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that real fault diagnosis is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process that identifies the most effective diagnostic test to perform in the next step, reducing unnecessary diagnostic tests and improving the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified.
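
    The multistep evidence-selection idea lends itself to a compact sketch: given a posterior over failure modes and per-test outcome likelihoods, pick the test whose expected posterior entropy is lowest. The tables below are hypothetical, not the paper's transformer data.

        # Greedy next-test selection by expected posterior entropy (a common way
        # to operationalize "most effective diagnostic test"; an assumption here,
        # not necessarily the paper's exact scoring rule).
        import math

        def entropy(p):
            return -sum(x * math.log2(x) for x in p if x > 0)

        def best_next_test(prior, tests):
            # tests: {name: [P(test positive | failure mode m) for each mode m]}
            best, best_h = None, float("inf")
            for name, lik in tests.items():
                h = 0.0
                for outcome in (True, False):
                    like = [l if outcome else 1 - l for l in lik]
                    joint = [p * l for p, l in zip(prior, like)]
                    z = sum(joint)                 # P(outcome)
                    if z > 0:
                        h += z * entropy([j / z for j in joint])
                if h < best_h:
                    best, best_h = name, h
            return best

        prior = [0.5, 0.3, 0.2]  # e.g. winding fault, tap-changer fault, overheating
        tests = {"DGA": [0.9, 0.4, 0.7], "oil_breakdown": [0.2, 0.8, 0.3]}
        print(best_next_test(prior, tests))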

  17. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    PubMed Central

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that real fault diagnosis is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process that identifies the most effective diagnostic test to perform in the next step, reducing unnecessary diagnostic tests and improving the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified. PMID:25685841

  18. Performance and Reliability of Bonded Interfaces for High-Temperature Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paret, Paul P

    2017-08-02

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (>200 degrees C). Accurate predictive lifetime models of sintered silver need to be developed, and its failure mechanisms thoroughly characterized, before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. Mechanical characterization tests that produce stress-strain curves and accelerated tests that produce cycles-to-failure results will be conducted. We also present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture-mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed.

  19. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.

  20. What is (and Isn't) Wrong with Both the Tension and Shear Failure Models for the Formation of Lineae on Europa

    NASA Technical Reports Server (NTRS)

    Kattenhorn, S. A.

    2004-01-01

    An unresolved problem in the interpretation of lineae on Europa is whether they formed as tension- or shear-fractures. Voyager image analyses led to hypotheses that Europan lineaments are tension cracks induced by tidal deformation of the ice crust. This interpretation continued with Galileo image analyses, with lineae being classified as crust- penetrating tension cracks. Tension fracturing has also been an implicit assumption of nonsynchronous rotation (NSR) studies. However, recent hypotheses invoke shear failure to explain lineae development. If a shear failure mechanism is correct, it will be necessary to re-evaluate any models for the evolution of Europa's crust that are based on tensile failure models, such as NSR estimates. For this reason, it is imperative that the mechanism by which fractures are initiated on Europa be unambiguously unraveled. A logical starting point is an evaluation of the pros and cons of each failure model, highlighting the lines of evidence that are needed to fully justify either model.

  1. A New Approach to Fibrous Composite Laminate Strength Prediction

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.

    1990-01-01

    A method of predicting the strength of cross-plied fibrous composite laminates is based on expressing the classical maximum-shear-stress failure criterion for ductile metals in terms of strains. Starting with such a formulation for classical isotropic materials, the derivation is extended to orthotropic materials having a longitudinal axis of symmetry, representing the fibers in a unidirectional composite lamina. The only modification needed to represent those same fibers with properties normalized to the lamina rather than the fiber is a change in axial modulus. A mirror image is added to the strain-based lamina failure criterion for fiber-dominated failures to reflect the cutoffs due to the presence of orthogonal fibers. The combined failure envelope is then identical to the well-known maximum-strain failure model in the tension-tension and compression-compression quadrants but is truncated in the shear quadrants. The successive application of this simple failure model for fibers in the 0/90 degree and +/- 45 degree orientations, in turn, is shown to be the necessary and sufficient characterization of the fiber-dominated failures of laminates made from fibers having the same tensile and compressive strengths. When one such strength is greater than the other, the failure envelope is appropriately truncated for the lesser direct strain. The shear-failure cutoffs are based on the higher axial strain to failure, since they occur at lower strains than, and are usually not affected by, such mechanisms as microbuckling. Premature matrix failures can also be covered by appropriately truncating the fiber failure envelope. Matrix failures are excluded from consideration for conventional fiber/polymer composites, but the additional features needed for a more rigorous analysis of exotic materials are covered. The new failure envelope is compared with published biaxial test data. The theory is developed for unnotched laminates, but the envelope is easily shrunk to allow for bolt holes, cutouts, reduced compressive strength after impact, and the like.
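
    The truncated envelope described above reduces to a short check: ordinary maximum-strain limits in the like-sign quadrants plus a shear cutoff where the in-plane strains have opposite signs. The sketch below uses illustrative strain limits, not values from the paper.

        # Truncated maximum-strain check for fiber-dominated failure.
        # Strain limits are placeholders chosen only for the example.

        def fiber_failure(e1, e2, et=0.010, ec=0.009, gmax=0.014):
            """e1, e2: strains along the two fiber directions of a cross-plied laminate."""
            if e1 > et or e2 > et:        # tensile fiber failure
                return True
            if e1 < -ec or e2 < -ec:      # compressive fiber failure
                return True
            if abs(e1 - e2) > gmax:       # shear cutoff in the opposite-sign quadrants
                return True
            return False

        print(fiber_failure(0.006, -0.009))  # inside the direct limits, fails the shear cutoff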

  2. Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes

    NASA Astrophysics Data System (ADS)

    Mandal, B.; Chakrabarti, A.

    2017-05-01

    A three-dimensional finite-element-based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented in the finite element code ABAQUS using the user-defined subroutine UMAT. Existing failure criteria for composite laminates are extended by including criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is efficient and simple compared with other progressive damage models in the literature. The present constitutive model and computational scheme are verified by comparing the simulated results with results available in the literature. A parametric study has been carried out to investigate the effect of lamination scheme on the failure behaviour of notched composite laminates.

  3. A unified bond theory, probabilistic meso-scale modeling, and experimental validation of deformed steel rebar in normal strength concrete

    NASA Astrophysics Data System (ADS)

    Wu, Chenglin

    Bond between deformed rebar and concrete is affected by the rebar deformation pattern, concrete properties, concrete confinement, and rebar-concrete interfacial properties. Two distinct groups of bond models were traditionally developed based on the dominant effects of concrete splitting and near-interface shear-off failures; their accuracy depended heavily on the test data sets selected for analysis and calibration. In this study, a unified bond model is proposed and developed based on an analogy to the indentation problem around the rib front of deformed rebar. This mechanics-based model accounts for the combined effect of concrete splitting and interface shear-off failures, yielding average bond strengths for all practical scenarios. To understand the fracture process associated with bond failure, a probabilistic meso-scale model of concrete is proposed and its sensitivity to interface and confinement strengths is investigated. Both the mechanical and finite element models are validated against the available test data sets and are superior to existing models in predicting average bond strength (<6% error) and crack spacing (<6% error). The validated bond model is applied to derive various interrelations among concrete crushing, concrete splitting, interfacial behavior, and the rib spacing-to-height ratio of deformed rebar. It accurately predicts the transition of failure modes from concrete splitting to rebar pullout and the effect of rebar surface characteristics as the rib spacing-to-height ratio increases. Based on the unified theory, a global bond model is developed by introducing bond-slip laws and validated with tests of concrete beams with spliced reinforcement, achieving a load-capacity prediction error of less than 26%. Optimal rebar parameters and concrete cover for structural design can be derived from this study.

  4. Landslide early warning based on failure forecast models: the example of the Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-07-01

    We apply failure forecast models by exploiting near-real-time monitoring data for the Mt. de La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time over different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds grounded in the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of a landslide emergency.
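
    A minimal sketch of the inverse-velocity (Fukuzono) forecast underlying this record: fit a line to 1/v over a window of monitoring data and extrapolate to 1/v = 0 to estimate the time of failure. The data are synthetic; the operational system adds confidence intervals and reliability thresholds on top of this.

        # Inverse-velocity failure forecast on synthetic displacement-rate data.
        import numpy as np

        t = np.array([0, 2, 4, 6, 8, 10.0])             # days
        v = np.array([2.0, 2.5, 3.3, 5.0, 10.0, 30.0])  # mm/day, accelerating

        slope, intercept = np.polyfit(t, 1.0 / v, 1)    # linear trend in 1/v
        t_failure = -intercept / slope                  # where the fit crosses zero
        print(f"forecast time of failure: day {t_failure:.1f}")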

  5. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson; ...

    2017-04-24

    Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures compared with the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed locally, further scalability is enabled, not only because of the intrinsically lower cost of recovering locally but also because of derived effects in some application types. In this paper we model one such effect, multiple failure masking, which manifests when running Stencil parallel computations in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple locally recovered failures is modeled, enabling several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. The results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  6. Modeling and Simulating Multiple Failure Masking enabled by Local Recovery for Stencil-based Applications at Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamell, Marc; Teranishi, Keita; Mayo, Jackson

    Obtaining multi-process hard-failure resilience at the application level is a key challenge that must be overcome before the promise of exascale can be fully realized. Previous work has shown that online global recovery can dramatically reduce the overhead of failures compared with the more traditional approach of terminating the job and restarting it from the last stored checkpoint. If online recovery is performed locally, further scalability is enabled, not only because of the intrinsically lower cost of recovering locally but also because of derived effects in some application types. In this paper we model one such effect, multiple failure masking, which manifests when running Stencil parallel computations in an environment where failures are recovered locally. First, the delay propagation shape of one or multiple locally recovered failures is modeled, enabling several analyses of the probability of different levels of failure masking under certain Stencil application behaviors. The results indicate that failure masking is an extremely desirable effect at scale, whose manifestation becomes more evident and beneficial as the machine size or the failure rate increases.

  7. Study of Impact on Undergraduates' Entrepreneurial Failure Based on the Model of Psychological Resilience-Knowledge Acquisition

    ERIC Educational Resources Information Center

    Jing, Tang; Dancheng, Luo; Ye, Zhao

    2016-01-01

    Purpose: Entrepreneurship is a process of continually gaining knowledge from failure and stimulating positive energy. The entrepreneur's psychological resilience is the key to gaining knowledge (positive energy) from failure (negative energy). Undergraduate entrepreneurship education is a current priority. Educators shall…

  8. Modeling and Characterization of Dynamic Failure of Soda-lime Glass Under High Speed Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenning N.; Sun, Xin; Chen, Weinong W.

    2012-05-27

    In this paper, the impact-induced dynamic failure of a soda-lime glass block is studied using an integrated experimental/analytical approach. The Split Hopkinson Pressure Bar (SHPB) technique is first used to conduct dynamic failure tests of soda-lime glass. The damage growth patterns and stress histories are reported for various glass specimen designs. Using a continuum damage mechanics (CDM)-based constitutive model, the initial failure and subsequent stiffness reduction of the glass are simulated and investigated. Explicit finite element analyses are used to simulate the glass specimen impact event. A maximum-shear-stress-based damage evolution law describes the glass damage process under combined compression/shear loading, and the impact test results are used to quantify the critical shear stress for the soda-lime glass under examination.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert

    A program is underway at Sandia National Laboratories to predict the long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.

  10. PCI fuel failure analysis: a report on a cooperative program undertaken by Pacific Northwest Laboratory and Chalk River Nuclear Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.

    Reactor fuel failure data sets in the form of initial power (P_i), final power (P_f), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM and a nonlinear-regression-based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL and is based on a critical threshold concept for stress-dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and an analysis of the environmental and strain-rate-dependent stress-strain properties of the Zircaloy cladding.

  11. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used in most rotating machinery, and they are also vulnerable links representing a main source of failures in such systems. Health condition monitoring and fault diagnosis of rolling element bearings have therefore long been studied to improve the operational reliability and maintenance efficiency of rotating machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation must be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with a delay-time-based model in this paper. A hybrid feature selection and health indicator construction scheme is also proposed for extracting bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.
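
    For Poisson defect arrivals, the delay-time idea named in the title admits a compact numeric sketch: a defect arises at a random time, takes a further random delay h to become a failure, and the probability of a failure within an interval follows by integrating over the arrival time. The rates below are hypothetical, not the paper's bearing data.

        # Delay-time model: P(at least one failure in (0, T)) for defects arriving
        # as a Poisson process with rate `rate` and exponential delay times.
        import math

        def p_failure_within(T, rate, mean_delay, n=1000):
            # Expected number of defects that arise at u and fail by T:
            #   m = integral_0^T rate * F_h(T - u) du, evaluated by midpoint rule.
            du = T / n
            m = sum(rate * (1 - math.exp(-(T - (i + 0.5) * du) / mean_delay)) * du
                    for i in range(n))
            return 1 - math.exp(-m)     # thinned-Poisson argument

        print(p_failure_within(T=30.0, rate=0.05, mean_delay=10.0))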

  12. The failure models of Sn-based solder joints under coupling effects of electromigration and thermal cycling

    NASA Astrophysics Data System (ADS)

    Ma, Limin; Zuo, Yong; Liu, Sihan; Guo, Fu; Wang, Xitao

    2013-01-01

    Current concerns about Pb-free solder joints focus on electromigration (EM) and thermomechanical fatigue (TMF). Many models have been established to explain the failure mechanisms of joints under these single test conditions. Because almost all microelectronic devices serve under combined fluctuating temperature and electric current stressing, the coupling effects of EM and TMF on the evolution of microstructure and resistance of solder joints were investigated. The failure models of binary SnBi alloy and ternary SnAgCu (SAC) solder under coupled stressing were divided into four and three stages, respectively. The failure mechanisms were dominated by the interplay of phase segregation, the polarity effect, phase coarsening, and coefficient-of-thermal-expansion mismatch. In SAC solder, cracks tend to form and propagate along the interface between intermetallic compound layers and the solder matrix, whereas in SnBi solder grain boundaries act as nucleation sites for microcracks. High current density alleviates the deterioration of the solder at the beginning of coupled stressing through the Joule heating effect, and an abrupt jump in resistance can be observed before the joint fails. The failure modes were determined by the interaction of EM behaviors and TMF damage.

  13. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors in a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers. Our preliminary results show that the proposed method can be used to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential of heterogeneous physical and biological variables to improve model prediction. With the first dataset, we achieved better performance than competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance than the individual physical and biological models, with the biological variables making the largest contribution. These preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  14. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.

  15. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method employing a knowledge base of human expertise comprising a reliability model analysis implemented for diagnostic routines is disclosed. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.

  16. Expert systems for automated maintenance of a Mars oxygen production system

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang

    1989-01-01

    A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operating conditions and failure modes are tested and identified according to desired criteria. Several schemes for failure detection and isolation, using forward chaining, backward chaining, and knowledge- and rule-based methods, are devised to perform housekeeping functions including self-health checkout, an emergency shutdown program, fault detection, and conventional control activities. The dynamic model of the system was derived using the bond-graph technique in order to develop a model-based failure detection and isolation scheme by estimation methods. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.

  17. Fault Tree Based Diagnosis with Optimal Test Sequencing for Field Service Engineers

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; George, Laurence L.; Patterson-Hine, F. A.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    When field service engineers go to customer sites to service equipment, they want to diagnose and repair failures quickly and cost-effectively. Symptoms exhibited by failed equipment frequently suggest several possible causes which require different approaches to diagnosis, which can lead the engineer down several fruitless paths before the actual failure is found. To assist in this situation, we have developed the Fault Tree Diagnosis and Optimal Test Sequence (FTDOTS) software system, which performs automated diagnosis and ranks diagnostic hypotheses based on failure probability and the time or cost required to isolate and repair each failure. FTDOTS first finds a set of possible failures that explain the exhibited symptoms by using a fault tree reliability model as a diagnostic knowledge base, and then ranks the hypothesized failures by how likely they are and by how long it would take, or how much it would cost, to isolate and repair them. This ordering suggests an optimal sequence for the field service engineer to investigate the hypothesized failures in order to minimize the time or cost required to accomplish the repair task. Previously, field service personnel would arrive at the customer site and choose which components to investigate based on past experience and service manuals. Using FTDOTS running on a portable computer, they can now enter a set of symptoms and get a list of possible failures ordered in an optimal test sequence. If facilities are available, the field engineer can connect the portable computer to the malfunctioning device for automated data gathering. FTDOTS is currently being applied to field service of medical test equipment, but the techniques are flexible enough to use for many different types of devices. If a fault tree model of the equipment and information about component failure probabilities and isolation times or costs are available, a diagnostic knowledge base for that device can be developed easily.
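
    For independent checks, the ranking FTDOTS performs has a classic closed form: inspecting candidates in decreasing order of probability-to-cost ratio minimizes the expected time to find the fault. The component table below is hypothetical.

        # Optimal test sequencing by probability-to-cost ratio.
        candidates = [
            # (component, P(this is the failure | symptoms), hours to isolate + repair)
            ("power supply",   0.40, 1.5),
            ("sensor board",   0.35, 0.5),
            ("wiring harness", 0.15, 2.0),
            ("controller",     0.10, 4.0),
        ]

        sequence = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
        for name, p, hours in sequence:
            print(f"{name}: p={p:.2f}, cost={hours}h, ratio={p / hours:.2f}")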

  18. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations that are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.

  19. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  20. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
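
    The optimization described above can be sketched in a few lines: choose height and crown width to minimize annualized construction cost plus expected annual flood damage from overtopping and through-seepage. The cost, stage-frequency, and fragility functions below are illustrative stand-ins for the paper's hydrology and geotechnical curves.

        # Risk-based levee sizing over a small design grid (all numbers hypothetical).
        import itertools, math

        def expected_total_cost(height, width):
            build = 1200 * height ** 1.5 + 800 * width      # $k/yr, annualized
            p_overtop = math.exp(-height / 1.8)             # stage-frequency proxy
            p_seep = 0.05 * math.exp(-width / 6.0)          # fragility-curve proxy
            damage = 50_000                                 # $k if the levee fails
            return build + (p_overtop + (1 - p_overtop) * p_seep) * damage

        best = min(itertools.product([3, 4, 5, 6, 7], [4, 6, 8, 10, 12]),
                   key=lambda hw: expected_total_cost(*hw))
        print("optimal (height m, crown width m):", best)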

  1. Evaluating the risk of water distribution system failure: A shared frailty model

    NASA Astrophysics Data System (ADS)

    Clark, Robert M.; Thurnau, Robert C.

    2011-12-01

    Condition assessment (CA) modeling is drawing increasing interest as a technique that can assist in managing drinking water infrastructure. This paper develops a model based on the application of a Cox proportional hazard (PH)/shared frailty model and applies it to evaluating the risk of failure in drinking water networks, using data from the Laramie Water Utility (located in Laramie, Wyoming, USA). Using the risk model, a cost/benefit analysis incorporating the inspection value method (IVM) is used to assist in making improved repair, replacement, and rehabilitation decisions for selected drinking water distribution system pipes. A separate model is developed to predict failures in prestressed concrete cylinder pipe (PCCP). Currently available inspection technologies are also presented and discussed.

  2. Development of a subway operation incident delay model using accelerated failure time approaches.

    PubMed

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study develops a subway operation incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models (log-logistic, lognormal, and Weibull, each with fixed and random parameters) are built from Hong Kong subway operation incident data from 2005 to 2012; a Weibull model with gamma heterogeneity is also considered for comparison. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that longer subway operation incident delays are highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption, and crashes involving a casualty, while vehicle failure has the least impact on delay. Based on these results, several measures, such as the use of short-distance wireless communication technology (e.g., WiFi and ZigBee), are suggested to shorten the delays caused by subway operation incidents. Finally, temporal transferability tests show that the developed log-logistic AFT model with random parameters is stable over time.
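
    A log-logistic AFT delay model of the kind fitted above can be written directly: covariates scale the time axis through exp(X·beta), and the survival function gives the probability that an incident delay exceeds t minutes. The coefficients below are invented for illustration, not the paper's Hong Kong estimates.

        # Log-logistic AFT survival: S(t) = 1 / (1 + (t / lam)^kappa), lam = exp(X.beta).
        import math

        def delay_exceedance(t, x, beta, kappa):
            lam = math.exp(sum(b * xi for b, xi in zip(beta, x)))  # AFT scale
            return 1.0 / (1.0 + (t / lam) ** kappa)

        # x = [intercept, power_cable_failure, signal_cable_failure, casualty_crash]
        beta = [2.3, 0.8, 0.6, 1.1]   # positive coefficients lengthen delay (hypothetical)
        print(delay_exceedance(30.0, [1, 1, 0, 0], beta, kappa=2.0))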

  3. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development

    PubMed Central

    Honig, Shanee; Oron-Gilad, Tal

    2018-01-01

    While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was conducted to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions of and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human-computer interaction (HCI), human factors engineering, cognitive engineering, and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP), which guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human-robot interaction. It has three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature became evident as a result of this evaluation: more focus has been given to technical failures than to interaction failures, and few studies have focused on human errors, on communicating failures, or on the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for HRIs.

  4. Experimental Evidence of Accelerated Seismic Release without Critical Failure in Acoustic Emissions of Compressed Nanoporous Materials

    NASA Astrophysics Data System (ADS)

    Baró, Jordi; Dahmen, Karin A.; Davidsen, Jörn; Planes, Antoni; Castillo, Pedro O.; Nataf, Guillaume F.; Salje, Ekhard K. H.; Vives, Eduard

    2018-06-01

    The total energy of acoustic emission (AE) events in externally stressed materials diverges when approaching macroscopic failure. Numerical and conceptual models explain this accelerated seismic release (ASR) as the approach to a critical point that coincides with ultimate failure. Here, we report ASR during soft uniaxial compression of three silica-based (SiO2 ) nanoporous materials. Instead of a singular critical point, the distribution of AE energies is stationary, and variations in the activity rate are sufficient to explain the presence of multiple periods of ASR leading to distinct brittle failure events. We propose that critical failure is suppressed in the AE statistics by mechanisms of transient hardening. Some of the critical exponents estimated from the experiments are compatible with mean field models, while others are still open to interpretation in terms of the solution of frictional and fracture avalanche models.

  5. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
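
    The failure probability defined above is straightforward to estimate by simulation: the chance that a risk process crosses a possibly time-dependent critical level within a finite horizon. In the sketch below a drifted Gaussian random walk stands in for the paper's risk process.

        # Monte Carlo finite-time failure probability with a linear critical level.
        import random

        def failure_probability(horizon=100, level0=50.0, level_slope=0.1,
                                drift=0.3, sigma=2.0, n_paths=20_000):
            hits = 0
            for _ in range(n_paths):
                x = 0.0
                for k in range(1, horizon + 1):
                    x += drift + random.gauss(0.0, sigma)
                    if x >= level0 + level_slope * k:   # time-dependent level
                        hits += 1
                        break
            return hits / n_paths

        print(f"P(failure within horizon) ~ {failure_probability():.3f}")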

  6. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  7. Fracture simulation of restored teeth using a continuum damage mechanics failure model.

    PubMed

    Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun

    2011-07-01

    The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth were constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented in ABAQUS via the user element subroutine (UEL), is used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture are obtained through separate mechanical tests. The predicted results are then compared with the experimental fracture-test data to validate the failure model. The failure processes of the intact and restored model teeth are successfully reproduced by the simulation, although the fracture parameters obtained from testing small specimens need to be adjusted to account for the size effect. The results indicate that the CDM model is viable for predicting debonding and fracture in dental restorations.

  8. Failure prediction using machine learning and time series in optical network.

    PubMed

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
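
    The two ingredients named above can be sketched together: double exponential smoothing (DES) extrapolates an equipment health indicator one step ahead, and an SVM classifies the forecast state as normal or at-risk. Training data are synthetic and scikit-learn is assumed to be available; this is a shape-of-the-method sketch, not the paper's pipeline.

        # DES forecast feeding an SVM failure-state classifier.
        import numpy as np
        from sklearn.svm import SVC

        def des_forecast(series, alpha=0.5, beta=0.3, steps=1):
            level, trend = series[0], series[1] - series[0]
            for x in series[1:]:
                prev = level
                level = alpha * x + (1 - alpha) * (level + trend)
                trend = beta * (level - prev) + (1 - beta) * trend
            return level + steps * trend

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))                    # e.g. [power score, BER score]
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = at-risk (synthetic label)
        clf = SVC(kernel="rbf").fit(X, y)

        history = [0.10, 0.15, 0.22, 0.31, 0.45]         # degrading health indicator
        x_next = np.array([[des_forecast(history), 0.2]])
        print("predicted state:", "at-risk" if clf.predict(x_next)[0] else "normal")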

  9. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD) after analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlations between failure mechanisms, soft failures, and hard failures. Considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part is defined, and a calculation and analysis model for reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of this deployment, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre's transmission system.

  10. Redundant Design in Interdependent Networks

    PubMed Central

    2016-01-01

    Modern infrastructure networks are often coupled together and can thus be modeled as interdependent networks. Overload and interdependence make such networks more fragile under attack. Existing research has primarily concentrated on the cascading failure process of interdependent networks without load, or on the robustness of isolated networks with load; only limited work addresses cascading failures caused by overload in interdependent networks. Redundant design is a primary approach to enhancing the reliability and robustness of a system. In this paper, we propose two redundant methods, node back-up and dependency redundancy, and the experimental results indicate that both measures are effective at low cost. Two detailed redundant-design models are introduced based on the non-linear load-capacity model. Based on node attributes and historical failure distributions, we introduce three static selection strategies (Random-based, Degree-based, and Initial-load-based) and a dynamic strategy, HFD (historical failure distribution), to identify which nodes should receive a back-up with priority. In addition, we consider the cost and efficiency of different redundancy proportions to determine the proportion with maximal enhancement and minimal cost. Experiments on interdependent networks demonstrate that the combination of HFD and dependency redundancy is an effective and preferred way to implement redundant design, suggesting that the redundant design proposed in this paper permits construction of highly robust interactive networked systems. PMID:27764174
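
    A toy version of the non-linear load-capacity cascade with node back-up follows: each node's capacity is C = T + alpha * L**beta, and when an overloaded node has a back-up the overload is absorbed once instead of propagating. The topology, loads, and parameters are hypothetical, not the paper's.

        # Overload cascade on a tiny graph, with one backed-up node halting it.
        alpha, beta, T = 0.3, 1.2, 1.5
        graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
        load = {0: 1.0, 1: 2.0, 2: 1.0, 3: 1.0}
        backup = {1}                              # node 1 has a redundant stand-in
        cap = {n: T + alpha * load[n] ** beta for n in load}
        failed = set()

        def fail(n):
            failed.add(n)
            alive = [m for m in graph[n] if m not in failed]
            for m in alive:                       # shed load onto surviving neighbours
                load[m] += load[n] / len(alive)

        fail(0)                                   # initial attack
        changed = True
        while changed:
            changed = False
            for n in load:
                if n in failed or load[n] <= cap[n]:
                    continue
                if n in backup:                   # back-up absorbs the overload once
                    cap[n] = load[n] + T
                    backup.discard(n)
                else:
                    fail(n)
                changed = True
        print("surviving nodes:", sorted(set(load) - failed))  # cascade halted at node 1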

  11. A performance study of unmanned aerial vehicle-based sensor networks under cyber attack

    NASA Astrophysics Data System (ADS)

    Puchaty, Ethan M.

    In UAV-based sensor networks, an emerging area of interest is the performance of these networks under cyber attack. This study evaluates, from a System-of-Systems (SoS) perspective, the performance trade-offs between various UAV communications architecture options in the context of two missions: tracking ballistic missiles and tracking insurgents. An agent-based discrete event simulation is used to model a sensor communication network consisting of UAVs, military communications satellites, ground relay stations, and a mission control center. Network susceptibility to cyber attack is modeled with probabilistic failures and induced data variability, with performance metrics focusing on information availability, latency, and trustworthiness. Results demonstrated that using UAVs as routers increased network availability with a minimal latency penalty, and that communications satellite networks were best for long-distance operations. Redundancy in the number of links between communication nodes helped mitigate cyber-caused link failures and added robustness in cases of induced data variability by an adversary. However, when failures were not independent, redundancy and UAV routing were detrimental to network performance in some cases. Sensitivity studies indicated that long cyber-caused downtimes and increasing failure dependencies resulted in build-ups of failures and significant degradation of network performance.

  12. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system that can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented, showing the effect on reliability of the gyro failure rate, the gyro and accelerometer failure rates together, false alarms, the probability of failure detection, the probability of failure isolation, the probability of damage effects, and mission time. A technique is also developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU, with special emphasis on the detection of multiple, nonconcurrent failures. Digital simulation time histories show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
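
    A compact sketch of the Markov evaluation idea: a small chain (both units up, one failed, system loss) integrated with a matrix exponential gives mission reliability. The rates are hypothetical, and the real model distinguishes detection, isolation, and false-alarm behavior across its 27 states.

        # Three-state Markov reliability model solved with a matrix exponential.
        import numpy as np
        from scipy.linalg import expm

        lam = 1e-4                                # failure rate per hour (hypothetical)
        Q = np.array([[-2 * lam, 2 * lam, 0.0],   # generator matrix; states 0, 1, 2
                      [0.0,      -lam,    lam],
                      [0.0,       0.0,    0.0]])

        p0 = np.array([1.0, 0.0, 0.0])
        for t in (10.0, 100.0, 1000.0):           # mission times in hours
            p = p0 @ expm(Q * t)
            print(f"t={t:6.0f} h  reliability={1 - p[2]:.6f}")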

  13. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios, with both circular holes and sharp notches, when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  14. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  15. Multiscale Fiber Kinking: Computational Micromechanics and a Mesoscale Continuum Damage Mechanics Model

    NASA Technical Reports Server (NTRS)

    Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.

    2017-01-01

    In this work, the fiber kinking phenomenon, the failure mechanism that occurs when a fiber-reinforced polymer is loaded in longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.

  16. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  17. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
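
    For illustration, the decreasing-hazard Weibull model (shape beta < 1) can be compared against the constant-failure-rate model, which is the special case beta = 1; the parameters below are arbitrary:

        import numpy as np

        def weibull_reliability(t, beta, eta):
            """R(t) = exp(-(t/eta)**beta); beta = 1 recovers the exponential
            (constant failure rate) special case."""
            return np.exp(-(t / eta) ** beta)

        def weibull_hazard(t, beta, eta):
            """h(t) = (beta/eta) * (t/eta)**(beta - 1); decreasing for beta < 1."""
            return (beta / eta) * (t / eta) ** (beta - 1)

        t = np.array([100.0, 1_000.0, 10_000.0])
        print(weibull_reliability(t, beta=0.8, eta=50_000.0))  # decreasing hazard
        print(weibull_reliability(t, beta=1.0, eta=50_000.0))  # constant hazard
        print(weibull_hazard(t, beta=0.8, eta=50_000.0))       # hazard falls with t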

  18. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Technical Reports Server (NTRS)

    Flores, Melissa; Malin, Jane T.

    2013-01-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  19. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    NASA Astrophysics Data System (ADS)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  20. Evolution of a 90-day model of care for bundled episodic payments for congestive heart failure in home care.

    PubMed

    Feld, April; Madden-Baer, Rose; McCorkle, Ruth

    2016-01-01

    The Centers for Medicare and Medicaid Services Innovation Center's Episode-Based Payment initiatives present a large opportunity to reduce cost from waste and variation, and stand to align hospitals, physicians, and postacute providers in a redesign of care that achieves savings and improves quality. Community-based organizations are at the forefront of this care redesign through innovative models of care aimed at bridging gaps in care coordination and reducing hospital readmissions. This article describes a community-based provider's approach to participation under the Bundled Payments for Care Improvement initiative and a 90-day model of care for congestive heart failure in home care.

  1. The Inclusion of Arbitrary Load Histories in the Strength Decay Model for Stress Rupture

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2014-01-01

    Stress rupture is a failure mechanism where failures can occur after a period of time, even though the material has seen no increase in load. Carbon/epoxy composite materials have demonstrated the stress rupture failure mechanism. In a previous work, a model was proposed for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures based on strength degradation. However, the original model was limited to periods of constant load (holds). The model was expanded in this paper to address arbitrary loading histories, specifically the inclusion of ramp loadings up to holds and back down. The broadening of the model allows failures on loading to be treated as any other failure that may occur during testing instead of as a special case. The inclusion of ramps can also influence the length of the "safe period" following proof loading that was previously predicted by the model. No stress rupture failures are predicted in a safe period because time is required for strength to decay from above the proof level to the lower level of loading. Although the model can predict failures during the ramp periods, no closed-form solution for the failure times could be derived; therefore, two solution techniques were proposed. Finally, the model was used to design an experiment that could detect the difference between the strength decay model and a commonly used model for stress rupture. Although these types of models are necessary to help guide experiments for stress rupture, only experimental evidence will determine how well the model may predict actual material response. If the model can be shown to be accurate, current proof loading requirements may result in predicted safe periods as long as 10^13 years. COPV design requirements for stress rupture may then be relaxed, allowing more efficient designs while still maintaining an acceptable level of safety.
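
    The paper's closed-form results are not reproduced here, but a generic strength-decay law of the form dS/dt = -k * sigma**m, integrated numerically over a ramp-plus-hold history, illustrates the idea of failure times emerging from an arbitrary load history; the decay law, constants, and loads below are hypothetical:

        import numpy as np

        def time_to_failure(stress_history, dt, s0=1.0, k=1e-6, m=4.0):
            """Integrate a generic strength-decay law dS/dt = -k * sigma**m
            (illustrative form only); failure occurs when the residual
            strength falls to the currently applied stress."""
            s = s0
            for i, sigma in enumerate(stress_history):
                s -= k * sigma**m * dt
                if s <= sigma:
                    return (i + 1) * dt   # failure time
            return None                   # survived the whole history

        dt = 10.0
        ramp = np.linspace(0.0, 0.8, 1_000)   # proof ramp to 80% of strength
        hold = np.full(400_000, 0.6)          # long hold at a 60% service load
        print(time_to_failure(np.concatenate([ramp, hold]), dt))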

  2. Management of heart failure in the new era: the role of scores.

    PubMed

    Mantegazza, Valentina; Badagliacca, Roberto; Nodari, Savina; Parati, Gianfranco; Lombardi, Carolina; Di Somma, Salvatore; Carluccio, Erberto; Dini, Frank Lloyd; Correale, Michele; Magrì, Damiano; Agostoni, Piergiuseppe

    2016-08-01

    Heart failure is a widespread syndrome involving several organs, still characterized by high mortality and morbidity, and whose clinical course is heterogeneous and hardly predictable. In this scenario, the assessment of heart failure prognosis represents a fundamental step in clinical practice. A single parameter alone is unable to provide a precise prognosis. Therefore, risk scores based on multiple parameters have been introduced, but their clinical utility is still modest. In this review, we evaluated several prognostic models for acute, right, chronic, and end-stage heart failure based on multiple parameters. In particular, for chronic heart failure we considered risk scores based essentially on clinical evaluation, comorbidity analysis, baroreflex sensitivity, heart rate variability, sleep disorders, laboratory tests, echocardiographic imaging, and cardiopulmonary exercise test parameters. What is established at present is that a single parameter is not sufficient for an accurate prediction of prognosis in heart failure because of the complex nature of the disease. However, none of the available scoring systems is widely used, being in some cases complex, not user-friendly, or based on expensive or not easily available parameters. We believe that multiparametric scores for risk assessment in heart failure are promising, but they have yet to see widespread use in practice.

  3. Risk Analysis and Prediction of Floor Failure Mechanisms at Longwall Face in Parvadeh-I Coal Mine using Rock Engineering System (RES)

    NASA Astrophysics Data System (ADS)

    Aghababaei, Sajjad; Saeedi, Gholamreza; Jalalifar, Hossein

    2016-05-01

    Floor failure at the longwall face decreases productivity and safety, increases operating costs, and causes other serious problems. In the Parvadeh-I coal mine, timber is used to prevent the puncture of the powered support base into the floor. In this paper, a rock engineering system (RES)-based model is presented to evaluate the risk of floor failure mechanisms at the longwall face of the E2 and W1 panels. The model is used to determine the most probable floor failure mechanism, effective factors, damaged regions, and remedial actions. From the analyzed results, it is found that soft floor failure is the dominant floor failure mechanism at the Parvadeh-I coal mine. The average vulnerability index (VI) for the soft, buckling, and compressive floor failure mechanisms was estimated at 52, 43, and 30, respectively, across both panels. With the critical VI for the soft floor failure mechanism set at 54, the percentage of regions with VIs beyond the critical value in the E2 and W1 panels is 65.5 and 30, respectively. The extent of the damaged regions showed that the excess amount of timber used to prevent puncture of the weak floor below the powered support base equals 4,180,739 kg. The RES outputs and analyzed results showed that the setting and yielding load of the powered supports, the length of the face, the presence of water at the face, the geometry of the powered supports, changes to the cutting pattern at the longwall face, and limiting the panels to damaged regions with supercritical VIs could be considered to control soft floor failure in this mine. The results of this research could be used as a tool to identify damaged regions prior to mining operations at longwall panels under similar conditions.
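
    Schematically, an RES vulnerability index reduces to a weighted, normalized sum of parameter ratings, where the weights come from the interaction-matrix cause/effect totals; the sketch below uses invented weights and ratings, not the paper's coded matrix:

        import numpy as np

        def vulnerability_index(ratings, weights, max_rating=4.0):
            """Generic RES-style VI on a 0-100 scale: weights from the
            interaction matrix, ratings as site-specific parameter classes."""
            ratings = np.asarray(ratings, dtype=float)
            weights = np.asarray(weights, dtype=float)
            return 100.0 * np.sum((weights / weights.sum())
                                  * (ratings / max_rating))

        # hypothetical: five parameters rated 0-4 for one longwall region
        print(vulnerability_index([3, 4, 2, 1, 3], [0.9, 1.4, 0.7, 1.1, 1.0]))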

  4. Chronic Heart Failure Follow-up Management Based on Agent Technology.

    PubMed

    Mohammadzadeh, Niloofar; Safdari, Reza

    2015-10-01

    Monitoring heart failure patients through continuous assessment of signs and symptoms using information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest areas of artificial intelligence; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. The proposed multi-agent system has the ability to learn and thus improve itself. Implementation of this model with more and varied interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making.

  5. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses evaluation of the dependability of operational software. The software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) differing from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  6. Advanced composites structural concepts and materials technologies for primary aircraft structures: Structural response and failure analysis

    NASA Technical Reports Server (NTRS)

    Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.

    1992-01-01

    Non-linear analysis methods were adapted and incorporated in the finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading, and include the arc length method and a target point analysis procedure. A new interface material model was implemented that can model the elastic-plastic behavior of the bond adhesive; its direct application is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal, or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel, a corrugated flat sandwich panel, and a curved geodesic fuselage panel. This module brings finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.

  7. Explosive Model Tarantula 4d/JWL++ Calibration of LX-17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Vitello, P A

    2008-09-30

    Tarantula is an explosive kinetics package intended to model detonation, shock initiation, failure, corner-turning with dead zones, gap tests, and air gaps in reactive flow hydrocode models. The first, 2007-2008 version with monotonic Q is here run inside JWL++ with square zoning from 40 to 200 zones/cm on ambient LX-17. The model splits the rate behavior in every zone into sections set by the hydrocode pressure, P + Q. As the pressure rises, we pass through the no-reaction, initiation, ramp-up/failure, and detonation sections sequentially. We find that the initiation and pure detonation rate constants are largely insensitive to zoning but that the ramp-up/failure rate constant is extremely sensitive. At no time does the model pass every test, but the pressure-based approach generally works. The best values for the ramp/failure region are listed here in Mb units.

  8. Numerical Analysis of Solids at Failure

    DTIC Science & Technology

    2011-08-20

    Failure analyses include the formulation of invariant finite elements for thin Kirchhoff rods, and preliminary initial studies of growth in... analysis of the failure of other structural/mechanical systems, including the finite element modeling of thin Kirchhoff rods and the constitutive... algorithm based on the connectivity graph of the underlying finite element mesh. In this setting, the discontinuities are defined by fronts propagating...

  9. Determination of fiber-matrix interface failure parameters from off-axis tests

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1993-01-01

    Critical fiber-matrix (FM) interface strength parameters were determined using a micromechanics-based approach together with failure data from off-axis tension (OAT) tests. The ply stresses at failure for a range of off-axis angles were used as input to a micromechanics analysis that was performed using the personal computer-based MICSTRAN code. FM interface stresses at the failure loads were calculated for both the square and the diamond array models. A simple procedure was developed to determine which array had the more severe FM interface stresses and the location of these critical stresses on the interface. For the cases analyzed, critical FM interface stresses were found to occur with the square array model and were located at a point where adjacent fibers were closest together. The critical FM interface stresses were used together with the Tsai-Wu failure theory to determine a failure criterion for the FM interface. This criterion was then used to predict the onset of ply cracking in angle-ply laminates for a range of laminate angles. Predictions for the onset of ply cracking in angle-ply laminates agreed with the test data trends.

  10. Dual permeability FEM models for distributed fiber optic sensors development

    NASA Astrophysics Data System (ADS)

    Aguilar-López, Juan Pablo; Bogaard, Thom

    2017-04-01

    Fiber optic cables are commonly known as robust and reliable media for transferring information at the speed of light in glass. Billions of kilometers of cable have been installed around the world for internet connection and real-time information sharing. Yet fiber optic cable is not only a means of information transfer but also a way to sense and measure physical properties of the medium in which it is installed. For dike monitoring, it has been used in the past for detecting temperature changes in the inner core and foundation, which allows estimation of water infiltration during high water events. The DOMINO research project aims to develop a fiber optic based dike monitoring system that directly senses and measures any pore pressure change inside the dike structure. For this purpose, questions such as sensor location, number of sensors, measuring frequency, and required accuracy must be answered during sensor development. All of these questions may be initially answered with a finite element model that estimates the effects of pore pressure change at different locations along the cross section while providing a time-dependent estimate of a stability factor. The sensor aims to monitor two main failure mechanisms at the same time: piping erosion and macro-stability. Both mechanisms are modeled and assessed in detail with a finite element based dual permeability Darcy-Richards numerical solution. In that manner, it is possible to assess different sensing configurations under different loading scenarios (e.g., high water levels, rainfall events, and initial soil moisture and permeability conditions). The results obtained for the different configurations are then evaluated with an entropy-based performance measure. The added value of this modeling approach for sensor development is that it simultaneously models the piping erosion and macro-stability failure mechanisms in a time-dependent manner; the estimated pore pressures can thus be related to the monitored ones and to both failure mechanisms. Furthermore, the approach is intended to be used at a later stage for real-time monitoring of failure.

  11. Anchorage strength models for end-debonding predictions in RC beams strengthened with FRP composites

    NASA Astrophysics Data System (ADS)

    Nardini, V.; Guadagnini, M.; Valluzzi, M. R.

    2008-05-01

    The increase in the flexural capacity of RC beams obtained by externally bonding FRP composites to their tension side is often limited by the premature and brittle debonding of the external reinforcement. An in-depth understanding of this complex failure mechanism, however, has not yet been achieved. With specific regard to end-debonding failure modes, extensive experimental observations reported in the literature highlight the important distinction, often neglected in strength models proposed by researchers, between the peel-off and rip-off end-debonding types of failure. The peel-off failure is generally characterized by a failure plane located within the first few millimetres of the concrete cover, whilst the rip-off failure penetrates deeper into the concrete cover and propagates along the tensile steel reinforcement. A new rip-off strength model is described in this paper. The model proposed is based on the Chen and Teng peel-off model and relies upon additional theoretical considerations. The influence of the amount of the internal tensile steel reinforcement and the effective anchorage length of FRP are considered and discussed. The validity of the new model is analyzed further through comparisons with test results, findings of a numerical investigation, and a parametric study. The new rip-off strength model is assessed against a database comprising results from 62 beams tested by various researchers and is shown to yield less conservative results.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by (1) basing the model on moles rather than mass, (2) simplifying the thermal conductivity model, and (3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.

  13. Peer Review of Launch Environments

    NASA Technical Reports Server (NTRS)

    Wilson, Timmy R.

    2011-01-01

    Catastrophic failures of launch vehicles during launch and ascent are currently modeled using equivalent trinitrotoluene (TNT) estimates. This approach tends to over-predict the blast effect with subsequent impact to launch vehicle and crew escape requirements. Bangham Engineering, located in Huntsville, Alabama, assembled a less-conservative model based on historical failure and test data coupled with physical models and estimates. This white paper summarizes NESC's peer review of the Bangham analytical work completed to date.

  14. Fuzzy-based failure mode and effect analysis (FMEA) of a hybrid molten carbonate fuel cell (MCFC) and gas turbine system for marine propulsion

    NASA Astrophysics Data System (ADS)

    Ahn, Junkeon; Noh, Yeelyong; Park, Sung Ho; Choi, Byung Il; Chang, Daejun

    2017-10-01

    This study proposes a fuzzy-based FMEA (failure mode and effect analysis) for a hybrid molten carbonate fuel cell and gas turbine system for liquefied hydrogen tankers. An FMEA-based regulatory framework is adopted to analyze the non-conventional propulsion system and to understand the risk picture of the system. Since the participants in an FMEA rely on their subjective and qualitative experience, the conventional FMEA used for identifying failures that affect system performance inevitably involves inherent uncertainties. A fuzzy-based FMEA is introduced to express such uncertainties appropriately and to provide flexible access to a risk picture for a new system using fuzzy modeling. The hybrid system has 35 components and 70 potential failure modes. Significant failure modes occur in the fuel cell stack and rotary machinery. The fuzzy risk priority number is used to validate the crisp risk priority number in the FMEA.
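
    A minimal sketch of the fuzzy scoring idea: severity, occurrence, and detection are expressed as triangular fuzzy numbers, combined with a component-wise product approximation, and defuzzified by a simple centroid. The ratings are invented, and the paper's membership functions and rule base are not reproduced:

        def tfn_mul(a, b):
            """Approximate product of two triangular fuzzy numbers (l, m, u)."""
            return tuple(x * y for x, y in zip(a, b))

        def centroid(tfn):
            """Crisp score of a triangular fuzzy number (simple centroid)."""
            return sum(tfn) / 3.0

        # hypothetical expert ratings on a 1-10 scale
        severity   = (6, 7, 8)
        occurrence = (3, 4, 6)
        detection  = (4, 5, 7)

        fuzzy_rpn = tfn_mul(tfn_mul(severity, occurrence), detection)
        print(fuzzy_rpn, centroid(fuzzy_rpn))  # fuzzy RPN and crisp ranking score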

  15. Health information systems: failure, success and improvisation.

    PubMed

    Heeks, Richard

    2006-02-01

    The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field, who point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). The methods comprised literature and case analysis plus pilot testing of the developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This model is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences which exist in different HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.

  16. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension, as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that aging effects are not reflected in a realistic manner for the true state of a specific plant. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering the effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: (i) defining a process for selecting critical passive components and related aging mechanisms, (ii) aging model selection, (iii) calculating the probability that aging would cause the component to fail, (iv) uncertainty/sensitivity analyses, (v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and (vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.

  17. A two-stage model of fracture of rocks

    USGS Publications Warehouse

    Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.

    1996-01-01

    In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure are expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales, including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.

  18. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. To this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element modeling, the individual damage parameters are calculated as spring reaction forces, relative displacements, and energy release rates along the delamination front.

  19. Photoresist and stochastic modeling

    NASA Astrophysics Data System (ADS)

    Hansen, Steven G.

    2018-01-01

    Analysis of physical modeling results can provide unique insights into extreme ultraviolet stochastic variation, which augment, and sometimes refute, conclusions based on physical intuition and even wafer experiments. Simulations verify the primacy of "imaging critical" counting statistics (photons, electrons, and net acids) and the image/blur-dependent dose sensitivity in describing the local edge or critical dimension variation. But the failure of simple counting when resist thickness is varied highlights a limitation of this exact analytical approach, so a calibratable empirical model offers useful simplicity and convenience. Results presented here show that a wide range of physical simulation results can be well matched by an empirical two-parameter model based on blurred image log-slope (ILS) for lines/spaces and normalized ILS for holes. These results are largely consistent with a wide range of published experimental results; however, there is some disagreement with the recently published dataset of De Bisschop. The present analysis suggests that the origin of this model failure is an unexpected breakdown of the blurred ILS:dose-sensitivity relationship in that resist process. It is shown that a photoresist mechanism based on high photodecomposable quencher loading and high quencher diffusivity can give rise to pitch-dependent blur, which may explain the discrepancy.

  20. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence function-based robust estimation method, and a Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been carried out to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used, and the robust estimates are compared with classical maximum likelihood estimates.

  1. Thermomechanical Modeling of Sintered Silver - A Fracture Mechanics-based Approach: Extended Abstract: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paret, Paul P; DeVoto, Douglas J; Narumanchi, Sreekant V

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (greater than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.

  2. Fuzzy-information-based robustness of interconnected networks against attacks and failures

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai

    2016-09-01

    Cascading failure is fatal in applications, and its investigation is essential; it therefore became a focal topic in the field of complex networks in the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its utilization of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attack ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the network model under assortative and random coupling increases with the coupling probability. However, for disassortative coupling, there exists a critical phenomenon in the coupling probability. In addition, a critical value of the attack-information accuracy, at which it begins to affect network robustness, is observed. Finally, as a practical example, the interconnected AS-level Internet of South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides some guidelines for preventing cascading failures in the design of architecture and optimization of real-world interconnected networks.
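
    The tolerance-parameter effect can be reproduced qualitatively with a standard capacity-load cascade on a single network (a Motter-Lai-style sketch using betweenness as load; this is not the paper's coupled-network, fuzzy-information model):

        import networkx as nx

        def cascade_giant_component(graph, alpha, attacked):
            """Remove one node, then iteratively fail nodes whose load
            (betweenness) exceeds capacity = (1 + alpha) * initial load."""
            g = graph.copy()
            cap = {n: (1 + alpha) * l
                   for n, l in nx.betweenness_centrality(g).items()}
            g.remove_node(attacked)
            while True:
                load = nx.betweenness_centrality(g)
                failed = [n for n in g if load[n] > cap[n]]
                if not failed:
                    break
                g.remove_nodes_from(failed)
            return max((len(c) for c in nx.connected_components(g)), default=0)

        g = nx.barabasi_albert_graph(200, 2, seed=1)
        hub = max(g.degree, key=lambda d: d[1])[0]
        for alpha in (0.05, 0.2, 0.5):   # giant component grows with tolerance
            print(alpha, cascade_giant_component(g, alpha, hub))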

  3. Micromechanics of failure waves in glass. 2: Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinosa, H.D.; Xu, Y.; Brar, N.S.

    1997-08-01

    In an attempt to elucidate the failure mechanism responsible for the so-called failure waves in glass, numerical simulations of plate and rod impact experiments have been performed with a multiple-plane model. These simulations show that the failure wave phenomenon can be modeled by the nucleation and growth of penny-shaped shear defects from the specimen surface to its interior. The lateral stress increase, reduction of spall strength, and progressive attenuation of axial stress behind the failure front are properly predicted by the multiple-plane model. Numerical simulations of high-strain-rate pressure-shear experiments indicate that the model predicts reasonably well the shear resistance of the material at strain rates as high as 1 x 10^6/s. The agreement is believed to be the result of the model's capability of simulating damage-induced anisotropy. By examining the kinetics of the failure process in plate experiments, the authors show that the progressive glass spallation in the vicinity of the failure front and the rate of increase in lateral stress are more consistent with a representation of inelasticity based on shear-activated flow surfaces, inhomogeneous flow, and microcracking, rather than pure microcracking; in the former mechanism, microcracks are likely formed at a later time at the intersection of flow surfaces. In the case of rod-on-rod impact, stress and radial velocity histories predicted by the microcracking model are in agreement with the experimental measurements. Stress attenuation, pulse duration, and release structure are properly simulated. It is shown that failure wave speeds in excess of 3,600 m/s are required for adequate prediction of rod radial expansion.

  4. Some Aspects of the Failure Mechanisms in BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, David Donhang; Sampson, Michael J.

    2012-01-01

    The objective of this presentation is to gain insight into possible failure mechanisms in BaTiO3-based ceramic capacitors that may be associated with the reliability degradation that accompanies a reduction in dielectric thickness, as reported by Intel Corporation in 2010. The volumetric efficiency (microF/cm3) of a multilayer ceramic capacitor (MLCC) has been shown not to increase limitlessly, due to the grain size effect on the dielectric constant of the ferroelectric ceramic BaTiO3 material. The reliability of an MLCC has been discussed with respect to its structure. MLCCs with higher numbers of dielectric layers pose more challenges for the reliability of the dielectric material, which is the case for most base-metal-electrode (BME) capacitors. A number of MLCCs manufactured using both precious-metal-electrode (PME) and BME technology, with 25 V ratings and various chip sizes and capacitances, were tested at accelerated stress levels. Most of these MLCCs exhibited failure behavior with two mixed failure modes: the well-known rapid dielectric wearout and so-called "early failures." The two failure modes can be distinguished when the testing data are normalized to use-level and presented in a 2-parameter Weibull plot. The early failures had a slope parameter of Beta > 1, indicating that the early failures are not infant mortalities. Early failures are triggered by external electrical overstress and become dominant as dielectric layer thickness decreases, accompanied by a dramatic reduction in reliability. This indicates that early failures are the main cause of the reliability degradation in MLCCs as dielectric layer thickness decreases. All of the early failures are characterized by an avalanche-like breakdown leakage current and have been attributed to minor extrinsic construction defects introduced during fabrication of the capacitors. A reliability model including dielectric thickness and extrinsic defect feature size is proposed in this presentation. The model can be used to explain the Intel-reported reliability degradation in MLCCs with respect to the reduction of dielectric thickness. It can also be used to estimate the reliability of an MLCC based on its construction and microstructure parameters, such as dielectric thickness, average grain size, and number of dielectric layers. Measures for preventing early failures are also discussed in this document.

  5. Optimization of cascading failure on complex network based on NNIA

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Zhu, Zhiliang; Qi, Yi; Yu, Hai; Xu, Yanjie

    2018-07-01

    Recently, the robustness of networks under cascading failure has attracted extensive attention. Different from previous studies, we concentrate on how to improve the robustness of networks from the perspective of intelligent optimization. We establish two multi-objective optimization models that comprehensively consider the operational cost of the edges in the networks and the robustness of the networks, and apply the NNIA (Non-dominated Neighbor Immune Algorithm) to solve them. We performed simulations on Barabási-Albert (BA) and Erdős-Rényi (ER) networks. In the solutions, we identify the edges that facilitate the propagation of cascading failures and the edges that suppress it. From these conclusions, optimal protection measures can be taken to weaken the damage caused by cascading failures. We also consider the feasibility of the edges' operational costs in actual situations, so that a more practical choice can be made based on operational cost. Our work will be helpful in the design of highly robust networks and in improving the robustness of existing networks in the future.

  6. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    NASA Technical Reports Server (NTRS)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  7. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data.

    PubMed

    Golas, Sara Bersche; Shibahara, Takuma; Agboola, Stephen; Otaki, Hiroko; Sato, Jumpei; Nakae, Tatsuya; Hisamitsu, Toru; Kojima, Go; Felsted, Jennifer; Kakarmath, Sujay; Kvedar, Joseph; Jethwani, Kamal

    2018-06-22

    Heart failure is one of the leading causes of hospitalization in the United States. Advances in big data solutions allow for storage, management, and mining of large volumes of structured and semi-structured data, such as complex healthcare data. Applying these advances to complex healthcare data has led to the development of risk prediction models to help identify patients who would benefit most from disease management programs in an effort to reduce readmissions and healthcare cost, but the results of these efforts have been varied. The primary aim of this study was to develop a 30-day readmission risk prediction model for heart failure patients discharged from a hospital admission. We used longitudinal electronic medical record data of heart failure patients admitted within a large healthcare system. Feature vectors included structured demographic, utilization, and clinical data, as well as selected extracts of unstructured data from clinician-authored notes. The risk prediction model was developed using deep unified networks (DUNs), a new mesh-like network structure of deep learning designed to avoid over-fitting. The model was validated with 10-fold cross-validation, and the results were compared to models based on logistic regression, gradient boosting, and maxout networks. Overall model performance was assessed using the concordance statistic. We also selected a discrimination threshold based on maximum projected cost saving to the Partners Healthcare system. Data from 11,510 patients with 27,334 admissions and 6369 30-day readmissions were used to train the model. After data processing, the final model included 3512 variables. The DUNs model had the best performance after 10-fold cross-validation. AUCs for the prediction models were 0.664 ± 0.015, 0.650 ± 0.011, 0.695 ± 0.016, and 0.705 ± 0.015 for logistic regression, gradient boosting, maxout networks, and DUNs, respectively. The DUNs model had an accuracy of 76.4% at the classification threshold that corresponded with maximum cost saving to the hospital. Deep learning techniques performed better than other traditional techniques in developing this EMR-based prediction model for 30-day readmissions in heart failure patients. Such models can be used to identify heart failure patients with impending hospitalization, enabling care teams to target interventions at their most high-risk patients and improve overall clinical outcomes.
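
    The DUNs architecture is not reproduced here, but the study's logistic-regression comparator with 10-fold cross-validated AUC is easy to sketch; the synthetic data below merely stands in for the EMR feature matrix:

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # synthetic stand-in for the EMR features (the study used 3512
        # variables over 27,334 admissions; this is illustrative only)
        X, y = make_classification(n_samples=5_000, n_features=50,
                                   weights=[0.77, 0.23], random_state=0)

        model = LogisticRegression(max_iter=1000)
        aucs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
        print(f"AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")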

  8. Model authoring system for fail safe analysis

    NASA Technical Reports Server (NTRS)

    Sikora, Scott E.

    1990-01-01

    The Model Authoring System is a prototype software application for generating fault tree analyses and failure modes and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or a failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.

  9. Models of research-operational collaboration for behavioral health in space.

    PubMed

    Palinkas, Lawrence A; Allred, Charlene A; Landsverk, John A

    2005-06-01

    Addressing the behavioral health needs of astronauts clearly requires collaborations involving researchers, clinicians and operational support personnel, program administrators, and the astronauts themselves. However, such collaborations are often compromised by a failure to understand the needs, priorities, constraints, and preferences of potential collaborators. This failure, in turn, can lead to research of poor quality, implementation of programs and procedures that are not evidence-based, and an increased risk of morbidity and mission failure. The experiences of social marketing strategies in health promotion and disease prevention, cultural exchange between developers of evidence-based treatments and consumers, and dissemination and implementation of evidence-based practices in mental health services offer three different models of research-operational collaboration with relevance to behavioral health in space. Central to each of these models are the patterns of interpersonal relations and the individual, social, and organizational characteristics that influence these patterns. Any program or countermeasure for behavioral health in space must be both needs-based and evidence-based. The successful development, dissemination, implementation, and sustainability of such a program require communication, collaboration, and consensus among all key stakeholders. To accomplish this, all stakeholders must participate in creating a culture of operational research.

  10. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events; the probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin hypercube sampling is used to stochastically assess events, and the overall probability of failure is taken as the maximum probability of failure across all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
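
    A schematic of the stochastic-sampling approach: draw Latin hypercube samples for the uncertain inputs and estimate a failure probability from a limit-state function; the distributions and the limit state g = strength - load below are hypothetical:

        import numpy as np
        from scipy.stats import norm, qmc

        sampler = qmc.LatinHypercube(d=2, seed=1)
        u = sampler.random(n=10_000)                 # uniform samples in [0, 1)^2
        load     = norm(loc=100.0, scale=15.0).ppf(u[:, 0])
        strength = norm(loc=150.0, scale=10.0).ppf(u[:, 1])

        p_fail = np.mean(strength - load < 0.0)      # g(x) < 0 => failure
        print(f"estimated P(failure) = {p_fail:.4f}")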

  11. CRISPR/Cas9 Technology Targeting Fas Gene Protects Mice From Concanavalin-A Induced Fulminant Hepatic Failure.

    PubMed

    Liang, Wei-Cheng; Liang, Pu-Ping; Wong, Cheuk-Wa; Ng, Tzi-Bun; Huang, Jun-Jiu; Zhang, Jin-Fang; Waye, Mary Miu-Yee; Fu, Wei-Ming

    2017-03-01

    Fulminant hepatic failure is a life-threatening disease which occurs in patients without preexisting liver disease. Currently, there is no ideal therapeutic tool for the treatment of fulminant hepatic failure. Recent studies suggested that a novel technology termed CRISPR/Cas9 may be a promising approach for its treatment. In this project, we designed single chimeric guide RNAs specifically targeting the genomic regions of the mouse Fas gene. The in vitro and in vivo effects of the sgRNAs on the production of Fas protein were examined in cultured mouse cells and in a hydrodynamic injection-based mouse model, respectively. The in vivo delivery of CRISPR/Cas9 could maintain liver homeostasis and protect hepatocytes from Fas-mediated apoptosis in the fulminant hepatic failure model. Our study indicates the clinical potential of developing the CRISPR/Cas9 system as a novel therapeutic strategy to rescue Concanavalin-A-induced fulminant hepatic failure in the mouse model. J. Cell. Biochem. 118: 530-536, 2017. © 2016 Wiley Periodicals, Inc.

  12. Statistical analysis of cascading failures in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin

    2010-12-01

    We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for the automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators, and lines. Our model is quasi-static in the causal, discrete-time, and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39, and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between the average numbers of removed loads, generators, and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.

  13. Finite Element Model for Failure Study of Two-Dimensional Triaxially Braided Composite

    NASA Technical Reports Server (NTRS)

    Li, Xuetao; Binienda, Wieslaw K.; Goldberg, Robert K.

    2010-01-01

    A new three-dimensional finite element model of two-dimensional triaxially braided composites is presented in this paper. This meso-scale modeling technique is used to examine and predict the deformation and damage observed in tests of straight-sided specimens. A unit cell based approach is used to take into account the braiding architecture as well as the mechanical properties of the fiber tows, the matrix, and the fiber tow-matrix interface. A 0°/±60° braiding configuration has been investigated by conducting static finite element analyses. Failure initiation and progressive degradation have been simulated in the fiber tows by use of the Hashin failure criteria and a damage evolution law. The fiber tow-matrix interface was modeled using a cohesive zone approach to capture any fiber-matrix debonding. By comparing the analytical results to those obtained experimentally, the applicability of the developed model was assessed and the failure process was investigated.
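
    For reference, the plane-stress Hashin fiber-failure indices used for this kind of damage initiation take the form below (failure is predicted when an index reaches 1); the tow-level stresses and strengths in the example are hypothetical:

        def hashin_fiber(sig11, tau12, Xt, Xc, S12):
            """Hashin fiber failure index, plane-stress form.
            Tension: (sig11/Xt)**2 + (tau12/S12)**2; compression: (sig11/Xc)**2."""
            if sig11 >= 0.0:
                return (sig11 / Xt) ** 2 + (tau12 / S12) ** 2
            return (sig11 / Xc) ** 2

        # hypothetical values in MPa
        print(hashin_fiber(sig11=1800.0, tau12=40.0,
                           Xt=2200.0, Xc=1600.0, S12=90.0))  # < 1: no failure yet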

  14. A model for the progressive failure of laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, D. H.; Lo, D. C.

    1991-01-01

    Laminated continuous fiber polymeric composites are capable of sustaining substantial load-induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model will be utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.

  15. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because engine failure modes are diverse, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a better statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this favors the wider adoption of the Weibull distribution model in engineering applications.
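
    A mixed Weibull reliability function is simple to evaluate once its parameters are known. The following Python sketch shows the basic mixture form such models use; the weights, shapes, and scales here are invented for illustration and are not the paper's calibrated values.

      import numpy as np

      def mixed_weibull_reliability(t, components):
          """R(t) = sum_i w_i * exp(-(t / eta_i)**beta_i); weights w_i sum to 1.
          components: list of (weight w, shape beta, scale eta)."""
          t = np.asarray(t, dtype=float)
          return sum(w * np.exp(-(t / eta) ** beta) for w, beta, eta in components)

      # two hypothetical engine failure modes: early wear-in and late wear-out
      components = [(0.3, 0.8, 500.0), (0.7, 3.5, 4000.0)]
      hours = np.array([100.0, 1000.0, 3000.0, 5000.0])
      print(mixed_weibull_reliability(hours, components))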

  16. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software not only is vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launching schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on the failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.

  17. Reliability Analysis of Systems Subject to First-Passage Failure

    NASA Technical Reports Server (NTRS)

    Lutes, Loren D.; Sarkani, Shahram

    2009-01-01

    An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
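
    In practice such first-passage probabilities are often estimated by Monte Carlo simulation of the uncertain response against the capacity. A minimal Python sketch, assuming (purely for illustration) that the response can be approximated by a discretized Brownian motion and that the capacity is a fixed threshold:

      import numpy as np

      rng = np.random.default_rng(42)

      def first_passage_probability(n_sims=20_000, n_steps=500, dt=0.01, capacity=3.0):
          # accumulate Gaussian increments; a stand-in for the real response process
          steps = rng.normal(0.0, np.sqrt(dt), size=(n_sims, n_steps))
          paths = np.cumsum(steps, axis=1)
          # failure = the running maximum crosses the capacity within the horizon
          return (paths.max(axis=1) >= capacity).mean()

      print(f"P(first-passage failure) ~= {first_passage_probability():.4f}")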

  18. Enhanced stability of steep channel beds to mass failure and debris flow initiation

    NASA Astrophysics Data System (ADS)

    Prancevic, J.; Lamb, M. P.; Ayoub, F.; Venditti, J. G.

    2015-12-01

    Debris flows dominate bedrock erosion and sediment transport in very steep mountain channels, and are often initiated from failure of channel-bed alluvium during storms. While several theoretical models exist to predict mass failures, few have been tested because observations of in-channel bed failures are extremely limited. To fill this gap in our understanding, we performed laboratory flume experiments to identify the conditions necessary to initiate bed failures in non-cohesive sediment of different sizes (D = 0.7 mm to 15 mm) on steep channel-bed slopes (S = 0.45 to 0.93) and in the presence of water flow. In beds composed of sand, failures occurred under sub-saturated conditions on steep bed slopes (S > 0.5) and under super-saturated conditions at lower slopes. In beds of gravel, however, failures occurred only under super-saturated conditions at all tested slopes, even those approaching the dry angle of repose. Consistent with theoretical models, mass failures under super-saturated conditions initiated along a failure plane approximately one grain-diameter below the bed surface, whereas the failure plane was located near the base of the bed under sub-saturated conditions. However, all experimental beds were more stable than predicted by 1-D infinite-slope stability models. In partially saturated sand, enhanced stability appears to result from suction stress. Enhanced stability in gravel may result from turbulent energy losses in pores or increased granular friction for failures that are shallow with respect to grain size. These grain-size dependent effects are not currently included in stability models for non-cohesive sediment, and they may help to explain better the timing and location of debris flow occurrence.
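
    For reference, the 1-D infinite-slope stability model against which the experiments were compared reduces to a one-line factor of safety. A sketch assuming cohesionless sediment and seepage parallel to the slope; the parameter values are illustrative only.

      import numpy as np

      def infinite_slope_fs(phi_deg, beta_deg, z, m=0.0, c=0.0, gamma=19.0, gamma_w=9.81):
          """Factor of safety for a planar failure at depth z [m] on an infinite slope.
          phi_deg: friction angle, beta_deg: slope angle, m: saturated fraction of z,
          c: cohesion [kPa] (zero for non-cohesive sediment), gamma: unit weight [kN/m3]."""
          beta, phi = np.radians(beta_deg), np.radians(phi_deg)
          driving = gamma * z * np.sin(beta) * np.cos(beta)
          effective_normal = (gamma - m * gamma_w) * z * np.cos(beta) ** 2
          return (c + effective_normal * np.tan(phi)) / driving

      # dry vs. fully saturated bed on a slope near the dry angle of repose
      print(infinite_slope_fs(phi_deg=40.0, beta_deg=35.0, z=0.05, m=0.0))
      print(infinite_slope_fs(phi_deg=40.0, beta_deg=35.0, z=0.05, m=1.0))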

  19. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (e.g., the Quantitative Risk Assessment Software, QRAS). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
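
    The combination step can be sketched as a Monte Carlo over a series system: sample each failure mode's time to failure, apply its mitigation factor, and flag loss of mission if any unmitigated mode occurs within the mission. All distributions and mitigation probabilities below are invented placeholders, not the paper's hypothetical PSA results.

      import numpy as np

      rng = np.random.default_rng(1)
      n, mission_hours = 100_000, 5_000

      # (name, time-to-failure sampler [hours], P(mode propagates to loss of mission))
      modes = [
          ("turbine blade fatigue",  lambda k: rng.weibull(2.5, k) * 8_000,  0.30),
          ("disk creep",             lambda k: rng.weibull(1.8, k) * 12_000, 0.15),
          ("controller electronics", lambda k: rng.exponential(20_000, k),   0.50),
      ]

      lom = np.zeros(n, dtype=bool)
      for name, sampler, p_propagate in modes:
          occurs = sampler(n) < mission_hours      # mode occurs during the mission
          escapes = rng.random(n) < p_propagate    # mitigation fails to contain it
          lom |= occurs & escapes
      print(f"P(loss of mission) ~= {lom.mean():.4f}")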

  20. Mechanistic Considerations Used in the Development of the PROFIT PCI Failure Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pankaskie, P. J.

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) Interaction (PCI) failure model for estimating the probability of failure in transient increases in power (PROFIT) was developed. PROFIT is based on 1) standard statistical methods applied to available PCI fuel failure data and 2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent strain energy absorption to failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding. Assuming that the power ramping rate is the operating corollary of strain rate in the Zircaloy cladding, the variables of first-order importance in the PCI fuel failure phenomenon are postulated to be: 1. pre-transient fuel rod power, P_I, 2. transient increase in fuel rod power, ΔP, 3. fuel burnup, Bu, and 4. the constitutive material property of the Zircaloy cladding, SEAF.

  1. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
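
    The PDA recipe — treat each driving parameter as a random variable, propagate it through a physics-based response, and study sensitivities — can be sketched as below. The response function, the distributions, and the parameter names (thrust_err, wind, sep_delay) are invented stand-ins, not the Ares I models.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200_000

      # hypothetical driving parameters, each a random variable
      thrust_err = rng.normal(0.0, 0.02, n)      # relative thrust error
      wind = rng.normal(0.0, 8.0, n)             # wind speed at staging [m/s]
      sep_delay = rng.lognormal(-2.0, 0.3, n)    # separation timing delay [s]

      # stand-in physics response: clearance margin at stage separation
      margin = 1.0 - 4.0 * np.abs(thrust_err) - 0.01 * np.abs(wind) - 0.8 * sep_delay
      failure = (margin < 0.0).astype(float)
      print(f"P(failure) ~= {failure.mean():.4f}")

      # crude sensitivity: correlation of each input with the failure indicator
      for name, x in [("thrust_err", np.abs(thrust_err)),
                      ("wind", np.abs(wind)), ("sep_delay", sep_delay)]:
          print(f"{name:10s} corr = {np.corrcoef(x, failure)[0, 1]:+.3f}")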

  2. Simulation as a preoperative planning approach in advanced heart failure patients. A retrospective clinical analysis.

    PubMed

    Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio

    2018-05-02

    Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and a zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where the impact of left ventricular assist devices has played a significant role as a bridge to transplant and more recently as a long-term solution for non-eligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients. Willingness to adopt this integrated approach may be the key to further progress.

  3. Probabilistic Analysis of a Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Krishnamurthy, Thiagarajan

    2011-01-01

    An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10^-11 due to the conservative nature of the factors of safety on the deterministic loads.

  4. Subject specific finite element modeling of periprosthetic femoral fracture using element deactivation to simulate bone failure.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Walter, William L; Appleyard, Richard; Shi, Angela; Li, Qing; Ruys, Andrew J

    2015-06-01

    Subject-specific finite element (FE) modeling methodology could predict peri-prosthetic femoral fracture (PFF) for cementless hip arthroplasty in the early postoperative period. This study develops a methodology for subject-specific finite element modeling that uses the element deactivation technique to simulate bone failure and validates it against experimental testing, thereby predicting peri-prosthetic femoral fracture in the early postoperative period. Material assignments for biphasic and triphasic models were undertaken. Failure modeling with the element deactivation feature available in ABAQUS 6.9 was used to simulate crack initiation and propagation in the bony tissue based upon a threshold of fracture strain. The crack mode for the biphasic models was very similar to the experimental testing crack mode, with a similar shape and path of the crack. The fracture load is sensitive to the friction coefficient at the implant-bone interface. The development of a novel technique to simulate bone failure by element deactivation of subject-specific finite element models could aid prediction of fracture load in addition to fracture risk characterization for PFF. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time (RPSFT) models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
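
    The resampling bootstrap mentioned above can be illustrated with a toy two-arm comparison. A minimal sketch using synthetic, uncensored survival times (a real RPSFT analysis would re-run the entire switching adjustment inside each bootstrap replicate):

      import numpy as np

      rng = np.random.default_rng(7)
      control = rng.exponential(12.0, 150)   # hypothetical arm-level survival times
      treated = rng.exponential(18.0, 150)

      # percentile bootstrap for incremental mean survival (treated - control)
      diffs = np.array([
          rng.choice(treated, treated.size, replace=True).mean()
          - rng.choice(control, control.size, replace=True).mean()
          for _ in range(2_000)
      ])
      print("point estimate:", treated.mean() - control.mean())
      print("95% CI:", np.percentile(diffs, [2.5, 97.5]))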

  6. Damage-based life prediction model for uniaxial low-cycle stress fatigue of super-elastic NiTi shape memory alloy microtubes

    NASA Astrophysics Data System (ADS)

    Song, Di; Kang, Guozheng; Kan, Qianhua; Yu, Chao; Zhang, Chuanzeng

    2015-08-01

    Based on the experimental observations of the uniaxial low-cycle stress fatigue failure of super-elastic NiTi shape memory alloy microtubes (Song et al 2015 Smart Mater. Struct. 24 075004) and a new definition of the damage variable corresponding to the variation of accumulated dissipation energy, a phenomenological damage model is proposed to describe the damage evolution of the NiTi microtubes during cyclic loading. Then, with a failure criterion of Dc = 1, the fatigue lives of the NiTi microtubes are predicted by the damage-based model; the predicted lives are in good agreement with the experimental ones, and all of the points are located within a factor-of-1.5 error band.
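
    The damage-accumulation logic — add the normalized dissipation energy of each cycle and declare failure when D reaches Dc = 1 — takes only a few lines. The per-cycle dissipation function and constants below are invented for illustration, not the calibrated NiTi values.

      import numpy as np

      def cycles_to_failure(energy_per_cycle, total_dissipation_at_failure, Dc=1.0):
          """Accumulate D = sum(dissipated energy per cycle) / (total at failure)."""
          D, n = 0.0, 0
          while D < Dc:
              D += energy_per_cycle(n) / total_dissipation_at_failure
              n += 1
          return n

      # hypothetical dissipation: decays toward a stabilized per-cycle value [mJ]
      per_cycle = lambda n: 0.8 + 1.2 * np.exp(-n / 50.0)
      print("predicted life:", cycles_to_failure(per_cycle, 2_000.0), "cycles")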

  7. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

    Physics of failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  8. Construct validity of the Chinese version of the Self-care of Heart Failure Index determined using structural equation modeling.

    PubMed

    Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu

    2015-01-01

    The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese patients with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. The structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and that self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing the self-care of Chinese patients with heart failure.

  9. Model Based Autonomy for Robust Mars Operations

    NASA Technical Reports Server (NTRS)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  10. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure (MTP), and to compare the associated process-based risk to that of an automated commissioning process (ACP) by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode, and the risk priority numbers (RPN = S × O × D) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, the RPN scores, and the 10 highest-risk failure modes were described and cross-compared between the two approaches. RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the MTP resulted in 47 failure modes with an average RPN of 161 and an average severity of 6.7; the highest-risk process, "Measurement Equipment Selection", had a maximum RPN of 640. The FMEA of the ACP resulted in 36 failure modes with an average RPN of 73 and an average severity of 6.7; the highest-risk process, "EPID Calibration", had a maximum RPN of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and premodeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.
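
    The RPN arithmetic itself is simple: RPN = S × O × D per failure mode, then rank. A sketch in which only the two highest-risk process names come from the abstract; all scores and the other mode names are invented.

      # TG-100-style FMEA scoring: each failure mode rated 1-10 for severity (S),
      # occurrence (O), and detectability (D); risk priority number RPN = S * O * D.
      failure_modes = [
          ("Measurement Equipment Selection", 8, 8, 10),
          ("EPID Calibration",                8, 9,  8),
          ("beam data interpolation",         7, 5,  6),   # invented
          ("output factor transcription",     9, 3,  4),   # invented
      ]

      for name, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3],
                                  reverse=True):
          print(f"{name:35s} RPN = {s * o * d}")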

  11. Attack Vulnerability of Network Controllability

    PubMed Central

    2016-01-01

    Controllability of complex networks has attracted much attention, and understanding the robustness of network controllability against potential attacks and failures is of practical significance. In this paper, we systematically investigate the attack vulnerability of network controllability for the canonical model networks as well as the real-world networks subject to attacks on nodes and edges. The attack strategies are selected based on degree and betweenness centralities calculated for either the initial network or the current network during the removal, with random failure serving as a comparison baseline. It is found that the node-based strategies are often more harmful to the network controllability than the edge-based ones, and so are the recalculated strategies than their counterparts. The Barabási-Albert scale-free model, which has a highly biased structure, proves to be the most vulnerable of the tested model networks. In contrast, the Erdős-Rényi random model, which lacks structural bias, exhibits much better robustness to both node-based and edge-based attacks. We also survey the control robustness of 25 real-world networks, and the numerical results show that most real networks are control robust to random node failures, which has not been observed in the model networks. The recalculated betweenness-based strategy is the most efficient way to harm the controllability of real-world networks. In addition, we find that the edge degree is not a good quantity to measure the importance of an edge in terms of network controllability. PMID:27588941
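
    Controllability robustness studies of this kind usually quantify the minimum number of driver nodes via a maximum matching (the structural controllability result of Liu et al.). A sketch using networkx with an arbitrary random digraph and a recalculated degree-based node attack; graph size and attack budget are arbitrary choices.

      import networkx as nx

      def n_driver_nodes(G):
          """N_D = max(N - |maximum matching|, 1) on the bipartite representation."""
          B = nx.Graph()
          outs = {("out", u) for u in G}
          B.add_nodes_from(outs, bipartite=0)
          B.add_nodes_from((("in", v) for v in G), bipartite=1)
          B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
          matching = nx.bipartite.maximum_matching(B, top_nodes=outs)
          return max(G.number_of_nodes() - len(matching) // 2, 1)

      G = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)
      print("driver nodes before attack:", n_driver_nodes(G))
      for _ in range(10):                      # recalculated degree-based attack
          G.remove_node(max(G, key=G.degree))
      print("driver nodes after attack: ", n_driver_nodes(G))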

  13. LARC-1: a Los Alamos release calculation program for fission product transport in HTGRs during the LOFC accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carruthers, L.M.; Lee, C.E.

    1976-10-01

    The theoretical and numerical data base development of the LARC-1 code is described. Four analytical models of fission product release from an HTGR core during the loss of forced circulation accident are developed. Effects of diffusion, adsorption and evaporation of the metallics and precursors are neglected in this first LARC model. Comparison of the analytic models indicates that the constant release-renormalized model is adequate to describe the processes involved. The numerical data base for release constants, temperature modeling, fission product release rates, coated fuel particle failure fraction and aged coated fuel particle failure fractions is discussed. Analytic fits and graphic displays for these data are given for the Ft. St. Vrain and GASSAR models.

  14. Limited improvement of incorporating primary circulating prostate cells with the CAPRA score to predict biochemical failure-free outcome of radical prostatectomy for prostate cancer.

    PubMed

    Murray, Nigel P; Aedo, Socrates; Fuentealba, Cynthia; Jacob, Omar; Reyes, Eduardo; Novoa, Camilo; Orellana, Sebastian; Orellana, Nelson

    2016-10-01

    To establish a prediction model for early biochemical failure based on the Cancer of the Prostate Risk Assessment (CAPRA) score, the presence or absence of primary circulating prostate cells (CPCs), and the number of primary CPCs (nCPC) per 8 ml blood sample detected before surgery. A prospective single-center study of men who underwent radical prostatectomy as monotherapy for prostate cancer. Clinical-pathological findings were used to calculate the CAPRA score. Before surgery blood was taken for CPC detection; mononuclear cells were obtained using differential gel centrifugation, and CPCs were identified using immunocytochemistry. A CPC was defined as a cell expressing prostate-specific antigen and P504S, and the presence or absence of CPCs and the number of cells detected per 8 ml blood sample were registered. Patients were followed up for up to 5 years; biochemical failure was defined as a prostate-specific antigen > 0.2 ng/ml. The validity of the CAPRA score was calibrated using partial validation, and fractional polynomial Cox proportional hazard regression was used to build 3 models, which underwent decision curve analysis (DCA) to determine their predictive value with respect to biochemical failure. A total of 267 men participated, mean age 65.80 years, and after 5 years of follow-up the biochemical-failure-free survival was 67.42%. The model using the CAPRA score showed a hazard ratio (HR) of 5.76 between low- and high-risk groups, the model using CPC a HR of 26.84 between positive and negative groups, and the combined model a HR of 4.16 for the CAPRA score and 19.93 for CPC. Using the continuous variable nCPC gave no improvement in predictive value over the model using a positive-negative CPC result. The combined CAPRA-nCPC model showed improved predictive performance for biochemical failure by Harrell's C concordance test and a net benefit on DCA in comparison with either model used separately, but this improvement was minimal. The use of the presence or absence of primary CPCs alone did not predict aggressive disease or biochemical failure. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Detailed investigation of causes of avionics field failures

    NASA Astrophysics Data System (ADS)

    Kallis, J. M.; Buechler, D. W.; Richardson, Z. C.; Backes, P. G.; Lopez, S. B.; Erickson, J. J.; van Westerhuyzen, D. H.

    A detailed analysis of digital and analog modules from the F-15 AN/APG-63 Radar was performed to identify the kinds, types, and number of life models based on observed failure modes, mechanisms, locations, and characteristics needed to perform a Failure Free Operating Period prediction for these items. It is found that a significant fraction of the failures of the analog module and a small fraction of those of the digital module resulted from the exacerbation of latent defects by environmental stresses. It is also found that the fraction of failures resulting from thermal cycling and vibration is small.

  16. Continuum Damage Mechanics Models for the Analysis of Progressive Failure in Open-Hole Tension Laminates

    NASA Technical Reports Server (NTRS)

    Song, Kyonchan; Li, Yingyong; Rose, Cheryl A.

    2011-01-01

    The performance of a state-of-the-art continuum damage mechanics model for intralaminar damage, coupled with a cohesive zone model for delamination, is examined for failure prediction of quasi-isotropic open-hole tension laminates. Limitations of continuum representations of intra-ply damage and the effect of mesh orientation on the analysis predictions are discussed. It is shown that accurate prediction of matrix crack paths and stress redistribution after cracking requires a mesh aligned with the fiber orientation. Based on these results, an aligned mesh is proposed for analysis of the open-hole tension specimens consisting of different meshes within the individual plies, such that the element edges are aligned with the ply fiber direction. The modeling approach is assessed by comparison of analysis predictions to experimental data for specimen configurations in which failure is dominated by complex interactions between matrix cracks and delaminations. It is shown that the different failure mechanisms observed in the tests are well predicted. In addition, the modeling approach is demonstrated to predict proper trends in the effect of scaling on strength and failure mechanisms of quasi-isotropic open-hole tension laminates.

  17. A vector-based failure detection and isolation algorithm for a dual fail-operational redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R.; Bailey, Melvin L.

    1987-01-01

    A vector-based failure detection and isolation technique for a skewed array of two degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A gyro bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
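
    The parity-equation idea is to project the redundant measurements onto the left null space of the measurement geometry, leaving a residual that is insensitive to the true state. A minimal sketch with an invented six-sensor geometry, noise level, bias jump, and threshold:

      import numpy as np
      from scipy.linalg import null_space

      rng = np.random.default_rng(3)

      H = rng.normal(size=(6, 3))        # 6 skewed sensor axes measuring a 3-axis state
      V = null_space(H.T).T              # parity matrix: V @ H = 0

      x = np.array([0.10, -0.20, 0.05])              # true state
      m = H @ x + 0.001 * rng.normal(size=6)         # measurements with noise
      m[2] += 0.05                                   # bias-jump failure on sensor 2

      p = V @ m                                      # parity vector (state-free)
      detected = np.linalg.norm(p) > 0.01            # threshold tuned to the noise
      scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)   # match failure signatures
      print("detected:", bool(detected), "| isolated sensor:", int(np.argmax(scores)))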

  18. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the structure effect on tunnel stability in western mining of China, three typical numerical models were built based on the strain-softening constitutive model and the linear elastic-perfectly plastic model for soft rock and interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relations and failure characteristics of the three models differ from one another. The combination model without an interface or with a strong interface presented continuous failure, while the weak interface exhibited a 'cut-off' effect. Thus, conceptual models of a bi-material model and a bi-body model were established. Then numerical experiments of tri-axial compression were carried out for the two models. The relationships between stress evolution, failure zone and deformation rate fluctuations, as well as the displacement of the interface, were analyzed in detail. Results show that two breakaway points of the deformation rate mark the initiation and the penetration of the main rupture, respectively; they are distinguishable due to the large fluctuations. The bi-material model shows generally continuous failure, while the bi-body model shows a 'V'-type shear zone in the weak body and failure in the strong body near the interface due to the interface effect. With increasing confining pressure, the 'cut-off' effect of the weak interface becomes less obvious. These conclusions lay the theoretical foundation for further development of a constitutive model for the soft rock-coal combination body.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
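
    The two-stage scheme is easy to sketch: screen a large Monte Carlo sample with a cheap surrogate, then re-evaluate only the samples near the failure boundary with the expensive model. The toy "expensive" model, quadratic surrogate, threshold, and band width below are all invented.

      import numpy as np

      rng = np.random.default_rng(5)

      def expensive_model(x):            # stand-in for the CPU-demanding simulator
          return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 1]

      threshold = 2.2                                  # failure: output > threshold
      X = rng.normal(size=(200_000, 2))                # Monte Carlo input samples

      # stage 1: quadratic surrogate fitted on a small design, used to screen X
      def feats(x):
          return np.column_stack([np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])
      Xd = rng.normal(size=(200, 2))
      coef, *_ = np.linalg.lstsq(feats(Xd), expensive_model(Xd), rcond=None)
      y = feats(X) @ coef

      # stage 2: correct the surrogate only near the failure boundary
      band = np.abs(y - threshold) < 0.2
      y[band] = expensive_model(X[band])
      print("P_f ~=", (y > threshold).mean(), "| re-evaluated:", int(band.sum()))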

  20. Unified continuum damage model for matrix cracking in composite rotor blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollayi, Hemaraju; Harursampath, Dineshkumar

    This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how this affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment leading to severe vibratory loads present in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. The physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, matrix failure in compression and matrix failure in tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking analysis is performed with two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, illustrating the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic load.

  1. Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model

    PubMed Central

    NING, JING; QIN, JING; SHEN, YU

    2014-01-01

    The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
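
    For concreteness, a parametric Weibull AFT model for right-censored data can be fit by straightforward maximum likelihood, as sketched below on synthetic data. This is plain MLE, not the paper's score estimating equations; all data and parameter values are simulated.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 300
      X = rng.normal(size=(n, 2))
      # Weibull AFT: log T = X @ beta + sigma * W, W standard extreme value (minimum)
      W = np.log(rng.exponential(size=n))        # log of Exp(1) is standard Gumbel(min)
      T = np.exp(X @ np.array([0.8, -0.5]) + 0.7 * W)
      C = rng.exponential(2.5, size=n)           # independent censoring times
      t, delta = np.minimum(T, C), (T <= C).astype(float)

      def negloglik(params):
          beta, sigma = params[:2], np.exp(params[2])
          w = (np.log(t) - X @ beta) / sigma
          # event: log density (up to a parameter-free -log t term); censored: log survival
          ll = delta * (w - np.exp(w) - np.log(sigma)) - (1.0 - delta) * np.exp(w)
          return -ll.sum()

      fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
      print("beta_hat:", fit.x[:2].round(3), "sigma_hat:", np.exp(fit.x[2]).round(3))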

  2. Bayesian transformation cure frailty models with multivariate failure time data.

    PubMed

    Yin, Guosheng

    2008-12-10

    We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.

  3. LS-DYNA Simulation of Hemispherical-punch Stamping Process Using an Efficient Algorithm for Continuum Damage Based Elastoplastic Constitutive Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang

    An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross-section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as a user material subroutine (UMAT). Pure stretch experiments with a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of the damage parameters is carried out, and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predicts the failure behavior in copper sheets compared to experimental data is reported as well.
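
    The core of such a damage-coupled material routine is an internal damage variable that degrades the stress and triggers element deletion at a critical value. A one-element uniaxial sketch with an invented linear damage evolution law (a real UMAT calibrates the evolution against experiments):

      import numpy as np

      def damaged_stress(strain, E=70e3, eps0=0.002, k=120.0, d_crit=0.35):
          """Uniaxial isotropic damage: sigma = (1 - d) * E * eps; the element is
          'deleted' (zero stress) once the damage d reaches the critical value."""
          d = 0.0 if strain < eps0 else min(1.0, k * (strain - eps0))
          deleted = d >= d_crit
          return (0.0 if deleted else (1.0 - d) * E * strain), d, deleted

      for eps in np.linspace(0.0, 0.006, 7):
          sigma, d, deleted = damaged_stress(eps)
          print(f"eps={eps:.4f}  sigma={sigma:8.2f} MPa  d={d:.3f}  deleted={deleted}")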

  4. Physics-based Entry, Descent and Landing Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Huynh, Loc C.; Manning, Ted

    2014-01-01

    A physics-based risk model was developed to assess the risk associated with thermal protection system failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris (MMOD) impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effects of material property uncertainty and MMOD damage on the risk of failure were analyzed. A comparison of the direct computation and response surface approach was undertaken.

  5. Cascading failures mechanism based on betweenness-degree ratio distribution with different connecting preferences

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Juan; Guo, Shi Ze; Jin, Lei; Chen, Mo

    We study the structural robustness of the scale-free network against cascading failure induced by overload. In this paper, a failure mechanism based on the betweenness-degree ratio distribution is proposed. In the cascading failure model, the initial load of an edge is proportional to the node betweenness of its two ends. During random edge deletion we find a phase transition, and based on it we divide the process of the cascading failure into two parts, the robust area and the vulnerable area, and define corresponding indicators to measure the performance of the networks in both areas. From the derivation, we find that the vulnerability of the network is determined by the distribution of the betweenness-degree ratio. We then use the connection between the node ability coefficient and the distribution of the betweenness-degree ratio to explain the cascading failure mechanism. In simulations, we verify the correctness of our derivations. By changing connecting preferences, we find that scale-free networks with slight assortativity perform better in both the robust and the vulnerable areas.
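
    The overload cascade can be reproduced directly with networkx: give each edge an initial load proportional to the betweenness of its two end nodes, set its capacity to (1 + alpha) times that load, delete one edge, and iteratively remove whatever exceeds capacity. The graph, the tolerance alpha, and the attacked edge below are arbitrary choices.

      import networkx as nx

      def cascade_size(G, attacked_edge, alpha=0.2):
          """Edges carry load = betweenness(u) + betweenness(v); capacity = (1+alpha)*load."""
          G = G.copy()
          bc = nx.betweenness_centrality(G)
          cap = {tuple(sorted(e)): (1 + alpha) * (bc[e[0]] + bc[e[1]]) for e in G.edges()}
          G.remove_edge(*attacked_edge)
          removed = 1
          while True:
              bc = nx.betweenness_centrality(G)
              over = [e for e in G.edges()
                      if bc[e[0]] + bc[e[1]] > cap[tuple(sorted(e))]]
              if not over:
                  return removed
              G.remove_edges_from(over)
              removed += len(over)

      G = nx.barabasi_albert_graph(200, 3, seed=0)
      e0 = max(G.edges(), key=lambda e: G.degree(e[0]) + G.degree(e[1]))
      print("edges removed in cascade:", cascade_size(G, e0))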

  6. A New Rock Strength Criterion from Microcracking Mechanisms Which Provides Theoretical Evidence of Hybrid Failure

    NASA Astrophysics Data System (ADS)

    Zhu, Qi-Zhi

    2017-02-01

    A proper criterion describing when material fails is essential for deep understanding and constitutive modeling of rock damage and failure by microcracking. Physically, such a criterion should be the global effect of the local mechanical response and microstructure evolution inside the material. This paper aims at deriving a new mechanism-based failure criterion for brittle rocks, based on micromechanical unilateral damage-friction coupling analyses rather than on the basic results of classical linear elastic fracture mechanics. The failure functions describing the three failure modes (purely tensile mode, tensile-shear mode, and compressive-shear mode) are obtained in a unified upscaling framework and illustrated in the Mohr plane and in the plane of principal stresses. The strength envelope is proved to be continuous and smooth, with a compressive-to-tensile strength ratio dependent on material properties. Comparisons with experimental data are finally carried out. This work also provides theoretical evidence of hybrid failure and of the smooth transition from tensile failure to compressive-shear failure.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Isaac M.

    To facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop and establish comprehensive and effective procedures for the characterization, modeling and failure prediction of structural laminates based on the properties of the constituent materials, e.g., fibers, matrix, and the single ply or lamina. A new failure theory, the Northwestern (NU-Daniel) theory, has been proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is primarily applicable to matrix-dominated interfiber/interlaminar failures. It is based on micromechanical failure mechanisms but is expressed in terms of easily measured macroscopic lamina stiffness and strength properties. It is presented in the form of a master failure envelope incorporating strain rate effects. The theory was further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive failure of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows rapid screening of new composite materials without very extensive testing and offers easily implemented design tools.

  8. Chronic Heart Failure Follow-up Management Based on Agent Technology

    PubMed Central

    Safdari, Reza

    2015-01-01

    Objectives Monitoring heart failure patients through continuous assessment of signs and symptoms with information technology tools leads to a large reduction in re-hospitalization. Agent technology is one of the strongest artificial intelligence areas; therefore, it can be expected to facilitate, accelerate, and improve health services, especially in home care and telemedicine. The aim of this article is to provide an agent-based model for chronic heart failure (CHF) follow-up management. Methods This research was performed in 2013-2014 to determine appropriate scenarios and the data required to monitor and follow up CHF patients, and then an agent-based model was designed. Results Agents in the proposed model perform the following tasks: medical data access, communication with other agents of the framework, and intelligent data analysis, including medical data processing, reasoning, negotiation for decision-making, and learning capabilities. Conclusions The proposed multi-agent system has the ability to learn and thus improve itself. Implementation of this model with more and various interval times at a broader level could achieve better results. The proposed multi-agent system is no substitute for cardiologists, but it could assist them in decision-making. PMID:26618038

  9. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  10. Remediation Strategies for Learners at Risk of Failure: A Course Based Retention Model

    ERIC Educational Resources Information Center

    Gajewski, Agnes; Mather, Meera

    2015-01-01

    This paper presents an overview and discussion of a course-based remediation model, grounded in the literature, developed to enhance student learning and increase retention. This model focuses on course structure and course delivery in a compressed semester format. A comparative analysis was applied to a pilot study of students enrolled in a course…

  11. Prediction of line failure fault based on weighted fuzzy dynamic clustering and improved relational analysis

    NASA Astrophysics Data System (ADS)

    Meng, Xiaocheng; Che, Renfei; Gao, Shi; He, Juntao

    2018-04-01

    With the advent of the big data age, power system research has entered a new stage. At present, the main application of big data in the power system is early-warning analysis for power equipment: by collecting relevant historical fault data, system security is improved through prediction of the early-warning and failure rates of different kinds of equipment under certain relational factors. In this paper, a method for line failure rate warning is proposed. First, fuzzy dynamic clustering is carried out based on the collected historical information; to account for the imbalance between attributes, weights are assigned using the coefficient of variation, and the weighted fuzzy clustering then handles the data more effectively. Next, after analyzing the basic idea and basic properties of relational analysis model theory, the gray relational model is improved by combining the slope-based model with the Deng model, and the increments and compositions of the two sequences are also incorporated into the gray relational model to obtain the gray relational degree between samples. The failure rate is predicted according to the weighting principle. Finally, the concrete process is illustrated with an example, and the validity and superiority of the proposed method are verified.
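
    Deng's grey relational degree, the starting point of the improved relational analysis, is easy to compute. A sketch with invented sequences and the customary distinguishing coefficient rho = 0.5 (the paper's improvement would add slope and increment terms on top of this):

      import numpy as np

      def grey_relational_degree(reference, candidates, rho=0.5):
          """Deng's grey relational degree of each candidate row to the reference."""
          seqs = np.vstack([reference, candidates]).astype(float)
          seqs = ((seqs - seqs.min(axis=1, keepdims=True))
                  / (seqs.max(axis=1, keepdims=True) - seqs.min(axis=1, keepdims=True)))
          delta = np.abs(seqs[1:] - seqs[0])                 # distances to reference
          gmin, gmax = delta.min(), delta.max()
          xi = (gmin + rho * gmax) / (delta + rho * gmax)    # relational coefficients
          return xi.mean(axis=1)                             # equal-weight degree

      failure_rate = [0.9, 1.2, 1.5, 1.1]                    # invented reference series
      factors = [[0.8, 1.1, 1.6, 1.0],                       # invented factor series
                 [2.0, 0.3, 0.9, 2.2]]
      print(grey_relational_degree(failure_rate, factors))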

  12. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, as pipelines transfer corrosive fluids or gas and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (a sub-system), where each sub-system is considered as a series system of the detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine the parameters in the growth models to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for predicting the long-term performance of the pipeline, and that the impact of the statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when the failure probability for any described failure mode exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of the repair threshold values and the unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs, but can lower the failure cost; the repair cost is less significant compared with the inspection and failure costs.
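
    The two growth-model ingredients — homogeneous-Poisson defect initiation and power-law depth growth — can be simulated in a few lines. The rate, growth constants, and wall thickness below are invented; a full analysis would draw them from the Bayesian (MCMC) posteriors instead.

      import numpy as np

      rng = np.random.default_rng(11)
      wall_mm, horizon_yr, rate = 10.0, 25.0, 0.4   # wall, horizon, defects per km-year
      n_defects = rng.poisson(rate * horizon_yr)    # homogeneous Poisson process (1 km)
      t0 = np.sort(rng.uniform(0.0, horizon_yr, n_defects))   # initiation times

      def max_depth(t, t0, a=0.8, b=0.9):
          """Power-law growth of maximum defect depth: d(t) = a * (t - t0)**b [mm]."""
          return a * np.maximum(t - t0, 0.0) ** b

      for t in (10.0, 20.0, 25.0):
          d = max_depth(t, t0)
          print(f"year {t:4.0f}: {(d > 0).sum():2d} active defects, "
                f"deepest {d.max() if n_defects else 0:.2f} mm, "
                f"any past 80% of wall: {(d > 0.8 * wall_mm).any()}")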

  13. Model-based diagnostics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.

    1991-01-01

    An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online, or in a post analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but it also suggested several new applications of MARPLE, including an integration and testing aid, and a complement to state estimation.

  14. Chronic heart failure management in Australia -- time for general practice centred models of care?

    PubMed

    Scott, Ian; Jackson, Claire

    2013-05-01

    Chronic heart failure (CHF) is an increasingly prevalent problem within ageing populations and accounts for thousands of hospitalisations and deaths annually in Australia. Disease management programs for CHF (CHF-DMPs) aim to optimise care, with the predominant model being cardiologist led, hospital based multidisciplinary clinics with cardiac nurse outreach. However, findings from contemporary observational studies and clinical trials raise uncertainty around the effectiveness and sustainability of traditional CHF-DMPs in real-world clinical practice. To suggest an alternative model of care that involves general practitioners with a special interest in CHF liaising with, and being up-skilled by, specialists within community based, multidisciplinary general practice settings. Preliminary data from trials evaluating primary care based CHF-DMPs are encouraging, and further studies are underway comparing this model of care with traditional hospital based, specialist led CHF-DMPs. Results of studies of similar primary care models targeting diabetes and other chronic diseases suggest potential for its application to CHF.

  15. Space Shuttle Main Engine Quantitative Risk Assessment: Illustrating Modeling of a Complex System with a New QRA Software Package

    NASA Technical Reports Server (NTRS)

    Smart, Christian

    1998-01-01

    During 1997, a team from Hernandez Engineering, MSFC, Rocketdyne, Thiokol, Pratt & Whitney, and USBI completed the first phase of a two year Quantitative Risk Assessment (QRA) of the Space Shuttle. The models for the Shuttle systems were entered and analyzed by a new QRA software package. This system, termed the Quantitative Risk Assessment System (QRAS), was designed by NASA and programmed by the University of Maryland. The software is a groundbreaking PC-based risk assessment package that allows the user to model complex systems in a hierarchical fashion. Features of the software include the ability to easily select quantifications of failure modes, draw Event Sequence Diagrams (ESDs) interactively, perform uncertainty and sensitivity analysis, and document the modeling. This paper illustrates both the approach used in modeling and the particular features of the software package. The software is general and can be used in a QRA of any complex engineered system. The author is the project lead for the modeling of the Space Shuttle Main Engines (SSMEs), and this paper focuses on the modeling completed for the SSMEs during 1997. In particular, the groundrules for the study, the databases used, the way in which ESDs were used to model catastrophic failure of the SSMEs, the methods used to quantify the failure rates, and how QRAS was used in the modeling effort are discussed. Groundrules were necessary to limit the scope of such a complex study, especially with regard to a liquid rocket engine such as the SSME, which can be shut down after ignition either on the pad or in flight. The SSME was divided into its constituent components and subsystems. These were ranked on the basis of the possibility of being upgraded and risk of catastrophic failure. Once this was done, the Shuttle program Hazard Analysis and Failure Modes and Effects Analysis (FMEA) were used to create a list of potential failure modes to be modeled. The groundrules and other criteria were used to screen out the many failure modes that did not contribute significantly to the catastrophic risk. The Hazard Analysis and FMEA for the SSME were also used to build ESDs that show the chain of events leading from the failure mode occurrence to one of the following end states: catastrophic failure, engine shutdown, or successful operation (successful with respect to the failure mode under consideration).

  16. Risk factors for early failure after peripheral endovascular intervention: application of a reliability engineering approach.

    PubMed

    Meltzer, Andrew J; Graham, Ashley; Connolly, Peter H; Karwowski, John K; Bush, Harry L; Frazier, Peter I; Schneider, Darren B

    2013-01-01

    We apply an innovative and novel analytic approach, based on reliability engineering (RE) principles frequently used to characterize the behavior of manufactured products, to examine outcomes after peripheral endovascular intervention. We hypothesized that this would allow for improved prediction of outcome after peripheral endovascular intervention, specifically with regard to identification of risk factors for early failure. Patients undergoing infrainguinal endovascular intervention for chronic lower-extremity ischemia from 2005 to 2010 were identified in a prospectively maintained database. The primary outcome of failure was defined as patency loss detected by duplex ultrasonography, with or without clinical failure. Analysis included univariate and multivariate Cox regression models, as well as RE-based analysis including product life-cycle models and Weibull failure plots. Early failures were distinguished using the RE principle of "basic rating life," and multivariate models identified independent risk factors for early failure. From 2005 to 2010, 434 primary endovascular peripheral interventions were performed for claudication (51.8%), rest pain (16.8%), or tissue loss (31.3%). Fifty-five percent of patients were aged ≥75 years; 57% were men. Failure was noted after 159 (36.6%) interventions during a mean follow-up of 18 months (range, 0-71 months). Using multivariate (Cox) regression analysis, rest pain and tissue loss were independent predictors of patency loss, with hazard ratios of 2.5 (95% confidence interval, 1.6-4.1; P < 0.001) and 3.2 (95% confidence interval, 2.0-5.2; P < 0.001), respectively. The distribution of failure times for both claudication and critical limb ischemia fit distinct Weibull plots, with different characteristics: interventions for claudication demonstrated an increasing failure rate (β = 1.22, θ = 13.46, mean time to failure = 12.603 months, index of fit = 0.99037, R(2) = 0.98084), whereas interventions for critical limb ischemia demonstrated a decreasing failure rate, suggesting the predominance of early failures (β = 0.7395, θ = 6.8, mean time to failure = 8.2, index of fit = 0.99391, R(2) = 0.98786). By 3.1 months, 10% of interventions failed. This point (90% reliability) was identified as the basic rating life. Using multivariate analysis of failure data, independent predictors of early failure (before 3.1 months) included tissue loss, long lesion length, chronic total occlusions, heart failure, and end-stage renal disease. Application of a RE framework to the assessment of clinical outcomes after peripheral interventions is feasible, and potentially more informative than traditional techniques. Conceptualization of interventions as "products" permits application of product life-cycle models that allow for an empirical definition of "early failure", which may facilitate comparative effectiveness analysis and enable the development of individualized surveillance programs after endovascular interventions. Copyright © 2013 Annals of Vascular Surgery Inc. Published by Elsevier Inc. All rights reserved.
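
    The Weibull analysis reported above can be reproduced in outline as follows; the failure times are invented for illustration, and scipy's weibull_min shape and scale parameters correspond to the β and θ quoted in the abstract.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

# Illustrative failure times in months (not the study's data).
failure_months = np.array([0.5, 1.2, 2.0, 3.5, 5.1, 8.0, 9.4, 14.2, 20.0])

beta, loc, theta = weibull_min.fit(failure_months, floc=0)  # fix location at 0
mttf = theta * gamma(1 + 1 / beta)
print(f"shape beta={beta:.3f} (<1 implies decreasing failure rate, early failures)")
print(f"scale theta={theta:.2f} months, mean time to failure={mttf:.2f} months")

# Basic rating life (B10): time by which 10% of interventions have failed,
# from F(t) = 1 - exp(-(t/theta)**beta) = 0.1.
b10 = theta * (-np.log(0.9)) ** (1 / beta)
print(f"basic rating life (90% reliability) = {b10:.2f} months")
```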

  17. Performance evaluation of the croissant production line with reparable machines

    NASA Astrophysics Data System (ADS)

    Tsarouhas, Panagiotis H.

    2015-03-01

    In this study, analytical probability models were developed for an automated, bufferless serial production system consisting of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying those models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of repairable machines in series. Mathematical models of the croissant production line have been developed using Markov processes. The strength of this study is in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment have been used to estimate the reliability and maintainability of each machine, each workstation, and the entire line based on the analytical models. The analysis provides a useful insight into the system's behaviour, helps to find design inherent faults and suggests optimal modifications to upgrade the system and improve its performance.
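
    For a bufferless series line in which any machine failure stops the whole line, the exponential failure/repair assumption gives a closed-form steady-state availability; the rates below are illustrative values, not the paper's estimates.

```python
import numpy as np

# Failure and repair rates (per hour) for six workstations, illustrative.
lam = np.array([0.010, 0.008, 0.015, 0.005, 0.012, 0.007])
mu  = np.array([0.50,  0.40,  0.60,  0.30,  0.45,  0.35])

# Bufferless series line: a failure anywhere stops the whole line, so the
# Markov chain alternates between one "up" state and n "down" states, and
# steady-state availability reduces to A = 1 / (1 + sum(lam_i / mu_i)).
availability = 1.0 / (1.0 + np.sum(lam / mu))
mtbf_line = 1.0 / lam.sum()   # mean time between line stoppages
print(f"line availability = {availability:.4f}, line MTBF = {mtbf_line:.1f} h")
```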

  18. Mechanical characterization and modeling of the deformation and failure of the highly crosslinked RTM6 epoxy resin

    NASA Astrophysics Data System (ADS)

    Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.

    2017-08-01

    The nonlinear deformation and fracture of RTM6 epoxy resin is characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain-rate and temperature. The pressure-dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum principal stress driven brittle failure. The constitutive model and a stress-triaxiality dependent effective plastic strain based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite material. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.

  19. Scale effects in the response and failure of fiber reinforced composite laminates loaded in tension and in flexure

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Kellas, Sotiris; Morton, John

    1992-01-01

    The feasibility of using scale model testing for predicting the full-scale behavior of flat composite coupons loaded in tension and beam-columns loaded in flexure is examined. Classical laws of similitude are applied to fabricate and test replica model specimens to identify scaling effects in the load response, strength, and mode of failure. Experiments were performed on graphite-epoxy composite specimens having different laminate stacking sequences and a range of scaled sizes. From the experiments it was deduced that the elastic response of scaled composite specimens was independent of size. However, a significant scale effect in strength was observed. In addition, a transition in failure mode was observed among scaled specimens of certain laminate stacking sequences. A Weibull statistical model and a fracture mechanics based model were applied to predict the strength scale effect since standard failure criteria cannot account for the influence of absolute specimen size on strength.
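
    The Weibull weakest-link argument invoked above predicts that mean strength decreases with specimen volume as sigma ~ V**(-1/m); a one-line illustration with an assumed Weibull modulus follows.

```python
def weibull_strength_scaling(sigma_ref, v_ref, v, m):
    # Weibull weakest-link size effect: mean strength scales with volume as
    # sigma ~ V**(-1/m), where m is the Weibull modulus of the material.
    return sigma_ref * (v_ref / v) ** (1.0 / m)

# A specimen scaled up 4x in volume with m = 20 loses about 7% strength.
print(weibull_strength_scaling(sigma_ref=600.0, v_ref=1.0, v=4.0, m=20.0))
```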

  20. Comprehensive risk assessment method of catastrophic accident based on complex network properties

    NASA Astrophysics Data System (ADS)

    Cui, Zhen; Pang, Jun; Shen, Xiaohong

    2017-09-01

    On the macro level, the structural properties of the network and, on the micro level, the electrical characteristics of components determine the risk of cascading failures. Because cascading failures develop dynamically, both the direct risk and the potential risk should be considered. In this paper, the direct and potential risks of failures are considered comprehensively based on uncertain risk analysis theory and connection number theory; the uncertain correlation is quantified by node degree and node clustering coefficient, and a comprehensive risk indicator of failure is established. The proposed method is validated by simulation on an actual power grid: a network is modeled according to the actual grid, and the rationality of the proposed method is verified.
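
    A toy version of the degree-and-clustering quantification might look like the following; the equal-weight combination and the random test graph are placeholders for the paper's connection-number-based weighting and its actual grid model.

```python
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=1)   # stand-in for a power grid graph

deg = dict(G.degree())
clu = nx.clustering(G)

# Toy composite indicator: normalised degree and clustering coefficient with
# equal weights; the paper's connection-number weighting would replace this.
dmax = max(deg.values())
risk = {n: 0.5 * deg[n] / dmax + 0.5 * clu[n] for n in G}
critical = sorted(risk, key=risk.get, reverse=True)[:5]
print("highest-risk nodes:", critical)
```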

  1. Analytical Study of different types Of network failure detection and possible remedies

    NASA Astrophysics Data System (ADS)

    Saxena, Shikha; Chandra, Somnath

    2012-07-01

    Faults in a network have various causes, such as the failure of one or more routers, fiber cuts, failure of physical elements at the optical layer, or extraneous causes like power outages. These faults are usually detected as failures of a set of dependent logical entities and the links affected by the failed components. A reliable control plane plays a crucial role in creating high-level services in the next-generation transport network based on the Generalized Multiprotocol Label Switching (GMPLS) or Automatically Switched Optical Networks (ASON) model. In this paper, approaches to control-plane survivability, based on protection and restoration mechanisms, are examined. Procedures for control plane state recovery are also discussed, including link and node failure recovery and the concepts of monitoring paths (MPs) and monitoring cycles (MCs) for unique localization of shared risk link group (SRLG) failures in all-optical networks. An SRLG failure is a failure of multiple links due to a failure of a common resource. MCs (MPs) start and end at the same (distinct) monitoring location(s). They are constructed such that any SRLG failure results in the failure of a unique combination of paths and cycles. We derive necessary and sufficient conditions on the set of MCs and MPs needed for localizing an SRLG failure in an arbitrary graph. Protection and restoration of SRLG failures by a backup re-provisioning algorithm are also discussed.

  2. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes a SLFs-induced vulnerability analysis method from three aspects: identification of critical locations, comparisons of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can be also easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.

  3. Fibre Break Failure Processes in Unidirectional Composites. Part 2: Failure and Critical Damage State Induced by Sustained Tensile Loading

    NASA Astrophysics Data System (ADS)

    Thionnet, A.; Chou, H. Y.; Bunsell, A.

    2015-04-01

    The purpose of these three papers is not to just revisit the modelling of unidirectional composites. It is to provide a robust framework based on physical processes that can be used to optimise the design and long term reliability of internally pressurised filament wound structures. The model presented in Part 1 for the case of monotonically loaded unidirectional composites is further developed to consider the effects of the viscoelastic nature of the matrix in determining the kinetics of fibre breaks under slow or sustained loading. It is shown that the relaxation of the matrix around fibre breaks leads to locally increasing loads on neighbouring fibres and in some cases their delayed failure. Although ultimate failure is similar to the elastic case in that clusters of fibre breaks ultimately control composite failure the kinetics of their development varies significantly from the elastic case. Failure loads have been shown to reduce when loading rates are lowered.

  4. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  6. Dynamically induced cascading failures in power grids.

    PubMed

    Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito

    2018-05-17

    Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.

  7. Yield and failure criteria for composite materials under static and dynamic loading

    DOE PAGES

    Daniel, Isaac M.

    2015-12-23

    To facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop and establish comprehensive and effective procedures for characterization, modeling and failure prediction of structural laminates based on the properties of the constituent materials, e.g., fibers, matrix, and the single ply or lamina. A new failure theory, the Northwestern (NU-Daniel) theory, has been proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is primarily applicable to matrix-dominated interfiber/interlaminar failures. It is based on micromechanical failure mechanisms but is expressed in terms of easily measured macroscopic lamina stiffness and strength properties. It is presented in the form of a master failure envelope incorporating strain rate effects. The theory was further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive failure of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without very extensive testing and offers easily implemented design tools.

  8. A bivariate model for analyzing recurrent multi-type automobile failures

    NASA Astrophysics Data System (ADS)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be described as a system of multi-type recurrent failures, where failures can occur from various failure modes and are repetitive, such that more than one failure can occur from each mode. In analysing such automobile failures, both the time and the type of failure serve as response variables. However, these two response variables are highly correlated, since the timing of failures is associated with the mode of failure. When there is more than one correlated response variable, fitting a multivariate model is preferable to fitting separate univariate models; a bivariate model of time and type of failure therefore becomes appealing for such automobile failure data. When there are multiple failure observations for a single automobile, they cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures of different automobiles can be treated as independent. Therefore, this study proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models, for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions suitable for the two responses, time to failure and type of failure, were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by specifying appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted to the two responses, and the bivariate model was found to perform better. The proposed model can be used to determine the time and type of failure that would occur in the automobiles considered here.
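
    A compact sketch of a shared-parameter joint likelihood in the spirit described above: a Weibull density for time and a categorical (multinomial) probability for type share one normal random effect per automobile, integrated out by Monte Carlo. How the effect enters the Weibull scale and the type logits is an assumption for illustration; the paper implements its likelihood in SAS PROC NLMIXED.

```python
import numpy as np
from scipy.stats import weibull_min

def joint_loglik(times, types, beta, theta, type_logits, b):
    # One vehicle's contribution given its random effect b: a Weibull
    # log-density for each failure time plus a categorical log-probability
    # for each failure type, both shifted by the shared effect b.
    scale = theta * np.exp(b)              # effect acts on the time scale
    lp_time = weibull_min.logpdf(times, beta, scale=scale).sum()
    logits = type_logits + b               # same effect shifts the type odds
    logp = logits - np.log(np.exp(logits).sum())
    lp_type = logp[types].sum()
    return lp_time + lp_type

def marginal_loglik(times, types, beta, theta, type_logits, sigma_b, n_mc=2000):
    # Integrate the shared normal random effect out by plain Monte Carlo.
    b = np.random.default_rng(0).normal(0.0, sigma_b, n_mc)
    ll = [joint_loglik(times, types, beta, theta, type_logits, bi) for bi in b]
    return np.log(np.mean(np.exp(ll)))

print(marginal_loglik(times=np.array([3.0, 7.5]), types=np.array([0, 2]),
                      beta=1.3, theta=10.0,
                      type_logits=np.array([0.2, -0.1, -0.4]), sigma_b=0.3))
```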

  9. L70 life prediction for solid state lighting using Kalman Filter and Extended Kalman Filter based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Davis, Lynn

    2013-08-08

    Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate-controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources "burn out" at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, LEDs have no filament to "burn". LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in an LED at a single time, causing lumen depreciation. The underlying TM-21 model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at system level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filter (EKF) models have been used to develop a 70-percent lumen maintenance life prediction model for LEDs used in SSL luminaires. Ten-thousand-hour LM-80 test data for various LEDs have been used for model development. The system state at each future time has been computed based on the state space at the preceding time step, the system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second-order Kalman Filter model and a Bayesian framework. The measured state variable has been related to the underlying damage using physics-based models. Predictions of L70 life for the LEDs used in SSL luminaires from the KF- and EKF-based models have been compared with the TM-21 model predictions and experimental data.
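
    A bare-bones second-order Kalman filter over LM-80-style lumen-maintenance data, extrapolated to the 70% threshold. The constant-degradation-rate state model, noise covariances, and measurements are illustrative assumptions (TM-21, by contrast, fits an exponential decay).

```python
import numpy as np

dt = 1000.0                                  # hours between LM-80 readings
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-degradation-rate model
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-10])                   # process noise (assumed)
R = np.array([[1e-4]])                       # measurement noise (assumed)

x = np.array([1.0, 0.0])                     # [lumen maintenance, rate]
P = np.eye(2) * 1e-3

lm80 = [1.0, 0.995, 0.988, 0.981, 0.972, 0.965, 0.956, 0.949, 0.941, 0.933]

for z in lm80:                               # standard predict/update steps
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

# Propagate the filtered state until lumen maintenance crosses 70%.
t = len(lm80) * dt
while x[0] > 0.7:
    x = F @ x
    t += dt
print(f"estimated L70 life ~ {t:.0f} h")
```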

  10. Holistic Care of Hemodialysis Access in Patients with Kidney Failure.

    PubMed

    Bueno, Michael V; Latham, Christine L

    2017-01-01

    Kidney failure requiring hemodialysis is a chronic illness that has physical, psychosocial, and financial consequences. Patients with kidney failure receiving hemodialysis need a renewed focus on self-care, prevention, and community-based health management to reduce healthcare costs and complications, and improve outcomes and quality of life, while living with an altered lifestyle. A holistic chronic care model was applied as a guideline for healthcare professionals involved with this population to more effectively engage people with kidney failure in their management of their hemodialysis access. Copyright© by the American Nephrology Nurses Association.

  11. Probabilistic Risk Assessment for Decision Making During Spacecraft Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2009-01-01

    Decisions made during the operational phase of a space mission often have significant and immediate consequences. Without the explicit consideration of the risks involved and their representation in a solid model, it is very likely that these risks are not considered systematically in trade studies. Wrong decisions during the operational phase of a space mission can lead to immediate system failure whereas correct decisions can help recover the system even from faulty conditions. A problem of special interest is the determination of the system fault protection strategies upon the occurrence of faults within the system. Decisions regarding the fault protection strategy also heavily rely on a correct understanding of the state of the system and an integrated risk model that represents the various possible scenarios and their respective likelihoods. Probabilistic Risk Assessment (PRA) modeling is applicable to the full lifecycle of a space mission project, from concept development to preliminary design, detailed design, development and operations. The benefits and utilities of the model, however, depend on the phase of the mission for which it is used. This is because of the difference in the key strategic decisions that support each mission phase. The focus of this paper is on describing the particular methods used for PRA modeling during the operational phase of a spacecraft by gleaning insight from recently conducted case studies on two operational Mars orbiters. During operations, the key decisions relate to the commands sent to the spacecraft for any kind of diagnostics, anomaly resolution, trajectory changes, or planning. Often, faults and failures occur in the parts of the spacecraft but are contained or mitigated before they can cause serious damage. The failure behavior of the system during operations provides valuable data for updating and adjusting the related PRA models that are built primarily based on historical failure data. The PRA models, in turn, provide insight into the effect of various faults or failures on the risk and failure drivers of the system and the likelihood of possible end case scenarios, thereby facilitating the decision making process during operations. This paper describes the process of adjusting PRA models based on observed spacecraft data, on one hand, and utilizing the models for insight into the future system behavior on the other hand. While PRA models are typically used as a decision aid during the design phase of a space mission, we advocate adjusting them based on the observed behavior of the spacecraft and utilizing them for decision support during the operations phase.

  12. Application of Single Crystal Failure Criteria: Theory and Turbine Blade Case Study

    NASA Technical Reports Server (NTRS)

    Sayyah, Tarek; Swanson, Gregory R.; Schonberg, W. P.

    1999-01-01

    The orientation of the single crystal material within a structural component is known to affect the strength and life of the part. The first stage blade of the High Pressure Fuel Turbopump (HPFTP)/Alternate Turbopump Development (ATD) of the Space Shuttle Main Engine (SSME) was used to study the effects of secondary axis orientation angles on the failure rate of the blade. A new failure criterion was developed based on normal and shear strains on the primary crystallographic planes. The criterion was verified using low cycle fatigue (LCF) specimen data and a finite element model of the test specimens. The criterion was then used to study ATD/HPFTP first stage blade failure events. A detailed ANSYS finite element model of the blade was used to calculate the failure parameter for the different crystallographic orientations. A total of 297 cases were run to cover a wide range of acceptable orientations within the blade. Those orientations are related to the base crystallographic coordinate system that was created in the ANSYS finite element model. Contour plots of the criterion as a function of orientation for the blade tip and attachment were obtained. Results of the analysis revealed a 40% increase in the failure parameter due to changing the primary and secondary axes of material orientation. A comparison between failure criterion predictions and actual engine test data was then conducted. The engine test data come from two ATD/HPFTP builds (units F3-4B and F6-5D), which were ground tested on the SSME at the Stennis Space Center in Mississippi. Both units experienced cracking of the airfoil tips in multiple blades, but only a few cracks grew all the way across the wall of the hollow core airfoil.

  13. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
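
    In its simplest form, innovations-based detection reduces to a chi-square test on the normalised innovations; this sketch only illustrates the threshold trade-off discussed above, not the paper's frequency-shaped estimator or threshold selector.

```python
import numpy as np

def innovations_test(z, z_pred, S, threshold):
    # Normalised innovations squared: gamma = r' S^-1 r is chi-square
    # distributed under the no-failure hypothesis, so exceeding the
    # threshold flags a sensor failure. The threshold choice trades false
    # alarms against missed detections (the DIA sensitivity trade-off).
    r = z - z_pred
    gamma = float(r @ np.linalg.solve(S, r))
    return gamma > threshold, gamma

z = np.array([1.05, 0.48])          # measured sensor vector (illustrative)
z_pred = np.array([1.00, 0.50])     # estimator prediction
S = np.diag([0.01, 0.01])           # innovations covariance
print(innovations_test(z, z_pred, S, threshold=9.21))  # ~99% chi2, 2 dof
```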

  14. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).

  15. Failure criterion for materials with spatially correlated mechanical properties

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.; Or, D.

    2015-03-01

    The role of spatially correlated mechanical elements in the failure behavior of heterogeneous materials represented by fiber bundle models (FBMs) was evaluated systematically for different load redistribution rules. Increasing the range of spatial correlation for FBMs with local load sharing is marked by a transition from ductilelike failure characteristics into brittlelike failure. The study identified a global failure criterion based on macroscopic properties (external load and cumulative damage) that is independent of spatial correlation or load redistribution rules. This general metric could be applied to assess the mechanical stability of complex and heterogeneous systems and thus provide an important component for early warning of a class of geophysical ruptures.
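
    A minimal one-dimensional FBM sketch with nearest-neighbour (local) load sharing, where spatial correlation of strengths is introduced by moving-average smoothing of iid thresholds; the window size, load level, and single load step are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Spatially correlated strengths: smoothing iid thresholds with window w
# sets the correlation range; w = 1 recovers the uncorrelated FBM.
w = 5
thresholds = np.convolve(rng.random(n + w - 1), np.ones(w) / w, mode="valid")

load = np.full(n, 0.25)            # uniform external load per fiber
alive = np.ones(n, dtype=bool)

# Local load sharing: a broken fiber's load goes to its nearest intact
# neighbours; iterate until no new breaks (one external load step). Load
# with no intact neighbour is dropped, which is fine for a sketch.
changed = True
while changed:
    breaking = alive & (load > thresholds)
    changed = bool(breaking.any())
    for i in np.flatnonzero(breaking):
        alive[i] = False
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n and alive[j]]
        for j in nbrs:
            load[j] += load[i] / len(nbrs)
        load[i] = 0.0

print(f"{(~alive).sum()} of {n} fibers broken, correlation window w={w}")
```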

  16. A Simulation of Low and High Cycle Fatigue Failure Effects for Metal Matrix Composites Based on Innovative J2-Flow Elastoplasticity Model

    PubMed Central

    Wang, Zhaoling; Xiao, Heng

    2017-01-01

    New elastoplastic J2-flow constitutive equations at finite deformations are proposed for the purpose of simulating the fatigue failure behavior for metal matrix composites. A new, direct approach is established in a two-fold sense of unification. Namely, both low and high cycle fatigue failure effects of metal matrix composites may be simultaneously simulated for various cases of the weight percentage of reinforcing particles. Novel results are presented in four respects. First, both the yield condition and the loading–unloading conditions in a usual sense need not be involved but may be automatically incorporated into inherent features of the proposed constitutive equations; second, low-to-high cycle fatigue failure effects may be directly represented by a simple condition for asymptotic loss of the material strength, without involving any additional damage-like variables; third, both high and low cycle fatigue failure effects need not be separately treated but may be automatically derived as model predictions with a unified criterion for critical failure states, without assuming any ad hoc failure criteria; and, finally, explicit expressions for each incorporated model parameter changing with the weight percentage of reinforcing particles may be obtainable directly from appropriate test data. Numerical examples are presented for medium-to-high cycle fatigue failure effects and for complicated duplex effects from low to high cycle fatigue failure effects. Simulation results are in good agreement with experimental data. PMID:28946637

  17. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost analysis estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography machine, two technicians and one doctor, and the second (based on an actually functioning clinic) with two machines, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures of the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As the examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
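
    A stripped-down version of the examination-failure mechanism: failed image acquisitions are repeated, inflating the variable cost per completed examination. All costs and probabilities below are placeholders, and the full study also models equipment failure types, downtime, and resource queues.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit_cost(n_exams=10_000, p_fail=0.05, cost_per_exam=25.0,
              fixed_costs=200_000.0):
    # Minimal re-examination model: a failed image acquisition repeats the
    # exam, so failures raise total variable cost without adding throughput.
    performed = 0
    for _ in range(n_exams):
        performed += 1
        while rng.random() < p_fail:   # repeat until the image is usable
            performed += 1
    total = fixed_costs + performed * cost_per_exam
    return total / n_exams             # cost per completed examination

print(f"unit cost at 5% examination failures: US$ {unit_cost():.2f}")
```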

  18. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
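
    Assuming independent machines with identical availability, the k-out-of-n comparison above can be illustrated with a simple binomial calculation; this is only a proxy for the thesis's reward-rate analysis, which was driven by measured error data rather than an independence assumption.

```python
from math import comb

def k_out_of_n_availability(k, n, a):
    # Probability that at least k of n machines are up, each independently
    # available with probability a.
    return sum(comb(n, m) * a**m * (1 - a)**(n - m) for m in range(k, n + 1))

a = 0.95  # illustrative single-machine availability
for k in (7, 6, 5, 4, 3):
    print(f"{k}-out-of-7 availability: {k_out_of_n_availability(k, 7, a):.4f}")
```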

  19. A two-scale Weibull approach to the failure of porous ceramic structures made by robocasting: possibilities and limits

    PubMed Central

    Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.

    2012-01-01

    This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. The Weibull theory is used to deal with the statistical failure of the scaffolds' constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. Although calibration based on rod-scale data leads to over-conservative predictions of the scaffold's properties (as the rods' successive failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions for a wide range of rod spacing, i.e. of scaffold porosity, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing cellular ceramics' mechanical properties. PMID:23439936

  20. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

    Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task, in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by crossing manual/automated arm, expected/unexpected failure, and monitoring/failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty and salience that were self-rated by participants, to predict allocation. An unweighted model based on the attributes of difficulty, interest and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.

  1. The evolution of concepts for soil erosion modelling

    NASA Astrophysics Data System (ADS)

    Kirkby, Mike

    2013-04-01

    From the earliest models for soil erosion, based on power laws relating sediment discharge or yield to slope length and gradient, the development of the Universal Soil Loss Equation was a natural step, although one that has long continued to hinder the development of better perceptual models for erosion processes. Key stumbling blocks have been: (1) the failure to go through runoff generation as a key intermediary; (2) the failure to separate hydrological and strength parameters of the soil; (3) the failure to treat sediment transport along a slope as a routing problem; and (4) the failure to analyse the nature of the dependence on vegetation. Key advances have been in these directions (among others): (1) improved understanding of the hydrological processes (e.g. infiltration and runoff, sediment entrainment), leading to KINEROS, LISEM, WEPP and PESERA; (2) recognition of selective sediment transport (e.g. transport- or supply-limited removal, grain travel distances), leading e.g. to MAHLERAN; and (3) development of models adapted to particular time/space scales. Some major remaining problems are: (1) the failure to integrate geomorphological and agronomic approaches; (2) tillage erosion: is erosion loss of sediment or lowering of the centre of mass?; and (3) dynamic change during an event, as rills etc. form.

  2. Changes in electrical and thermal parameters of led packages under different current and heating stresses

    NASA Astrophysics Data System (ADS)

    Jayawardena, Adikaramge Asiri

    The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and the thermal resistance were identified as good indicators of contact failure of LED packages. This study investigated the long-term changes in series electrical resistance and thermal resistance of LED packages under three different current and junction-temperature stress conditions. Experimental results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for the series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance is dependent on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open-circuit failure of the contact metal layer. Short-circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increase in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimating LED package failure time due to metal diffusion. The model is based on the experimental results and the theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship between the rate of thermal resistance change and the case temperature and temperature gradient within the LED package was developed. The results showed that dislocation creep is responsible for creep-induced plastic deformation in the die-attach solder. The temperatures inside the LED package reached the melting point of the die-attach solder due to delamination just before catastrophic open-circuit failure. A combined model that can estimate the life of LED packages based on catastrophic failure of the thermal and electrical contacts is presented for the first time. This model can be used to make a priori or real-time estimates of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings from this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.
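
    The classical starting point for electromigration life estimation is Black's equation; the sketch below uses illustrative parameter values and is not the dissertation's fitted empirical model.

```python
import numpy as np

def black_mttf(j, T, A=1e3, n=2.0, Ea=0.7, k=8.617e-5):
    # Black's equation for electromigration: MTTF = A * j**-n * exp(Ea / kT),
    # with current density j (A/cm^2), absolute temperature T (K), activation
    # energy Ea (eV), and Boltzmann constant k (eV/K). A, n, and Ea here are
    # placeholders; a real model would fit them to measured failure data.
    return A * j**(-n) * np.exp(Ea / (k * T))

print(f"illustrative MTTF: {black_mttf(j=1e6, T=358.0):.3e} (arbitrary units)")
```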

  3. Space Vehicle Reliability Modeling in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tornga, Shawn Robert

    When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  4. Cascading failure in the wireless sensor scale-free networks

    NASA Astrophysics Data System (ADS)

    Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li

    2015-05-01

    In practical wireless sensor networks (WSNs), the cascading failure caused by a failed node has a serious impact on network performance. In this paper, we investigate in depth the cascading failure of scale-free topologies in WSNs. First, a cascading failure model for scale-free topology in WSNs is studied. By analyzing the influence of node load on cascading failure, the critical load triggering large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teacher in Yanshan University (Grant No. 14LGB017) and Yanshan University Doctoral Foundation, China (Grant No. B867).
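
    A generic load-capacity cascade of the kind studied here can be sketched as follows, with betweenness centrality as a load proxy and a uniform capacity margin standing in for the paper's critical-load analysis; the graph and parameters are illustrative.

```python
import networkx as nx

def cascade(G, capacity_margin=1.2, seed_node=0):
    # Load = betweenness centrality; capacity = margin * initial load.
    # Removing the seed node redistributes load, and any node whose new
    # load exceeds its capacity fails in turn, until the cascade stops.
    load = nx.betweenness_centrality(G)
    cap = {n: capacity_margin * load[n] for n in G}
    failed = {seed_node}
    while True:
        H = G.subgraph(n for n in G if n not in failed)
        load = nx.betweenness_centrality(H)
        new = {n for n in H if load[n] > cap[n]}
        if not new:
            return failed
        failed |= new

G = nx.barabasi_albert_graph(100, 2, seed=3)   # scale-free test topology
print(f"{len(cascade(G))} nodes failed in the cascade")
```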

  5. Launch Vehicle Debris Models and Crew Vehicle Ascent Abort Risk

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott

    2013-01-01

    For manned space launch systems, a reliable abort system is required to reduce the risks associated with a launch vehicle failure during ascent. Understanding the risks associated with failure environments can be achieved through the use of physics-based models of these environments. The debris field due to destruction of the launch vehicle is one such environment. To better analyze the risk posed by debris, a physics-based model for generating launch vehicle debris catalogs has been developed. The model predicts the mass distribution of the debris field based on formulae developed from analysis of explosions. Imparted velocity distributions are computed using a shock-physics code to model the explosions within the launch vehicle. A comparison of the debris catalog with an existing catalog for the Shuttle external tank shows good agreement in the debris characteristics and the predicted debris strike probability. The model is used to analyze the effects of the number of debris pieces and velocity distributions on the strike probability and risk.

  6. A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables. Part II - Validation and localization analysis

    NASA Astrophysics Data System (ADS)

    Das, Arghya; Tengattini, Alessandro; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai

    2014-10-01

    We study the mechanical failure of cemented granular materials (e.g., sandstones) using a constitutive model based on breakage mechanics for grain crushing and damage mechanics for cement fracture. The theoretical aspects of this model are presented in Part I: Tengattini et al. (2014), A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables, Part I - Theory (Journal of the Mechanics and Physics of Solids, 10.1016/j.jmps.2014.05.021). In this Part II we investigate the constitutive and structural responses of cemented granular materials through analyses of Boundary Value Problems (BVPs). The multiple failure mechanisms captured by the proposed model enable the behavior of cemented granular rocks to be well reproduced for a wide range of confining pressures. Furthermore, through comparison of the model predictions and experimental data, the micromechanical basis of the model provides improved understanding of failure mechanisms of cemented granular materials. In particular, we show that grain crushing is the predominant inelastic deformation mechanism under high pressures while cement failure is the relevant mechanism at low pressures. Over an intermediate pressure regime a mixed mode of failure mechanisms is observed. Furthermore, the micromechanical roots of the model allow the effects on localized deformation modes of various initial microstructures to be studied. The results obtained from both the constitutive responses and BVP solutions indicate that the proposed approach and model provide a promising basis for future theoretical studies on cemented granular materials.

  7. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
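
    Dempster's rule of combination for two basic probability assignments can be implemented directly; the fault hypotheses below are invented for illustration and are not the paper's diagnostic models.

```python
def dempster_combine(m1, m2):
    # Dempster's rule for two basic probability assignments whose focal
    # elements are frozensets; mass on empty intersections (conflict) is
    # removed and the remainder renormalised.
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two diagnostic models implicating a pump fault with different confidence.
m1 = {frozenset({"pump"}): 0.6, frozenset({"pump", "valve"}): 0.4}
m2 = {frozenset({"pump"}): 0.5, frozenset({"valve"}): 0.2,
      frozenset({"pump", "valve"}): 0.3}
print(dempster_combine(m1, m2))
```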

  8. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M; Abazeed, M; Woody, N

    Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009-2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
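
    GLCM entropy of the kind used here can be computed with scikit-image; this 2D, single-offset version on a random image is a simplification of the study's 3D GLCM over the contoured GTV (which was computed in Matlab).

```python
import numpy as np
from skimage.feature import graycomatrix

img = np.random.randint(0, 64, (64, 64), dtype=np.uint8)  # stand-in CT slice

glcm = graycomatrix(img, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
p = glcm[:, :, 0, 0]
p = p[p > 0]
entropy = -np.sum(p * np.log2(p))   # GLCM entropy (texture feature)

std_dev = img.std()                 # first-order histogram feature
print(f"GLCM entropy = {entropy:.3f}, histogram std = {std_dev:.2f}")
```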

  9. Stage Separation Failure: Model Based Diagnostics and Prognostics

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry; Hafiychuk, Vasyl; Kulikov, Igor; Smelyanskiy, Vadim; Patterson-Hine, Ann; Hanson, John; Hill, Ashley

    2010-01-01

    Safety of the next-generation space flight vehicles requires development of an in-flight Failure Detection and Prognostic (FD&P) system. Development of such a system is a challenging task that involves analysis of many hard-hitting engineering problems across the board. In this paper we report progress in the development of FD&P for the re-contact fault between the upper stage nozzle and the inter-stage caused by first stage and upper stage separation failure. High-fidelity models and analytical estimates are applied to analyze the following sequence of events: (i) structural dynamics of the nozzle extension during the impact; (ii) structural stability of the deformed nozzle in the presence of the pressure and temperature loads induced by the hot gas flow during engine start-up; and (iii) the fault-induced thrust changes in the steady burning regime. The diagnostic is based on measurements of the impact torque. The prognostic is based on analysis of the correlation between the actuator signal and fault-induced changes in the nozzle structural stability and thrust.

  10. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites

    PubMed Central

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-01-01

    This paper attempts to construct a resilient sensor network model, with space weather forecasting as an example. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors and to data missing due to satellite operation. In the proposed model, the missing data of a sensor are interpolated from the associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues its diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures caused by hardware faults or operational suspensions. PMID:27092508

  11. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites.

    PubMed

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-04-15

    This paper attempts to construct a resilient sensor network model, with space weather forecasting as an example. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors and to data missing due to satellite operation. In the proposed model, the missing data of a sensor are interpolated from the associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues its diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures caused by hardware faults or operational suspensions.

  12. Performance-based maintenance of gas turbines for reliable control of degraded power systems

    NASA Astrophysics Data System (ADS)

    Mo, Huadong; Sansavini, Giovanni; Xie, Min

    2018-03-01

    Maintenance actions are necessary for ensuring the proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model for control systems and employs the reduced control performance as a direct degradation measure for deciding on maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure threshold. Inspired by realistic cases, the component degradation model considers a random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is used to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of the preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs compared to CBM and pre-scheduled maintenance.
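
    The degradation ingredient of such a model can be illustrated with a short sketch: a Wiener process with a random start time and unit-to-unit drift variability, checked against preventive and corrective thresholds at periodic inspections. All numerical values below are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch of Wiener-process degradation with random start time and
# unit-to-unit variability, inspected periodically against two thresholds.
import numpy as np

rng = np.random.default_rng(1)
dt, horizon, interval = 0.1, 200.0, 10.0
pm_threshold, failure_threshold = 5.0, 8.0

def simulate_unit():
    start = rng.exponential(20.0)          # random degradation start time
    drift = rng.normal(0.08, 0.02)         # unit-to-unit variability
    t, x, next_inspection = 0.0, 0.0, interval
    while t < horizon:
        t += dt
        if t >= start:                     # degradation has begun
            x += drift * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
        if t >= next_inspection:           # inspection decides the action
            if x >= failure_threshold:
                return "corrective"
            if x >= pm_threshold:
                return "preventive"
            next_inspection += interval
    return "none"

actions = [simulate_unit() for _ in range(1000)]
print({a: actions.count(a) for a in set(actions)})
```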

  13. Vibrational fatigue failures in short cantilevered piping with socket-welding fittings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.K.

    1996-12-01

    Approximately 80% of the fatigue failures in nuclear power plants have been caused by high-cycle vibrational fatigue. Many of these failures have occurred in short, small-bore (2 in. nominal diameter and smaller), unbraced, cantilevered piping with socket-welding fittings. The fatigue failures initiated in the socket welds. These failures have been unexpected and have in some cases caused costly, unscheduled outages. In order to reduce the number of vibrational fatigue failures in these short cantilevered pipes, an acceleration-based vibrational fatigue screening criterion was developed under Electric Power Research Institute (EPRI) sponsorship. In this paper, the acceleration-based criterion is compared to the results obtained from detailed dynamic modeling of a short, cantilevered pipe.

  14. An analytical model to design circumferential clasps for laser-sintered removable partial dentures.

    PubMed

    Alsheghri, Ammar A; Alageel, Omar; Caron, Eric; Ciobanu, Ovidiu; Tamimi, Faleh; Song, Jun

    2018-06-21

    Clasps of removable partial dentures (RPDs) often suffer from plastic deformation and failure by fatigue, a common complication of RPDs. A new technology for processing metal frameworks for dental prostheses based on laser sintering, which allows for precise fabrication of clasp geometry, has recently been developed. This study sought to propose a novel method for designing circumferential clasps for laser-sintered RPDs that avoid plastic deformation or fatigue failure. An analytical model for designing clasps with semicircular cross-sections was derived based on mechanics. The Euler-Bernoulli elastic curved beam theory and Castigliano's energy method were used to relate the stress and undercut to the clasp length, cross-sectional radius, alloy properties, tooth type, and retention force. Finite element analysis (FEA) was conducted on a case study, and the resultant tensile stress and undercut were compared with the analytical model predictions. Pull-out experiments were conducted on laser-sintered cobalt-chromium (Co-Cr) dental prostheses to validate the analytical model results. The proposed circumferential clasp design model yields results in good agreement with FEA and experiments. The results indicate that Co-Cr circumferential clasps in molars that are 13 mm long engaging undercuts of 0.25 mm should have a cross-section radius of 1.2 mm to provide a retention force of 10 N and to avoid plastic deformation or fatigue failure. However, shorter circumferential clasps, such as those in premolars, present high stresses and cannot avoid plastic deformation or fatigue failure. Laser-sintered Co-Cr circumferential clasps in molars are safe, whereas they are susceptible to failure in premolars. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  15. Introspective Reasoning Models for Multistrategy Case-Based and Explanation

    DTIC Science & Technology

    1997-03-10

    symptoms and diseases to causal principles about diseases and first-principle analysis grounded in basic science. Based on research in process... the symptoms of the failure... the process which posts learning goals... a causal explanation of the failure. Secondly, the learner... In the vernacular, a "jones" is a drug habit accompanied by withdrawal symptoms. The verb "to jones"... the faucet for water. Therefore, the story can end with...

  16. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  17. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure detection and localization in the actuating subsystem of an aircraft control system that uses only measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.

  18. A definitional framework for the human/biometric sensor interaction model

    NASA Astrophysics Data System (ADS)

    Elliott, Stephen J.; Kukula, Eric P.

    2010-04-01

    Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human Biometric-Sensor Interaction (HBSI) model. This paper proposes six new definitions based around two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general biometric model developed by Mansfield and Wayman [1].

  19. Rainfall-triggered shallow landslides at catchment scale: Threshold mechanics-based modeling for abruptness and localization

    NASA Astrophysics Data System (ADS)

    von Ruette, J.; Lehmann, P.; Or, D.

    2013-10-01

    Rainfall-induced shallow landslides may occur abruptly without distinct precursors and could span a wide range of soil mass released during a triggering event. We present a rainfall-induced landslide-triggering model for steep catchments with surfaces represented as an assembly of hydrologically and mechanically interconnected soil columns. The abruptness of failure was captured by defining local strength thresholds for mechanical bonds linking soil and bedrock and adjacent columns, whereby a failure of a single bond may initiate a chain reaction of subsequent failures, culminating in local mass release (a landslide). The catchment-scale hydromechanical landslide-triggering model (CHLT) was applied to results from two event-based landslide inventories triggered by two rainfall events in 2002 and 2005 in two nearby catchments located in the Prealps in Switzerland. Rainfall radar data, surface elevation and vegetation maps, and a soil production model for soil depth distribution were used for hydromechanical modeling of failure patterns for the two rainfall events at spatial and temporal resolutions of 2.5 m and 0.02 h, respectively. The CHLT model enabled systematic evaluation of the effects of soil type, mechanical reinforcement (soil cohesion and lateral root strength), and initial soil water content on landslide characteristics. We compared various landslide metrics and spatial distribution of simulated landslides in subcatchments with observed inventory data. Model parameters were optimized for the short but intense rainfall event in 2002, and the calibrated model was then applied for the 2005 rainfall, yielding reasonable predictions of landslide events and volumes and statistically reproducing localized landslide patterns similar to inventory data. The model provides a means for identifying local hot spots and offers insights into the dynamics of locally resolved landslide hazards in mountainous regions.
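
    The chain-reaction triggering idea can be illustrated with a much-reduced sketch: a row of soil columns with random strength thresholds in which one bond failure sheds load onto intact neighbours. The 1D geometry and all parameter values are illustrative assumptions, not the CHLT formulation.

```python
# A minimal sketch of threshold mechanics with load shedding: failure of one
# column can trigger a chain reaction, i.e. a landslide. Values are assumed.
import numpy as np

rng = np.random.default_rng(2)
n = 100
strength = rng.uniform(1.0, 2.0, n)   # local strength thresholds
base_load = np.full(n, 0.9)           # driving stress per column

def cascade(load, strength):
    failed = np.zeros(n, dtype=bool)
    while True:
        newly = (~failed) & (load > strength)
        if not newly.any():
            return failed
        for i in np.flatnonzero(newly):
            failed[i] = True
            for j in (i - 1, i + 1):  # shed load onto intact neighbours
                if 0 <= j < n and not failed[j]:
                    load[j] += load[i] / 2

# Rainfall raises pore pressure, modelled here as a uniform load increase.
for rain in (0.0, 0.2, 0.4):
    released = cascade(base_load + rain, strength).sum()
    print(f"load increment {rain:.1f}: {released} columns released")
```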

  20. Mitigation of commutation failures in LCC-HVDC systems based on superconducting fault current limiters

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Geon; Khan, Umer Amir; Lee, Ho-Yun; Lim, Sung-Woo; Lee, Bang-Wook

    2016-11-01

    Commutation failures in line commutated converter (LCC) based HVDC systems cause severe damage to the entire power grid. In an LCC-HVDC system, thyristor valves are turned on by a firing signal, but turn-off is governed by the external AC voltage applied from the surrounding network. When a fault occurs in the AC system, turn-off control of the thyristor valves is unavailable due to the voltage collapse at the point of common coupling (PCC), which causes commutation failure in the LCC-HVDC link. Commutation failure can lead to power transfer interruption, DC voltage drop, and severe voltage fluctuation in the AC system. In a severe situation, it might cause the protection system to block the valves. In this paper, as a solution to prevent the voltage collapse at the PCC and to limit the fault current, an application study of a resistive superconducting fault current limiter (SFCL) in an LCC-HVDC grid system was performed with mathematical and simulation analyses. The simulation model was designed in Matlab/Simulink based on the Haenam-Jeju HVDC power grid in Korea, which includes a conventional AC system, an onshore wind farm, and a resistive SFCL model. The results show that applying an SFCL to an LCC-HVDC system is an effective way to mitigate commutation failure. The process of determining the optimum quench resistance of the SFCL that enables recovery from commutation failure was then investigated in depth.

  1. Creep and creep rupture of laminated graphite/epoxy composites. Ph.D. Thesis. Final Report, 1 Oct. 1979 - 30 Sep. 1980

    NASA Technical Reports Server (NTRS)

    Dillard, D. A.; Morris, D. H.; Brinson, H. F.

    1981-01-01

    An incremental numerical procedure based on lamination theory is developed to predict creep and creep rupture of general laminates. Existing unidirectional creep compliance and delayed failure data are used to develop analytical models for lamina response. The compliance model is based on a procedure proposed by Findley which incorporates the power law for creep into a nonlinear constitutive relationship. The matrix octahedral shear stress is assumed to control the stress interaction effect. A modified superposition principle is used to account for the effect of varying stress level on the creep strain. The lamina failure model is based on a modification of the Tsai-Hill theory which includes the time-dependent creep rupture strength. A linear cumulative damage law is used to monitor the remaining lifetime in each ply.
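
    The remaining-lifetime bookkeeping can be illustrated with a short sketch of a linear cumulative damage law: damage accrues as time spent at a stress level divided by the creep-rupture life at that level. The power-law rupture life and the load history below are illustrative assumptions, not the report's data.

```python
# A minimal sketch of linear cumulative damage (Miner-type) monitoring of a
# ply under a stepped stress history. rupture_life() is a hypothetical model.
def rupture_life(stress_mpa):
    """Hypothetical creep-rupture life (hours) at a given stress (MPa)."""
    return 1.0e4 * (100.0 / stress_mpa) ** 8

history = [(95.0, 8000.0), (100.0, 3000.0)]   # (stress MPa, hours held)

damage = sum(hours / rupture_life(s) for s, hours in history)
print(f"accumulated damage D = {damage:.2f}")
print("ply survives" if damage < 1.0 else "creep rupture predicted")
```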

  2. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated against line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate, and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
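
    The financial step can be illustrated with a short sketch of the NPV and ROI arithmetic on discounted after-tax savings. The cash flows, discount rate, tax rate, and inflation figures below are illustrative assumptions, not the study's data.

```python
# A minimal sketch of NPV/ROI appraisal of a simulated improvement scenario.
investment = 120_000.0        # hypothetical cost of the improvement
annual_saving = 45_000.0      # hypothetical throughput gain from simulation
years, discount, cit, inflation = 5, 0.08, 0.19, 0.02

npv = -investment
for year in range(1, years + 1):
    after_tax = annual_saving * (1 + inflation) ** year * (1 - cit)
    npv += after_tax / (1 + discount) ** year   # discount to present value

print(f"NPV = {npv:,.0f}, ROI = {npv / investment:.1%}")
```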

  3. Modeling Progressive Damage Using Local Displacement Discontinuities Within the FEAMAC Multiscale Modeling Framework

    NASA Technical Reports Server (NTRS)

    Ranatunga, Vipul; Bednarcyk, Brett A.; Arnold, Steven M.

    2010-01-01

    A method for performing progressive damage modeling in composite materials and structures based on continuum-level interfacial displacement discontinuities is presented. The proposed method enables the exponential evolution of the interfacial compliance, resulting in unloading of the tractions at the interface after delamination or failure occurs. In this paper, the proposed continuum displacement discontinuity model has been used to simulate failure within both isotropic and orthotropic materials efficiently and to explore the possibility of predicting the crack path therein. Simulation results obtained from Mode-I and Mode-II fracture compare the proposed approach with the cohesive element approach and the Virtual Crack Closure Technique (VCCT) available within the ABAQUS (ABAQUS, Inc.) finite element software. Furthermore, an eccentrically loaded 3-point bend test has been simulated with the displacement discontinuity model, and the resulting crack path prediction has been compared with a prediction based on the extended finite element method (XFEM) approach.

  4. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  5. Failing to Learn: Towards a Unified Design Approach for Failure-Based Learning

    ERIC Educational Resources Information Center

    Tawfik, Andrew A.; Rong, Hui; Choi, Ikseon

    2015-01-01

    To date, many instructional systems are designed to support learners as they progress through a problem-solving task. Often these systems are designed in accordance with instructional design models that progress the learner efficiently through the problem-solving process. However, theories from various fields have discussed failure as a strategic…

  6. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations comprise subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.

  7. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that protect an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that, using adaptive sampling, the number of designs required to find the optimum was reduced drastically while the accuracy was improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
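
    The separable sampling idea can be illustrated in a few lines: when capacity and response are independent, every capacity sample can be compared against every response sample, extracting more accuracy from the same number of expensive simulations. The distributions and sample sizes below are illustrative assumptions.

```python
# A minimal sketch of separable Monte Carlo versus crude Monte Carlo for a
# capacity-minus-response limit state. Distributions are assumed.
import numpy as np

rng = np.random.default_rng(3)
capacity = rng.normal(10.0, 1.0, size=200)   # e.g. strength samples
response = rng.normal(7.0, 1.5, size=200)    # e.g. FEA stress predictions

p_crude = np.mean(capacity < response)                        # pairs i with i
p_separable = np.mean(capacity[:, None] < response[None, :])  # all M*N pairs
print(f"crude MC: {p_crude:.4f}, separable MC: {p_separable:.4f}")
```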

  8. [Chronic heart failure].

    PubMed

    Gosch, Markus

    2008-08-01

    As a consequence of increasing life expectancy, the number of patients suffering from chronic heart failure has grown continuously in the past few decades, especially among the old and very old. Frailty is a clinical syndrome to which geriatricians attach great importance. Like many other diseases, chronic heart failure can cause frailty. Based on the observation that the functional capacity of patients with heart failure correlates only weakly with cardiological findings, the model of peripheral myopathy in chronic heart failure was developed. Different pathophysiological changes may cause the increasing exercise intolerance in patients with chronic heart failure. Several experimental approaches to the therapy of frailty caused by chronic heart failure are already under consideration. At the moment we have to focus our efforts on optimal therapy of heart failure, especially with angiotensin-converting-enzyme inhibitors and beta-blockers, and on individual endurance and strength training.

  9. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used; it represents a supersonic fighter and includes model-following adaptive control laws based on nonlinear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase, a general evaluation of the failure is performed, with estimation of the magnitude/severity of the failure and prediction of its effect on reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty-space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape, are also analyzed in this thesis. These were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm directly addresses the complexity and multi-dimensionality associated with a damaged aircraft's dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.

  10. Prognostics of Power Mosfets Under Thermal Stress Accelerated Aging Using Data-Driven and Model-Based Methodologies

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Saxena, Abhinav; Saha, Sankalita; Goebel, Kai F.

    2011-01-01

    An approach for predicting the remaining useful life of power MOSFET (metal-oxide-semiconductor field-effect transistor) devices has been developed. Power MOSFETs are semiconductor switching devices that are instrumental in electronics equipment such as those used in the operation and control of modern aircraft and spacecraft. The MOSFETs examined here were aged under thermal overstress in a controlled experiment, and continuous performance degradation data were collected from the accelerated aging experiment. Die-attach degradation was determined to be the primary failure mode. The collected run-to-failure data were analyzed, and it was revealed that ON-state resistance increased as the die-attach degraded under high thermal stresses. Results from finite element simulation analysis support the observations from the experimental data. Data-driven and model-based prognostics algorithms were investigated in which ON-state resistance was used as the primary precursor-of-failure feature. A Gaussian process regression algorithm was explored as an example of a data-driven technique, and an extended Kalman filter and a particle filter were used as examples of model-based techniques. Both methods were able to provide valid results. Prognostic performance metrics were employed to evaluate and compare the algorithms.
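
    The model-based idea can be illustrated with a stripped-down sketch: a linear Kalman filter tracking ON-state resistance and its drift, then extrapolating to a failure threshold for remaining useful life. The noise levels, threshold, and synthetic data are illustrative assumptions, not the study's EKF or particle-filter setup.

```python
# A minimal sketch of filter-based prognostics on a drifting ON-resistance.
import numpy as np

rng = np.random.default_rng(4)
dt, thr = 1.0, 1.25                        # hours per step, failure level
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-drift state model
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-8])
Rn = np.array([[1e-4]])

x, P, true_r = np.array([1.0, 0.0]), np.eye(2) * 0.1, 1.0
for _ in range(150):
    true_r += 0.0015 * dt                  # hidden degradation trend
    z = true_r + 0.01 * rng.standard_normal()
    x, P = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)                # update with new measurement
    P = (np.eye(2) - K @ H) @ P

rul = (thr - x[0]) / x[1] if x[1] > 0 else float("inf")
print(f"estimated drift {x[1]:.5f}/h, remaining useful life ~ {rul:.0f} h")
```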

  11. Formulation of an Integrated Community Based Disaster Management for Hydroelectric facilities: The Malaysia Case

    NASA Astrophysics Data System (ADS)

    Hijazzi, Norshamirra; Thiruchelvam, Sivadass; Sabri Muda, Rahsidi; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Ghazali, Azrul; Kamal Kadir, Ahmad; Hakimie, Hazlinda; Sahari, Khairul Salleh Mohamed; Hasini, Hasril; Mohd Sidek, Lariyah; Itam, Zarina; Fadhli Mohamad, Mohd; Razad, Azwin Zailti Abdul

    2016-03-01

    Dams, however significant their contributions are to the society, are not immune to failures and diminishing lifespan not unlike other structural elements in our infrastructure. Despite continuing efforts on design, construction, operation, and maintenance of dams to improve the safety of the dams, the possibility of unforeseen events of dam failures is still possible. Seeing that dams are usually integrated into close approximities with the community, dam failures may consequent in tremendous loss of lives and properties. The aims of formulation of Integrated Community Based Disaster Management (ICBDM) is to simulate evacuation modelling and emergency planning in order to minimize loss of life and property damages in the event of a dam-related disaster. To achieve the aim above, five main pillars have been identified for the formulation of ICBDM. A series of well-defined program inclusive of hydrological 2-D modelling, life safety modelling, community based EWS and CBTAP will be conducted. Finally, multiple parties’ engagement is to be carried out in the form of table top exercise to measure the readiness of emergency plans and response capabilities of key players during the state of a crisis.

  12. Prediction of L70 lumen maintenance and chromaticity for LEDs using extended Kalman filter models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Davis, Lynn

    2013-09-30

    Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate-controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, LEDs have no filament to “burn”. LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in an LED at a single time, causing lumen depreciation. The underlying TM-21 model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at system level is needed. In this paper, Kalman filter (KF) and extended Kalman filter (EKF) models have been used to develop a 70-percent lumen maintenance life prediction model for LEDs used in SSL luminaires. Ten-thousand-hour LM-80 test data for various LEDs have been used for model development. The system state at each future time has been computed based on the state space at the preceding time step, the system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise, and measurement noise. The future state of the lumen depreciation has been estimated based on a second-order Kalman filter model and a Bayesian framework. The measured state variable has been related to the underlying damage using physics-based models. Predictions of L70 life for the LEDs used in SSL luminaires from the KF- and EKF-based models have been compared with the TM-21 model predictions and experimental data.
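
    For contrast with the filter-based approach, the TM-21-style baseline can be sketched in a few lines: fit an exponential decay model to LM-80-like lumen maintenance data and solve for the L70 time. The synthetic data below are illustrative assumptions (the real TM-21 procedure also restricts the fitting window and caps the projection).

```python
# A minimal sketch of a TM-21-style exponential fit, phi(t) = B*exp(-a*t),
# and the implied L70 = ln(B/0.7)/a. Data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1000.0, 10001.0, 1000.0)         # LM-80-like test hours
lumen = 1.02 * np.exp(-2.4e-5 * t) * (1 + 0.003 * rng.standard_normal(t.size))

slope, intercept = np.polyfit(t, np.log(lumen), 1)   # ln(phi) = ln(B) - a*t
B, a = np.exp(intercept), -slope
l70 = np.log(B / 0.7) / a
print(f"B = {B:.3f}, alpha = {a:.2e}/h, projected L70 ~ {l70:,.0f} h")
```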

  13. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths, with the centers of the initial cracks spaced uniformly apart. The data used were chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary-length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest-link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest-link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes on a Hewlett-Packard workstation.
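
    The extreme-value bound can be illustrated with a short sketch: the probability that the largest of n i.i.d. initial cracks exceeds a critical size is 1 - F(a_c)^n, which can be checked against direct Monte Carlo. The lognormal crack-size distribution and critical size below are illustrative assumptions.

```python
# A minimal sketch of the lower-bound idea for multi-site damage: compare the
# closed-form extreme-value probability with brute-force Monte Carlo.
import numpy as np
from scipy import stats

n_cracks, a_crit = 50, 0.9                     # cracks per row, critical size
crack_dist = stats.lognorm(s=0.5, scale=0.2)   # assumed initial crack sizes

p_ev = 1.0 - crack_dist.cdf(a_crit) ** n_cracks   # P(max crack > a_crit)

rng = np.random.default_rng(6)
samples = crack_dist.rvs(size=(50_000, n_cracks), random_state=rng)
p_mc = np.mean(samples.max(axis=1) > a_crit)
print(f"extreme-value bound: {p_ev:.4f}, Monte Carlo: {p_mc:.4f}")
```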

  14. Fatigue of notched fiber composite laminates. Part 1: Analytical model

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P. V., Jr.; Kulkarni, S. V.; Huang, S. N.; Rosen, B. W.

    1975-01-01

    A description is given of a semi-empirical, deterministic analysis for prediction and correlation of fatigue crack growth, residual strength, and fatigue lifetime for fiber composite laminates containing notches (holes). The failure model used for the analysis is based upon composite heterogeneous behavior and experimentally observed failure modes under both static and fatigue loading. The analysis is consistent with the wearout philosophy. Axial cracking and transverse cracking failure modes are treated together in the analysis. Cracking off-axis is handled by making a modification to the axial cracking analysis. The analysis predicts notched laminate failure from unidirectional material fatigue properties using constant-strain laminate analysis techniques. For multidirectional laminates, it is necessary to know lamina fatigue behavior under axial normal stress, transverse normal stress, and axial shear stress. Examples of the analysis method are given.

  15. Prediction of failure in notched carbon-fibre-reinforced-polymer laminates under multi-axial loading.

    PubMed

    Tan, J L Y; Deshpande, V S; Fleck, N A

    2016-07-13

    A damage-based finite-element model is used to predict the fracture behaviour of centre-notched quasi-isotropic carbon-fibre-reinforced-polymer laminates under multi-axial loading. Damage within each ply is associated with fibre tension, fibre compression, matrix tension and matrix compression. Inter-ply delamination is modelled by cohesive interfaces using a traction-separation law. Failure envelopes for a notch and a circular hole are predicted for in-plane multi-axial loading and are in good agreement with the observed failure envelopes from a parallel experimental study. The ply-by-ply (and inter-ply) damage evolution and the critical mechanisms of ultimate failure also agree with the observed damage evolution. It is demonstrated that accurate predictions of notched compressive strength are obtained upon employing the band broadening stress for microbuckling, highlighting the importance of this damage mode in compression. This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. © 2016 The Author(s).

  16. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2006-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  17. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis-Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  18. Failure tolerance of spike phase synchronization in coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2011-09-01

    Neuronal synchronization plays an important role in various functions of the nervous system such as binding, cognition, information processing, and computation. In this paper, we investigated how random and intentional failures in the nodes of a network influence its phase synchronization properties. We considered both artificially constructed networks, using models such as preferential attachment, Watts-Strogatz, and Erdős-Rényi, as well as a number of real neuronal networks. The failure strategy was either random or intentional based on properties of the nodes such as degree, clustering coefficient, betweenness centrality, and vulnerability. The Hindmarsh-Rose model was considered as the mathematical model for the individual neurons, and the phase synchronization of the spike trains was monitored as a function of the percentage/number of removed nodes. The numerical simulations were supplemented by considering coupled non-identical Kuramoto oscillators. Failures based on the clustering coefficient, i.e., removing the nodes with high values of the clustering coefficient, had the least effect on the spike synchrony in all of the networks. This was followed by errors where the nodes were removed randomly. However, the behavior of the other three attack strategies was not uniform across the networks, and different strategies were the most influential in different network structures.
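
    The supplementary Kuramoto experiment is easy to reproduce in miniature: non-identical phase oscillators on a random graph, with the order parameter r measured after removing the highest-degree nodes. The network size, coupling strength, and integration settings below are illustrative assumptions.

```python
# A minimal sketch of degree-targeted node removal in a Kuramoto network;
# r close to 1 means strong phase synchrony.
import numpy as np

rng = np.random.default_rng(7)
n, p, K, dt, steps = 100, 0.08, 2.0, 0.02, 3000
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T                 # Erdos-Renyi adjacency
omega = rng.normal(0.0, 1.0, n)                # natural frequencies

def order_parameter_after_removal(removed):
    alive = np.ones(n, dtype=bool); alive[removed] = False
    Aa = A * np.outer(alive, alive)            # cut links to removed nodes
    theta = rng.uniform(0.0, 2 * np.pi, n)
    for _ in range(steps):                     # Euler integration
        S = np.sin(theta[None, :] - theta[:, None])   # S[i,j]=sin(th_j-th_i)
        theta += dt * (omega + K * (Aa * S).sum(axis=1))
    return abs(np.exp(1j * theta[alive]).mean())

hubs_first = np.argsort(-A.sum(axis=1))        # intentional degree attack
for frac in (0.0, 0.1, 0.3):
    k = int(frac * n)
    r = order_parameter_after_removal(hubs_first[:k])
    print(f"remove {k:3d} hubs -> r = {r:.2f}")
```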

  19. Investigation of PDC bit failure based on stick-slip vibration analysis of drilling string system plus drill bit

    NASA Astrophysics Data System (ADS)

    Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei

    2018-03-01

    Undesired stick-slip vibration is the main source of PDC bit failures such as tooth fracture and tooth loss. The study of PDC bit failure based on stick-slip vibration analysis is therefore crucial to prolonging the service life of the PDC bit and improving the ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drilling string system plus the PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and the cutting behavior of the PDC bit are innovatively introduced. The results reveal that the PDC bit fails more easily than other drilling tools due to severe stick-slip vibration. Moreover, reducing the WOB (weight on bit) and improving the driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed using an engineering example. It can be concluded that torsional impact mitigates stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
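
    The mechanism can be illustrated with the model reduced to a single torsional degree of freedom: a bit driven through a compliant string against velocity-weakening bit-rock friction, which produces the stick-slip limit cycle. All parameter values are illustrative assumptions chosen so that steady rotation is unstable; this is not the paper's 4-DOF model.

```python
# A minimal sketch of stick-slip: torsional spring drive plus
# velocity-weakening friction at the bit. Parameters are assumed.
import numpy as np

J, k, c = 100.0, 500.0, 1.0           # bit inertia, string stiffness, damping
T_s, T_k, v_ref = 900.0, 600.0, 2.0   # static/kinetic torque, weakening rate
omega_top, dt, steps = 5.0, 1e-3, 60000

phi = theta = omega = 0.0             # top-drive angle, bit angle, bit speed
lo, hi = np.inf, -np.inf
for i in range(steps):
    phi += omega_top * dt
    drive = k * (phi - theta) + c * (omega_top - omega)
    if omega == 0.0 and abs(drive) <= T_s:
        domega = 0.0                  # stick: static friction holds the bit
    else:
        sgn = np.sign(omega) if omega != 0.0 else np.sign(drive)
        friction = sgn * (T_k + (T_s - T_k) * np.exp(-abs(omega) / v_ref))
        domega = (drive - friction) / J
    omega = max(omega + domega * dt, 0.0)   # no backward bit rotation
    theta += omega * dt
    if i > steps // 2:                # record limit-cycle extremes
        lo, hi = min(lo, omega), max(hi, omega)

print(f"bit speed oscillates between {lo:.2f} and {hi:.2f} rad/s")
```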

  20. Meso-Scale Progressive Damage Behavior Characterization of Triaxial Braided Composites under Quasi-Static Tensile Load

    NASA Astrophysics Data System (ADS)

    Ren, Yiru; Zhang, Songjun; Jiang, Hongyong; Xiang, Jinwu

    2018-04-01

    Based on continuum damage mechanics (CDM), a sophisticated 3D meso-scale finite element (FE) model is proposed to characterize the progressive damage behavior of 2D Triaxial Braided Composites (2DTBC) with a 60° braiding angle under quasi-static tensile load. The modified Von Mises strength criterion and the 3D Hashin failure criterion are used to predict the damage initiation of the pure matrix and fiber tows. A combined interface damage and friction constitutive model is applied to predict the interface damage behavior. The Murakami-Ohno stiffness degradation scheme is employed to predict the damage evolution process of each constituent. Coupled with the ordinary and translational symmetry boundary conditions, the predicted tensile elastic response, including the tensile strength and failure strain of the 2DTBC, is in good agreement with the available experimental data. The numerical results show that the main failure modes of the composites under axial tensile load are pure matrix cracking, fiber and matrix tension failure in bias fiber tows, matrix tension failure in axial fiber tows, and interface debonding; the main failure modes of the composites subjected to transverse tensile load are free-edge effects, matrix tension failure in bias fiber tows, and interface debonding.

  1. Comparison of atazanavir/ritonavir and darunavir/ritonavir based antiretroviral therapy for antiretroviral naïve patients.

    PubMed

    Antoniou, Tony; Szadkowski, Leah; Walmsley, Sharon; Cooper, Curtis; Burchell, Ann N; Bayoumi, Ahmed M; Montaner, Julio S G; Loutfy, Mona; Klein, Marina B; Machouf, Nima; Tsoukas, Christos; Wong, Alexander; Hogg, Robert S; Raboud, Janet

    2017-04-11

    Atazanavir/ritonavir and darunavir/ritonavir are common protease inhibitor-based regimens for treating patients with HIV. Studies comparing these drugs in clinical practice are lacking. We conducted a retrospective cohort study of antiretroviral-naïve participants in the Canadian Observational Cohort (CANOC) collaboration initiating atazanavir/ritonavir- or darunavir/ritonavir-based treatment. We used separate Fine and Gray competing risk regression models to compare times to regimen failure (a composite of virologic failure or discontinuation for any reason). Additional endpoints included virologic failure, discontinuation due to virologic failure, discontinuation for other reasons, and virologic suppression. We studied 222 patients treated with darunavir/ritonavir and 1791 patients treated with atazanavir/ritonavir. Following multivariable adjustment, there was no difference between darunavir/ritonavir and atazanavir/ritonavir in the risk of regimen failure (adjusted hazard ratio 0.76, 95% CI 0.56 to 1.03). Darunavir/ritonavir-treated patients were at lower risk of virologic failure relative to atazanavir/ritonavir-treated patients (aHR 0.50, 95% CI 0.28 to 0.91), a finding driven largely by high rates of virologic failure among atazanavir/ritonavir-treated patients in the province of British Columbia. Of 108 discontinuations due to virologic failure, all occurred in patients starting atazanavir/ritonavir. There was no difference between regimens in time to discontinuation for reasons other than virologic failure (aHR 0.93, 95% CI 0.65 to 1.33) or virologic suppression (aHR 0.99, 95% CI 0.82 to 1.21). The risk of regimen failure was similar between patients treated with darunavir/ritonavir and atazanavir/ritonavir. Although darunavir/ritonavir was associated with a lower risk of virologic failure relative to atazanavir/ritonavir, this difference varied substantially by Canadian province and likely reflects regional variation in prescribing practices and patient characteristics.

  2. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale, high-resolution modeling of the rock failure process is a powerful means in modern rock mechanics studies to reveal complex failure mechanisms and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation and damage to failure, places high demands on the design, implementation scheme, and computational capacity of the numerical software system. This study is aimed at developing a parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator, that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties and to deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of each representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows-Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, a field-scale net fracture spacing example, and an engineering-scale rock slope example. The simulation results indicate that relatively high speedup and computational efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In the laboratory-scale simulation, well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In the field-scale simulation, the formation process of net fracture spacing from initiation and propagation to saturation can be revealed completely. In the engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.

  3. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

    Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed, and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure, or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition and state, and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization problem so that the optimal control action can be determined and taken. This model and control may be updated in real time to account for engine-to-engine variation, deterioration, damage, faults, and/or failures using optimal corrective control action command(s).

  4. Research on cascading failure in multilayer network with different coupling preference

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Jin, Lei; Wang, Xiao Juan

    This paper is aimed at constructing multilayer networks that are robust against cascading failure. Considering link protection strategies in reality, we design a cascading failure model based on load distribution and extend it to multiple layers. We use the cascading failure model to derive the size of the largest connected component after a cascading failure, from which we find that the performance of four kinds of load distribution strategies is associated with the ratio of the load on the current edge to that on its adjacent edges. Coupling preference is a typical characteristic of multilayer networks and relates to network robustness. The coupling preference of multilayer networks takes two forms: the coupling preference within layers and the coupling preference between layers. To analyze the relationship between the coupling preference and multilayer network robustness, we design a construction algorithm to generate multilayer networks with different coupling preferences. Simulation results show that the load distribution based on node betweenness performs best. When the coupling coefficient within layers is zero, the scale-free network is the most robust. In the random network, assortative coupling within layers is more robust than disassortative coupling. For the coupling preference between layers, assortative coupling between layers is more robust than disassortative coupling in both the scale-free network and the random network.
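
    The load-redistribution mechanism can be illustrated with a small single-layer sketch: each node carries a load proportional to its degree with limited spare capacity, and a failed node sheds its load onto intact neighbours, possibly overloading them in turn. The network parameters and tolerance value are illustrative assumptions, not the paper's multilayer model.

```python
# A minimal sketch of a load-redistribution cascade on a random graph.
import numpy as np

rng = np.random.default_rng(8)
n, p, tolerance = 200, 0.03, 1.4
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T
load = A.sum(axis=1)                   # initial load ~ node degree
capacity = tolerance * load + 1e-9     # limited spare capacity

def cascade_size(first):
    failed = np.zeros(n, dtype=bool); failed[first] = True
    cur, frontier = load.copy(), [first]
    while frontier:
        nxt = []
        for i in frontier:
            nbrs = [j for j in np.flatnonzero(A[i]) if not failed[j]]
            for j in nbrs:
                cur[j] += cur[i] / len(nbrs)   # shed load onto neighbours
                if cur[j] > capacity[j]:
                    failed[j] = True; nxt.append(j)
        frontier = nxt
    return failed.sum()

hub = int(np.argmax(load))
print(f"failing the highest-degree node removes {cascade_size(hub)} of {n} nodes")
```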

  5. Conduit Stability and Collapse in Explosive Volcanic Eruptions: Coupling Conduit Flow and Failure Models

    NASA Astrophysics Data System (ADS)

    Mullet, B.; Segall, P.

    2017-12-01

    Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux. Abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two-phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First-order models of volatile exsolution (i.e., phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth-dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining the conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively, leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized, and possible implications for conduit flow are discussed, including cessation of the eruption.
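
    The wall-stability check can be illustrated with a wellbore-style sketch: hoop and radial stresses at the conduit wall from the Kirsch solution for an isotropic far-field stress, tested against a Mohr-Coulomb criterion as magma pressure drops co-eruptively. The rock properties and stress values below are illustrative assumptions.

```python
# A minimal sketch of a Mohr-Coulomb check at a cylindrical conduit wall.
import math

sigma_far = 100e6                      # assumed lithostatic stress (Pa)
cohesion, phi = 10e6, math.radians(35)
n_phi = (1 + math.sin(phi)) / (1 - math.sin(phi))
ucs = 2 * cohesion * math.sqrt(n_phi)  # unconfined compressive strength

for p_magma in (90e6, 50e6, 20e6):     # co-eruptive pressure loss
    sigma_hoop = 2 * sigma_far - p_magma   # Kirsch solution at the wall
    s1, s3 = max(sigma_hoop, p_magma), min(sigma_hoop, p_magma)
    fails = s1 > n_phi * s3 + ucs          # Mohr-Coulomb criterion
    print(f"p = {p_magma/1e6:3.0f} MPa: hoop = {sigma_hoop/1e6:5.1f} MPa,"
          f" wall {'FAILS' if fails else 'stable'}")
```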

  6. A fuzzy set approach for reliability calculation of valve controlling electric actuators

    NASA Astrophysics Data System (ADS)

    Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.

    2017-02-01

    Oil and gas equipment, and electric actuators in particular, frequently operate in various operational modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate this ambiguity, reliability model parameters can be defined as fuzzy numbers. We suggest a technique for constructing fundamental fuzzy-valued reliability measures based on an analysis of electric actuator failure data, indexed by the amount of work completed before failure rather than by failure time. The paper also provides a computational example of fuzzy-valued reliability and hazard rate functions, assuming the Kumaraswamy complementary Weibull geometric distribution as the lifetime (reliability) model for electric actuators.
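
    The fuzzy-valued reliability idea can be sketched with alpha-cuts. For simplicity, the example below uses a plain two-parameter Weibull lifetime model in place of the paper's Kumaraswamy complementary Weibull geometric distribution, and a triangular fuzzy scale parameter; both choices are illustrative assumptions.

```python
# Minimal sketch: fuzzy-valued reliability via alpha-cuts of a triangular
# fuzzy scale parameter. R(t) is monotone increasing in eta, so interval
# endpoints of the alpha-cut map directly to reliability bounds.
import math

def weibull_reliability(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, mode, hi) at level alpha."""
    lo, m, hi = tri
    return (lo + alpha * (m - lo), hi - alpha * (hi - m))

fuzzy_eta = (800.0, 1000.0, 1300.0)   # fuzzy characteristic amount of work
beta, t = 1.8, 500.0
for alpha in (0.0, 0.5, 1.0):
    e_lo, e_hi = alpha_cut(fuzzy_eta, alpha)
    print(alpha, weibull_reliability(t, beta, e_lo),
          weibull_reliability(t, beta, e_hi))
```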

  7. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Highly replicated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they remain very expensive when optimization must be performed on top of such estimates, because yield calculation requires a large number of SPICE simulations, which account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we further develop an accelerated algorithm that additionally enhances the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
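
    The surrogate-for-simulation idea can be sketched briefly. In the example below, the function spice_measure is a hypothetical stand-in for a SPICE run, and the polynomial feature set, sample sizes, and spec limit are all illustrative assumptions; the paper's actual mixture model is more elaborate.

```python
# Minimal sketch: fit a lasso-regularized polynomial surrogate to a small
# set of "simulation" samples, then estimate the failure rate cheaply by
# Monte Carlo on the surrogate instead of the simulator.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def spice_measure(x):              # hypothetical placeholder for a SPICE run
    return 0.3 * x[0] - 0.2 * x[1] ** 2 + 0.1 * x[0] * x[2] + rng.normal(0, 0.01)

X = rng.normal(size=(300, 6))      # process + design variables
y = np.apply_along_axis(spice_measure, 1, X)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = Lasso(alpha=1e-3).fit(poly.fit_transform(X), y)

# Surrogate-based failure-rate estimate over many Monte Carlo samples.
Xmc = rng.normal(size=(200_000, 6))
fail_rate = np.mean(model.predict(poly.transform(Xmc)) < -0.8)  # spec limit
print(f"estimated failure rate: {fail_rate:.2e}")
```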

  8. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate-change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, where the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of a probability distribution (probability density function, or PDF) that describes a population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus, conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
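
    The conditional-probability calculation that underlies such approaches is short enough to show directly. The sketch below uses a lognormal recurrence-time PDF; the median recurrence, aperiodicity, and elapsed times are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: conditional probability of an earthquake in the next
# `window` years given `elapsed` years since the last event, under a
# lognormal recurrence-time PDF.
from scipy.stats import lognorm

median_recurrence = 150.0   # years (scale = median for the lognormal)
aperiodicity = 0.5          # lognormal shape parameter
dist = lognorm(s=aperiodicity, scale=median_recurrence)

def conditional_probability(elapsed, window):
    """P(T <= elapsed + window | T > elapsed)."""
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

# The hazard implied by this PDF rises as the fault "ages", so the same
# 30-year window carries more probability later in the cycle.
for elapsed in (50.0, 100.0, 150.0):
    print(elapsed, conditional_probability(elapsed, window=30.0))
```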

  9. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.

  10. Compression failure mechanisms of uni-ply composite plates with a circular cutout

    NASA Technical Reports Server (NTRS)

    Khamseh, A. R.; Waas, A. M.

    1992-01-01

    The effect of circular-hole size on the failure mode of uni-ply graphite-epoxy composite plates is investigated experimentally and analytically for uniaxial compressive loading. The test specimens are sandwiched between polyetherimide plastic for nondestructive evaluation of the uni-ply failure mechanisms associated with a range of hole sizes. Finite-element modeling based on classical lamination theory is conducted for the corresponding materials and geometries to reproduce the experimental results analytically. The type of compressive failure is found to be a function of hole size, with fiber buckling/kinking at the hole being the dominant failure mechanism for hole-diameter/plate-width ratios exceeding 0.062. The results of the finite-element analysis support the experimental data for these failure mechanisms and for those corresponding to smaller hole sizes.

  11. When Project Commitment Leads to Learning from Failure: The Roles of Perceived Shame and Personal Control

    PubMed Central

    Wang, Wenzhou; Wang, Bin; Yang, Ke; Yang, Chong; Yuan, Wenlong; Song, Shanghao

    2018-01-01

    Facing a remarkably changing world, researchers have gradually shifted emphasis from successful experiences to failures. In the current study, we build a model to explore the relationship between project commitment and learning from failure, and test how emotion (i.e., perceived shame after failure) and cognition (i.e., attribution for failure) affect this process. After randomly selecting 400 firms from the list of high-tech firms reported by the Beijing Municipal Science and Technology Commission, we use a two-wave investigation of the employees, and the final sample consists of 140 teams from 58 companies in the technology industry in mainland China. The results provide evidence for the positive role of personal control attribution in the relationship between project commitment and learning from failure. However, in contrast to previous studies, perceived shame, as the negative emotion after failed events, could bring desirable outcomes during this process. Based on the results, we further expand a model to explain the behavioral responses after failure, and the implications of our findings for research and practice are discussed. The failures and reverses which await men - and one after another sadden the brow of youth - add a dignity to the prospect of human life, which no Arcadian success would do. —Henry David Thoreau PMID:29467699

  12. When Project Commitment Leads to Learning from Failure: The Roles of Perceived Shame and Personal Control.

    PubMed

    Wang, Wenzhou; Wang, Bin; Yang, Ke; Yang, Chong; Yuan, Wenlong; Song, Shanghao

    2018-01-01

    Facing a remarkably changing world, researchers have gradually shifted emphasis from successful experiences to failures. In the current study, we build a model to explore the relationship between project commitment and learning from failure, and test how emotion (i.e., perceived shame after failure) and cognition (i.e., attribution for failure) affect this process. After randomly selecting 400 firms from the list of high-tech firms reported by the Beijing Municipal Science and Technology Commission, we use a two-wave investigation of the employees, and the final sample consists of 140 teams from 58 companies in the technology industry in mainland China. The results provide evidence for the positive role of personal control attribution in the relationship between project commitment and learning from failure. However, in contrast to previous studies, perceived shame, as the negative emotion after failed events, could bring desirable outcomes during this process. Based on the results, we further expand a model to explain the behavioral responses after failure, and the implications of our findings for research and practice are discussed. The failures and reverses which await men - and one after another sadden the brow of youth - add a dignity to the prospect of human life, which no Arcadian success would do. -Henry David Thoreau.

  13. A Novel MiRNA-Based Predictive Model for Biochemical Failure Following Post-Prostatectomy Salvage Radiation Therapy

    PubMed Central

    Stegmaier, Petra; Drendel, Vanessa; Mo, Xiaokui; Ling, Stella; Fabian, Denise; Manring, Isabel; Jilg, Cordula A.; Schultze-Seemann, Wolfgang; McNulty, Maureen; Zynger, Debra L.; Martin, Douglas; White, Julia; Werner, Martin; Grosu, Anca L.; Chakravarti, Arnab

    2015-01-01

    Purpose To develop a microRNA (miRNA)-based predictive model for prostate cancer patients of 1) time to biochemical recurrence after radical prostatectomy and 2) biochemical recurrence after salvage radiation therapy following documented biochemical disease progression post-radical prostatectomy. Methods Forty-three patients who had undergone salvage radiation therapy following biochemical failure after radical prostatectomy, with greater than 4 years of follow-up data, were identified. Formalin-fixed, paraffin-embedded tissue blocks were collected for all patients, and total RNA was isolated from 1-mm cores enriched for tumor (>70%). Eight hundred miRNAs were analyzed simultaneously using the nCounter human miRNA v2 assay (NanoString Technologies; Seattle, WA). Univariate and multivariate Cox proportional hazards regression models as well as receiver operating characteristic analysis were used to identify statistically significant miRNAs that were predictive of biochemical recurrence. Results Eighty-eight miRNAs were identified to be significantly (p<0.05) associated with biochemical failure post-prostatectomy by multivariate analysis and clustered into two groups that correlated with early (≤36 months) versus late recurrence (>36 months). Nine miRNAs were identified to be significantly (p<0.05) associated by multivariate analysis with biochemical failure after salvage radiation therapy. A new predictive model for biochemical recurrence after salvage radiation therapy was developed; this model consisted of miR-4516 and miR-601 together with Gleason score and lymph node status. The area under the ROC curve (AUC) was improved to 0.83, compared with 0.66 for Gleason score and lymph node status alone. Conclusion miRNA signatures can distinguish patients who fail soon after radical prostatectomy versus late failures, giving insight into which patients may need adjuvant therapy. Notably, two novel miRNAs (miR-4516 and miR-601) were identified that significantly improve prediction of biochemical failure post-salvage radiation therapy compared to clinico-histopathological factors, supporting the use of miRNAs within clinically used predictive models. Both findings warrant further validation studies. PMID:25760964

  14. Attention, motivation, and reading coherence failure: a neuropsychological perspective.

    PubMed

    Wasserman, Theodore

    2012-01-01

    Reading coherence, defined as the ability to create appropriate, meaningful connections between the elements within a specific text and between elements within a text and the reader's prior knowledge, is one of the key processes involved in reading comprehension. This article describes reading coherence within the context of a neuropsychological model combining recent research in motivation, attention, and working memory. Specifically, a unique, neuropsychologically identifiable form of reading coherence failure is postulated, arising from attentional and motivational deficiencies based in altered frontoventral striatal reward circuits associated with noradrenaline (NA) circuitry, consistent with the delay-aversion (dual-pathway) model of Sonuga-Barke (2003). The article thus provides a model for a subset of reading disorders whose etiology lies in the executive support processes for reading rather than in the mechanics of actual reading, such as decoding and phonetics.

  15. Estimating a Service-Life Distribution Based on Production Counts and a Failure Database

    DOE PAGES

    Ryan, Kenneth J.; Hamada, Michael Scott; Vardeman, Stephen B.

    2017-04-01

    A manufacturer wanted to compare the service-life distributions of two similar products. These concern product lifetimes after installation (not manufacture). For each product, there were available production counts and an imperfect database providing information on failing units. In the real case, these units were expensive repairable units warrantied against repairs. Failure (of interest here) was relatively rare and driven by a different mode/mechanism than ordinary repair events (not of interest here). Approach: Data models for the service life, based on a standard parametric lifetime distribution and a related limited failure population, were developed. These models were used to develop expressions for the likelihood of the available data that properly account for information missing from the failure database. Results: A Bayesian approach was employed to obtain estimates of model parameters (with associated uncertainty) in order to investigate characteristics of the service-life distribution. Custom software was developed and is included as Supplemental Material to this case study. One part of a responsible approach to the original case was a simulation experiment used to validate the correctness of the software and the behavior of the statistical methodology before using its results in the application; an example of such an experiment is included here. Because confidentiality issues prevent use of the original data, simulated data with characteristics like the manufacturer's proprietary data are used to illustrate some aspects of our real analyses. Lastly, we note that, although this case focuses on rare and complete product failure, the statistical methodology provided is directly applicable to more standard warranty-data problems involving typically much larger warranty databases, where entries are warranty claims (often for repairs) rather than reports of complete failures.
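
    The flavor of a limited-failure-population likelihood can be sketched briefly. Below, a defective fraction p fails according to a Weibull law and only failures occurring before the data-freeze age tau are recorded; the data, starting values, and use of maximum likelihood (rather than the paper's Bayesian machinery) are illustrative assumptions.

```python
# Minimal sketch: maximum-likelihood fit of a limited failure population
# from production counts plus a (complete) record of observed failures.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

failure_ages = np.array([120.0, 300.0, 450.0, 800.0])  # recorded failures
n_produced, tau = 10_000, 1000.0                       # count, freeze age

def neg_log_lik(theta):
    p, beta, eta = theta
    if not (0 < p < 1 and beta > 0 and eta > 0):
        return np.inf
    f = weibull_min.pdf(failure_ages, beta, scale=eta)
    ll_fail = np.sum(np.log(p * f))
    # Units with no recorded failure: either non-defective, or defective
    # but not yet failed by the freeze age tau.
    surv = 1 - p * weibull_min.cdf(tau, beta, scale=eta)
    ll_censored = (n_produced - failure_ages.size) * np.log(surv)
    return -(ll_fail + ll_censored)

fit = minimize(neg_log_lik, x0=[0.01, 1.5, 2000.0], method="Nelder-Mead")
print(fit.x)   # estimated (p, beta, eta)
```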

  16. Estimating a Service-Life Distribution Based on Production Counts and a Failure Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Kenneth J.; Hamada, Michael Scott; Vardeman, Stephen B.

    A manufacturer wanted to compare the service-life distributions of two similar products. These concern product lifetimes after installation (not manufacture). For each product, there were available production counts and an imperfect database providing information on failing units. In the real case, these units were expensive repairable units warrantied against repairs. Failure (of interest here) was relatively rare and driven by a different mode/mechanism than ordinary repair events (not of interest here). Approach: Data models for the service life, based on a standard parametric lifetime distribution and a related limited failure population, were developed. These models were used to develop expressions for the likelihood of the available data that properly account for information missing from the failure database. Results: A Bayesian approach was employed to obtain estimates of model parameters (with associated uncertainty) in order to investigate characteristics of the service-life distribution. Custom software was developed and is included as Supplemental Material to this case study. One part of a responsible approach to the original case was a simulation experiment used to validate the correctness of the software and the behavior of the statistical methodology before using its results in the application; an example of such an experiment is included here. Because confidentiality issues prevent use of the original data, simulated data with characteristics like the manufacturer's proprietary data are used to illustrate some aspects of our real analyses. Lastly, we note that, although this case focuses on rare and complete product failure, the statistical methodology provided is directly applicable to more standard warranty-data problems involving typically much larger warranty databases, where entries are warranty claims (often for repairs) rather than reports of complete failures.

  17. Individual prediction of heart failure among childhood cancer survivors.

    PubMed

    Chow, Eric J; Chen, Yan; Kremer, Leontien C; Breslow, Norman E; Hudson, Melissa M; Armstrong, Gregory T; Border, William L; Feijen, Elizabeth A M; Green, Daniel M; Meacham, Lillian R; Meeske, Kathleen A; Mulrooney, Daniel A; Ness, Kirsten K; Oeffinger, Kevin C; Sklar, Charles A; Stovall, Marilyn; van der Pal, Helena J; Weathers, Rita E; Robison, Leslie L; Yasui, Yutaka

    2015-02-10

    To create clinically useful models that incorporate readily available demographic and cancer treatment characteristics to predict individual risk of heart failure among 5-year survivors of childhood cancer. Survivors in the Childhood Cancer Survivor Study (CCSS) free of significant cardiovascular disease 5 years after cancer diagnosis (n = 13,060) were observed through age 40 years for the development of heart failure (ie, requiring medications or heart transplantation or leading to death). Siblings (n = 4,023) established the baseline population risk. An additional 3,421 survivors from Emma Children's Hospital (Amsterdam, the Netherlands), the National Wilms Tumor Study, and the St Jude Lifetime Cohort Study were used to validate the CCSS prediction models. Heart failure occurred in 285 CCSS participants. Risk scores based on selected exposures (sex, age at cancer diagnosis, and anthracycline and chest radiotherapy doses) achieved an area under the curve of 0.74 and concordance statistic of 0.76 at or through age 40 years. Validation cohort estimates ranged from 0.68 to 0.82. Risk scores were collapsed to form statistically distinct low-, moderate-, and high-risk groups, corresponding to cumulative incidences of heart failure at age 40 years of 0.5% (95% CI, 0.2% to 0.8%), 2.4% (95% CI, 1.8% to 3.0%), and 11.7% (95% CI, 8.8% to 14.5%), respectively. In comparison, siblings had a cumulative incidence of 0.3% (95% CI, 0.1% to 0.5%). Using information available to clinicians soon after completion of childhood cancer therapy, individual risk for subsequent heart failure can be predicted with reasonable accuracy and discrimination. These validated models provide a framework on which to base future screening strategies and interventions. © 2014 by American Society of Clinical Oncology.

  18. Thermal barrier coating life prediction model

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Cook, T. S.; Kim, K. S.

    1986-01-01

    This is the second annual report of the first 3-year phase of a 2-phase, 5-year program. The objectives of the first phase are to determine the predominant modes of degradation of a plasma-sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consists of an air plasma sprayed ZrO2-Y2O3 top coat, a low-pressure plasma sprayed NiCrAlY bond coat, and a Rene' 80 substrate. Task I was to evaluate TBC failure mechanisms. Both bond coat oxidation and bond coat creep have been identified as contributors to TBC failure. Key property determinations have also been made for the bond coat and the top coat, including tensile strength, Poisson's ratio, dynamic modulus, and coefficient of thermal expansion. Task II is to develop TBC life prediction models for the predominant failure modes. These models will be developed based on the results of thermomechanical experiments and finite element analysis. The thermomechanical experiments have been defined and testing initiated. Finite element models have also been developed to handle TBCs and are being utilized to evaluate different TBC failure regimes.

  19. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.
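
    The shared-frailty idea behind such joint models is easy to illustrate by simulation. The sketch below lets one gamma frailty inflate both the recurrent-event rate and the terminal-event hazard, inducing dependence between the two processes; all rates and the homogeneous-Poisson simplification are illustrative assumptions, not the paper's scale-change structure.

```python
# Minimal sketch: recurrent events and a terminal event linked through a
# shared gamma frailty (mean 1). Higher-frailty subjects both die sooner
# and accumulate recurrent events faster.
import numpy as np

rng = np.random.default_rng(42)

def simulate_subject(base_rate=0.5, base_death=0.1, horizon=10.0):
    z = rng.gamma(shape=2.0, scale=0.5)            # shared frailty, mean 1
    death = rng.exponential(1 / (z * base_death))  # terminal event time
    stop = min(death, horizon)
    # Homogeneous Poisson recurrent events at rate z*base_rate until stop:
    # Poisson count plus uniform order statistics.
    n = rng.poisson(z * base_rate * stop)
    return np.sort(rng.uniform(0, stop, n)), stop, death < horizon

events, follow_up, died = simulate_subject()
print(len(events), follow_up, died)
```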

  20. Modeling Population-Level Consequences of Polychlorinated Biphenyl Exposure in East Greenland Polar Bears.

    PubMed

    Pavlova, Viola; Grimm, Volker; Dietz, Rune; Sonne, Christian; Vorkamp, Katrin; Rigét, Frank F; Letcher, Robert J; Gustavson, Kim; Desforges, Jean-Pierre; Nabe-Nielsen, Jacob

    2016-01-01

    Polychlorinated biphenyls (PCBs) can cause endocrine disruption, cancer, immunosuppression, or reproductive failure in animals. We used an individual-based model to explore whether and how PCB-associated reproductive failure could affect the dynamics of a hypothetical polar bear (Ursus maritimus) population exposed to PCBs to the same degree as the East Greenland subpopulation. Dose-response data from experimental studies on a surrogate species, the mink (Mustela vison), were used in the absence of similar data for polar bears. Two alternative types of reproductive failure in relation to maternal sum-PCB concentrations were considered: increased abortion rate and increased cub mortality. We found that the quantitative impact of PCB-induced reproductive failure on population growth rate depended largely on the actual type of reproductive failure involved. Critical potencies of the dose-response relationship for decreasing the population growth rate were established for both modeled types of reproductive failure. Comparing the model predictions of the age-dependent trend of sum-PCB concentrations in females with actual field measurements from East Greenland indicated that it was unlikely that PCB exposure caused a high incidence of abortions in the subpopulation. However, on the basis of this analysis, it could not be excluded that PCB exposure contributes to higher cub mortality. Our results highlight the necessity for further research on the possible influence of PCBs on polar bear reproduction regarding the physiological pathway involved. This includes determining the exact cause of reproductive failure, i.e., in utero exposure versus lactational exposure of offspring; the timing of offspring death; and establishing the most relevant reference metrics for the dose-response relationship.

  1. Prediction of Lumen Output and Chromaticity Shift in LEDs Using Kalman Filter and Extended Kalman Filter Based Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Davis, J Lynn

    2014-06-24

    Abstract— Solid-state lighting (SSL) luminaires containing light-emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or stored in non-climate-controlled warehouses. They are also used in outdoor applications in desert environments that see little or no humidity but experience extremely high temperatures during the day. This makes it important to increase our understanding of the effects that prolonged high-temperature exposure has on the usability and survivability of these devices. Traditional light sources "burn out" at end-of-life; for an incandescent bulb, lamp life is defined by the B50 life. LEDs, however, have no filament to "burn": they degrade continually, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in an LED at once, causing lumen depreciation, and the underlying TM-21 model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at the system level is needed. In this paper, Kalman filters (KF) and extended Kalman filters (EKF) are used to develop a 70-percent lumen-maintenance life prediction model for LEDs used in SSL luminaires. Ten-thousand-hour LM-80 test data for various LEDs have been used for model development. The system state at each future time is computed from the state space at the preceding time step, the system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise, and measurement noise. The future state of lumen depreciation is estimated with a second-order Kalman filter model and a Bayesian framework. L70 life predictions from the KF- and EKF-based models are compared with TM-21 model predictions and experimental data.
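
    The filtering-and-extrapolation step can be illustrated with a plain linear Kalman filter. The sketch below tracks a two-element state (lumen maintenance and its drift rate) from periodic LM-80-style measurements and extrapolates the L70 crossing; the noise levels and measurement series are illustrative assumptions, not LM-80 data.

```python
# Minimal sketch: constant-velocity Kalman filter on lumen-maintenance
# measurements, followed by a linear extrapolation to the 70% threshold.
import numpy as np

dt = 1000.0                            # hours between measurements
F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [lumen %, drift %/h]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-8])              # process noise (assumed)
R = np.array([[0.25]])                 # measurement noise (assumed)

x = np.array([100.0, 0.0])             # initial state
P = np.diag([1.0, 1e-2])

for z in [99.1, 98.4, 97.2, 96.5, 95.4, 94.6, 93.5, 92.8, 91.9, 91.1]:
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)            # update
    P = (np.eye(2) - K @ H) @ P

hours_to_L70 = (70.0 - x[0]) / x[1]    # linear extrapolation of the state
print(f"estimated L70 life: {10 * dt + hours_to_L70:.0f} h")
```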

  2. A new yield and failure theory for composite materials under static and dynamic loading

    DOE PAGES

    Daniel, Isaac M.; Daniel, Sam M.; Fenner, Joel S.

    2017-09-12

    In order to facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop/establish comprehensive and effective procedures of characterization, modeling and failure prediction of composite structures based on the properties of the constituent materials, e. g., fibers, matrix, and the single ply or lamina. A new yield/failure theory is proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is based on the equivalent stress concept derived from energy principles and is expressed in terms of a single criterion. It is presented in the form of master yield and failure envelopes incorporating strain rate effects. The theory can be further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive damage of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without extensive testing and offers easily implemented design tools.

  3. A new yield and failure theory for composite materials under static and dynamic loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Isaac M.; Daniel, Sam M.; Fenner, Joel S.

    In order to facilitate and accelerate the process of introducing, evaluating and adopting new material systems, it is important to develop/establish comprehensive and effective procedures of characterization, modeling and failure prediction of composite structures based on the properties of the constituent materials, e. g., fibers, matrix, and the single ply or lamina. A new yield/failure theory is proposed for predicting lamina yielding and failure under multi-axial states of stress including strain rate effects. It is based on the equivalent stress concept derived from energy principles and is expressed in terms of a single criterion. It is presented in the form of master yield and failure envelopes incorporating strain rate effects. The theory can be further adapted and extended to the prediction of in situ first ply yielding and failure (FPY and FPF) and progressive damage of multi-directional laminates under static and dynamic loadings. The significance of this theory is that it allows for rapid screening of new composite materials without extensive testing and offers easily implemented design tools.

  4. SPH modeling of fluid-solid interaction for dynamic failure analysis of fluid-filled thin shells

    NASA Astrophysics Data System (ADS)

    Caleyron, F.; Combescure, A.; Faucher, V.; Potapov, S.

    2013-05-01

    This work concerns the prediction of failure of a fluid-filled tank under impact loading, including the resulting fluid leakage. A water-filled steel cylinder associated with a piston is impacted by a mass falling at a prescribed velocity. The cylinder is closed at its base by an aluminum plate whose characteristics are allowed to vary. The impact on the piston creates a pressure wave in the fluid which is responsible for the deformation of the plate and, possibly, the propagation of cracks. The structural part of the problem is modeled using Mindlin-Reissner finite elements (FE) and Smoothed Particle Hydrodynamics (SPH) shells. The modeling of the fluid is also based on an SPH formulation. The problem involves significant fluid-structure interactions (FSI) which are handled through a master-slave-based method and the pinballs method. Numerical results are compared to experimental data.

  5. Economic impact of heart failure according to the effects of kidney failure.

    PubMed

    Sicras Mainar, Antoni; Navarro Artieda, Ruth; Ibáñez Nolla, Jordi

    2015-01-01

    To evaluate the use of health care resources and their cost according to the effects of kidney failure in heart failure patients during 2-year follow-up in a population setting. Observational retrospective study based on a review of medical records. The study included patients ≥45 years treated for heart failure from 2008 to 2010. The patients were divided into 2 groups according to the presence or absence of kidney failure. Main outcome variables were comorbidity, clinical status (functional class, etiology), metabolic syndrome, costs, and new cases of cardiovascular events and kidney failure. The cost model included direct and indirect health care costs. Statistical analysis included multiple regression models. The study recruited 1600 patients (prevalence, 4.0%; mean age 72.4 years; women, 59.7%). Of these patients, 70.1% had hypertension, 47.1% had dyslipidemia, and 36.2% had diabetes mellitus. We analyzed 433 patients (27.1%) with kidney failure and 1167 (72.9%) without kidney failure. Patients with kidney failure were associated with functional class III-IV (54.1% vs 40.8%) and metabolic syndrome (65.3% vs 51.9%, P<.01). The average unit cost was €10,711.40. The corrected cost in the presence of kidney failure was €14,868.20 vs €9,364.50 (P=.001). During follow-up, 11.7% of patients developed ischemic heart disease, 18.8% developed kidney failure, and 36.1% developed heart failure exacerbation. Comorbidity associated with heart failure is high. The presence of kidney failure increases the use of health resources and leads to higher costs within the National Health System. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.

  6. Prediction of morbidity and mortality in patients with type 2 diabetes.

    PubMed

    Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W

    2013-01-01

    Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes according to the oral hypoglycemic agent prescribed. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing-risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes", entitled "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic, to better inform patients, and to motivate dialogue between physician and patient.

  7. Climate change, species distribution models, and physiological performance metrics: predicting when biogeographic models are likely to fail.

    PubMed

    Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S

    2013-09-01

    Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
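
    The TEM comparison reduces to a simple calculation. The sketch below contrasts the margin between the sublethal limit (CTmax) and the lethal limit (LTmax) with the spread of environmental temperature at two hypothetical sites, and estimates how often conditions fall inside that gap, where sublethal stress accumulates that a lethal-limit model never registers; all temperatures and distributional choices are illustrative assumptions.

```python
# Minimal sketch: transient event margin (TEM) versus environmental
# variability, with the probability that temperature falls between the
# sublethal and lethal limits under a normal environment model.
from scipy.stats import norm

ct_max, lt_max = 30.0, 36.0          # degC: performance failure vs lethal
tem = lt_max - ct_max                # transient event margin

for site, (mu, sigma) in {"stable": (24.0, 2.0), "variable": (26.0, 5.0)}.items():
    env = norm(mu, sigma)
    p_sublethal = env.sf(ct_max) - env.sf(lt_max)   # P(CTmax < T < LTmax)
    # A large TEM relative to sigma means frequent nonlethal stress that a
    # lethal-limit-based range model never "sees": likely model failure.
    print(site, f"TEM/sigma = {tem / sigma:.1f}",
          f"P(sublethal stress) = {p_sublethal:.3f}")
```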

  8. Multiscale Static Analysis of Notched and Unnotched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.; Stier, Bertram; Hansen, Lucas; Bednarcyk, Brett A.; Waas, Anthony M.

    2016-01-01

    The generalized method of cells (GMC) is demonstrated to be a viable micromechanics tool for predicting the deformation and failure response of laminated composites, with and without notches, subjected to tensile and compressive static loading. Given the axial [0], transverse [90], and shear [+45/-45] response of a carbon/epoxy (IM7/977-3) system, the unnotched and notched behavior of three multidirectional layups (Layup 1: [0,45,90,-45]_2S, Layup 2: [0,60,0]_3S, and Layup 3: [30,60,90,-30,-60]_2S) is predicted under both tensile and compressive static loading. Matrix nonlinearity is modeled in two ways. The first assumes all nonlinearity is due to anisotropic progressive damage of the matrix only, which is modeled using the multiaxial mixed-mode continuum damage model (MMCDM) within GMC. The second utilizes matrix plasticity coupled with brittle final failure based on the maximum principal strain criterion to account for matrix nonlinearity and failure within the Finite Element Analysis--Micromechanics Analysis Code (FEAMAC) multiscale software framework. Both the MMCDM and plasticity models incorporate brittle strain- and stress-based failure criteria for the fiber; upon satisfaction of these criteria, the fiber properties are immediately reduced to a nominal value. The constitutive response of each constituent (fiber and matrix) is characterized using a combination of vendor data and the axial, transverse, and shear responses of unnotched laminates. The capability of the multiscale methodology is then assessed by performing blind predictions of the response of the notched and unnotched composite laminates under tensile and compressive loading. Tabulated data along with detailed results (i.e., stress-strain curves as well as damage evolution states at various ratios of strain to failure) for all laminates are presented.

  9. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. The procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization, which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. The deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to encode knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suited to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  10. Risk Factors for Heart Failure in Patients With Chronic Kidney Disease: The CRIC (Chronic Renal Insufficiency Cohort) Study.

    PubMed

    He, Jiang; Shlipak, Michael; Anderson, Amanda; Roy, Jason A; Feldman, Harold I; Kallem, Radhakrishna Reddy; Kanthety, Radhika; Kusek, John W; Ojo, Akinlolu; Rahman, Mahboob; Ricardo, Ana C; Soliman, Elsayed Z; Wolf, Myles; Zhang, Xiaoming; Raj, Dominic; Hamm, Lee

    2017-05-17

    Heart failure is common in patients with chronic kidney disease. We studied risk factors for incident heart failure among 3557 participants in the CRIC (Chronic Renal Insufficiency Cohort) Study. Kidney function was assessed by estimated glomerular filtration rate (eGFR) using serum creatinine, cystatin C, or both, and 24-hour urine albumin excretion. During an average of 6.3 years of follow-up, 452 participants developed incident heart failure. After adjustment for age, sex, race, and clinical site, hazard ratio (95% CI) for heart failure associated with 1 SD lower creatinine-based eGFR was 1.67 (1.49, 1.89), 1 SD lower cystatin C-based-eGFR was 2.43 (2.10, 2.80), and 1 SD higher log-albuminuria was 1.65 (1.53, 1.78), all P <0.001. When all 3 kidney function measures were simultaneously included in the model, lower cystatin C-based eGFR and higher log-albuminuria remained significantly and directly associated with incidence of heart failure. After adjusting for eGFR, albuminuria, and other traditional cardiovascular risk factors, anemia (1.37, 95% CI 1.09, 1.72, P =0.006), insulin resistance (1.16, 95% CI 1.04, 1.28, P =0.006), hemoglobin A1c (1.27, 95% CI 1.14, 1.41, P <0.001), interleukin-6 (1.15, 95% CI 1.05, 1.25, P =0.002), and tumor necrosis factor-α (1.10, 95% CI 1.00, 1.21, P =0.05) were all significantly and directly associated with incidence of heart failure. Our study indicates that cystatin C-based eGFR and albuminuria are better predictors for risk of heart failure compared to creatinine-based eGFR. Furthermore, anemia, insulin resistance, inflammation, and poor glycemic control are independent risk factors for the development of heart failure among patients with chronic kidney disease. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  11. Micromechanics-Based Progressive Failure Analysis of Composite Laminates Using Different Constituent Failure Theories

    NASA Technical Reports Server (NTRS)

    Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett A.; Arnold, Steven M.

    2008-01-01

    Predicting failure in a composite can be done with ply level mechanisms and/or micro level mechanisms. This paper uses the Generalized Method of Cells and High-Fidelity Generalized Method of Cells micromechanics theories, coupled with classical lamination theory, as implemented within NASA's Micromechanics Analysis Code with Generalized Method of Cells. The code is able to implement different failure theories on the level of both the fiber and the matrix constituents within a laminate. A comparison is made among maximum stress, maximum strain, Tsai-Hill, and Tsai-Wu failure theories. To verify the failure theories the Worldwide Failure Exercise (WWFE) experiments have been used. The WWFE is a comprehensive study that covers a wide range of polymer matrix composite laminates. The numerical results indicate good correlation with the experimental results for most of the composite layups, but also point to the need for more accurate resin damage progression models.
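
    Of the criteria compared, Tsai-Wu is compact enough to show in full. The sketch below computes the plane-stress Tsai-Wu failure index for a unidirectional lamina; the strength values are merely representative of a carbon/epoxy ply, not those used in the paper, and the common square-root default is assumed for the interaction term.

```python
# Minimal sketch: plane-stress Tsai-Wu failure index for a lamina
# (compressive strengths entered as positive magnitudes); failure is
# indicated when the index reaches 1.
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)     # common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Example ply stresses in MPa: axial tension, slight transverse compression.
idx = tsai_wu_index(s1=1200.0, s2=-20.0, t12=40.0,
                    Xt=2280.0, Xc=1440.0, Yt=57.0, Yc=228.0, S=71.0)
print(f"Tsai-Wu index = {idx:.2f}  ({'fail' if idx >= 1 else 'safe'})")
```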

  12. Evaluation of critical nuclear power plant electrical cable response to severe thermal fire conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Gabriel James

    The failure of electrical cables exposed to severe thermal fire conditions is a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection, which resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of these data. Circuit characteristics affecting fire-induced electrical cable failure modes were evaluated. In addition, thermal failure temperatures corresponding to cable functional failures were evaluated to develop realistic single-point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data were used to evaluate the prediction capabilities of a one-dimensional conductive heat-transfer model used to predict cable failure.
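
    A drastically simplified version of such a thermal failure model makes the idea concrete. The sketch below uses a lumped-capacitance (one-node) cable heated by a constant fire temperature and solves for the time at which the insulation reaches a failure-threshold temperature; the time constant and the thermoset/thermoplastic thresholds are illustrative assumptions, not parameters of the referenced one-dimensional model.

```python
# Minimal sketch: lumped-capacitance heating T(t) = T_fire + (T0 - T_fire)
# * exp(-t/tau), inverted for the time at which T(t) = T_fail.
import math

def time_to_failure(T0, T_fire, T_fail, tau):
    if T_fail >= T_fire:
        return math.inf                      # never reaches the threshold
    return -tau * math.log((T_fail - T_fire) / (T0 - T_fire))

# Illustrative thresholds: thermoset vs thermoplastic insulation (degC).
for name, T_fail in (("thermoset", 330.0), ("thermoplastic", 205.0)):
    t = time_to_failure(T0=40.0, T_fire=470.0, T_fail=T_fail, tau=600.0)
    print(f"{name}: failure at ~{t / 60:.1f} min")
```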

  13. Viscoelastic behavior and lifetime (durability) predictions. [for laminated fiber reinforced plastics

    NASA Technical Reports Server (NTRS)

    Brinson, R. F.

    1985-01-01

    A method for lifetime or durability predictions of laminated fiber-reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature superposition principle for polymers; it is better described as an analytical adaptation of time-stress superposition methods. The constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time-dependent failure models are discussed and related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results, showing favorable agreement between theory and experiment for data from creep tests of about two months' duration.

  14. Effect of different CT scanners and settings on femoral failure loads calculated by finite element models.

    PubMed

    Eggermont, Florieke; Derikx, Loes C; Free, Jeffrey; van Leeuwen, Ruud; van der Linden, Yvette M; Verdonschot, Nico; Tanck, Esther

    2018-03-06

    In a multi-center patient study, using different CT scanners, CT-based finite element (FE) models are utilized to calculate failure loads of femora with metastases. Previous studies showed that using different CT scanners can result in different outcomes. This study aims to quantify the effects of (i) different CT scanners; (ii) different CT protocols with variations in slice thickness, field of view (FOV), and reconstruction kernel; and (iii) air between the calibration phantom and the patient, on Hounsfield units (HU), bone mineral density (BMD), and FE failure load. Six cadaveric femora were scanned on four CT scanners. Scans were made with multiple CT protocols and with or without an air gap between the body model and the calibration phantom. HU and calibrated BMD were determined in cortical and trabecular regions of interest. Non-linear isotropic FE models were constructed to calculate failure load. Mean differences between CT scanners varied up to 7% in cortical HU, 6% in trabecular HU, 6% in cortical BMD, 12% in trabecular BMD, and 17% in failure load. Changes in slice thickness and FOV had little effect (≤4%), while reconstruction kernels had a larger effect on HU (16%), BMD (17%), and failure load (9%). Air between the body model and the calibration phantom slightly decreased the HU, BMD, and failure loads (≤8%). In conclusion, this study showed that quantitative analysis of CT images acquired with different CT scanners, and particularly reconstruction kernels, can induce relatively large differences in HU, BMD, and failure loads. Additionally, if possible, air artifacts should be avoided. © 2018 The Authors. Journal of Orthopaedic Research® published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Suhas; Wang, Ziwen; Huang, Xiaopeng

    While the recent establishment of the role of thermophoresis/diffusion-driven oxygen migration during resistance switching in metal oxide memristors provided critical insights required for memristor modeling, extended investigations of the role of oxygen migration during ageing and failure remain to be detailed. Such detailing will enable failure-tolerant design, which can lead to enhanced performance of memristor-based next-generation storage-class memory. Here, we directly observed lateral oxygen migration using in-situ synchrotron x-ray absorption spectromicroscopy of HfOx memristors during initial resistance switching, wear over millions of switching cycles, and eventual failure, through which we determined potential physical causes of failure. Using this information, we reengineered devices to mitigate three failure mechanisms and demonstrated an improvement in endurance of about three orders of magnitude.

  16. The influence of protection system failures and preventive maintenance on protection systems in distribution systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meeuwsen, J.J.; Kling, W.L.; Ploem, W.A.G.A.

    1997-01-01

    Protection systems in power systems can fail either by not responding when they should (failure to operate) or by operating when they should not (false tripping). The former type of failure is particularly serious, since it may result in the isolation of large sections of the network. However, the probability of a failure to operate can be reduced by carrying out preventive maintenance on protection systems. This paper describes an approach to determine the impact of preventive maintenance of protection systems on the reliability of the power supply to customers. The proposed approach is based on Markov models.
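
    The Markov-model idea can be shown with the smallest possible example: a protection system whose failure to operate stays hidden until an inspection clears it. The two-state chain, the failure rate, and the maintenance rate below are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: steady-state probability of a hidden failure-to-operate
# state, from the generator matrix of a two-state Markov model.
import numpy as np

lam = 1.0 / 8760.0    # hidden failures per hour (about one per year)
mu = 1.0 / 720.0      # inspections/maintenance completed monthly on average

# States: 0 = healthy, 1 = failed-to-operate (hidden until maintenance).
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Solve pi @ Q = 0 with sum(pi) = 1 by appending the normalization equation.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"probability of failure to operate on demand: {pi[1]:.4f}")
```

    Increasing the maintenance rate mu shrinks the hidden-failure probability lam/(lam + mu), which is exactly the mechanism the abstract describes.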

  17. Interoperability-oriented Integration of Failure Knowledge into Functional Knowledge and Knowledge Transformation based on Concepts Mapping

    NASA Astrophysics Data System (ADS)

    Koji, Yusuke; Kitamura, Yoshinobu; Kato, Yoshikiyo; Tsutsui, Yoshio; Mizoguchi, Riichiro

    In conceptual design, it is important to develop functional structures that reflect the rich experience embodied in knowledge from previous design failures. In particular, if a designer learns the possible abnormal behaviors from a previous design failure, he or she can add a function that prevents such abnormal behaviors and faults. Sharing knowledge about possible faulty phenomena and how to cope with them is therefore a crucial issue. In practice, part of such knowledge is described in FMEA (Failure Mode and Effect Analysis) sheets, in function structure models for systematic design, and in fault trees for FTA (Fault Tree Analysis).

  18. Interface failure modes explain non-monotonic size-dependent mechanical properties in bioinspired nanolaminates.

    PubMed

    Song, Z Q; Ni, Y; Peng, L M; Liang, H Y; He, L H

    2016-03-31

    Bioinspired discontinuous nanolaminate design has become an efficient way to mitigate the strength-ductility tradeoff in brittle materials by arresting cracks at interfaces, followed by controllable interface failure. The analytical solution and numerical simulation based on the nonlinear shear-lag model indicate that propagation of the interface failure can be unstable or stable when the interfacial shear stress between laminae is uniform or highly localized, respectively. A dimensionless key parameter, defined by the ratio of two characteristic lengths, governs the transition between the two interface-failure modes, which can explain the non-monotonic size-dependent mechanical properties observed in various laminate composites.

  19. Control of Flexible Systems in the Presence of Failures

    NASA Technical Reports Server (NTRS)

    Magahami, Peiman G.; Cox, David E.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Control of flexible systems under degradation or failure of sensors/actuators is considered. A Linear Matrix Inequality framework is used to synthesize H-infinity-based controllers, which provide good disturbance rejection while tolerating real parameter uncertainties in the system model as well as potential degradation or failure of the control system hardware. In this approach, a one-at-a-time failure scenario is considered, wherein no more than one sensor or actuator is allowed to fail at any given time. A numerical example involving control synthesis for a two-dimensional flexible system is presented to demonstrate the feasibility of the proposed approach.

  20. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with a Weibull intensity function, is used to model the reliability-growth assessment of space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing and (2) failure-terminated testing. The critical values of three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
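
    The point estimates underlying such an assessment have closed forms. The sketch below computes the maximum-likelihood shape and scale of the Weibull (Crow-AMSAA) process for failure-terminated testing, then the instantaneous MTBF at the end of test; the cumulative failure times are illustrative, not shuttle engine data.

```python
# Minimal sketch: Weibull-process (Crow-AMSAA) MLEs for failure-terminated
# testing, with the instantaneous MTBF from the intensity
# u(t) = lambda * beta * t**(beta - 1).
import math

times = [150.0, 400.0, 900.0, 1500.0, 2300.0, 3400.0]   # cumulative hours
T, n = times[-1], len(times)

beta_hat = n / sum(math.log(T / t) for t in times[:-1])  # shape (<1: growth)
lambda_hat = n / T**beta_hat                             # scale

mtbf = 1.0 / (lambda_hat * beta_hat * T ** (beta_hat - 1.0))
print(f"beta = {beta_hat:.2f}, instantaneous MTBF ~ {mtbf:.0f} h")
```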

  1. A stochastic hybrid systems based framework for modeling dependent failure processes

    PubMed Central

    Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying

    2017-01-01

    In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
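
    The two reliability estimates mentioned above reduce to short calculations once the conditional moments are available. The sketch below shows a First Order Second Moment (FOSM) estimate for a scalar safety margin and a conservative Markov-inequality lower bound; the moment values, threshold, and near-normality assumption for the margin are all illustrative.

```python
# Minimal sketch: FOSM reliability from the first two moments of a safety
# margin g, plus a Markov-inequality lower bound on reliability.
from scipy.stats import norm

mean_g, var_g = 2.5, 0.81      # assumed mean and variance of margin g

# FOSM: reliability index beta = E[g]/std(g), R ~ Phi(beta) if g is
# approximately normal.
beta = mean_g / var_g**0.5
r_fosm = norm.cdf(beta)

# Markov inequality on the nonnegative degradation X = threshold - g:
# P(X >= threshold) <= E[X]/threshold, hence R >= 1 - E[X]/threshold.
threshold = 5.0
mean_x = threshold - mean_g
r_lower = 1.0 - mean_x / threshold

print(f"FOSM reliability ~ {r_fosm:.4f}, Markov lower bound >= {r_lower:.3f}")
```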

  3. Modeling Micro-cracking Behavior of Bukit Timah Granite Using Grain-Based Model

    NASA Astrophysics Data System (ADS)

    Peng, Jun; Wong, Louis Ngai Yuen; Teh, Cee Ing; Li, Zhihuan

    2018-01-01

    Rock strength and deformation behavior has long been recognized to be closely related to the microstructure and the associated micro-cracking process. A good understanding of crack initiation and coalescence mechanisms will thus allow us to account for the variation of rock strength and deformation properties from a microscopic view. This paper numerically investigates the micro-cracking behavior of Bukit Timah granite by using a grain-based modeling approach. First, the principles of the grain-based model adopted in the two-dimensional Particle Flow Code and the numerical model generation procedure are reviewed. The micro-parameters of the numerical model are then calibrated to match the macro-properties of the rock obtained from tension and compression tests in the laboratory. The simulated rock properties are in good agreement with the laboratory test results, with errors of less than ±6%. Finally, the calibrated model is used to study the micro-cracking behavior and the failure modes of the rock under direct tension and under compression with different confining pressures. The results reveal that when the numerical model is loaded in direct tension, only grain boundary tensile cracks are generated, and the simulated macroscopic fracture agrees well with the results obtained in laboratory tests. When the model is loaded in compression, the ratio of grain boundary tensile cracks to grain boundary shear cracks decreases with the increase in confining pressure. In other words, the results show that as the confining pressure increases, the failure mechanism changes from tension to shear. The simulated failure mode of the model changes from splitting to shear as the applied confining pressure gradually increases, which is comparable with that observed in laboratory tests. The grain-based model used in this study thus appears promising for further investigation of the microscopic and macroscopic behavior of crystalline rocks under different loading conditions.

  4. Hybrid neural intelligent system to predict business failure in small-to-medium-size enterprises.

    PubMed

    Borrajo, M Lourdes; Baruque, Bruno; Corchado, Emilio; Bajo, Javier; Corchado, Juan M

    2011-08-01

    In recent years there has been a growing need to develop innovative tools that can help small-to-medium-sized enterprises predict business failure as well as financial crisis. In this study we present a novel hybrid intelligent system aimed at monitoring the modus operandi of companies and predicting possible failures. This system is implemented by means of a neural-based multi-agent system that models the different actors of the companies as agents. The core of the multi-agent system is a type of agent that incorporates a case-based reasoning system and automates the business control process and failure prediction. The stages of the case-based reasoning system are implemented by means of web services: the retrieval stage uses an innovative weighted-voting summarization method based on ensembles of self-organizing maps, and the reuse stage is implemented by means of a radial basis function neural network. An initial prototype was developed and the results obtained for small and medium enterprises in a real scenario are presented.

  5. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  6. A proportional hazards regression model for the subdistribution with right-censored and left-truncated competing risks data

    PubMed Central

    Zhang, Xu; Zhang, Mei-Jie; Fine, Jason

    2012-01-01

    With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed an estimating procedure for right-censored competing risks data based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
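
    A brief sketch of the inverse-probability-of-censoring weighting that underlies the Fine-Gray estimating procedure: subjects who fail from the competing cause remain in the risk set with a weight built from the Kaplan-Meier estimate of the censoring survival function. This covers only the right-censored case; the paper's new weighting for left-truncated data is not reproduced here, and the function names and toy data are illustrative.

    ```python
    import numpy as np

    def km_censoring_survival(times, events):
        """Kaplan-Meier estimate of the censoring survival function G(t),
        treating censorings (events == 0) as the 'events' of interest."""
        order = np.argsort(times)
        t, e = np.asarray(times)[order], np.asarray(events)[order]
        at_risk, G = len(t), 1.0
        grid, surv = [], []
        for i in range(len(t)):
            if e[i] == 0:                    # a censoring occurs here
                G *= (at_risk - 1) / at_risk
            at_risk -= 1
            grid.append(t[i]); surv.append(G)
        return np.array(grid), np.array(surv)

    def G_at(grid, surv, s):
        """Step-function value of G at time s (ties handled crudely)."""
        idx = np.searchsorted(grid, s, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]

    def ipcw_weight(grid, surv, T_i, t):
        """Weight keeping a subject who failed from the competing cause at
        T_i in the risk set at a later time t: G(t) / G(min(T_i, t))."""
        return G_at(grid, surv, t) / G_at(grid, surv, min(T_i, t))

    # Toy data: observed times and event indicator (1 = failed, 0 = censored)
    times = np.array([2.0, 3.0, 5.0, 7.0, 9.0])
    events = np.array([1, 0, 1, 0, 1])
    grid, surv = km_censoring_survival(times, events)
    print(ipcw_weight(grid, surv, T_i=2.0, t=7.0))  # weight shrinks over time
    ```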

  7. Scalable Failure Masking for Stencil Computations using Ghost Region Expansion and Cell to Rank Remapping

    DOE PAGES

    Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...

    2017-10-26

    In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graph) architectures, currently embed resilience features, whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still necessary to exploit application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.

  9. Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics

    PubMed Central

    Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.

    2017-01-01

    This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637

  10. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A Support Vector Machine (SVM) is used as the predictive model, while a nondominated-sorting-based multi-objective evolutionary algorithm (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining all features.
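
    The core of the multi-objective selection is keeping candidate models that are nondominated in (sensitivity, specificity) rather than ranking them by a single accuracy score. Below is a minimal Pareto-front sketch of that idea; NSGA-II embeds this nondominated sorting inside an evolutionary search over feature subsets and classifier parameters. The candidate values are invented for illustration.

    ```python
    def pareto_front(points):
        """Return indices of nondominated points when maximizing both
        objectives (here: sensitivity and specificity)."""
        front = []
        for i, (se_i, sp_i) in enumerate(points):
            dominated = any(
                se_j >= se_i and sp_j >= sp_i and (se_j > se_i or sp_j > sp_i)
                for j, (se_j, sp_j) in enumerate(points) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    # Hypothetical (sensitivity, specificity) of candidate feature subsets
    models = [(0.42, 0.80), (0.58, 0.93), (0.50, 0.98), (0.58, 0.98), (0.42, 0.98)]
    print(pareto_front(models))  # -> [3]; (0.58, 0.98) dominates the others
    ```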

  11. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  12. Tensile Strength of Carbon Nanotubes Under Realistic Temperature and Strain Rate

    NASA Technical Reports Server (NTRS)

    Wei, Chen-Yu; Cho, Kyeong-Jae; Srivastava, Deepak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Strain rate and temperature dependence of the tensile strength of single-wall carbon nanotubes has been investigated with molecular dynamics simulations. The tensile failure or yield strain is found to be strongly dependent on the temperature and strain rate. A transition-state-theory-based predictive model is developed for the tensile failure of nanotubes. Based on the parameters fitted from high-strain-rate and temperature-dependent molecular dynamics simulations, the model predicts that a defect-free, micrometer-long single-wall nanotube at 300 K, stretched at a strain rate of 1%/hour, fails at about 9 ± 1% tensile strain. This is in good agreement with recent experimental findings.

  13. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  14. Modeling of damage driven fracture failure of fiber post-restored teeth.

    PubMed

    Xu, Binting; Wang, Yining; Li, Qing

    2015-09-01

    Mechanical failure of biomaterials, which can be initiated by either violent force or progressive stress fatigue, is a serious issue. Great efforts have been made to improve the mechanical performance of dental restorations. Virtual simulation is a promising approach for biomechanical investigations, offering significant efficiency advantages over traditional in vivo/in vitro studies. Over the past few decades, a number of virtual studies have been conducted to investigate the biomechanical issues concerning dental biomaterials, but only with limited incorporation of brittle failure phenomena. Motivated by the contradictory findings between several finite element analyses and common clinical observations on the fracture resistance of post-restored teeth, this study aimed to provide an approach using numerical simulations for investigating the fracture failure process through a non-linear fracture mechanics model. The ability of this approach to predict fracture initiation and propagation in a complex biomechanical state based on intrinsic material properties was investigated. Results of the virtual simulations matched the findings of experimental tests, in terms of the ultimate fracture failure strengths and predictive areas under risk of clinical failure. This study revealed that the failure of post-restored dental restorations is a typical damage-driven continuum-to-discrete process. This approach is anticipated to have ramifications not only for modeling fracture events, but also for the design and optimization of the mechanical properties of biomaterials for specific clinically determined requirements.

  15. Oxygen migration during resistance switching and failure of hafnium oxide memristors

    DOE PAGES

    Kumar, Suhas; Wang, Ziwen; Huang, Xiaopeng; ...

    2017-03-06

    While the recent establishment of the role of thermophoresis/diffusion-driven oxygen migration during resistance switching in metal oxide memristors provided critical insights required for memristor modeling, the role of oxygen migration during ageing and failure remains to be detailed. Such detailing will enable failure-tolerant design, which can lead to enhanced performance of memristor-based next-generation storage-class memory. Here, we directly observed lateral oxygen migration using in-situ synchrotron x-ray absorption spectromicroscopy of HfOx memristors during initial resistance switching, wear over millions of switching cycles, and eventual failure, through which we determined potential physical causes of failure. Using this information, we reengineered devices to mitigate three failure mechanisms and demonstrated an improvement in endurance of about three orders of magnitude.

  16. The Usability of Rock-Like Materials for Numerical Studies on Rocks

    NASA Astrophysics Data System (ADS)

    Zengin, Enes; Abiddin Erguler, Zeynal

    2017-04-01

    The approaches of synthetic rock material and mass are widely used by many researchers for understanding the failure behavior of different rocks. In order to model the failure behavior of rock material, researchers take advantage of different techniques and software, but the majority of these tools are based on the distinct element method (DEM). To model the failure behavior of rocks, and thus to create a fundamental synthetic rock material model, related laboratory experiments must be performed to provide strength parameters. In modelling studies, model calibration is performed using parameters of intact rocks such as porosity, grain size, modulus of elasticity and Poisson's ratio. In some cases, it can be difficult or even impossible to acquire representative rock samples for laboratory experiments from heavily jointed rock masses and vuggy rocks. Considering this limitation, this study aimed to investigate the applicability of a rock-like material (e.g. concrete) for understanding and modeling the failure behavior of rock materials having complex inherent structures. For this purpose, concrete samples consisting of a mixture of 65% cement dust and 35% water were utilized. Accordingly, intact concrete samples representing rocks were prepared in laboratory conditions and their physical properties, such as porosity, pore size and density, were determined. In addition, to acquire the mechanical parameters of the concrete samples, uniaxial compressive strength (UCS) tests were performed while simultaneously measuring strain. The measured physical and mechanical properties of these concrete samples were used to create a synthetic material, and uniaxial compressive tests were then modeled and performed using the two-dimensional discontinuum program known as Particle Flow Code (PFC2D). After the modeling studies in PFC2D, similar failure mechanisms and test results were obtained from both the experiments and the numerical simulations. The results obtained from these laboratory tests and modelling studies were compared with other researchers' studies with respect to the failure mechanisms of different types of rocks. It can be concluded that there is a similar failure mechanism between concrete and rock materials. Therefore, results obtained from concrete samples prepared at different porosities and pore sizes can be used in future studies for selecting the micro-mechanical and physical properties needed to constitute synthetic rock materials for understanding the failure mechanisms of rocks having complex inherent structures, such as vuggy rocks or heavily jointed rock masses.

  17. Fulminant liver failure: clinical and experimental study.

    PubMed Central

    Slapak, M.

    1975-01-01

    Clinical experience of some newer methods of hepatic support is described. The results are unpredictable and far from satisfactory. The need for an animal model in which potential therapeutic methods can be studied is emphasized. Such a model based on carefully imposed ischaemic insult to the liver in the absence of portacaval shunting is described. It is suggested that bacterial presence in the bowel together with a depression of the liver reticuloendothelial function plays an important part in the early and rapid mortality of acute liver failure. Temporary auxiliary liver transplantation using an allograft or a closely related primate heterograft seems to be one of the two best available methods of hepatic support for potentially reversible acute liver failure. PMID:812415

  18. A point-based prediction model for cardiovascular risk in orthotopic liver transplantation: The CAR-OLT score.

    PubMed

    VanWagner, Lisa B; Ning, Hongyan; Whitsett, Maureen; Levitsky, Josh; Uttal, Sarah; Wilkins, John T; Abecassis, Michael M; Ladner, Daniela P; Skaro, Anton I; Lloyd-Jones, Donald M

    2017-12-01

    Cardiovascular disease (CVD) complications are important causes of morbidity and mortality after orthotopic liver transplantation (OLT). There is currently no preoperative risk-assessment tool that allows physicians to estimate the risk for CVD events following OLT. We sought to develop a point-based prediction model (risk score) for CVD complications after OLT, the Cardiovascular Risk in Orthotopic Liver Transplantation risk score, among a cohort of 1,024 consecutive patients aged 18-75 years who underwent first OLT in a tertiary-care teaching hospital (2002-2011). The main outcome measures were major 1-year CVD complications, defined as death from a CVD cause or hospitalization for a major CVD event (myocardial infarction, revascularization, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism, and/or stroke). The bootstrap method yielded bias-corrected 95% confidence intervals for the regression coefficients of the final model. Among 1,024 first OLT recipients, major CVD complications occurred in 329 (32.1%). Variables selected for inclusion in the model (using model optimization strategies) included preoperative recipient age, sex, race, employment status, education status, history of hepatocellular carcinoma, diabetes, heart failure, atrial fibrillation, pulmonary or systemic hypertension, and respiratory failure. The discriminative performance of the point-based score (C statistic = 0.78, bias-corrected C statistic = 0.77) was superior to other published risk models for postoperative CVD morbidity and mortality, and it had appropriate calibration (Hosmer-Lemeshow P = 0.33). The point-based risk score can identify patients at risk for CVD complications after OLT surgery (available at www.carolt.us); this score may be useful for identification of candidates for further risk stratification or other management strategies to improve CVD outcomes after OLT. (Hepatology 2017;66:1968-1979).

  19. Modelling Coastal Cliff Recession Based on the GIM-DDD Method

    NASA Astrophysics Data System (ADS)

    Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an

    2018-04-01

    The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.

  20. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential law relationship to the applied electrical field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
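
    For the catastrophic failure mode, the power-law voltage dependence combined with an Arrhenius temperature term (the Prokopowicz-Vaskas form commonly used for ceramic capacitors) can be sketched as below; the exponent n, activation energy Ea, and the test-condition Weibull scale are placeholder values, and the slow-degradation mode would instead use an exponential dependence on electric field.

    ```python
    import math

    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

    def acceleration_factor(v_test, v_use, t_test_k, t_use_k, n=3.0, ea=1.1):
        """Prokopowicz-Vaskas power-law/Arrhenius acceleration factor
        (n and Ea are material-dependent assumptions, not fitted values)."""
        voltage_term = (v_test / v_use) ** n
        thermal_term = math.exp((ea / K_BOLTZMANN_EV) * (1 / t_use_k - 1 / t_test_k))
        return voltage_term * thermal_term

    def weibull_reliability(t, eta, beta):
        """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
        return math.exp(-((t / eta) ** beta))

    # Scale a hypothetical test-condition Weibull scale (1000 h) to use conditions
    af = acceleration_factor(v_test=200, v_use=50, t_test_k=398.15, t_use_k=318.15)
    eta_use = 1.0e3 * af
    print(af, weibull_reliability(t=8760, eta=eta_use, beta=2.5))  # 1-year R
    ```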

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Li, Yun; Levitt, Karl N.

    Consensus is a fundamental approach to implementing fault-tolerant services through replication, where there exists a tradeoff between cost and resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures, while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.

  2. Failure Criteria for FRP Laminates in Plane Stress

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2003-01-01

    A new set of six failure criteria for fiber reinforced polymer laminates is described. Derived from Dvorak's fracture mechanics analyses of cracked plies and from Puck's action plane concept, the physically-based criteria, denoted LaRC03, predict matrix and fiber failure accurately without requiring curve-fitting parameters. For matrix failure under transverse compression, the fracture plane is calculated by maximizing the Mohr-Coulomb effective stresses. A criterion for fiber kinking is obtained by calculating the fiber misalignment under load, and applying the matrix failure criterion in the coordinate frame of the misalignment. Fracture mechanics models of matrix cracks are used to develop a criterion for matrix in tension and to calculate the associated in-situ strengths. The LaRC03 criteria are applied to a few examples to predict failure load envelopes and to predict the failure mode for each region of the envelope. The analysis results are compared to the predictions using other available failure criteria and with experimental results. Predictions obtained with LaRC03 correlate well with the experimental results.
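
    A sketch of the matrix-compression part of the approach: the fracture-plane angle is found by maximizing Mohr-Coulomb effective shear stresses over candidate planes. The expressions below paraphrase the published LaRC03 forms, and the strengths and friction coefficients are placeholder values, so the exact constants should be checked against the paper.

    ```python
    import numpy as np

    def matrix_compression_fi(sigma22, tau12, S_T, S_L, eta_T, eta_L):
        """Scan candidate fracture planes and return the maximum failure
        index from Mohr-Coulomb effective shear stresses (a paraphrase of
        the LaRC03 matrix-compression criterion)."""
        alpha = np.radians(np.linspace(0.0, 90.0, 901))
        # Tractions on a plane whose normal is rotated by alpha about the fibre axis
        sn = sigma22 * np.cos(alpha) ** 2
        tT = -sigma22 * np.sin(alpha) * np.cos(alpha)
        tL = tau12 * np.cos(alpha)
        # Friction: compressive normal stress (sn < 0) reduces the effective shear
        tT_eff = np.maximum(np.abs(tT) + eta_T * sn, 0.0)
        tL_eff = np.maximum(np.abs(tL) + eta_L * sn, 0.0)
        fi = (tT_eff / S_T) ** 2 + (tL_eff / S_L) ** 2
        k = np.argmax(fi)
        return fi[k], np.degrees(alpha[k])

    # Pure transverse compression: fracture plane near 53 deg for eta_T ~ 0.29
    fi, a0 = matrix_compression_fi(sigma22=-150.0, tau12=0.0,
                                   S_T=50.0, S_L=80.0, eta_T=0.29, eta_L=0.08)
    print(f"failure index {fi:.2f} on plane at {a0:.0f} deg")
    ```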

  3. Modelling the side impact of carbon fibre tubes

    NASA Astrophysics Data System (ADS)

    Sudharsan, Ms R.; Rolfe, B. F., Dr; Hodgson, P. D., Prof

    2010-06-01

    Metallic tubes have been extensively studied for their crashworthiness as they closely resemble automotive crash rails. Recently, the demand to improve fuel economy and reduce vehicle emissions has led automobile manufacturers to explore the crash properties of lightweight materials such as fibre reinforced polymer composites, metallic foams and sandwich structures in order to use them as crash barriers. This paper discusses the response of carbon fibre reinforced polymer (CFRP) tubes and their failure mechanisms during side impact. The energy absorption of CFRP tubes is compared to that of similar aluminium tubes. The response of the CFRP tubes during impact was modelled using Abaqus finite element software with a composite fabric material model. The material inputs were based on standard tension and compression test results, and the in-plane damage was defined based on cyclic shear tests. The failure modes and energy absorption observed during the tests were well represented by the finite element model.

  4. Shuttle data book: SRM fragment velocity model. Presented to the SRB Fragment Model Review Panel

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This study was undertaken to determine the velocity of fragments generated by the range safety destruction (RSD) or random failure of a Space Transportation System (STS) Solid Rocket Motor (SRM). The specific requirement was to provide a fragment model for use in those Galileo and Ulysses RTG safety analyses concerned with possible fragment impact on the spacecraft radioisotope thermoelectric generators (RTGs). Good agreement was obtained between predictions and observations for fragment velocity, velocity distributions, azimuths, and rotation rates. Based on this agreement with the entire data base, the model was used to predict the probable fragment environments which would occur in the event of an STS-SRM RSD or random failure at 10, 74, 84 and 110 seconds. The results of these predictions are the basis of the fragment environments presented in the Shuttle Data Book (NSTS-08116). The information presented here is in viewgraph form.

  5. Damage tolerance modeling and validation of a wireless sensory composite panel for a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Talagani, Mohamad R.; Abdi, Frank; Saravanos, Dimitris; Chrysohoidis, Nikos; Nikbin, Kamran; Ragalini, Rose; Rodov, Irena

    2013-05-01

    The paper proposes the diagnostic and prognostic modeling and test validation of a Wireless Integrated Strain Monitoring and Simulation System (WISMOS). The effort verifies a hardware and web-based software tool that is able to evaluate and optimize sensorized aerospace composite structures for the purpose of Structural Health Monitoring (SHM). The tool is an extension of an existing suite of an SHM system, based on a diagnostic-prognostic system (DPS) methodology. The goal of the extended SHM-DPS is to apply multi-scale nonlinear physics-based Progressive Failure analyses to the "as-is" structural configuration to determine residual strength, remaining service life, and future inspection intervals and maintenance procedures. The DPS solution meets the JTI Green Regional Aircraft (GRA) goals towards low-weight, durable and reliable commercial aircraft. It will take advantage of the methodologies developed within the European Clean sky JTI project WISMOS, with the capability to transmit, store and process strain data from a network of wireless sensors (e.g. strain gages, FBGA) and utilize a DPS-based methodology, based on multi-scale progressive failure analysis (MS-PFA), to determine structural health and to advise with respect to condition-based inspection and maintenance. As part of the validation of the diagnostic and prognostic system, Carbon/Epoxy ASTM coupons were fabricated and tested to extract the mechanical properties. Subsequently, two composite stiffened panels were manufactured, instrumented and tested under compressive loading: 1) an undamaged stiffened buckling panel; and 2) a damaged stiffened buckling panel including an initial diamond cut. Next, numerical finite element models of the two panels were developed and analyzed under test conditions using Multi-Scale Progressive Failure Analysis (an extension of FEM) to evaluate the damage/fracture evolution process, as well as the identification of contributing failure modes. The comparisons between predictions and test results were within 10% accuracy.

  6. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure.

    PubMed

    Mordecai, Yaniv; Dori, Dov

    2017-07-17

    The cyber-physical gap (CPG) is the difference between the 'real' state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer's ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it to the modeling and analysis of the 1979 Three Mile Island Unit 2 nuclear accident, and show how its meltdown could be mitigated. We use ISO 19450:2015 Object-Process Methodology as our conceptual modeling framework.

  7. An interface finite element model can be used to predict healing outcome of bone fractures.

    PubMed

    Alierta, J A; Pérez, M A; García-Aznar, J M

    2014-01-01

    After fractures, bone can experience different potential outcomes: successful bone consolidation, non-union and bone failure. Although there are many factors that influence fracture healing, experimental studies have shown that the interfragmentary movement (IFM) is one of the main regulators of the course of bone healing. In this sense, computational models may help to improve the development of mechanics-based treatments for bone fracture healing. Hence, we propose a combined repair-failure mechanistic computational model to describe bone fracture healing. Despite being a simple model, it is able to correctly estimate the time course evolution of the IFM compared to in vivo measurements under different mechanical conditions. Therefore, this mathematical approach is especially suitable for modeling the healing response of bone to fractures treated with different mechanical fixators, simulating realistic clinical conditions. This model will be a useful tool to identify factors and define targets for patient-specific therapeutic interventions.

  8. Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses measurements of the control signals and the aircraft states only. It doesn't require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.
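
    The paper's analytical scheme is not reproduced here, but the flavor of one-step-ahead prediction from measured controls and states alone can be sketched with a sliding-window least-squares predictor whose residual jumps when a failure changes the dynamics. Everything below is a generic illustration under that assumption, not the authors' algorithm.

    ```python
    import numpy as np

    def one_step_residuals(X, U, k=20):
        """Regress x[t+1] on (x[t], u[t]) over a sliding window of k past
        samples (k should exceed the number of regressors) and return the
        one-step-ahead prediction residual for each time step.

        X : (T, n) state measurements, U : (T, m) control measurements.
        """
        T = len(X)
        residuals = np.full(T, np.nan)
        for t in range(k + 1, T - 1):
            Z = np.hstack([X[t - k - 1:t - 1], U[t - k - 1:t - 1]])  # regressors
            Y = X[t - k:t]                                           # next states
            W, *_ = np.linalg.lstsq(Z, Y, rcond=None)                # local model
            pred = np.hstack([X[t], U[t]]) @ W
            residuals[t + 1] = np.linalg.norm(X[t + 1] - pred)
        return residuals

    # A failure can be flagged when the residual exceeds, e.g., several
    # times its running median over healthy operation.
    ```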

  9. Addressing Production System Failures Using Multi-agent Control

    NASA Astrophysics Data System (ADS)

    Gautam, Rajesh; Miyashita, Kazuo

    Output in high-volume production facilities is limited by bottleneck machines. We propose a control mechanism by modeling workstations as agents that pull jobs from other agents based on their current WIP level and requirements. During failures, when flows of some jobs are disrupted, the agents pull alternative jobs to maintain utilization of their capacity at a high level. In this paper, we empirically demonstrate that the proposed mechanism can react to failures more appropriately than other control mechanisms using a benchmark problem of a semiconductor manufacturing process.

  10. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The generalized likelihood ratio (GLR) technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
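
    For the special case of a constant bias appearing in a scalar, white innovation sequence, the GLR statistic and the most likely failure onset have a closed form; the sketch below illustrates this simplified case (the full scheme correlates the innovations with failure signatures propagated through the Kalman filter, which is not reproduced here).

    ```python
    import numpy as np

    def glr_step_change(innov, sigma2, window=50):
        """GLR test for a step change in the mean of a scalar, white,
        Gaussian Kalman-filter innovation sequence. Returns the statistic
        (twice the log-likelihood ratio) and the most likely onset time."""
        k = len(innov)
        best, theta_hat = 0.0, None
        for theta in range(max(0, k - window), k):
            seg = innov[theta:]
            stat = seg.sum() ** 2 / (sigma2 * len(seg))
            if stat > best:
                best, theta_hat = stat, theta
        return best, theta_hat

    rng = np.random.default_rng(1)
    innov = rng.normal(0.0, 1.0, 200)
    innov[150:] += 1.5                      # simulated failure: bias from k=150
    stat, theta = glr_step_change(innov, sigma2=1.0)
    print(stat, theta)                      # large statistic, onset near 150
    ```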

  11. Viscoelastic behavior and life-time predictions

    NASA Technical Reports Server (NTRS)

    Dillard, D. A.; Brinson, H. F.

    1985-01-01

    Fiber reinforced plastics have been considered for many structural applications in the automotive, aerospace and other industries. A major concern was, and remains, the failure modes associated with the polymer matrix, which serves to bind the fibers together and transfer the load through connections, from fiber to fiber and ply to ply. An accelerated characterization procedure for the prediction of delayed failures was developed. This method utilizes time-temperature-stress-moisture superposition principles in conjunction with laminated plate theory. Because failures are inherently nonlinear, the testing and analytic modeling for both moduli and strength is based upon nonlinear viscoelastic concepts.

  12. Evaluation of a Multi-Axial, Temperature, and Time Dependent (MATT) Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, D. E.; Anderson, G. L.; Macon, D. J.; Rudolphi, Michael (Technical Monitor)

    2002-01-01

    To obtain a better understanding of the response of the structural adhesives used in the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle, an extensive effort has been conducted to characterize in detail the failure properties of these adhesives. This effort involved the development of a failure model that includes the effects of multi-axial loading, temperature, and time. An understanding of the effects of these parameters on the failure of the adhesive is crucial to the understanding and prediction of the safety of the RSRM nozzle. This paper documents the use of this newly developed multi-axial, temperature, and time (MATT) dependent failure model for modeling failure of the adhesives TIGA 321, EA913NA, and EA946. The development of the mathematical failure model using constant-load-rate normal and shear test data is presented. Verification of the accuracy of the failure model is shown through comparisons between predictions and measured creep and multi-axial failure data. The verification indicates that the failure model performs well for a wide range of conditions (loading, temperature, and time) for the three adhesives. The failure criterion is shown to be accurate through the glass transition for the adhesive EA946. Though this failure model has been developed and evaluated with adhesives, the concepts are applicable to other isotropic materials.

  13. Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method

    NASA Astrophysics Data System (ADS)

    Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.

    2018-03-01

    The failure point of the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. To achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen mixture is an EVA (ethyl vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (optimum bitumen content). The results of this study show that the failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. However, the best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
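
    As an illustration of one of the four methods, the sketch below fits the Francken model, eps_p(N) = A*N^B + C*(exp(D*N) - 1), to a measured permanent-strain curve and takes the flow number as the cycle at which the curvature of the fitted curve changes sign (the onset of tertiary flow). The initial guesses are ad hoc placeholders and would need tuning for real test data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def francken(N, A, B, C, D):
        """Francken model: accumulated permanent strain vs. load cycles."""
        return A * N**B + C * (np.exp(D * N) - 1.0)

    def flow_number(N, eps_p):
        """Fit the Francken model and locate the cycle where the second
        derivative turns positive (secondary to tertiary flow)."""
        p0 = (eps_p[1], 0.5, 1e-3, 1e-4)          # ad hoc initial guesses
        (A, B, C, D), _ = curve_fit(francken, N, eps_p, p0=p0, maxfev=20000)
        d2 = A * B * (B - 1.0) * N**(B - 2.0) + C * D**2 * np.exp(D * N)
        change = np.where(np.diff(np.sign(d2)) > 0)[0]  # negative -> positive
        return int(N[change[0]]) if change.size else None

    # Usage: N = np.arange(1.0, n_cycles + 1), eps_p = measured strain per cycle
    ```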

  14. A.I.-based real-time support for high performance aircraft operations

    NASA Technical Reports Server (NTRS)

    Vidal, J. J.

    1985-01-01

    Artificial intelligence (AI) based software and hardware concepts are applied to the handling of system malfunctions during flight tests. A representation of malfunction procedure logic using Boolean normal forms is presented. The representation facilitates the automation of malfunction procedures and provides easy testing for the embedded rules. It also forms a potential basis for a parallel implementation in logic hardware. The extraction of logic control rules from dynamic simulation, and their adaptive revision after partial failure, are examined using a simplified 2-dimensional aircraft model with a controller that adaptively extracts control rules for directional thrust to satisfy a navigational goal without exceeding pre-established position and velocity limits. Failure recovery (rule adjusting) is examined after partial actuator failure. While this experiment was performed with primitive aircraft and mission models, it illustrates an important paradigm and provided complexity extrapolations for the proposed extraction of expertise from simulation, as discussed. The use of relaxation and inexact reasoning in expert systems was also investigated.

  15. An assessment of models that predict soil reinforcement by plant roots

    NASA Astrophysics Data System (ADS)

    Hallett, P. D.; Loades, K. W.; Mickovski, S.; Bengough, A. G.; Bransby, M. F.; Davies, M. C. R.; Sonnenberg, R.

    2009-04-01

    Predicting soil reinforcement by plant roots is fraught with uncertainty because of spatio-temporal variability, the mechanical complexity of roots and soil, and the limitations of existing models. In this study, the validity of root-reinforcement models was tested with data from numerous controlled laboratory tests of both fibrous and woody root systems. By using pot experiments packed with homogeneous soil, each planted with one plant species and grown in glasshouses with controlled water and temperature regimes, spatio-temporal variability was reduced. After direct shear testing to compare the mechanical behaviour of planted versus unplanted samples, the size distribution of roots crossing the failure surface was measured accurately. Separate tensile tests on a wide range of root sizes for each test series provided information on the scaling of root strength and stiffness, which was fitted using power-law relationships. These data were used to assess four root-reinforcement models: (1) Wu et al.'s (1979) root-reinforcement model, (2) the Rip-Root fibre bundle model (FBM) proposed by Pollen & Simon (2005), (3) a stress-based FBM and (4) a strain-based FBM. For both fibrous (barley) and woody (willow) root systems, all of the FBMs provided a better prediction of reinforcement than Wu's root-reinforcement model. As FBMs simulate progressive failure of roots, they reflect reality better than the Wu model, which assumes all roots break (and contribute to increased shear strength) simultaneously. However, all of the FBMs contain assumptions about the distribution of the applied load within the bundle of roots and the failure criterion. The stress-based FBM assumes the same stiffness for different sized roots, resulting in progressive failure from the largest to smallest roots. This is not observed in testing, where the smallest roots fail first. The Rip-Root FBM predicts failure from smallest to largest roots, but the distribution of load between different sized roots is based on unverified scaling rules (stiffness is inversely proportional to diameter). In the strain-based FBM, both stiffness and strength data are used to evaluate root breakage. As roots stretch across the shear surface, the stress mobilised in individual roots depends on both their individual stiffness and strain. Small roots, being stiffer, mobilise more stress for the same strain (or shear displacement) and therefore fail first. The strain-based FBM offers promise as a starting point to predict the reinforcement of soil by plant roots using sound mechanical principles. Compared to other models, it provided the best prediction of root reinforcement. Further developments are required to account particularly for the stochastic variability of the mechanical behaviour and spatial distribution of roots, and this will be achieved by adapting advanced fibre bundle methods.

    References:
    Pollen, N., and A. Simon. 2005. Estimating the mechanical effects of riparian vegetation on stream bank stability using a fiber bundle model. Water Resour. Res. 41: W07025.
    Wu, T. H., W. P. McKinnell, and D. N. Swanston. 1979. Strength of tree roots and landslides on Prince of Wales Island, Alaska. Can. Geotech. J. 16: 19-33.
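
    A minimal sketch of the strain-based fibre bundle model favoured above: all roots crossing the shear plane see the same strain, each mobilises stress through its own diameter-dependent stiffness, and each breaks at its own diameter-dependent strength, so the stiffer small roots fail first. The power-law coefficients below are placeholders for values that would be fitted from tensile tests.

    ```python
    import numpy as np

    def strain_based_fbm(diams, strain_steps, k_t=40.0, p=-0.5, s0=30.0, q=-0.4):
        """Peak reinforcing force from a strain-based fibre bundle model.

        Stiffness E = k_t * d**p and strength S = s0 * d**q (MPa, d in mm)
        are illustrative power laws. With q - p > 0 the failure strain S/E
        grows with diameter, so the smallest roots break first, as observed.
        """
        d = np.asarray(diams, dtype=float)
        E, S = k_t * d**p, s0 * d**q
        area = np.pi * (d / 2.0) ** 2            # mm^2, so force is in N
        intact = np.ones(len(d), dtype=bool)
        peak_force = 0.0
        for eps in strain_steps:
            stress = E * eps                     # same strain in every root
            intact &= stress < S                 # progressive breakage
            peak_force = max(peak_force, np.sum(stress[intact] * area[intact]))
        return peak_force

    diams = np.array([0.2, 0.5, 0.8, 1.2, 2.0])  # mm, measured on the shear plane
    print(strain_based_fbm(diams, np.linspace(0.0, 0.3, 300)))
    ```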

  16. Failure of the Porcine Ascending Aorta: Multidirectional Experiments and a Unifying Microstructural Model

    PubMed Central

    Witzenburg, Colleen M.; Dhume, Rohit Y.; Shah, Sachin B.; Korenczuk, Christopher E.; Wagner, Hallie P.; Alford, Patrick W.; Barocas, Victor H.

    2017-01-01

    The ascending thoracic aorta is poorly understood mechanically, especially its risk of dissection. To make better predictions of dissection risk, more information about the multidimensional failure behavior of the tissue is needed, and this information must be incorporated into an appropriate theoretical/computational model. Toward the creation of such a model, uniaxial, equibiaxial, peel, and shear lap tests were performed on healthy porcine ascending aorta samples. Uniaxial and equibiaxial tests showed anisotropy with greater stiffness and strength in the circumferential direction. Shear lap tests showed catastrophic failure at shear stresses (150–200 kPa) much lower than uniaxial tests (750–2500 kPa), consistent with the low peel tension (∼60 mN/mm). A novel multiscale computational model, including both prefailure and failure mechanics of the aorta, was developed. The microstructural part of the model included contributions from a collagen-reinforced elastin sheet and interlamellar connections representing fibrillin and smooth muscle. Components were represented as nonlinear fibers that failed at a critical stretch. Multiscale simulations of the different experiments were performed, and the model, appropriately specified, agreed well with all experimental data, representing a uniquely complete structure-based description of aorta mechanics. In addition, our experiments and model demonstrate the very low strength of the aorta in radial shear, suggesting an important possible mechanism for aortic dissection. PMID:27893044

  19. Correlated seed failure as an environmental veto to synchronize reproduction of masting plants.

    PubMed

    Bogdziewicz, Michał; Steele, Michael A; Marino, Shealyn; Crone, Elizabeth E

    2018-07-01

    Variable, synchronized seed production, called masting, is a widespread reproductive strategy in plants. Resource dynamics, pollination success, and, as described here, environmental veto are possible proximate mechanisms driving masting. We explored the environmental veto hypothesis, which assumes that reproductive synchrony is driven by external factors preventing reproduction in some years, by extending the resource budget model of masting with correlated reproductive failure. We ran this model across its parameter space to explore how key parameters interact to drive seeding dynamics. Next, we parameterized the model based on 16 yr of seed production data for populations of red (Quercus rubra) and white (Quercus alba) oaks. We used these empirical models to simulate seeding dynamics, and compared simulated time series with patterns observed in the field. Simulations showed that resource dynamics and reproduction failure can produce masting even in the absence of pollen coupling. In concordance with this, in both oaks, among-year variation in resource gain and correlated reproductive failure were necessary and sufficient to reproduce masting, whereas pollen coupling, although present, was not necessary. Reproductive failure caused by environmental veto may drive large-scale synchronization without density-dependent pollen limitation. Reproduction-inhibiting weather events are prevalent in ecosystems, making described mechanisms likely to operate in many systems. © 2018 The Authors New Phytologist © 2018 New Phytologist Trust.
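
    A resource budget model with a correlated veto can be sketched directly from the description above: plants accumulate resources each year, reproduce when reserves exceed a threshold, and a shared veto year suppresses reproduction in all plants simultaneously. Parameters are invented, not the fitted oak values.

```python
# Resource budget model with correlated reproductive failure (illustrative).
import numpy as np

rng = np.random.default_rng(11)
n_plants, n_years = 100, 50
THRESHOLD, P_VETO = 1.0, 0.2

reserves = rng.uniform(0.0, 1.0, n_plants)
seed_crop = np.zeros((n_years, n_plants))
for t in range(n_years):
    reserves += rng.uniform(0.3, 0.7, n_plants)   # variable annual resource gain
    veto_year = rng.random() < P_VETO             # correlated failure (veto)
    ready = reserves > THRESHOLD
    if not veto_year:
        seed_crop[t, ready] = reserves[ready] - THRESHOLD
        reserves[ready] = THRESHOLD               # surplus spent on seeds
pop = seed_crop.sum(axis=1)
print("CV of population seed crop:", round(pop.std() / pop.mean(), 2))
```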

  20. Effects of Family-Center Empowerment Model on the Lifestyle of Heart Failure Patients: A Randomized Controlled Clinical Trial

    PubMed Central

    Rakhshan, Mahnaz; Kordshooli, Khadijeh Rahimi; Ghadakpoor, Soraya

    2015-01-01

    Background: Cardiovascular diseases are the most prevalent disorders in developed countries, and heart failure is among the most significant of them. This disease is caused by numerous factors, and one of the most considerable risk factors is an unhealthy lifestyle. The aim of this research was therefore to study the effect of the family-center empowerment model on the lifestyle of heart failure patients. Methods: This is a randomized controlled clinical trial of 70 heart failure patients referred to the Hazrate Fatemeh heart clinic in Shiraz. After convenience sampling, the patients were divided into control and intervention groups using the block randomization method. The intervention, based on the family-center empowerment model, was performed over 5 sessions. The research tools were lifestyle and demographic information questionnaires. Results: The intervention and control groups were similar in their demographic characteristics (P>0.001). Before the intervention, all lifestyle measures of the two groups were equal (P>0.001), but after the intervention, statistically significant differences were reported in all dimensions of lifestyle; the total lifestyle score was 70.09±16.38 in the intervention group versus -6.03±16.36 in the control group (P<0.001). Conclusion: Performing the family-center empowerment model for heart failure patients is practically possible, leading to improvement or refinement of their lifestyle and that of their families. Trial Registration Number: IRCT 2014072018468N3 PMID:26448952

  1. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    PubMed

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic hyetograph of Gaussian shape with three parameters: rainstorm depth, duration, and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis for the failure probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each parameter set. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems, without crucial loss of modelling accuracy.
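
    The core FORM computation (locate the design point on the limit state closest to the origin in standard normal space, then estimate Pf = Φ(−β)) can be sketched as below. The limit-state function is a hypothetical stand-in, not the paper's hydrodynamic model.

```python
# Minimal FORM sketch in standard normal space; g(u) is a hypothetical
# limit state, not the drainage model from the paper.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    # Hypothetical limit state: failure when g(u) <= 0.
    depth, duration, peak = u   # standard-normal images of the storm parameters
    return 4.0 - (0.8 * depth + 0.5 * peak - 0.3 * duration)

# Design point: the point on g(u) = 0 closest to the origin in u-space.
res = minimize(lambda u: 0.5 * u @ u, x0=np.ones(3),
               constraints={"type": "eq", "fun": g})
beta = np.linalg.norm(res.x)          # reliability index
pf = norm.cdf(-beta)                  # FORM failure probability estimate
print(f"beta = {beta:.3f}, Pf ~ {pf:.2e}")
```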

  2. Assessment of compressive failure process of cortical bone materials using damage-based model.

    PubMed

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main factors in failure of cortical bone are aging or osteoporosis, accidents and high-energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion remains one of the unclear subjects in failure analysis of cortical bone materials. This study therefore attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. To this end, several compressive tests were performed on cortical bone specimens made of bovine femur in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model was validated by comparing the predicted and measured structural responses, as load versus compressive displacement, between simulation and experiment. FE results indicated that compressive damage initiated and propagated in the central region, where the maximum equivalent plastic strain was computed; this coincided with the degradation of structural compressive stiffness, followed by a large amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, was examined at different rates. Results show that using a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.
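
    The elastic-to-damage behaviour described above can be illustrated with a scalar damage law in which stiffness degrades once a damage-initiation strain is exceeded. The modulus and strain limits are assumed for illustration and are not the paper's calibrated values.

```python
# Minimal scalar damage sketch for compressive loading: linear elasticity
# degraded by a strain-driven damage variable (illustrative constants).
import numpy as np

E = 18e3          # assumed elastic modulus, MPa
EPS_0 = 0.006     # assumed damage-initiation strain
EPS_F = 0.03      # assumed strain at full failure

def stress(eps):
    d = np.clip((eps - EPS_0) / (EPS_F - EPS_0), 0.0, 1.0)  # linear damage growth
    return (1.0 - d) * E * eps

for e in np.linspace(0.0, 0.035, 8):
    print(f"eps={e:.3f}  sigma={stress(e):7.1f} MPa")
```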

  3. Monitoring of waste disposal in deep geological formations

    NASA Astrophysics Data System (ADS)

    German, V.; Mansurov, V.

    2003-04-01

    This paper advances the application of a kinetic approach to the description of the rock failure process and to microseismic monitoring of waste disposal. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is demonstrated. The requirements for the monitoring system, such as real-time registration and processing of data and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented; it is implemented in a software program for forecasting strong seismic events and is based on direct use of the fracture concentration criterion. The method is applied to the database of microseismic events from the North Ural Bauxite Mine. The results of this application, including efficiency, stability, and the possibility of forecasting rockbursts, are discussed.
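
    The fracture concentration criterion the method relies on compares the mean spacing between cracks with their mean length; a common rule of thumb flags failure nucleation when the ratio falls to roughly 3. The sketch below assumes that form; the crack data are synthetic.

```python
# Sketch of a fracture concentration criterion: nucleation flagged when
# mean crack spacing drops below ~3 mean crack lengths (assumed threshold).
import numpy as np

def concentration_parameter(crack_lengths, volume):
    n = len(crack_lengths) / volume          # crack density per unit volume
    mean_spacing = n ** (-1.0 / 3.0)         # mean distance between cracks
    return mean_spacing / np.mean(crack_lengths)

rng = np.random.default_rng(0)
cracks = rng.lognormal(mean=-2.0, sigma=0.4, size=500)   # crack lengths, m
K = concentration_parameter(cracks, volume=10.0)         # 10 m^3 region
print("K =", round(K, 2), "-> nucleation" if K < 3.0 else "-> stable")
```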

  4. Progressive Damage Analyses of Skin/Stringer Debonding

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; de Moura, Marcelo F.

    2004-01-01

    The debonding of skin/stringer constructions is analyzed using a step-by-step simulation of material degradation based on strain softening decohesion elements and a ply degradation procedure. Decohesion elements with mixed-mode capability are placed at the interface between the skin and the flange to simulate the initiation and propagation of the delamination. In addition, the initiation and accumulation of fiber failure and matrix damage are modeled using Hashin-type failure criteria and their corresponding material degradation schedules. The debonding predictions using simplified three-dimensional models correlate well with test results.
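
    A strain-softening decohesion element of the kind used here is usually built on a bilinear traction-separation law. The sketch below shows the single-mode (opening) version with invented parameters; the paper's elements are mixed-mode.

```python
# Bilinear traction-separation sketch for a decohesion element
# (single-mode opening; illustrative parameters only).
def traction(delta, K=1e5, delta_0=1e-4, delta_f=1e-3):
    """Opening traction (N/mm^2) vs separation delta (mm)."""
    if delta <= delta_0:
        return K * delta                      # linear-elastic branch
    if delta >= delta_f:
        return 0.0                            # fully debonded
    d = (delta_f * (delta - delta_0)) / (delta * (delta_f - delta_0))
    return (1.0 - d) * K * delta              # softening branch

for d in (5e-5, 1e-4, 2e-4, 5e-4, 1.2e-3):
    print(f"delta={d:.1e} mm -> traction={traction(d):6.2f} N/mm^2")
```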

  5. Three-dimensional Simulation and Prediction of Solenoid Valve Failure Mechanism Based on Finite Element Model

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao

    2018-01-01

    The solenoid valve is a basic automation component with wide application. Analyzing and predicting its degradation and failure mechanisms is significant for improving solenoid valve reliability and for research on prolonging its life. In this paper, a three-dimensional finite element analysis model of a solenoid valve is established based on the ANSYS Workbench software. A sequential coupling method for calculating the temperature field and mechanical stress field of the solenoid valve is put forward. The simulation results show that the sequential coupling method can calculate and analyze the temperature and stress distributions of the solenoid valve accurately, which has been verified through an accelerated life test. The Kalman filtering algorithm is introduced into the data processing, which can effectively reduce measurement deviation and restore more accurate data. Based on different driving currents, a failure mechanism that can easily cause the degradation of coils is obtained, and an optimization design scheme for the electro-insulating rubbers is also proposed. The high temperature generated by the driving current and the thermal stress resulting from thermal expansion can easily cause the degradation of coil wires, which degrades the electrical resistance of the coils and results in the eventual failure of the solenoid valve. The finite element analysis method can be applied to fault diagnosis and prognostics of various solenoid valves and can improve the reliability of solenoid valve health management.
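
    The Kalman filtering step mentioned for the data processing can be illustrated with a minimal one-dimensional filter smoothing a noisy temperature-like signal. The noise variances and the signal below are invented.

```python
# Minimal 1-D Kalman filter sketch for smoothing a noisy sensor series.
import numpy as np

def kalman_1d(z, q=1e-3, r=4.0):
    """z: measurements; q: process noise variance; r: measurement noise variance."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true = np.linspace(20.0, 80.0, 200)            # e.g. a coil temperature ramp
meas = true + rng.normal(0.0, 2.0, true.size)  # noisy measurements
smooth = kalman_1d(meas)
print(smooth[-5:].round(1))
```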

  6. Machine Learning Algorithm Predicts Cardiac Resynchronization Therapy Outcomes: Lessons From the COMPANION Trial.

    PubMed

    Kalscheur, Matthew M; Kipp, Ryan T; Tattersall, Matthew C; Mei, Chaoqun; Buhr, Kevin A; DeMets, David L; Field, Michael E; Eckhardt, Lee L; Page, C David

    2018-01-01

    Cardiac resynchronization therapy (CRT) reduces morbidity and mortality in heart failure patients with reduced left ventricular function and intraventricular conduction delay. However, individual outcomes vary significantly. This study sought to use a machine learning algorithm to develop a model to predict outcomes after CRT. Models were developed with machine learning algorithms to predict all-cause mortality or heart failure hospitalization at 12 months post-CRT in the COMPANION trial (Comparison of Medical Therapy, Pacing, and Defibrillation in Heart Failure). The best performing model was developed with the random forest algorithm. The ability of this model to predict all-cause mortality or heart failure hospitalization and all-cause mortality alone was compared with discrimination obtained using a combination of bundle branch block morphology and QRS duration. In the 595 patients with CRT-defibrillator in the COMPANION trial, 105 deaths occurred (median follow-up, 15.7 months). The survival difference across subgroups differentiated by bundle branch block morphology and QRS duration did not reach significance ( P =0.08). The random forest model produced quartiles of patients with an 8-fold difference in survival between those with the highest and lowest predicted probability for events (hazard ratio, 7.96; P <0.0001). The model also discriminated the risk of the composite end point of all-cause mortality or heart failure hospitalization better than subgroups based on bundle branch block morphology and QRS duration. In the COMPANION trial, a machine learning algorithm produced a model that predicted clinical outcomes after CRT. Applied before device implant, this model may better differentiate outcomes over current clinical discriminators and improve shared decision-making with patients. © 2018 American Heart Association, Inc.
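
    The modelling approach (a random forest predicting a 12-month composite outcome) can be sketched with scikit-learn on synthetic data of the same shape; none of the features or labels below come from COMPANION.

```python
# Illustrative random-forest sketch for a binary 12-month outcome
# (synthetic data, not the COMPANION model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(595, 20))            # stand-ins for clinical covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=595) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("CV AUC:", auc.mean().round(3))
```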

  7. Experiment and simulation study on unidirectional carbon fiber composite component under dynamic 3 point bending loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Guowei; Sun, Qingping; Zeng, Danielle

    In the current work, unidirectional (UD) carbon fiber composite hat-section components with two different layups are studied under dynamic 3-point bending loading. The experiments are performed at various impact velocities, and the effects of impactor velocity and layup on the acceleration histories are compared. A macro model is established with LS-Dyna for more detailed study. The simulation results show that delamination plays an important role during the dynamic 3-point bending test. Based on analysis of high-speed camera footage, the sidewall of the hat-section shows significant buckling rather than failure. Without considering the delamination, the current material model cannot capture the post-failure phenomenon correctly. The sidewall delamination is modeled by assuming a larger failure strain together with slim parameters, and the simulation results for different impact velocities and layups match the experimental results reasonably well.

  8. Overcoming limitations of model-based diagnostic reasoning systems

    NASA Technical Reports Server (NTRS)

    Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.

    1989-01-01

    The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the design process itself. One goal of current research is the development of a diagnostic algorithm that can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.

  9. Targeting Inflammation in Heart Failure with Histone Deacetylase Inhibitors

    PubMed Central

    McKinsey, Timothy A

    2011-01-01

    Cardiovascular insults such as myocardial infarction and chronic hypertension can trigger the heart to undergo a remodeling process characterized by myocyte hypertrophy, myocyte death and fibrosis, often resulting in impaired cardiac function and heart failure. Pathological cardiac remodeling is associated with inflammation, and therapeutic approaches targeting inflammatory cascades have shown promise in patients with heart failure. Small molecule histone deacetylase (HDAC) inhibitors block adverse cardiac remodeling in animal models, suggesting unforeseen potential for this class of compounds for the treatment of heart failure. In addition to their beneficial effects on myocardial cells, HDAC inhibitors have potent anti-inflammatory actions. This review highlights the roles of HDACs in the heart and the potential for using HDAC inhibitors as broad-based immunomodulators for the treatment of human heart failure. PMID:21267510

  10. Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.

    2010-01-01

    Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criterion and re-sizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.

  11. SIXTH INTERIM STATUS REPORT: MODEL 9975 PCV O-RING FIXTURE LONG-TERM LEAK PERFORMANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daugherty, W.

    2011-08-31

    A series of experiments to monitor the aging performance of Viton® GLT O-rings used in the Model 9975 package has been ongoing for seven years at the Savannah River National Laboratory. Seventy tests using mock-ups of 9975 Primary Containment Vessels (PCVs) were assembled and heated to temperatures ranging from 200 to 450 F. They were leak-tested initially and have been tested periodically to determine if they meet the criterion of leak-tightness defined in ANSI standard N14.5-97. Fourteen additional tests were initiated in 2008 with GLT-S O-rings heated to temperatures ranging from 200 to 400 F. High temperature aging continues for 33 GLT O-ring fixtures at 200-300 F. Room temperature leak test failures have been experienced in all of the GLT O-ring fixtures aging at 350 F and higher temperatures, and in 7 fixtures aging at 300 F. No failures have yet been observed in GLT O-ring fixtures aging at 200 F for 41-60 months, which is still bounding to O-ring temperatures during storage in K-Area Complex (KAC). Based on expectations that the fixtures aging at 200 F will remain leak-tight for a significant period yet to come, 2 additional fixtures began aging within the past year at an intermediate temperature of 270 F, with hopes that they may leak before the 200 F fixtures. High temperature aging continues for 6 GLT-S O-ring fixtures at 200-300 F. Room temperature leak test failures have been experienced in all 8 of the GLT-S O-ring fixtures aging at 350 and 400 F. No failures have yet been observed in GLT-S O-ring fixtures aging at 200-300 F for up to 26 months. For O-ring fixtures that have failed the room temperature leak test and been disassembled, the O-rings displayed a compression set ranging from 51-96%. This is greater than seen to date for packages inspected during KAC field surveillance (24% average). For GLT O-rings, separate service life estimates have been made based on the O-ring fixture leak test data and based on compression stress relaxation (CSR) data. These two predictive models show reasonable agreement at higher temperatures (350-400 F). However, at 300 F, the room temperature leak test failures to date experienced longer aging times than predicted by the CSR-based model. This suggests that extrapolations of the CSR model predictions to temperatures below 300 F will provide a conservative prediction of service life relative to the leak rate criterion. Leak test failure data at lower temperatures are needed to verify this apparent trend. Insufficient failure data exist currently to perform a similar comparison for GLT-S O-rings. Aging and periodic leak testing will continue for the remaining fixtures.
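
    Service-life extrapolations of the CSR kind referenced above are typically Arrhenius-based: a life measured at one aging temperature is projected to another through an activation energy. The sketch below assumes that form, with invented values for the activation energy and the reference life.

```python
# Hedged Arrhenius-style service-life extrapolation sketch (invented numbers).
import numpy as np

R = 8.314e-3   # gas constant, kJ/(mol K)
EA = 90.0      # assumed activation energy, kJ/mol

def life(T_K, t_ref, T_ref_K):
    """Time to criterion at T_K, given life t_ref at reference temperature."""
    return t_ref * np.exp(EA / R * (1.0 / T_K - 1.0 / T_ref_K))

t_350F = 2.0   # illustrative: 2 years to failure at 350 F (~449.8 K)
for T_F in (300, 270, 200):
    T_K = (T_F - 32) * 5.0 / 9.0 + 273.15
    print(T_F, "F ->", round(life(T_K, t_350F, 449.8), 1), "yr")
```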

  12. A comparison of stereology, structural rigidity and a novel 3D failure surface analysis method in the assessment of torsional strength and stiffness in a mouse tibia fracture model.

    PubMed

    Wright, David A; Nam, Diane; Whyne, Cari M

    2012-08-31

    In attempting to develop non-invasive, image-based measures for determining the biomechanical integrity of healing fractures, traditional μCT-based measurements have been limited. This study presents the development and evaluation of a tool for assessing fracture callus mechanical properties through the geometric characteristics of the callus, specifically along the surface of failure identified during destructive mechanical testing. Fractures were created in the tibias of ten male mice and subjected to μCT imaging and biomechanical torsion testing. Failure surface analysis, along with previously described image-based measures, was calculated using the μCT image data and correlated with mechanical strength and stiffness. Three-dimensional measures along the surface of failure, specifically the surface area and torsional rigidity of bone, were shown to correlate significantly with mechanical strength and stiffness. It was also shown that the surface area of bone along the failure surface exhibits stronger correlations with both strength and stiffness than measures of the average and minimum torsional rigidity of the entire callus. Failure surfaces observed in this study were generally oriented at 45° to the long axis of the bone and were not contained exclusively within the callus. This work represents a proof-of-concept study and shows the potential utility of failure surface analysis in the assessment of fracture callus stability. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Treatment carryover impacts on effectiveness of intraocular pressure lowering agents, estimated by a discrete event simulation model.

    PubMed

    Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G

    2008-01-01

    To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT), a discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the corresponding prostaglandin-timolol sequence, LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective than the L-LT sequence at avoiding the start of a third-line treatment. T-TT-treated patients developed less glaucoma progression.
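
    The discrete event simulation logic (sample a time to first-line failure, then a time on the second line, and count patients needing a third line within the horizon) can be sketched with Weibull draws. The scales and shape below are invented, not the fitted log-log models.

```python
# Toy DES sketch of a two-line treatment sequence (invented parameters).
import numpy as np

rng = np.random.default_rng(7)

def simulate(n=10_000, horizon=60.0, scale1=40.0, scale2=55.0, shape=1.3):
    t1 = scale1 * rng.weibull(shape, n)        # months to first-line failure
    t2 = scale2 * rng.weibull(shape, n)        # months on the second line
    third_line = (t1 + t2) < horizon           # needs a third treatment line
    return third_line.mean()

print("P(start 3rd line by 60 mo) ~", round(simulate(), 3))
```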

  14. Micromolecular modeling

    NASA Technical Reports Server (NTRS)

    Guillet, J. E.

    1984-01-01

    A reaction-kinetics-based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30-year period, were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure. It is indicated that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. It is shown that the effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers. It is found that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.
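
    At its simplest, a kinetics-based failure prediction of this type reduces to integrating a rate law and applying a property-retention criterion. The sketch below assumes first-order decay and a 50% retention criterion, both invented.

```python
# First-order photodegradation sketch: property retention decaying with
# exposure time, failure at an assumed retention threshold.
import numpy as np

K = 0.08            # assumed effective rate constant, 1/year of exposure
THRESHOLD = 0.5     # assumed failure criterion: 50% property retention

years = np.arange(0, 31)
retention = np.exp(-K * years)
fail_year = years[retention < THRESHOLD][0]
print("predicted failure at year", fail_year)
```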

  15. Progressive Failure Analysis of Composite Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Yarrington, Phillip W.; Collier, Craig S.; Arnold, Steven M.

    2006-01-01

    A new progressive failure analysis capability for stiffened composite panels has been developed based on the combination of the HyperSizer stiffened panel design/analysis/optimization software with the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC). MAC/GMC discretizes a composite material's microstructure into a number of subvolumes and solves for the stress and strain state in each while providing the homogenized composite properties as well. As a result, local failure criteria may be employed to predict local subvolume failure and the effects of these local failures on the overall composite response. When combined with HyperSizer, MAC/GMC is employed to represent the ply-level composite material response within the laminates that constitute a stiffened panel. The effects of local subvolume failures can then be tracked as loading on the stiffened panel progresses. Sample progressive failure results are presented at both the composite laminate and the composite stiffened panel levels. Deformation and failure model predictions are compared with experimental data from the World Wide Failure Exercise for AS4/3501-6 graphite/epoxy laminates.

  16. A Sensitivity Analysis of Triggers and Mechanisms of Mass Movements in Fjords

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Lintern, G.; Hill, P.

    2016-12-01

    Fjords are characterized by rapid sedimentation, as they typically drain glaciated river catchments with high seasonal discharges and large sediment evacuation rates. For this reason, fjords commonly experience submarine mass movements: failures of the steep delta front that trigger tsunamis, turbidity currents, or debris flows. Repeat high-resolution bathymetric surveys and in-situ process measurements collected in fjords in British Columbia, Canada, indicate that mass movements occur many times per year in some fjords and are rarer but of larger magnitude in others. We ask whether these differences can be attributed to river discharge characteristics or to the grain size characteristics of the delivered sediment. To test our ideas, we couple a climate-driven river sediment transport model, HydroTrend, and a marine sedimentation model, Sedflux2D, to explore the triggers of submarine failures and the mechanisms of subsequent turbidity and debris flows. HydroTrend calculates water and suspended sediment transport on a daily basis from catchment characteristics, glaciated area, lakes, and the temperature and precipitation regime. Sedflux uses the generated river time series to simulate delta plumes, failures, and mass movements with separate process models. Model uncertainty and parameter sensitivity are assessed using Dakota Tools, which allows a systematic exploration of the effects of river basin characteristics and climate scenarios on the occurrence of hyperpycnal events, delta front sedimentation rate, submarine pore pressure, failure frequency and size, and run-out distances. Preliminary simulation results point to the importance of proglacial lakes and lake abundance in the river basin, which has profound implications for event-based sediment delivery to the delta apex. Discharge-sediment rating curves can be highly variable based on these parameters. The distinction between turbidity currents and debris flows was found to be most sensitive to both earthquake frequency and delta front grain size.

  17. Effect of Crystal Orientation on Fatigue Failure of Single Crystal Nickel Base Turbine Blade Superalloys

    NASA Technical Reports Server (NTRS)

    Arakere, N. K.; Swanson, G.

    2002-01-01

    High cycle fatigue (HCF) induced failures in aircraft gas turbine and rocket engine turbopump blades are a pervasive problem. Single crystal nickel turbine blades are being utilized in rocket engine turbopumps and jet engines throughout industry because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. Currently the most widely used single crystal turbine blade superalloys are PWA 1480/1493, PWA 1484, RENE' N-5 and CMSX-4. These alloys play an important role in commercial, military and space propulsion systems. Single crystal materials have highly orthotropic properties, making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. The failure modes of single crystal turbine blades are complicated to predict due to the material orthotropy and variations in crystal orientation. Fatigue life estimation of single crystal turbine blades represents an important aspect of durability assessment. It is therefore of practical interest to develop effective fatigue failure criteria for single crystal nickel alloys and to investigate the effects of variation of primary and secondary crystal orientation on fatigue life. A fatigue failure criterion based on the maximum shear stress amplitude, Δτ_max, on the 24 octahedral and 6 cube slip systems is presented for single crystal nickel superalloys (FCC crystal). This criterion reduces the scatter in uniaxial LCF test data considerably for PWA 1493 at 1200 F in air. Additionally, single crystal turbine blades used in the alternate advanced high-pressure fuel turbopump (AHPFTP/AT) are modeled using a large-scale three-dimensional finite element model. This finite element model is capable of accounting for material orthotropy and variation in primary and secondary crystal orientation. Effects of variation in crystal orientation on blade stress response are studied based on 297 finite element model runs. Fatigue lives at critical points in the blade are computed using finite element stress results and the failure criterion developed. Stress analysis results in the blade attachment region are also presented. The results demonstrate that control of secondary and primary crystallographic orientation has the potential to significantly increase a component's resistance to fatigue crack growth without adding additional weight or cost. DOI: 10.1115/1.1413767
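
    The slip-system resolved shear stresses underlying the Δτ_max criterion follow Schmid's law. The sketch below enumerates the signed octahedral {111}<110> systems and resolves a uniaxial stress onto them; the load case and magnitude are arbitrary, and the cube slip systems are omitted for brevity.

```python
# Resolved shear stresses on the signed {111}<110> octahedral slip systems
# for uniaxial loading along [001]; Schmid's law, illustrative magnitude.
import numpy as np
from itertools import product

normals = [np.array(n, float) for n in ((1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1))]

def slip_dirs(n):
    """<110>-type directions lying in the {111} plane with normal n."""
    dirs = [np.array(d, float) for d in product((-1, 0, 1), repeat=3)
            if sorted(map(abs, d)) == [0, 1, 1]]
    return [d for d in dirs if abs(d @ n) < 1e-9]

sigma = 500.0                      # MPa, assumed uniaxial stress
load = np.array((0.0, 0.0, 1.0))   # loading axis [001]
taus = []
for n in normals:
    for d in slip_dirs(n):
        schmid = (load @ n / np.linalg.norm(n)) * (load @ d / np.linalg.norm(d))
        taus.append(abs(sigma * schmid))   # tau = sigma cos(phi) cos(lambda)
print(f"{len(taus)} signed systems; max resolved shear = {max(taus):.1f} MPa")
```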

  18. Interlaminar shear stress effects on the postbuckling response of graphite-epoxy panels

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Knight, N. F., Jr.; Reddy, J. N.

    1990-01-01

    The influence of shear flexibility on overall postbuckling response was assessed, and transverse shear stress distributions in relation to panel failure were examined. Nonlinear postbuckling results are obtained for finite element models based on classical laminated plate theory and first-order shear deformation theory. Good correlation between test and analysis is obtained. The results presented analytically substantiate the experimentally observed failure mode.

  19. Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)

    2001-01-01

    Recently a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. It shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
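
    A linear cumulative damage model of the kind evaluated here sums time fractions at each stress level against the corresponding time to failure, predicting failure when the sum reaches one. The stress-life law and load history below are invented.

```python
# Linear cumulative damage (Miner-type) sketch: failure when the summed
# time fractions reach 1. Stress-life law and history are illustrative.
def time_to_fail(sigma, A=1e12, n=8.0):
    """Assumed power-law creep-rupture life (hours) at stress sigma (MPa)."""
    return A * sigma ** (-n)

# Load history: (stress in MPa, hours held at that stress)
history = [(10.0, 2000.0), (14.0, 500.0), (18.0, 100.0)]
damage = sum(t / time_to_fail(s) for s, t in history)
print("accumulated damage:", round(damage, 3), "(failure predicted at 1.0)")
```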

  20. Impact and Penetration of Thin Aluminum 2024 Flat Panels at Oblique Angles of Incidence

    NASA Technical Reports Server (NTRS)

    Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Emmerling, William; Queitzsch, Gilbert K., Jr.

    2015-01-01

    The U.S. Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) are actively involved in improving the predictive capabilities of transient finite element computational methods for application to safety issues involving unintended impacts on aircraft and aircraft engine structures. One aspect of this work involves the development of an improved deformation and failure model for metallic materials, known as the Tabulated Johnson-Cook model, or MAT224, which has been implemented in the LS-DYNA commercial transient finite element analysis code (LSTC Corp., Livermore, CA) (Ref. 1). In this model the yield stress is a function of strain, strain rate and temperature and the plastic failure strain is a function of the state of stress, temperature and strain rate. The failure criterion is based on the accumulation of plastic strain in an element. The model also incorporates a regularization scheme to account for the dependency of plastic failure strain on mesh size. For a given material the model requires a significant amount of testing to determine the yield stress and failure strain as a function of the three-dimensional state of stress, strain rate and temperature. In addition, experiments are required to validate the model. Currently the model has been developed for Aluminum 2024 and validated against a series of ballistic impact tests on flat plates of various thicknesses (Refs. 1 to 3). Full development of the model for Titanium 6Al-4V is being completed, and mechanical testing for Inconel 718 has begun. The validation testing for the models involves ballistic impact tests using cylindrical projectiles impacting flat plates at a normal incidence (Ref. 2). By varying the thickness of the plates, different stress states and resulting failure modes are induced, providing a range of conditions over which the model can be validated. The objective of the study reported here was to provide experimental data to evaluate the model under more extreme conditions, using a projectile with a more complex shape and sharp contacts, impacting flat panels at oblique angles of incidence.
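
    For orientation, the classic Johnson-Cook flow stress that MAT224 generalizes into tabulated form can be written down directly; the constants below are illustrative placeholders, not the Aluminum 2024 fit used in the model.

```python
# Classic Johnson-Cook flow-stress form (MAT224 replaces the closed-form
# terms with table lookups); all constants here are invented placeholders.
import numpy as np

A, B, N = 265.0, 426.0, 0.34      # MPa; assumed strength constants
C, M = 0.015, 1.0                 # assumed rate and thermal softening constants
EPS0, T_ROOM, T_MELT = 1.0, 293.0, 775.0

def flow_stress(eps_p, eps_rate, T):
    """Yield stress (MPa) vs plastic strain, strain rate (1/s), temperature (K)."""
    t_star = np.clip((T - T_ROOM) / (T_MELT - T_ROOM), 0.0, 1.0)
    return ((A + B * eps_p ** N)
            * (1.0 + C * np.log(max(eps_rate / EPS0, 1e-12)))
            * (1.0 - t_star ** M))

print(round(flow_stress(0.1, 1e3, 400.0), 1), "MPa")  # 10% strain, 1e3/s, 400 K
```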

  1. Probabilistic Design of a Wind Tunnel Model to Match the Response of a Full-Scale Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Stroud, W. Jefferson; Krishnamurthy, T.; Spain, Charles V.; Naser, Ahmad S.

    2005-01-01

    An approach is presented for carrying out the reliability-based design of a plate-like wing that is part of a wind tunnel model. The goal is to design the wind tunnel model to match the stiffness characteristics of the wing box of a flight vehicle while satisfying strength-based risk/reliability requirements that prevent damage to the wind tunnel model and fixtures. The flight vehicle is a modified F/A-18 aircraft. The design problem is solved using reliability-based optimization techniques. The objective function to be minimized is the difference between the displacements of the wind tunnel model and the corresponding displacements of the flight vehicle. The design variables control the thickness distribution of the wind tunnel model. Displacements of the wind tunnel model change with the thickness distribution, while displacements of the flight vehicle are a set of fixed data. The only constraint imposed is that the probability of failure is less than a specified value. Failure is assumed to occur if the stress caused by aerodynamic pressure loading is greater than the specified strength allowable. Two uncertain quantities are considered: the allowable stress and the thickness distribution of the wind tunnel model. Reliability is calculated using Monte Carlo simulation with response surfaces that provide approximate values of stresses. The response surface equations are, in turn, computed from finite element analyses of the wind tunnel model at specified design points. Because the response surface approximations were fit over a small region centered about the current design, the response surfaces were refit periodically as the design variables changed. Coarse-grained parallelism was used to simultaneously perform multiple finite element analyses. Studies carried out in this paper demonstrate that this scheme of using moving response surfaces and coarse-grained computational parallelism reduces the execution time of the Monte Carlo simulation enough to make the design problem tractable. The results of the reliability-based designs performed in this paper show that large decreases in the probability of stress-based failure can be realized with only small sacrifices in the ability of the wind tunnel model to represent the displacements of the full-scale vehicle.
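
    The reliability loop described (sample uncertain inputs, evaluate stress through a response surface instead of a finite element run, compare with a random allowable) can be sketched as below. The quadratic response surface and the distributions are hypothetical stand-ins.

```python
# Monte Carlo failure-probability sketch with a response surface standing in
# for finite element stress analysis (all numbers are hypothetical).
import numpy as np

rng = np.random.default_rng(3)

def stress_rs(t):
    """Hypothetical response surface: peak stress (MPa) vs thickness t (mm)."""
    return 420.0 - 95.0 * t + 7.5 * t ** 2

n = 200_000
t = rng.normal(2.0, 0.05, n)                 # uncertain thickness distribution
allowable = rng.normal(310.0, 15.0, n)       # uncertain strength allowable
pf = np.mean(stress_rs(t) > allowable)       # failure: stress exceeds allowable
print(f"P(failure) ~ {pf:.1e}")
```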

  2. SEVENTH INTERIM STATUS REPORT: MODEL 9975 PCV O-RING FIXTURE LONG-TERM LEAK PERFORMANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daugherty, W.

    2012-08-30

    A series of experiments to monitor the aging performance of Viton® GLT O-rings used in the Model 9975 package has been ongoing since 2004 at the Savannah River National Laboratory. Seventy tests using mock-ups of 9975 Primary Containment Vessels (PCVs) were assembled and heated to temperatures ranging from 200 to 450 ºF. They were leak-tested initially and have been tested periodically to determine if they meet the criterion of leak-tightness defined in ANSI standard N14.5-97. Fourteen additional tests were initiated in 2008 with GLT-S O-rings heated to temperatures ranging from 200 to 400 ºF. High temperature aging continues for 23 GLT O-ring fixtures at 200 – 270 ºF. Room temperature leak test failures have been experienced in all of the GLT O-ring fixtures aging at 350 ºF and higher temperatures, and in 8 fixtures aging at 300 ºF. The remaining GLT O-ring fixtures aging at 300 ºF have been retired from testing following more than 5 years at temperature without failure. No failures have yet been observed in GLT O-ring fixtures aging at 200 ºF for 54-72 months, which is still bounding to O-ring temperatures during storage in K-Area Complex (KAC). Based on expectations that the fixtures aging at 200 ºF will remain leak-tight for a significant period yet to come, 2 additional fixtures began aging in 2011 at an intermediate temperature of 270 ºF, with hopes that they may reach a failure condition before the 200 ºF fixtures. High temperature aging continues for 6 GLT-S O-ring fixtures at 200 – 300 ºF. Room temperature leak test failures have been experienced in all 8 of the GLT-S O-ring fixtures aging at 350 and 400 ºF. No failures have yet been observed in GLT-S O-ring fixtures aging at 200 - 300 ºF for 30 - 36 months. For O-ring fixtures that have failed the room temperature leak test and been disassembled, the O-rings displayed a compression set ranging from 51 – 96%. This is greater than seen to date for any packages inspected during KAC field surveillance (24% average). For GLT O-rings, separate service life estimates have been made based on the O-ring fixture leak test data and based on compression stress relaxation (CSR) data. These two predictive models show reasonable agreement at higher temperatures (350 – 400 ºF). However, at 300 ºF, the room temperature leak test failures to date experienced longer aging times than predicted by the CSR-based model. This suggests that extrapolations of the CSR model predictions to temperatures below 300 ºF will provide a conservative prediction of service life relative to the leak rate criterion. Leak test failure data at lower temperatures are needed to verify this apparent trend. Insufficient failure data exist currently to perform a similar comparison for GLT-S O-rings. Aging and periodic leak testing will continue for the remaining PCV O-ring fixtures.

  3. NASA Prototype All Composite Tank Cryogenic Pressure Tests to Failure with Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Werlink, Rudolph J.; Pena, Francisco

    2015-01-01

    This paper describes the results of pressurization to failure of 100 gallon composite tanks using liquid nitrogen. Advanced methods of health monitoring are compared, as are the experimental data against a finite element model. The testing is wholly under NASA direction and includes unique PZT (lead zirconate titanate) based active vibration technology. Other technologies include fiber-optic strain-based systems (including NASA AFRC technology), acoustic emission, and the Acellent smart sensor system. This work is expected to lead to a practical in-situ monitoring system for composite tanks.

  4. A longitudinal, collaborative, practice-based learning and improvement model to improve post-discharge heart failure outcomes.

    PubMed

    Anagnostou, Valsamo K; Bailey, Grant; Sze, Edward; Hay, Seonaid; Hyson, Anne; Federman, Daniel G

    2014-01-01

    Practice-based learning and improvement is one of the Accreditation Council of Graduate Medical Education's core competencies for trainees. Residency programs have grappled with how to accomplish this goal. We describe our institution's unique, longitudinal post-graduate year process and project, conducted at the West Haven VA Medical Center with Yale University School of Medicine junior residents on ambulatory electives and a faculty preceptor: a longitudinal program aimed at decreasing re-admissions of hospitalized patients with congestive heart failure. We feel that our longitudinal project is a novel innovation worthy of further study.

  5. Experimental and theoretical analysis of integrated circuit (IC) chips on flexible substrates subjected to bending

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Yuan, Jianghong; Zhang, Yingchao; Huang, Yonggang; Feng, Xue

    2017-10-01

    The interfacial failure of integrated circuit (IC) chips integrated on flexible substrates under bending deformation has been studied theoretically and experimentally. A compressive buckling test is used to impose the bending deformation onto the interface between the IC chip and the flexible substrate quantitatively, after which the failed interface is investigated using scanning electron microscopy. A theoretical model is established based on beam theory and a bi-layer interface model, from which an analytical expression for the critical curvature associated with interfacial failure is obtained. The relationships between the critical curvature, the material, and the geometric parameters of the device are discussed in detail, providing guidance for the future optimization of flexible circuits based on IC chips.

  6. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  7. The influence of microstructure on the probability of early failure in aluminum-based interconnects

    NASA Astrophysics Data System (ADS)

    Dwyer, V. M.

    2004-09-01

    For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
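
    A brute-force stand-in for the theory-of-runs calculation is to synthesize lines grain by grain and flag any polygranular cluster longer than Lcrit. This toy Monte Carlo only illustrates the bookkeeping, not Dwyer's analytic model, and all parameters are invented.

```python
# Toy Monte Carlo for early-failure probability via polygranular clusters.
import numpy as np

rng = np.random.default_rng(5)

def p_early_failure(line_len=200.0, linewidth=1.0, d50=1.0,
                    sigma_d=0.5, l_crit=5.0, n_lines=5000):
    fails = 0
    mu = np.log(d50)
    for _ in range(n_lines):
        pos = cluster = 0.0
        while pos < line_len:
            g = rng.lognormal(mu, sigma_d)
            # Grains narrower than the line form polygranular segments;
            # a spanning (bamboo) grain resets the running cluster length.
            cluster = cluster + g if g < linewidth else 0.0
            if cluster > l_crit:
                fails += 1
                break
            pos += g
    return fails / n_lines

for sd in (0.27, 0.5):
    print(f"sigma_d = {sd}: Pcf ~ {p_early_failure(sigma_d=sd):.4f}")
```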

  8. Semicompeting risks in aging research: methods, issues and needs

    PubMed Central

    Varadhan, Ravi; Xue, Qian-Li; Bandeen-Roche, Karen

    2015-01-01

    A semicompeting risks problem involves two types of events: a nonterminal and a terminal event (death). Typically, the nonterminal event is the focus of the study, but the terminal event can preclude the occurrence of the nonterminal event. Semicompeting risks are ubiquitous in studies of aging. Examples of semicompeting risk dyads include: dementia and death, frailty syndrome and death, disability and death, and nursing home placement and death. Semicompeting risk models can be divided into two broad classes: models based only on observable quantities (class O) and those based on potential (latent) failure times (class L). The classical illness-death model belongs to class O. This model is a special case of the multistate models, which have been an active area of methodology development. During the past decade and a half, there has also been a flurry of methodological activity on semicompeting risks based on latent failure times (class L models). These advances notwithstanding, semicompeting risks methodology has not penetrated biomedical research in general, or gerontological research in particular. Some possible reasons for this lack of uptake are: the methods are relatively new and sophisticated, the conceptual problems associated with potential failure time models are difficult to overcome, a paucity of expository articles aimed at educating practitioners, and the non-availability of readily usable software. The main goals of this review article are: (i) to describe the major types of semicompeting risks problems arising in aging research, (ii) to provide a brief survey of semicompeting risks methods, (iii) to suggest appropriate methods for addressing the problems in aging research, (iv) to highlight areas where more work is needed, and (v) to suggest ways to facilitate the uptake of semicompeting risks methodology by the broader biomedical research community. PMID:24729136

  9. Bone Graft Substitute Provides Metaphyseal Fixation for a Stemless Humeral Implant.

    PubMed

    Kim, Myung-Sun; Kovacevic, David; Milks, Ryan A; Jun, Bong-Jae; Rodriguez, Eric; DeLozier, Katherine R; Derwin, Kathleen A; Iannotti, Joseph P

    2015-07-01

    Stemless humeral fixation has become an alternative to traditional total shoulder arthroplasty, but metaphyseal fixation may be compromised by the quality of the trabecular bone that diminishes with age and disease, and augmentation of the fixation may be desirable. The authors hypothesized that a bone graft substitute (BGS) could achieve initial fixation comparable to polymethylmethacrylate (PMMA) bone cement. Fifteen fresh-frozen human male humerii were randomly implanted using a stemless humeral prosthesis, and metaphyseal fixation was augmented with either high-viscosity PMMA bone cement (PMMA group) or a magnesium-based injectable BGS (OsteoCrete; Bone Solutions Inc, Dallas, Texas) (OC group). Both groups were compared with a control group with no augmentation. Initial stiffness, failure load, failure displacement, failure cycle, and total work were compared among groups. The PMMA and OC groups showed markedly higher failure loads, failure displacements, and failure cycles than the control group (P<.01). There were no statistically significant differences in initial stiffness, failure load, failure displacement, failure cycle, or total work between the PMMA and OC groups. The biomechanical properties of magnesium-based BGS fixation compared favorably with PMMA bone cement in the fixation of stemless humeral prostheses and may provide sufficient initial fixation for this clinical application. Future work will investigate the long-term remodeling characteristics and bone quality at the prosthetic-bone interface in an in vivo model to evaluate the clinical efficacy of this approach. Copyright 2015, SLACK Incorporated.

  10. Computational hydraulics of a cascade of experimental-scale landslide dam failures

    NASA Astrophysics Data System (ADS)

    Wright, N.; Guan, M.

    2015-12-01

    Landslide dams typically comprise unconsolidated and poorly sorted material and are vulnerable to rapid failure and breaching, particularly in mountainous areas during intense rainfall. A large flash flood with highly concentrated sediment can form in a short period, and its magnitude is likely to be amplified along the flow direction due to the inclusion of a large amount of sediment. This can result in significant and sudden flood risk downstream for human life and property. Abundant field evidence has indicated the various risks of landslide dam failures. In general, cascading landslide dams can form along a sloping channel due to the randomness and unpredictability of landslides, which complicates the hydraulics of landslide dam failures. The failure process of a single dam and the subsequent floods have attracted attention in multidisciplinary studies. However, the dynamic failure process of cascading landslide dams has been poorly understood. From a simulation viewpoint, this study evaluates the formation and development of rapid sediment-charged floods due to cascading failure of landslide dams through detailed hydro-morphodynamic modelling. The model used is based on shallow water theory and has been successful in predicting the flow and morphological processes during sudden dam-breaks, as well as full and partial dyke breaches. Various experimental-scale scenarios are modelled, including: (1) failure of a single full dam in a sloping channel, (2) failure of two dams in a sloping channel, and (3) failure of multiple (four) landslide dams in a sloping channel. For each scenario, different failure modes (sudden/gradual) and bed boundaries (fixed/mobile) are assumed and simulated. The study systematically explores the tempo-spatial evolution of landslide-induced floods (discharge, flow velocity, and flow concentration) and geomorphic properties along the sloping channel. The effects of in-channel erosion and flow-driven sediment from the dams on the development of the flood process are investigated. The results improve understanding of the formation and development mechanisms of flash floods due to cascading landslide dam failures. The findings are beneficial for downstream flood risk assessment and for developing control strategies for landslide-induced floods.

  11. Numerical investigations of rib fracture failure models in different dynamic loading conditions.

    PubMed

    Wang, Fang; Yang, Jikuang; Miller, Karol; Li, Guibing; Joldes, Grand R; Doyle, Barry; Wittek, Adam

    2016-01-01

    Rib fracture is one of the most common thoracic injuries in vehicle traffic accidents and can result in fatalities associated with seriously injured internal organs. A failure model is critical when modelling rib fracture to predict such injuries. Different rib failure models have been proposed for the prediction of thorax injuries. However, the biofidelity of these fracture failure models under varying loading conditions, and the effects of a rib fracture failure model on the prediction of thoracic injuries, have been studied only to a limited extent. Therefore, this study aimed to investigate the effects of three rib failure models on the prediction of thoracic injuries using a previously validated finite element model of the human thorax. The performance and biofidelity of each rib failure model were first evaluated by modelling rib responses to different loading conditions in two experimental configurations: (1) three-point bending of specimens taken from ribs, and (2) anterior-posterior dynamic loading of the entire bony part of a rib. Furthermore, simulation of the rib failure behaviour in a frontal impact to the entire thorax was conducted at varying velocities, and the effects of the failure models were analysed with respect to the severity of rib cage damage. Simulation results demonstrated that the responses of the thorax model are similar to the general trends of the rib fracture responses reported in the experimental literature. However, they also indicated that the accuracy of rib fracture prediction using a given failure model varies for different loading conditions.

  12. Effect of Crystal Orientation on Analysis of Single-Crystal, Nickel-Based Turbine Blade Superalloys

    NASA Technical Reports Server (NTRS)

    Swanson, G. R.; Arakere, N. K.

    2000-01-01

    High-cycle fatigue-induced failures in turbine and turbopump blades are a pervasive problem. Single-crystal nickel turbine blades are used because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities. Single-crystal materials have highly orthotropic properties, making the position of the crystal lattice relative to the part geometry a significant and complicating factor. A fatigue failure criterion based on the maximum shear stress amplitude on the 24 octahedral and 6 cube slip systems is presented for single-crystal nickel superalloys (FCC crystal). This criterion greatly reduces the scatter in uniaxial fatigue data for PWA 1493 at 1,200 F in air. Additionally, single-crystal turbine blades used in the Space Shuttle main engine high-pressure fuel turbopump/alternate turbopump are modeled using a three-dimensional finite element (FE) model that accounts for material orthotropy and crystal orientation. Fatigue life of the blade tip is computed using the FE stress results and the failure criterion that was developed. Stress analysis results in the blade attachment region are also presented. Results demonstrate that control of crystallographic orientation has the potential to significantly increase a component's resistance to fatigue crack growth without adding weight or cost.
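
    As an illustration of the slip-system bookkeeping behind such a criterion, the sketch below computes resolved shear stresses on the 12 primary octahedral {111}<110> systems of an FCC crystal in the crystal frame. It is a simplified stand-in: the paper's criterion uses shear stress amplitudes over a fatigue cycle on all 24 octahedral and 6 cube systems, and the rotation of the stress tensor by the measured crystal orientation is omitted here.

```python
import numpy as np
from itertools import permutations

def primary_octahedral_systems():
    # 4 {111} plane normals; each plane contains 3 of the 6 unsigned
    # <110> directions, giving the 12 primary octahedral slip systems.
    normals = [np.array(n, float) for n in
               [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]]
    dirs = {p for p in permutations((1, 1, 0))} | \
           {p for p in permutations((1, -1, 0))}
    dirs = [np.array(d, float) for d in dirs
            if d[next(i for i, v in enumerate(d) if v)] > 0]  # drop -d twins
    return [(n / np.linalg.norm(n), d / np.linalg.norm(d))
            for n in normals for d in dirs if abs(n @ d) < 1e-12]

def max_resolved_shear(sigma):
    # Schmid-type resolved shear stress tau = n . sigma . d per system
    return max(abs(n @ sigma @ d) for n, d in primary_octahedral_systems())

sigma = np.diag([300.0, 0.0, 0.0])          # 300 MPa uniaxial along [100]
tau = max_resolved_shear(sigma)
print(f"max resolved shear: {tau:.1f} MPa (Schmid factor {tau / 300:.3f})")
```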

  13. Modelling of Rainfall Induced Landslides in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Lepore, C.; Arnone, E.; Sivandran, G.; Noto, L. V.; Bras, R. L.

    2010-12-01

    We performed an island-wide determination of static landslide susceptibility and hazard assessment, as well as dynamic modeling of rainfall-induced shallow landslides in a particular hydrologic basin. Based on statistical analysis of past landslides, we determined that reliable prediction of landslide susceptibility is strongly dependent on the resolution of the digital elevation model (DEM) employed and the reliability of the rainfall data. A distributed hydrology model, the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator with VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), has been implemented for the first time in a humid tropical environment like Puerto Rico and validated against in-situ measurements. A slope-failure module has been added to tRIBS-VEGGIE's framework after analyzing several failure criteria to identify the most suitable for our application; the module is used to predict the location and timing of landsliding events. The Mameyes basin, located in the Luquillo Experimental Forest in Puerto Rico, was selected for modeling based on the availability of soil, vegetation, topographical, meteorological and historic landslide data. Application of the model yields a temporal and spatial distribution of predicted rainfall-induced landslides.
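
    Grid-based slope-failure modules of this kind are typically built around the infinite-slope factor of safety, which a coupled hydrology model feeds with wetness. The sketch below uses the standard formula, but the parameter values are invented and the actual tRIBS-VEGGIE module may use a different criterion.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, z, m,
                     gamma=18.0, gamma_w=9.81):
    """Infinite-slope factor of safety.
    c: effective cohesion [kPa]; phi_deg: friction angle [deg];
    slope_deg: slope angle [deg]; z: soil depth [m];
    m: saturated fraction of the soil column (0..1);
    gamma, gamma_w: soil and water unit weights [kN/m^3]."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta)**2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A storm raises the saturated fraction m and can push FS below 1
# (predicted failure) for the same slope and soil.
for m in (0.0, 0.5, 1.0):
    print(f"m={m:.1f}  FS={factor_of_safety(5.0, 30.0, 35.0, 1.5, m):.2f}")
```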

  14. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks analysis, in which more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplified version of the Lunn-McNeil technique; the Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study compared the inference of the models using the root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
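
    A minimal sketch of the Lunn-McNeil data augmentation follows, using Python's lifelines package: each subject contributes one row per failure type, the event is flagged only on the row matching the observed cause, and a single stratified Cox fit yields cause-specific effects. The simulated data and effect sizes are invented and do not reproduce the paper's simulation design.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
t1 = rng.exponential(1.0 / np.exp(0.5 * x))   # cause-1 latent time, depends on x
t2 = rng.exponential(1.0, size=n)             # cause-2 latent time
c = rng.exponential(2.0, size=n)              # censoring time
time = np.minimum.reduce([t1, t2, c])
cause = np.where(c <= np.minimum(t1, t2), 0, np.where(t1 < t2, 1, 2))

# Lunn-McNeil augmentation: one row per subject per cause, stratified fit
frames = []
for k in (1, 2):
    frames.append(pd.DataFrame({
        "time": time,
        "event": (cause == k).astype(int),
        "stratum": k,
        "x_cause1": x * (k == 1),   # covariate-by-cause interactions
        "x_cause2": x * (k == 2),
    }))
augmented = pd.concat(frames, ignore_index=True)

cph = CoxPHFitter()
cph.fit(augmented, duration_col="time", event_col="event", strata=["stratum"])
cph.print_summary()   # x_cause1 should recover a log-hazard ratio near 0.5
```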

  15. An open repository of earthquake-triggered ground-failure inventories

    USGS Publications Warehouse

    Schmitt, Robert G.; Tanyas, Hakan; Nowicki Jessee, M. Anna; Zhu, Jing; Biegel, Katherine M.; Allstadt, Kate E.; Jibson, Randall W.; Thompson, Eric M.; van Westen, Cees J.; Sato, Hiroshi P.; Wald, David J.; Godt, Jonathan W.; Gorum, Tolga; Xu, Chong; Rathje, Ellen M.; Knudsen, Keith L.

    2017-12-20

    Earthquake-triggered ground failure, such as landsliding and liquefaction, can contribute significantly to losses, but our current ability to accurately include them in earthquake-hazard analyses is limited. The development of robust and widely applicable models requires access to numerous inventories of ground failures triggered by earthquakes that span a broad range of terrains, shaking characteristics, and climates. We present an openly accessible, centralized earthquake-triggered ground-failure inventory repository in the form of a ScienceBase Community to provide open access to these data with the goal of accelerating research progress. The ScienceBase Community hosts digital inventories created by both U.S. Geological Survey (USGS) and non-USGS authors. We present the original digital inventory files (when available) as well as an integrated database with uniform attributes. We also summarize the mapping methodology and level of completeness as reported by the original author(s) for each inventory. This document describes the steps taken to collect, process, and compile the inventories and the process for adding additional ground-failure inventories to the ScienceBase Community in the future.

  16. Three-Dimensional High Fidelity Progressive Failure Damage Modeling of NCF Composites

    NASA Technical Reports Server (NTRS)

    Aitharaju, Venkat; Aashat, Satvir; Kia, Hamid G.; Satyanarayana, Arunkumar; Bogert, Philip B.

    2017-01-01

    Performance prediction of off-axis laminates is of significant interest in designing composite structures for energy absorption. Phenomenological models available in most commercial programs, where the fiber and resin properties are smeared, are very efficient for large-scale structural analysis, but lack the ability to model the complex nonlinear behavior of the resin and fail to capture the complex load transfer mechanisms between the fiber and the resin matrix. On the other hand, high-fidelity mesoscale models, where the fiber tows and matrix regions are explicitly modeled, have the ability to account for the complex behavior in each of the constituents of the composite. However, creating a finite element model of a larger-scale composite component can be very time-consuming and computationally expensive. In the present study, a three-dimensional mesoscale model of non-crimp fabric (NCF) composite laminates was developed for various laminate schemes. The resin material was modeled as an elastic-plastic material with nonlinear hardening. The fiber tows were modeled with an orthotropic material model with brittle failure. In parallel, new stress-based failure criteria combined with several damage evolution laws for matrix stresses were proposed for a phenomenological model. The results from both the mesoscale and phenomenological models were compared with experiments for a variety of off-axis laminates.

  17. Predicting Failure Under Laboratory Conditions: Learning the Physics of Slow Frictional Slip and Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.

    2016-12-01

    Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick-slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts the upcoming failure time far in advance of a stick-slip event, based only on a short time window of data. Further, the algorithm accurately predicts the times of the beginning and end of the next slip event. The prediction improves as failure is approached, as additional data features contribute. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build-up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow-slip or stick-slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information. [1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
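
    The pipeline is straightforward to sketch: compute statistical features over short windows of the continuous AE signal and regress the remaining time to failure with a Random Forest. In the toy example below the "acoustic" signal, its cycle structure, and the feature set are all synthetic stand-ins for the laboratory data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
cycle = 500                                    # samples per stick-slip cycle
t = np.arange(20 * cycle)
time_to_failure = (cycle - t % cycle) / cycle  # resets at each slip event
signal = rng.normal(scale=0.1 + (1 - time_to_failure))  # louder near failure

win = 50
def window_features(x):
    # per-window statistics: mean, spread, extremes, zero-crossing rate
    return [x.mean(), x.std(), np.abs(x).max(),
            np.percentile(x, 95), ((x[:-1] * x[1:]) < 0).mean()]

starts = range(0, t.size - win, win)
X = np.array([window_features(signal[i:i + win]) for i in starts])
y = np.array([time_to_failure[i + win - 1] for i in starts])

split = int(0.7 * len(X))                      # train on early cycles only
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
print("R^2 on later, unseen cycles:", round(rf.score(X[split:], y[split:]), 3))
```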

  18. Common Cause Failure Modeling: Aerospace Versus Nuclear

    NASA Technical Reports Server (NTRS)

    Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer

    2010-01-01

    Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from which this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail, and strong recommendations for modeling the common-cause failures of aerospace systems are given.
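
    For readers unfamiliar with the parametric models at issue, the beta-factor model (the simplest of the NUREG/CR-5485 family) splits a component's failure rate into independent and common-cause parts. The sketch below uses invented numbers, not the NUREG generic factors, and assumes exponential failure times and independence between the two failure modes.

```python
import math

lam_total = 1e-5   # per-hour total failure rate of one component (invented)
beta = 0.05        # invented beta factor, NOT a NUREG generic value

lam_ccf = beta * lam_total            # rate of a shock failing all units
lam_ind = (1 - beta) * lam_total      # independent failure rate per unit

t = 1000.0                            # mission time in hours
p_ind_both = (1 - math.exp(-lam_ind * t)) ** 2   # both units fail independently
p_ccf = 1 - math.exp(-lam_ccf * t)               # one common-cause shock
p_sys = p_ind_both + p_ccf - p_ind_both * p_ccf  # union of the two events

print(f"1-of-2 system: P(fail) = {p_sys:.3e}  (CCF share {p_ccf / p_sys:.0%})")
```

    Even with a modest beta, the common-cause term dominates the redundant pair's failure probability, which is why the choice of generic factor matters.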

  19. Simple Predictive Model of Early Failure among Patients Undergoing First-Time Arteriovenous Fistula Creation.

    PubMed

    Eslami, Mohammad H; Zhu, Clara K; Rybin, Denis; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik

    2016-08-01

    Native arteriovenous fistulas (AVFs) have a high 1-year failure rate, leading to a need for secondary procedures. We set out to create a predictive model of early failure in patients undergoing first-time AVF creation, to identify failure-associated factors and stratify initial failure risk. The Vascular Study Group of New England (VSGNE) registry (2010-2014) was queried to identify patients undergoing first-time AVF creation. Patients with early (within 3 months postoperation) AVF failure (EF) or no failure (NF) were compared, failure being defined as any AVF that could not be used for dialysis. A multivariate logistic regression predictive model of EF based on perioperative clinical variables was created. Backward elimination with an alpha level of 0.2 was used to create a parsimonious model. We identified 376 first-time AVF patients with follow-up data available in the VSGNE. The EF rate was 17.5%. Patients in the EF group had lower rates of hypertension (80.3% vs. 93.2%, P = 0.003) and diabetes (47.0% vs. 61.3%, P = 0.039). EF patients were also more likely to have radial artery inflow (57.6% vs. 38.4%, P = 0.011) and forearm cephalic vein outflow (57.6% vs. 36.5%, P = 0.008). Additionally, the EF group had significantly smaller mean diameters of the target artery (3.1 ± 0.9 vs. 3.6 ± 1.1 mm, P = 0.002) and vein (3.1 ± 0.7 vs. 3.6 ± 0.9 mm, P < 0.001). Multivariate analyses revealed that hypertension, diabetes, and a vein larger than 3 mm were protective against EF (P < 0.05). The discriminating ability of this model was good (C-statistic = 0.731) and the model fit the data well (Hosmer-Lemeshow P = 0.149). β-estimates of significant factors were used to create a point system and assign probabilities of EF. We developed a simple model that robustly predicts first-time AVF EF and suggests that anatomical and clinical factors directly affect early AVF outcomes. The risk score has the potential to be used in clinical settings to stratify risk and make informed follow-up plans for AVF patients. Copyright © 2016 Elsevier Inc. All rights reserved.
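
    The final step, turning logistic-regression coefficients into an integer point score, can be sketched as follows. The coefficients, intercept, and point scale below are invented placeholders, not the VSGNE model's estimates.

```python
import math

# invented placeholder coefficients, NOT the published model's estimates
betas = {"no_hypertension": 1.0, "no_diabetes": 0.5,
         "radial_artery_inflow": 0.6, "vein_diameter_under_3mm": 0.9}
intercept = -2.5

unit = min(betas.values())            # smallest beta maps to 1 point
score_table = {k: round(b / unit) for k, b in betas.items()}

def early_failure_probability(patient):
    # logistic model: p = 1 / (1 + exp(-(intercept + sum beta_i * x_i)))
    lp = intercept + sum(betas[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-lp))

patient = {"no_hypertension": 1, "no_diabetes": 0,
           "radial_artery_inflow": 1, "vein_diameter_under_3mm": 1}
pts = sum(score_table[k] * v for k, v in patient.items())
print(f"risk points: {pts}, "
      f"model probability of EF: {early_failure_probability(patient):.2f}")
```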

  20. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  1. Numerical modelling of glacial lake outburst floods using physically based dam-breach models

    NASA Astrophysics Data System (ADS)

    Westoby, M. J.; Brasington, J.; Glasser, N. F.; Hambrey, M. J.; Reynolds, J. M.; Hassan, M. A. A. M.; Lowe, A.

    2015-03-01

    The instability of moraine-dammed proglacial lakes creates the potential for catastrophic glacial lake outburst floods (GLOFs) in high-mountain regions. In this research, we use a unique combination of numerical dam-breach and two-dimensional hydrodynamic modelling, employed within a generalised likelihood uncertainty estimation (GLUE) framework, to quantify predictive uncertainty in model outputs associated with a reconstruction of the Dig Tsho failure in Nepal. Monte Carlo analysis was used to sample the model parameter space, and morphological descriptors of the moraine breach were used to evaluate model performance. Multiple breach scenarios were produced by differing parameter ensembles associated with a range of breach initiation mechanisms, including overtopping waves and mechanical failure of the dam face. The material roughness coefficient was found to exert a dominant influence over model performance. The downstream routing of scenario-specific breach hydrographs revealed significant differences in the timing and extent of inundation. A GLUE-based methodology for constructing probabilistic maps of inundation extent, flow depth, and hazard is presented and provides a useful tool for communicating uncertainty in GLOF hazard assessment.
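
    The GLUE machinery itself is compact: sample parameter sets, score each run against observed breach descriptors with an informal likelihood, discard non-behavioural runs, and weight the survivors. The sketch below uses a toy one-line breach_model and invented parameter ranges in place of the dam-breach code used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def breach_model(roughness, erodibility):
    # toy stand-in: final breach width as a function of two parameters
    return 60.0 * erodibility / roughness * 0.02

observed_width = 40.0
n = 5000
roughness = rng.uniform(0.02, 0.08, n)       # Manning-type coefficient
erodibility = rng.uniform(0.5, 2.0, n)       # dimensionless, invented

widths = breach_model(roughness, erodibility)
score = 1.0 / (1.0 + (widths - observed_width) ** 2)   # informal likelihood
behavioural = score > np.quantile(score, 0.9)          # keep the best 10%

weights = score[behavioural] / score[behavioural].sum()
pred = widths[behavioural]
mean_w = (weights * pred).sum()              # likelihood-weighted estimate
lo, hi = np.percentile(pred, [5, 95])
print(f"kept {behavioural.sum()} behavioural runs; "
      f"weighted width {mean_w:.1f} m, 5-95% band {lo:.1f}-{hi:.1f} m")
```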

  2. Generic Sensor Failure Modeling for Cooperative Systems.

    PubMed

    Jäger, Georg; Zug, Sebastian; Casimiro, António

    2018-03-20

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.

  3. Generic Sensor Failure Modeling for Cooperative Systems

    PubMed Central

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application’s fault tolerance and thereby promises maintainability of such a system’s safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435

  4. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron) were utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package was used to build and design a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures which require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rates predicted by the feed-forward back-propagation model and the radial basis neural network model are in closer agreement with the actual field data than the failure rate predicted by the Weibull model. Finally, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures which required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
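
    The Weibull baseline of such a study can be sketched in a few lines with SciPy: fit shape and scale to times between failures and read the wear-out trend off the implied hazard. The synthetic failure times below stand in for the thirty years of field data, and the parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# synthetic operating hours to failure (invented shape/scale)
times = stats.weibull_min.rvs(c=1.4, scale=900.0, size=200, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(times, floc=0)
print(f"fitted shape k = {shape:.2f} (k > 1 suggests wear-out), "
      f"scale = {scale:.0f} h")

# hazard (instantaneous failure rate): h(t) = (k/scale) * (t/scale)**(k-1)
t = np.array([100.0, 500.0, 1000.0])
hazard = (shape / scale) * (t / scale) ** (shape - 1)
print("hazard at", t, "hours:", hazard)
```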

  5. Clinician time used for decision making: a best case workflow study using cardiovascular risk assessments and Ask Mayo Expert algorithmic care process models.

    PubMed

    North, Frederick; Fox, Samuel; Chaudhry, Rajeev

    2016-07-20

    Risk calculation is increasingly used in lipid management, congestive heart failure, and atrial fibrillation. The risk scores are then used for decisions about statin use, anticoagulation, and implantable defibrillator use. Calculating risks for patients and making decisions based on these risks is often done at the point of care and is an additional time burden for clinicians that can be decreased by automating the tasks and using clinical decision support. Using Morae Recorder software, we timed 30 healthcare providers tasked with calculating the overall risk of cardiovascular events, sudden death in heart failure, and thrombotic event risk in atrial fibrillation. The risk calculators used were the American College of Cardiology Atherosclerotic Cardiovascular Disease risk calculator (AHA-ASCVD risk), the Seattle Heart Failure Model (SHFM risk), and CHA2DS2VASc. We also timed the 30 providers using Ask Mayo Expert care process models for lipid management, heart failure management, and atrial fibrillation management based on the calculated risk scores. We used the Mayo Clinic primary care panel to estimate the time required to calculate risk for an entire panel. Mean provider times to complete the CHA2DS2VASc, AHA-ASCVD risk, and SHFM were 36, 45, and 171 s, respectively. For decision making about atrial fibrillation, lipids, and heart failure, the mean times (including risk calculations) were 85, 110, and 347 s, respectively. Even under best-case circumstances, providers take a significant amount of time to complete risk assessments. For a complete panel of patients this can add up to hours of time required to make decisions about prescribing statins, use of anticoagulation, and medications for heart failure. Informatics solutions are needed to capture data in the medical record and serve up automatically calculated risk assessments to physicians and other providers at the point of care.
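
    Of the three scores, CHA2DS2-VASc is the most mechanical to automate, which is why informatics solutions can absorb it entirely. A sketch of the published scoring rules follows; the function interface is our own.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    # Published items: CHF (1), hypertension (1), age >= 75 (2),
    # diabetes (1), prior stroke/TIA (2), vascular disease (1),
    # age 65-74 (1), female sex (1).
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if 65 <= age < 75 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

print(cha2ds2_vasc(age=72, female=True, chf=False, hypertension=True,
                   diabetes=False, stroke_or_tia=False,
                   vascular_disease=False))   # -> 3
```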

  6. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R^2_indiv or Kendall's τ at the individual level, and the R^2_trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the R^2_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure Kendall's τ, and treatment-by-trial interactions to measure the R^2_trial. The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also optionally allows adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Comparative Outcome Analysis of Penicillin-Based Versus Fluoroquinolone-Based Antibiotic Therapy for Community-Acquired Pneumonia

    PubMed Central

    Wang, Chi-Chuan; Lin, Chia-Hui; Lin, Kuan-Yin; Chuang, Yu-Chung; Sheng, Wang-Huei

    2016-01-01

    Abstract Community-acquired pneumonia (CAP) is a common but potentially life-threatening condition, but limited information exists on the effectiveness of fluoroquinolones compared to β-lactams in outpatient settings. We aimed to compare the effectiveness and outcomes of penicillins versus respiratory fluoroquinolones for CAP at outpatient clinics. This was a claims-based retrospective cohort study. Patients aged 20 years or older with at least 1 new pneumonia treatment episode were included, and the index penicillin or respiratory fluoroquinolone therapies for a pneumonia episode were at least 5 days in duration. The 2 groups were matched by propensity scores. Cox proportional hazards models were used to compare the rates of hospitalizations/emergency service visits and 30-day mortality. A logistic model was used to compare the likelihood of treatment failure between the 2 groups. After propensity score matching, 2622 matched pairs were included in the final model. The likelihood of treatment failure with fluoroquinolone-based therapy was lower than that with penicillin-based therapy (adjusted odds ratio [AOR], 0.88; 95% confidence interval [95% CI], 0.77–0.99), but no differences were found in hospitalization/emergency service (ES) visits (adjusted hazard ratio [HR], 1.27; 95% CI, 0.92–1.74) or 30-day mortality (adjusted HR, 0.69; 95% CI, 0.30–1.62) between the 2 groups. The likelihood of treatment failure with fluoroquinolone-based therapy was lower than that with penicillin-based therapy for CAP on an outpatient clinic basis. However, this effect may be marginal. Further investigation into the comparative effectiveness of these 2 treatment options is warranted. PMID:26871827

  8. Investigation of precipitate refinement in Mg alloys by an analytical composite failure model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabei, Ali; Li, Dongsheng; Lavender, Curt A.

    2015-10-01

    An analytical model is developed to simulate precipitate refinement in second-phase-strengthened magnesium alloys. The model is based on determination of the stress fields inside elliptical precipitates embedded in a rate-dependent inelastic matrix. The stress fields are used to determine the failure mode that governs the refinement behavior. Using an AZ31 Mg alloy as an example, the effects of applied load, aspect ratio, and orientation of the particle on the macroscopic failure of a single α-Mg17Al12 precipitate are studied. Additionally, a temperature-dependent version of the corresponding constitutive law is used to incorporate the effects of temperature. In plane strain compression, an extensional failure mode always fragments the precipitates. The critical strain rate at which the precipitates start to fail strongly depends on the orientation of the precipitate with respect to the loading direction. The results show that the higher the aspect ratio, the more easily the precipitate fractures. Precipitate shape is another factor influencing the failure response: in contrast to elliptical precipitates with high aspect ratio, spherical precipitates are strongly resistant to sectioning. In pure shear loading, in addition to the extensional mode of precipitate failure, a shearing mode may be activated depending on the orientation and aspect ratio of the precipitate. The effect of temperature in relation to strain rate was also verified for the plane strain compression and pure shear loading cases.

  9. The structural robustness of geographical networks against regional failure and their pre-optimization

    NASA Astrophysics Data System (ADS)

    Li, Yixiao; Zhang, Lin; Huang, Chaogeng; Shen, Bin

    2016-06-01

    Failures of real-world infrastructure networks due to natural disasters often originate in a certain region, but this feature has seldom been considered in theoretical models. In this article, we introduce a possible failure pattern of geographical networks, "regional failure", by which nodes and edges within a region malfunction. Based on a previous spatial network model (Louf et al., 2013), we study via simulations the robustness of geographical networks against regional failure, measured by the fraction of nodes that remain in the largest connected component. Even a small-area failure results in a large reduction of this robustness measure. Furthermore, we investigate two pre-deployed mechanisms to enhance robustness: one is to extend the cost-benefit growth mechanism of the original network model by adding more than one link in a growth step, and the other is to strengthen the interconnection of hubs in generated networks. We measure the robustness-enhancing effects of both mechanisms against their costs, i.e., the number of extra links and the induced geographical length. The latter mechanism is better than the former if a normal level of costs is considered; when costs exceed a certain level, the former has an advantage. Because the costs of extra links affect the investment decisions of real-world infrastructure networks, it is practical to enhance their robustness by adding more links between hubs. These results might help design robust geographical networks economically.
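
    The failure pattern is easy to reproduce: remove every node inside a disc and measure the surviving largest connected component. The sketch below uses networkx's plain random geometric graph rather than the cost-benefit growth model of Louf et al. that the paper builds on, so the numbers are illustrative only.

```python
import networkx as nx

G = nx.random_geometric_graph(300, 0.12, seed=1)   # nodes in the unit square
pos = nx.get_node_attributes(G, "pos")

def regional_failure(G, center, radius):
    # remove all nodes within the failure disc, return surviving
    # largest-component fraction relative to the original network
    dead = [v for v, (x, y) in pos.items()
            if (x - center[0])**2 + (y - center[1])**2 <= radius**2]
    H = G.copy()
    H.remove_nodes_from(dead)
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

for r in (0.05, 0.15, 0.30):
    frac = regional_failure(G, (0.5, 0.5), r)
    print(f"failure radius {r:.2f}: largest component keeps {frac:.0%} of nodes")
```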

  10. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    PubMed Central

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: The Failure Mode and Effects Analysis (FMEA) methodology is known as an important risk assessment tool and an accreditation requirement for many organizations. For prioritizing failures, the "risk priority number" (RPN) index is used, valued for its simplicity; it combines subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with healthcare systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focus group discussions were used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria for the severity of an incident (clinical effect, claim consequence, waste of time and financial loss), the occurrence of an incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers); then (2) the risk priority criteria were quantified using the RPN index (361 for the highest-ranked failure). Reassessment of the improved RPN scores by root cause analysis showed some variation. Conclusions: We concluded that standard criteria should be developed consistent with clinical terminology and specific scientific fields. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184
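
    For reference, the classical index the paper revises is simply RPN = severity x occurrence x detectability, each rated on a 1-10 scale. The sketch below uses invented surgical-ward failure modes, not the study's data.

```python
# Classical FMEA prioritization: RPN = S * O * D, each rated 1-10.
failures = {
    "patient ID band missing at holding area": (9, 3, 4),
    "surgical site not marked":                (8, 2, 3),
    "consent form incomplete":                 (5, 4, 3),
}

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

# rank failure modes from highest to lowest priority
for name, sod in sorted(failures.items(), key=lambda kv: -rpn(*kv[1])):
    print(f"RPN {rpn(*sod):4d}  {name}")
```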

  11. Incorporation of RAM techniques into simulation modeling

    NASA Astrophysics Data System (ADS)

    Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.

    1995-01-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model representing the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system; the FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task-based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures that have an impact on design requirements are collected. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.

  12. Diagnostic tolerance for missing sensor data

    NASA Technical Reports Server (NTRS)

    Scarl, Ethan A.

    1989-01-01

    For practical automated diagnostic systems to continue functioning after failure, they must not only be able to diagnose sensor failures but also be able to tolerate the absence of data from the faulty sensors. It is shown that conventional (associational) diagnostic methods will have combinatoric problems when trying to isolate faulty sensors, even if they adequately diagnose other components. Moreover, attempts to extend the operation of diagnostic capability past sensor failure will necessarily compound those difficulties. Model-based reasoning offers a structured alternative that has no special problems diagnosing faulty sensors and can operate gracefully when sensor data is missing.

  13. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events of May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is far lower than that required for that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for the considerable reduction of the seismic vulnerability of such kinds of historical structures.

  14. Risk assessment for enterprise resource planning (ERP) system implementations: a fault tree analysis approach

    NASA Astrophysics Data System (ADS)

    Zeng, Yajun; Skibniewski, Miroslaw J.

    2013-08-01

    Enterprise resource planning (ERP) system implementations are often characterised by large capital outlay, long implementation duration, and high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have mostly focused on meeting project budget and schedule objectives, the proposed approach addresses the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system usage failure and to quantify the impact of critical component failures or critical risk events in the implementation process.
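
    The quantitative core of fault tree analysis is small: basic-event probabilities propagate through AND gates (multiplication) and OR gates (complement of the product of complements), assuming independent events. The tree and probabilities below are an invented ERP example, not the paper's model.

```python
from math import prod

def AND(*ps):
    # all inputs must occur (independent events)
    return prod(ps)

def OR(*ps):
    # at least one input occurs (independent events)
    return 1 - prod(1 - p for p in ps)

# invented basic-event probabilities for an ERP implementation
p_master_data_dirty = 0.10
p_users_untrained   = 0.15
p_vendor_delay      = 0.05
p_scope_creep       = 0.12

p_config_failure   = AND(p_vendor_delay, p_scope_creep)        # both must occur
p_adoption_failure = OR(p_master_data_dirty, p_users_untrained)
p_top = OR(p_config_failure, p_adoption_failure)               # top event
print(f"P(ERP usage failure) = {p_top:.3f}")
```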

  15. Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause

    NASA Astrophysics Data System (ADS)

    Yang, Y. P.; Mohr, W. C.

    2015-11-01

    Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses, including a heat transfer analysis and a creep-fatigue analysis, were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line. Typically the roll has a short life (<1 year), which has been a long-standing problem. The failure occurred in the weld joining an end bell to a roll shell and resulted in the complete 360° separation of the end bell from the roll shell. The heat transfer analysis was conducted to predict the temperature history of the roll by modeling heat convection from hot air inside the furnace. The creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a creep mechanism. The difference in material properties between the filler metal and the base metal is the root cause of the roll failure, inducing higher creep strain and stress at the interface between the weld and the heat-affected zone (HAZ).

  16. Cycle life test and failure model of nickel-hydrogen cells

    NASA Technical Reports Server (NTRS)

    Smithrick, J. J.

    1983-01-01

    Six-ampere-hour individual pressure vessel nickel-hydrogen cells were charge/discharge cycled to failure. Failure, as used here, is defined to occur when the end-of-discharge voltage degraded to 0.9 volts. The cells were cycled under a low-earth-orbit regime to a deep depth of discharge (80 percent of rated ampere-hour capacity). Both cell designs were fabricated by the same manufacturer and represent the current state of the art. A failure model was advanced which suggests both cell designs have inadequate volume tolerance characteristics. The limited existing database at deep depth of discharge (DOD) was expanded. Two cells of each design were cycled. One COMSAT cell failed at cycle 1712 and the other failed at cycle 1875. For the Air Force/Hughes cells, one cell failed at cycle 2250 and the other failed at cycle 2638. All cells, of both designs, failed due to low end-of-discharge voltage (0.9 volts); no cell failed due to electrical shorts. After cell failure, three different reconditioning tests (deep discharge, physical reorientation, and open-circuit voltage stand) were conducted on all cells of each design. A fourth reconditioning test (electrolyte addition) was conducted on one cell of each design. In addition, post-cycle cell teardown and failure analysis were performed on the one cell of each design which did not have electrolyte added after failure.

  17. A Simulation Model for Setting Terms for Performance Based Contract Terms

    DTIC Science & Technology

    2010-05-01

    ...torpedo self-noise and the use of ruggedized, embedded, digital microprocessors. The latter capability made it possible for digitally controlled... The dependent metrics for such inventories are: System Reliability, Product Reliability, Operational Availability, Mean Time to Repair (MTTR), Mean Time to Failure (MTTF), Mean Logistics Delay Time (MLDT), Mean Supply Response Time (MSRT), and Mean Accumulated Down Time (MADT)...

  18. Functionality, Complexity, and Approaches to Assessment of Resilience Under Constrained Energy and Information

    DTIC Science & Technology

    2015-03-26

    ...albeit powerful, method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based... cells, telephone calls, and sexual contacts approach power-law distributions. [48] Networks in general are robust against random failures, but targeted failures can have powerful effects – provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all...

  19. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2014-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
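
    The named ingredient that is easiest to isolate is the redescending M-estimator with Tukey bisquare weights. The sketch below computes a robust location estimate by iteratively reweighted least squares on synthetic sensor data with 230 simulated failures; the concentration step on a Mahalanobis-trimmed subset used in the actual estimator is omitted.

```python
import numpy as np

def tukey_location(x, c=4.685, iters=50):
    # redescending M-estimate of location via IRLS with bisquare weights
    mu = np.median(x)                                 # robust starting point
    for _ in range(iters):
        s = 1.4826 * np.median(np.abs(x - mu)) or 1.0 # MAD scale estimate
        u = (x - mu) / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # outliers get 0
        if w.sum() == 0:
            break
        mu = np.average(x, weights=w)
    return mu

rng = np.random.default_rng(3)
clean = rng.normal(2.0, 0.1, 1500)        # healthy sensor readings
faulty = rng.normal(25.0, 5.0, 230)       # 230 simulated sensor failures
x = np.concatenate([clean, faulty])
print(f"sample mean = {x.mean():.2f}, robust location = {tukey_location(x):.2f}")
```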

  20. Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2016-01-01

    The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.

  1. The Role of Crack Formation in Chevron-Notched Four-Point Bend Specimens

    NASA Technical Reports Server (NTRS)

    Calomino, Anthony M.; Ghosn, Louis J.

    1994-01-01

    The failure sequence following crack formation in a chevron-notched four-point bend specimen is examined in a parametric study using the Bluhm slice synthesis model. Premature failure resulting from crack formation forces which exceed those required to propagate a crack beyond α_min is examined together with the critical crack length and critical crack front length. An energy-based approach is used to establish factors which forecast the tendency of such premature failure due to crack formation for any selected chevron-notched geometry. A comparative study reveals that, for constant values of α_1 and α_0, the dimensionless beam compliance and stress intensity factor are essentially independent of specimen width and thickness. The chevron tip position α_0 has its primary effect on the force required to initiate a sharp crack. Small values of α_0 maximize the stable region length; however, the premature failure tendency is also high for smaller α_0 values. Improvements in premature failure resistance can be realized for larger values of α_0 with only a minor reduction in the stable region length. The stable region length is also maximized for larger chevron base positions α_1, but the chance of premature failure is also raised. Smaller base positions improve the premature failure resistance with only minor decreases in the stable region length. Chevron geometries having a good balance of premature failure resistance, stable region length, and crack front length are 0.20 ≤ α_0 ≤ 0.30 and 0.70 ≤ α_1 ≤ 0.80.

  2. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment supports not only system architecture development but also systems safety, reliability, and risk analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases - structured hierarchical arguments or models - are emerging as a basis for supporting a comprehensive viewpoint from which to support Model Based Mission Assurance (MBMA).

  3. Analysis of flood hazard under consideration of dike breaches

    NASA Astrophysics Data System (ADS)

    Vorogushyn, S.; Apel, H.; Lindenschmidt, K.-E.; Merz, B.

    2009-04-01

    The study focuses on the development and application of a new modelling system which allows a comprehensive flood hazard assessment along diked river reaches under consideration of dike failures. The proposed Inundation Hazard Assessment Model (IHAM) represents a hybrid probabilistic-deterministic model. It comprises three models interactively coupled at runtime. These are: (1) 1D unsteady hydrodynamic model of river channel and floodplain flow between dikes, (2) probabilistic dike breach model which determines possible dike breach locations, breach widths and breach outflow discharges, and (3) 2D raster-based diffusion wave storage cell model of the hinterland areas behind the dikes. Due to the unsteady nature of the 1D and 2D coupled models, the dependence between hydraulic load at various locations along the reach is explicitly considered. The probabilistic dike breach model describes dike failures due to three failure mechanisms: overtopping, piping and slope instability caused by the seepage flow through the dike core (micro-instability). Dike failures for each mechanism are simulated based on fragility functions. The probability of breach is conditioned by the uncertainty in geometrical and geotechnical dike parameters. The 2D storage cell model driven by the breach outflow boundary conditions computes an extended spectrum of flood intensity indicators such as water depth, flow velocity, impulse, inundation duration and rate of water rise. IHAM is embedded in a Monte Carlo simulation in order to account for the natural variability of the flood generation processes reflected in the form of input hydrographs and for the randomness of dike failures given by breach locations, times and widths. The scenario calculations for the developed synthetic input hydrographs for the main river and tributary were carried out for floods with return periods of T = 100; 200; 500; 1000 a. Based on the modelling results, probabilistic dike hazard maps could be generated that indicate the failure probability of each discretised dike section for every scenario magnitude. Besides the binary inundation patterns that indicate the probability of raster cells being inundated, IHAM generates probabilistic flood hazard maps. These maps display spatial patterns of the considered flood intensity indicators and their associated return periods. The probabilistic nature of IHAM allows for the generation of percentile flood hazard maps that indicate the median and uncertainty bounds of the flood intensity indicators. The uncertainty results from the natural variability of the flow hydrographs and randomness of dike breach processes. The same uncertainty sources determine the uncertainty in the flow hydrographs along the study reach. The simulations showed that the dike breach stochasticity has an increasing impact on hydrograph uncertainty in downstream direction. Whereas in the upstream part of the reach the hydrograph uncertainty is mainly stipulated by the variability of the flood wave form, the dike failures strongly shape the uncertainty boundaries in the downstream part of the reach. Finally, scenarios of polder deployment for the extreme floods with T = 200; 500; 1000 a were simulated with IHAM. The results indicate a rather weak reduction of the mean and median flow hydrographs in the river channel. 
However, the capping of the flow peaks resulted in a considerable reduction of the overtopping failures downstream of the polder, with a simultaneous slight increase of the piping and slope micro-instability frequencies explained by a longer-lasting average impoundment. The developed IHAM simulation system represents a new scientific tool for studying fluvial inundation dynamics under extreme conditions, incorporating the effects of technical flood protection measures. With its major outputs in the form of novel probabilistic inundation and dike hazard maps, the IHAM system has a high practical value for decision support in flood management.
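
    The fragility-function idea at the heart of the breach model can be sketched compactly: each dike section draws its failure state from a load-conditioned probability curve. The logistic curve, its parameters, and the section loads below are invented for illustration; IHAM's fragility functions for overtopping, piping, and micro-instability are derived from geometrical and geotechnical dike properties.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_fail_overtopping(head_m):
    # invented logistic fragility curve: ~50% breach probability at
    # 0.3 m of overtopping head above the dike crest
    return 1.0 / (1.0 + np.exp(-(head_m - 0.3) / 0.08))

sections = 20
head = rng.normal(0.25, 0.15, sections).clip(min=0.0)   # load per section
breached = rng.random(sections) < p_fail_overtopping(head)
print(f"{breached.sum()} of {sections} dike sections breach "
      f"in this Monte Carlo realization")
```

    Embedding this draw in a Monte Carlo loop over input hydrographs is what produces the failure-probability maps per dike section described above.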

  4. Simulating stick-slip failure in a sheared granular layer using a physics-based constitutive model

    DOE PAGES

    Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.; ...

    2017-01-14

    In this paper, we model laboratory earthquakes in a biaxial shear apparatus using the Shear-Transformation-Zone (STZ) theory of dense granular flow. The theory is based on the observation that slip events in a granular layer are attributed to grain rearrangement at soft spots called STZs, which can be characterized according to principles of statistical physics. We model lab data on granular shear using STZ theory and document direct connections between the STZ approach and rate-and-state friction. We discuss the stability transition from stable shear to stick-slip failure and show that stick slip is predicted by STZ theory when the applied shear load exceeds a threshold value that is modulated by elastic stiffness and frictional rheology. Finally, we show that STZ theory mimics fault zone dilation during the stick phase, consistent with laboratory observations.

  5. Numerical Modelling of the Compressive and Tensile Response of Glass and Ceramic under High Pressure Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Clegg, Richard A.; Hayhurst, Colin J.

    1999-06-01

    Ceramic materials, including glass, are commonly used as ballistic protection materials. The response of a ceramic to impact, perforation and penetration is complex and difficult and/or expensive to instrument for obtaining detailed physical data. This paper demonstrates how a hydrocode, such as AUTODYN, can be used to aid in the understanding of the response of brittle materials to high pressure impact loading and thus promote an efficient and cost-effective design process. Hydrocode simulations cannot be made without appropriate characterisation of the material. Because of the complexity of the response of ceramic materials, this often requires a number of complex material tests. Here we present a methodology for using the results of flyer plate tests, in conjunction with numerical simulations, to derive input to the Johnson-Holmquist material model for ceramics. Most of the research effort in relation to the development of hydrocode material models for ceramics has concentrated on the material behaviour under compression and shear. While the penetration process is dominated by these aspects of the material response, the final damaged state of the material can be significantly influenced by the tensile behaviour. Modelling of the final damage state is important since this is often the only physical information which is available. In this paper we present a unique implementation, in a hydrocode, for improved modelling of brittle materials in the tensile regime. Tensile failure initiation is based on any combination of principal stress or strain, while the post-failure tensile response of the material is controlled through a Rankine plasticity damaging failure surface. The tensile failure surface can be combined with any of the traditional plasticity and/or compressive damage models. Finally, the models and data are applied in both traditional grid-based Lagrangian and Eulerian solution techniques and the relatively new SPH (Smooth Particle Hydrodynamics) meshless technique. Simulations of long rod impacts onto ceramic-faced armour and hypervelocity impacts on glass solar array space structures are presented and compared with experiments.

  6. An analysis of fiber-matrix interface failure stresses for a range of ply stress states

    NASA Technical Reports Server (NTRS)

    Crews, J. H.; Naik, R. A.; Lubowinski, S. J.

    1993-01-01

A graphite/bismaleimide laminate was prepared without the usual fiber treatment and was tested over a wide range of stress states to measure its ply cracking strength. These tests were conducted using off-axis flexure specimens and produced fiber-matrix interface failure data over a correspondingly wide range of interface stress states. The absence of fiber treatment weakened the fiber-matrix interfaces and allowed these tests to be conducted at load levels that did not yield the matrix. An elastic micromechanics computer code was used to calculate the fiber-matrix interface stresses at failure. Two different fiber-array models (square and diamond) were used in these calculations to analyze the effects of fiber arrangement as well as stress state on the critical interface stresses at failure. This study showed that both fiber-array models were needed to analyze interface stresses over the range of stress states. A linear equation provided a close fit to these critical stress combinations and, thereby, provided a fiber-matrix interface failure criterion. These results suggest that prediction procedures for laminate ply cracking can be based on micromechanics stress analyses and appropriate fiber-matrix interface failure criteria. However, typical structural laminates may require elastoplastic stress analysis procedures that account for matrix yielding, especially for shear-dominated ply stress states.
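
    A minimal sketch of the kind of linear interface failure criterion the authors fit, using invented failure-stress pairs and a least-squares solve (the paper's actual data and strengths are not reproduced here):

    ```python
    import numpy as np

    # Hypothetical interface stresses at failure (MPa); invented for illustration.
    sigma_n = np.array([60.0, 45.0, 30.0, 10.0])   # normal stress at failure
    tau     = np.array([ 5.0, 20.0, 32.0, 45.0])   # shear stress at failure

    # Fit sigma_n/S_n + tau/S_s = 1 in the least-squares sense.
    A = np.column_stack([sigma_n, tau])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(sigma_n), rcond=None)
    S_n, S_s = 1.0 / coef
    print(f"fitted interface strengths: S_n = {S_n:.1f} MPa, S_s = {S_s:.1f} MPa")

    def fails(sn, t):
        # Failure predicted when the linear criterion index reaches 1.
        return sn / S_n + t / S_s >= 1.0

    print("interface failure at (40, 25) MPa?", fails(40.0, 25.0))
    ```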

  7. Modeling the biomechanical and injury response of human liver parenchyma under tensile loading.

    PubMed

    Untaroiu, Costin D; Lu, Yuan-Chiao; Siripurapu, Sundeep K; Kemper, Andrew R

    2015-01-01

The rapid advancement in computational power has made human finite element (FE) models one of the most efficient tools for assessing the risk of abdominal injuries in a crash event. In this study, specimen-specific FE models were employed to quantify material and failure properties of human liver parenchyma using a FE optimization approach. Uniaxial tensile tests were performed on 34 parenchyma coupon specimens prepared from two fresh human livers. Each specimen was tested to failure at one of four loading rates (0.01 s^{-1}, 0.1 s^{-1}, 1 s^{-1}, and 10 s^{-1}) to investigate the effects of rate dependency on the biomechanical and failure response of liver parenchyma. Each test was simulated by prescribing the end displacements of specimen-specific FE models based on the corresponding test data. The parameters of a first-order Ogden material model were identified for each specimen by a FE optimization approach while simulating the pre-tear loading region. The mean material model parameters were then determined for each loading rate from the characteristic averages of the stress-strain curves, and a stochastic optimization approach was utilized to determine the standard deviations of the material model parameters. A hyperelastic material model using a tabulated formulation for rate effects showed good predictions in terms of tensile material properties of human liver parenchyma. Furthermore, the tissue tearing was numerically simulated using a cohesive zone modeling (CZM) approach. A layer of cohesive elements was added at the failure location, and the CZM parameters were identified by fitting the post-tear force-time history recorded in each test. The results show that the proposed approach is able to capture both the biomechanical and failure response, and accurately model the overall force-deflection response of liver parenchyma over a large range of tensile loading rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
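
    The specimen-level identification step can be sketched as a least-squares fit of the first-order Ogden uniaxial nominal-stress expression; the data below are synthetic placeholders, not the liver-parenchyma measurements from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ogden_nominal_stress(stretch, mu, alpha):
        # Incompressible, uniaxial first-order Ogden nominal stress.
        return mu * (stretch ** (alpha - 1.0) - stretch ** (-alpha / 2.0 - 1.0))

    lam = np.linspace(1.0, 1.3, 20)
    data = ogden_nominal_stress(lam, mu=8.0, alpha=12.0)         # pretend experiment
    data += np.random.default_rng(0).normal(0, 0.5, lam.size)    # measurement noise

    (mu_fit, alpha_fit), _ = curve_fit(ogden_nominal_stress, lam, data, p0=(1.0, 5.0))
    print(f"identified mu = {mu_fit:.2f} kPa, alpha = {alpha_fit:.2f}")
    ```

    In the paper this identification is done through FE optimization against the full specimen geometry rather than a closed-form curve, but the fitting idea is the same.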

  8. Does Bruxism Contribute to Dental Implant Failure? A Systematic Review and Meta-Analysis.

    PubMed

    Zhou, Yi; Gao, Jinxia; Luo, Le; Wang, Yining

    2016-04-01

Bruxism has usually been considered a contraindication for oral implants, but the causal relationship between bruxism and dental implant failure remains controversial in the existing literature. This meta-analysis was performed to investigate the relationship between them. This review conducted an electronic systematic literature search in MEDLINE (PubMed) and EmBase in November 2013 without time and language restrictions. Meanwhile, a hand search of all the relevant references of included studies was also conducted. Study information extraction and methodological quality assessments were accomplished by two reviewers independently. A discussion ensued if any disagreement occurred, and unresolved issues were solved by consulting a third reviewer. Methodological quality was assessed by using the Newcastle-Ottawa Scale tool. Odds ratio (OR) with 95% confidence interval (CI) was pooled to estimate the relative effect of bruxism on dental implant failures. A fixed effects model was used initially; if the heterogeneity was high, a random effects model was chosen for meta-analysis. Statistical analyses were carried out by using Review Manager 5.1. In this meta-analysis review, extracted data were classified into two groups based on different units: the number of prostheses (group A) and the number of patients (group B). In group A, the total pooled OR of bruxers versus nonbruxers for all subgroups was 4.72 (95% CI: 2.66-8.36, p = .07). In group B, the total pooled OR of bruxers versus nonbruxers for all subgroups was 3.83 (95% CI: 2.12-6.94, p = .22). In contrast to nonbruxers, prostheses in bruxers had a higher failure rate. This suggests that bruxism contributes to dental implant technical/biological complications and plays a role in dental implant failure. © 2015 Wiley Periodicals, Inc.
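
    The pooled odds ratios quoted above come from standard inverse-variance fixed-effects pooling; a minimal sketch with invented per-study values (not the studies in the review):

    ```python
    import numpy as np

    # Hypothetical per-study odds ratios and 95% confidence intervals.
    or_vals = np.array([3.2, 5.1, 4.0])
    ci_low  = np.array([1.4, 2.0, 1.8])
    ci_high = np.array([7.3, 13.0, 8.9])

    log_or = np.log(or_vals)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE from CI width
    w = 1.0 / se ** 2                                      # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
    print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```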

  9. Family Caregiver Contribution to Self-care of Heart Failure: An Application of the Information-Motivation-Behavioral Skills Model.

    PubMed

    Chen, Yuxia; Zou, Huijing; Zhang, Yanting; Fang, Wenjie; Fan, Xiuzhen

    Adherence to self-care behaviors improves outcomes of patients with heart failure (HF). Caregivers play an important role in contributing to self-care. We aimed to explore the relationships among HF knowledge, perceived control, social support, and family caregiver contribution to self-care of HF, based on the Information-Motivation-Behavioral Skills Model. Two hundred forty-seven dyads of eligible patients with HF and family caregivers were recruited from a general hospital in China. Structural equation modeling was used to analyze the data obtained with the Caregiver Contribution to Self-care of Heart Failure Index, the Heart Failure Knowledge Test, the Control Attitudes Scale, and the Social Support Rating Scale. In this model, caregiver contribution to self-care maintenance was positively affected by perceived control (β = .148, P = .015) and caregiver confidence in contribution to self-care (β = .293, P < .001). Caregiver contribution to self-care management was positively affected by HF knowledge (β = .270, P < .001), perceived control (β = .140, P = .007), social support (β = .123, P = .019), caregiver confidence in contribution to self-care (β = .328, P < .001), and caregiver contribution to self-care maintenance (β = .148, P = .006). Caregiver confidence in contribution to self-care was positively affected by HF knowledge (β = .334, P < .001). Heart failure knowledge, perceived control, and social support facilitated family caregiver contribution to self-care of HF. Targeted interventions that consider these variables may effectively improve family caregiver contributions to self-care.

  10. The Prognostic Accuracy of Suggested Predictors of Failure of Medical Management in Patients With Nontuberculous Spinal Epidural Abscess.

    PubMed

    Stratton, Alexandra; Faris, Peter; Thomas, Kenneth

    2018-05-01

    Retrospective cohort study. To test the external validity of the 2 published prediction criteria for failure of medical management in patients with spinal epidural abscess (SEA). Patients with SEA over a 10-year period at a tertiary care center were identified using ICD-10 (International Classification of Diseases, 10th Revision) diagnostic codes; electronic and paper charts were reviewed. The incidence of SEA and the proportion of patients with SEA that were treated medically were calculated. The rate of failure of medical management was determined. The published prediction models were applied to our data to determine how predictive they were of failure in our cohort. A total of 550 patients were identified using ICD-10 codes, 160 of whom had a magnetic resonance imaging-confirmed diagnosis of SEA. The incidence of SEA was 16 patients per year. Seventy-five patients were found to be intentionally managed medically and were included in the analysis. Thirteen of these 75 patients failed medical management (17%). Based on the published prediction criteria, 26% (Kim et al) and 45% (Patel et al) of our patients were expected to fail. Published prediction models for failure of medical management of SEA were not valid in our cohort. However, once calibrated to our cohort, Patel's model consisting of positive blood culture, presence of diabetes, white blood cells >12.5, and C-reactive protein >115 was the better model for our data.
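
    A hedged sketch of how such a prediction rule is applied to a validation cohort; the logistic coefficients, intercept, and patients below are hypothetical, as the abstract does not report the published parameterization:

    ```python
    import numpy as np

    # Hypothetical logistic weights for the four dichotomized predictors
    # named above (positive blood culture, diabetes, WBC > 12.5, CRP > 115).
    coef = {"pos_blood_culture": 1.0, "diabetes": 0.8,
            "wbc_gt_12_5": 0.7, "crp_gt_115": 0.9}
    intercept = -2.5   # recalibrated to the local cohort (assumed value)

    def predicted_risk(patient):
        z = intercept + sum(coef[k] * patient[k] for k in coef)
        return 1.0 / (1.0 + np.exp(-z))     # logistic link

    cohort = [
        {"pos_blood_culture": 1, "diabetes": 0, "wbc_gt_12_5": 1, "crp_gt_115": 1},
        {"pos_blood_culture": 0, "diabetes": 0, "wbc_gt_12_5": 0, "crp_gt_115": 1},
    ]
    risks = [predicted_risk(p) for p in cohort]
    print(f"expected failures: {sum(risks):.2f} of {len(cohort)}")
    ```

    External validation then compares the expected failure count against the observed count (13 of 75 in this cohort).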

  11. Optimization of geothermal well trajectory in order to minimize borehole failure

    NASA Astrophysics Data System (ADS)

    Dahrabou, A.; Valley, B.; Ladner, F.; Guinot, F.; Meier, P.

    2017-12-01

In projects based on the Enhanced Geothermal System (EGS) principle, deep boreholes are drilled into low-permeability rock masses. As part of the completion operations, the permeability of existing fractures in the rock mass is enhanced by injecting large volumes of water. These stimulation treatments aim at achieving enough water circulation for heat extraction at commercial rates, which makes the stimulation operations critical to the project's success. The accurate placement of the stimulation treatments requires well completion with effective zonal isolation, and wellbore stability is a prerequisite to all zonal isolation techniques, be it packer sealing or cement placement. In this project, a workflow allowing a fast decision-making process for selecting an optimal well trajectory for EGS projects is developed. The well is first drilled vertically; then, based on logging data, which are costly (100 KCHF/day), the direction in which the strongly deviated borehole section will be drilled needs to be determined in order to optimize borehole stability and to intersect the highest number of fractures that are oriented favorably for stimulation. The workflow applies to crystalline rock and includes an uncertainty and risk assessment framework. An initial sensitivity study was performed to identify the most influential parameters on borehole stability. The main challenge in these analyses is that the strength and stress profiles are unknown independently. Calibration of a geomechanical model on the observed borehole failure has been performed using data from the Basel geothermal well BS-1. In a first approximation, a purely elastic-static analytical solution in combination with a purely cohesive failure criterion was used, as it provides the most consistent prediction across failure indicators. A systematic analysis of the uncertainty on all parameters was performed to assess the reliability of the optimal trajectory selection. For each drilling scenario, the failure probability and the associated risks are computed stochastically. In addition, model uncertainty is assessed by confronting various failure modelling approaches with the available failure data from the Basel project. Together, these results form the basis of an integrated workflow optimizing geothermal (EGS) well trajectory.
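
    The elastic-static screening step can be illustrated with the classical Kirsch solution for hoop stress at the wall of a vertical borehole, checked against a purely cohesive strength; stress magnitudes below are illustrative assumptions, not Basel BS-1 values:

    ```python
    import numpy as np

    S_H, S_h = 90.0, 60.0   # far-field max/min horizontal stresses, MPa (assumed)
    p_w = 30.0              # wellbore fluid pressure, MPa (assumed)
    C0 = 150.0              # purely cohesive rock strength, MPa (assumed)

    theta = np.radians(np.arange(360))      # angle from the S_H direction
    # Kirsch hoop stress at the wall of a circular vertical hole:
    sigma_theta = S_H + S_h - 2.0 * (S_H - S_h) * np.cos(2.0 * theta) - p_w

    breakout = sigma_theta > C0             # wall segments predicted to fail
    print(f"max hoop stress {sigma_theta.max():.0f} MPa; "
          f"{breakout.sum()} deg of wall in breakout")
    ```

    Repeating such a check over sampled stress/strength realizations and candidate trajectories gives the stochastic failure probabilities the workflow relies on.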

  12. Fiber-based modeling of in situ ankle ligaments with consideration of progressive failure.

    PubMed

    Nie, Bingbing; Forman, Jason L; Panzer, Matthew B; Mait, Alexander R; Donlon, John-Paul; Kent, Richard W

    2017-08-16

    Ligament sprains account for a majority of injuries to the foot and ankle complex among athletic populations. The infeasibility of measuring the in situ response and load paths of individual ligaments has precluded a complete characterization of their mechanical behavior via experiment. In the present study a fiber-based modeling approach of in situ ankle ligaments was developed and validated for determining the heterogeneous force-elongation characteristics and the consequent injury patterns. Nine major ankle ligaments were modeled as bundles of discrete elements, corresponding functionally to the structure of collagen fibers. To incorporate the progressive nature of ligamentous injury, the limit strain at the occurrence of fiber failure was described by a distribution function ranging from 12% to 18% along the width of the insertion site. The model was validated by comparing the structural kinetic and kinematic response obtained experimentally and computationally under well-controlled foot rotations. The simulation results replicated the 6 degree-of-freedom bony motion and ligamentous injuries and, by implication, the in situ deformations of the ligaments. Gross stiffness of the whole ligament derived from the fibers was comparable to existing experimental data. The present modeling approach provides a biomechanically realistic, interpretable and computationally efficient way to characterize the in situ ligament slack, sequential and heterogeneous uncrimping of collagen fascicles and failure propagation as the external load is applied. Applications of this model include functional ankle joint mechanics, injury prevention and countermeasure design for athletes. Copyright © 2017 Elsevier Ltd. All rights reserved.
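
    A minimal fiber-bundle sketch of the approach: parallel fiber elements whose limit strains vary linearly from 12% to 18% across the insertion width, so the bundle force softens progressively as fibers fail (the per-fiber stiffness is a placeholder, not a value from the paper):

    ```python
    import numpy as np

    n_fibers = 50
    k_fiber = 10.0                                  # N per unit strain, per fiber (assumed)
    eps_fail = np.linspace(0.12, 0.18, n_fibers)    # limit strains across the width

    def bundle_force(strain):
        alive = eps_fail > strain                   # fibers that have not yet failed
        return k_fiber * strain * alive.sum()

    for eps in (0.05, 0.12, 0.15, 0.18):
        print(f"strain {eps:.2f}: force {bundle_force(eps):.1f} N")
    ```

    The gradual loss of fibers between 12% and 18% strain produces the sequential uncrimping-then-failure character described in the abstract, rather than a single brittle drop.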

  13. Comparison of Models of Stress Relaxation in Failure Analysis for Connectors under Long-term Storage

    NASA Astrophysics Data System (ADS)

    Zhou, Yilin; Wan, Mengru

    2018-03-01

Reliability requirements for system equipment under long-term storage are especially stringent for military products, so the connectors in such equipment must correspondingly achieve a long storage life. In this paper, the effects of stress relaxation of the elastic components on the electrical contact of connectors during long-term storage were studied from the standpoint of failure mechanisms and degradation models. A wire spring connector was taken as an example to discuss a life prediction method for the electrical contacts of connectors based on stress relaxation degradation under long-term storage.
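
    One common form such a life-prediction method can take is an Arrhenius-accelerated relaxation model; the sketch below (all constants hypothetical, not taken from the paper) estimates life as the time for normalized contact stress to relax to a holding threshold:

    ```python
    import numpy as np

    k_B = 8.617e-5     # Boltzmann constant, eV/K
    E_a = 0.8          # activation energy, eV (assumed)

    def relaxation_time(T_kelvin, tau0=1e-9):
        # Arrhenius time constant in hours; tau0 is a hypothetical prefactor.
        return tau0 * np.exp(E_a / (k_B * T_kelvin))

    def life_hours(T_kelvin, retain=0.7):
        # Time for normalized contact stress exp(-t/tau) to decay to `retain`.
        return -np.log(retain) * relaxation_time(T_kelvin)

    for T in (298.0, 323.0, 348.0):
        print(f"{T - 273:.0f} C: estimated storage life ~ {life_hours(T):.2e} h")
    ```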

  14. Incorporation of Damage and Failure into an Orthotropic Elasto-Plastic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA(Registered Trademark) is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage within the material model, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites composed of a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate and temperature dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of experimental tests.
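
    For orientation, the plane-stress Tsai-Wu failure index that the MAT 213 yield function generalizes can be evaluated in a few lines; the strengths below are illustrative placeholders:

    ```python
    import numpy as np

    # Hypothetical lamina strengths (MPa).
    Xt, Xc = 2000.0, 1200.0   # fiber-direction tension / compression
    Yt, Yc = 50.0, 200.0      # transverse tension / compression
    S = 80.0                  # in-plane shear

    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S ** 2
    F12 = -0.5 * np.sqrt(F11 * F22)   # common default interaction term

    def tsai_wu(s1, s2, t12):
        # Failure predicted when the index reaches 1.
        return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
                + F66 * t12 ** 2 + 2 * F12 * s1 * s2)

    print(f"Tsai-Wu index: {tsai_wu(800.0, 20.0, 40.0):.2f}")
    ```

    MAT 213 replaces the fixed strength constants with tabulated, rate- and temperature-dependent data and uses the surface as a yield function with a non-associative flow rule rather than as a one-shot failure check.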

  15. Unitized Stiffened Composite Textile Panels: Manufacturing, Characterization, Experiments, and Analysis

    NASA Astrophysics Data System (ADS)

    Kosztowny, Cyrus Joseph Robert

Use of carbon fiber textiles in complex manufacturing methods creates new implementations of structural components by increasing performance, lowering manufacturing costs, and making composites overall more attractive across industry. Advantages of textile composites include high area output, ease of handling during the manufacturing process, lower production costs per material used resulting from automation, and streamlined post-manufacturing assembly, because significantly more complex geometries such as stiffened shell structures can be manufactured with fewer pieces. One significant challenge with using stiffened composite structures is stiffener separation under compression. Under axial compression loading, catastrophic structural failure due to stiffeners separating from the shell skin has frequently been observed. Characterizing stiffener separation behavior is often costly computationally and experimentally. The objectives of this research are to demonstrate that unitized stiffened textile composite panels can be manufactured to produce quality test specimens, that existing characterization techniques applied to state-of-the-art high-performance composites provide valuable information in modeling such structures, that the unitized structure concept successfully removes stiffener separation as a primary structural failure mode, and that modeling textile material failure modes is sufficient to accurately capture postbuckling and final failure responses of the stiffened structures. The stiffened panels in this study have taken the integrally stiffened concept to an extent such that the stiffeners and skin are manufactured at the same time, as one single piece, and from the same composite textile layers. Stiffener separation is shown to be removed as a primary structural failure mode for unitized stiffened composite textile panels loaded under axial compression well into the postbuckling regime. Instead of stiffener separation, a material damaging and failure model effectively captures local post-peak material response via incorporating a mesoscale model using a multiscaling framework with a smeared crack element-based failure model in the macroscale stiffened panel. Material damage behavior is characterized by simple experimental tests and incorporated into the post-peak stiffness degradation law in the smeared crack implementation. Computational modeling results are in overall excellent agreement with the experimental responses.

  16. Cascading failures in interdependent systems under a flow redistribution model

    NASA Astrophysics Data System (ADS)

    Zhang, Yingrui; Arenas, Alex; Yaǧan, Osman

    2018-02-01

Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i}, C_{A,i}}_{i=1}^{n} and {L_{B,i}, C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a p_1 fraction of lines in A and a p_2 fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a, b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.
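
    The redistribution rule is easy to simulate directly; a Monte Carlo sketch of one cascade, with uniform loads, a fixed free capacity, and the simplification that shed load is dropped when a side has no survivors (all values illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    load = {"A": rng.uniform(0.5, 1.0, n), "B": rng.uniform(0.5, 1.0, n)}
    cap = {s: load[s] + 0.25 for s in load}        # uniform free capacity
    alive = {s: np.ones(n, bool) for s in load}
    send = {"A": 0.4, "B": 0.4}                    # coupling coefficients a, b
    other = {"A": "B", "B": "A"}

    for s, p in (("A", 0.1), ("B", 0.1)):          # random attack on p1, p2 fractions
        alive[s][rng.random(n) < p] = False

    new_fail = {s: ~alive[s] for s in load}
    while any(f.any() for f in new_fail.values()):
        shed = {s: load[s][new_fail[s]].sum() for s in load}
        for s in load:
            load[s][new_fail[s]] = 0.0
        for s in load:                             # redistribute the shed load
            o = other[s]
            if alive[o].any():
                load[o][alive[o]] += send[s] * shed[s] / alive[o].sum()
            if alive[s].any():
                load[s][alive[s]] += (1 - send[s]) * shed[s] / alive[s].sum()
        new_fail = {s: alive[s] & (load[s] > cap[s]) for s in load}
        for s in load:
            alive[s][new_fail[s]] = False

    print("surviving fraction:", {s: round(alive[s].mean(), 3) for s in alive})
    ```

    Sweeping the attack sizes and the coupling coefficients a, b in such a simulation is how the transition behavior described in the abstract is mapped out.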

  17. Cascading failures in interdependent systems under a flow redistribution model.

    PubMed

    Zhang, Yingrui; Arenas, Alex; Yağan, Osman

    2018-02-01

Robustness and cascading failures in interdependent systems have been an active research field in the past decade. However, most existing works use percolation-based models where only the largest component of each network remains functional throughout the cascade. Although suitable for communication networks, this assumption fails to capture the dependencies in systems carrying a flow (e.g., power systems, road transportation networks), where cascading failures are often triggered by redistribution of flows leading to overloading of lines. Here, we consider a model consisting of systems A and B with initial line loads and capacities given by {L_{A,i},C_{A,i}}_{i=1}^{n} and {L_{B,i},C_{B,i}}_{i=1}^{n}, respectively. When a line fails in system A, a fraction a of its load is redistributed to alive lines in B, while the remaining (1-a) fraction is redistributed equally among all functional lines in A; a line failure in B is treated similarly, with b giving the fraction to be redistributed to A. We give a thorough analysis of cascading failures of this model initiated by a random attack targeting a p_{1} fraction of lines in A and a p_{2} fraction in B. We show that (i) the model captures the real-world phenomenon of unexpected large-scale cascades and exhibits interesting transition behavior: the final collapse is always first order, but it can be preceded by a sequence of first- and second-order transitions; (ii) network robustness tightly depends on the coupling coefficients a and b, and robustness is maximized at non-trivial a,b values in general; (iii) unlike most existing models, interdependence has a multifaceted impact on system robustness in that interdependency can lead to an improved robustness for each individual network.

  18. Solid-state lighting life prediction using extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lall, Pradeep; Wei, Junchao; Davis, Lynn

    2013-07-16

Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. The U.S. Department of Energy has made a long term commitment to advance the efficiency, understanding, and development of solid-state lighting (SSL) and is making a strong push for the acceptance and use of SSL products to reduce overall energy consumption attributable to lighting. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to “burn”. The LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of SSL luminaires from LM-80 test data. The TM-21 model uses an Arrhenius Equation with an Activation Energy, Pre-decay factor and Decay Rates. Several failure mechanisms may be active in a luminaire at a single time causing lumen depreciation. The underlying TM-21 Arrhenius Model may not capture the failure physics in presence of multiple failure mechanisms. Correlation of lumen maintenance with underlying physics of degradation at system-level is needed. In this paper, a Kalman Filter and Extended Kalman Filters have been used to develop a 70% Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. This model can be used to calculate acceleration factors, evaluate failure-probability and identify ALT methodologies for reducing test time. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state has been described in state space form using the measurement of the feature vector, velocity of feature vector change and the acceleration of the feature vector change. System state at each future time has been computed based on the state space at preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. The measured state variable has been related to the underlying damage using physics-based models. Life prediction of L70 life for the LEDs used in SSL luminaires from KF and EKF based models have been compared with the TM-21 model predictions and experimental data.
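
    A minimal constant-acceleration Kalman filter in the spirit of the state-space formulation described above (lumen value, its velocity, and its acceleration as the state), run on synthetic rather than LM-80 data, with the filtered state extrapolated to the 70% lumen-maintenance threshold:

    ```python
    import numpy as np

    dt = 100.0                                   # hours between observations
    F = np.array([[1, dt, 0.5 * dt ** 2],
                  [0, 1, dt],
                  [0, 0, 1]])                    # constant-acceleration transition
    H = np.array([[1.0, 0.0, 0.0]])              # only lumen output is observed
    Q = 1e-9 * np.eye(3)                         # process noise (assumed)
    R = np.array([[1e-4]])                       # measurement noise (assumed)

    x = np.array([1.0, 0.0, 0.0])                # state: [lumen, d/dt, d2/dt2]
    P = np.eye(3)

    t = np.arange(0.0, 10000.0, dt)
    truth = np.exp(-3e-5 * t)                    # synthetic exponential depreciation
    obs = truth + np.random.default_rng(2).normal(0, 1e-2, t.size)

    for z in obs:
        x, P = F @ x, F @ P @ F.T + Q                        # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)                  # update
        P = (np.eye(3) - K @ H) @ P

    # Extrapolate the filtered state until lumen maintenance drops to 70%.
    horizon, xf = 0.0, x.copy()
    while xf[0] > 0.70 and horizon < 1e6:
        xf = F @ xf
        horizon += dt
    print(f"estimated L70 life ~ {t[-1] + horizon:.0f} h")
    ```

    The paper's EKF variant additionally ties the measured state to physics-based damage models; this sketch only shows the filtering-and-extrapolation skeleton.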

  19. A review of typical thermal fatigue failure models for solder joints of electronic components

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

For electronic components, cyclic plastic strain accumulates fatigue damage far more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in coefficients of thermal expansion between an electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually extend [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods of obtaining the parameters in each model are summarized based on domestic and foreign literature.
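
    The archetype of the strain-based models such reviews survey is the Coffin-Manson relation; a sketch with illustrative SnPb-like constants (not values from the paper):

    ```python
    # Coffin-Manson: delta_eps_p / 2 = eps_f * (2 N_f) ** c, solved for N_f.
    # eps_f (fatigue ductility coefficient) and c are illustrative placeholders.
    def coffin_manson(delta_eps_p, eps_f=0.325, c=-0.57):
        """Cycles to failure from the plastic strain range per thermal cycle."""
        return 0.5 * (0.5 * delta_eps_p / eps_f) ** (1.0 / c)

    for d in (0.005, 0.01, 0.02):
        print(f"plastic strain range {d}: N_f ~ {coffin_manson(d):.0f} cycles")
    ```

    Energy-based and creep-based models covered by such reviews follow the same pattern: a measurable per-cycle damage quantity raised to a fitted exponent.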

  20. Spectromicroscopic insights for rational design of redox-based memristive devices

    PubMed Central

    Baeumer, Christoph; Schmitz, Christoph; Ramadan, Amr H. H.; Du, Hongchu; Skaja, Katharina; Feyer, Vitaliy; Müller, Philipp; Arndt, Benedikt; Jia, Chun-Lin; Mayer, Joachim; De Souza, Roger A.; Michael Schneider, Claus; Waser, Rainer; Dittmann, Regina

    2015-01-01

    The demand for highly scalable, low-power devices for data storage and logic operations is strongly stimulating research into resistive switching as a novel concept for future non-volatile memory devices. To meet technological requirements, it is imperative to have a set of material design rules based on fundamental material physics, but deriving such rules is proving challenging. Here, we elucidate both switching mechanism and failure mechanism in the valence-change model material SrTiO3, and on this basis we derive a design rule for failure-resistant devices. Spectromicroscopy reveals that the resistance change during device operation and failure is indeed caused by nanoscale oxygen migration resulting in localized valence changes between Ti4+ and Ti3+. While fast reoxidation typically results in retention failure in SrTiO3, local phase separation within the switching filament stabilizes the retention. Mimicking this phase separation by intentionally introducing retention-stabilization layers with slow oxygen transport improves retention times considerably. PMID:26477940

  1. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. From these five selected scenarios, we characterize them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  2. Failure Diagnosis for the Holdup Tank System via ISFA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Bragg-Sitton, Shannon; Smidts, Carol

This paper discusses the use of the integrated system failure analysis (ISFA) technique for fault diagnosis for the holdup tank system. ISFA is a simulation-based, qualitative and integrated approach used to study fault propagation in systems containing both hardware and software subsystems. The holdup tank system consists of a tank containing a fluid whose level is controlled by an inlet valve and an outlet valve. We introduce the component and functional models of the system, quantify the main parameters and simulate possible failure-propagation paths based on the fault propagation approach, ISFA. The results show that most component failures in the holdup tank system can be identified clearly and that ISFA is viable as a technique for fault diagnosis. Since ISFA is a qualitative technique that can be used in the very early stages of system design, this case study provides indications that it can be used early to study design aspects that relate to robustness and fault tolerance.

  3. Minding the Cyber-Physical Gap: Model-Based Analysis and Mitigation of Systemic Perception-Induced Failure

    PubMed Central

    2017-01-01

The cyber-physical gap (CPG) is the difference between the ‘real’ state of the world and the way the system perceives it. This discrepancy often stems from the limitations of sensing and data collection technologies and capabilities, and is inevitable to some degree in any cyber-physical system (CPS). Ignoring or misrepresenting such limitations during system modeling, specification, design, and analysis can potentially result in systemic misconceptions, disrupted functionality and performance, system failure, severe damage, and potential detrimental impacts on the system and its environment. We propose CPG-Aware Modeling & Engineering (CPGAME), a conceptual model-based approach to capturing, explaining, and mitigating the CPG. CPGAME enhances the systems engineer’s ability to cope with CPGs, mitigate them by design, and prevent erroneous decisions and actions. We demonstrate CPGAME by applying it for modeling and analysis of the 1979 Three Mile Island 2 nuclear accident, and show how its meltdown could be mitigated. We use ISO-19450:2015—Object Process Methodology as our conceptual modeling framework. PMID:28714910

  4. Modelling the failure behaviour of wind turbines

    NASA Astrophysics Data System (ADS)

    Faulstich, S.; Berkhout, V.; Mayer, J.; Siebenlist, D.

    2016-09-01

Modelling the failure behaviour of wind turbines is an essential part of offshore wind farm simulation software, as it leads to optimized decision making when specifying the necessary resources for the operation and maintenance of wind farms. In order to optimize O&M strategies, a thorough understanding of a wind turbine's failure behaviour is vital and is therefore being developed at Fraunhofer IWES. Within this article, first the failure models of existing offshore O&M tools are presented to show the state of the art, and the strengths and weaknesses of the respective models are briefly discussed. Then a conceptual framework for modelling different failure mechanisms of wind turbines is presented. This framework takes into account the different wind turbine subsystems and structures as well as the failure modes of a component by applying several influencing factors representing wear and break failure mechanisms. A failure function is set up for the rotor blade as an exemplary component, and simulation results have been compared to a constant failure rate and to empirical wind turbine fleet data as a reference. The comparison and the breakdown of specific failure categories demonstrate the overall plausibility of the model.
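
    One simple concrete form of such a failure function superposes a constant rate for random "break" failures on an increasing Weibull hazard for wear; the rates below are invented placeholders, not Fraunhofer IWES fleet data:

    ```python
    # Component failure rate = constant random ("break") rate + Weibull wear hazard.
    # All parameter values are hypothetical, for illustration only.
    lam_break = 0.05            # random failures per turbine-year
    beta, eta = 2.5, 25.0       # Weibull shape and scale (years)

    def hazard(t_years):
        wear = (beta / eta) * (t_years / eta) ** (beta - 1.0)   # Weibull hazard
        return lam_break + wear

    for t in (1, 5, 10, 20):
        print(f"year {t:2d}: failure rate ~ {hazard(t):.3f} per year")
    ```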

  5. Motivational Processes Affecting Learning.

    ERIC Educational Resources Information Center

    Dweck, Carol S.

    1986-01-01

Educationally relevant conceptions of motivation are difficult to establish. A research-based model of motivational processes can show how goals for cognitive tasks shape reactions to success and failure and how they influence the quality of cognitive performance. This model can aid in designing programs to change maladaptive motivational…

  6. Earthquake triggering by transient and static deformations

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.M.; Blanpied, M.L.; Bodin, P.

    1998-01-01

Observational evidence for both static and transient near-field and far-field triggered seismicity is explained in terms of a frictional instability model, based on a single degree of freedom spring-slider system and rate- and state-dependent frictional constitutive equations. In this study a triggered earthquake is one whose failure time has been advanced by Δt (clock advance) due to a stress perturbation. Triggering stress perturbations considered include square-wave transients and step functions, analogous to seismic waves and coseismic static stress changes, respectively. Perturbations are superimposed on a constant background stressing rate which represents the tectonic stressing rate. The normal stress is assumed to be constant. Approximate, closed-form solutions of the rate-and-state equations are derived for these triggering and background loads, building on the work of Dieterich [1992, 1994]. These solutions can be used to simulate the effects of static and transient stresses as a function of amplitude, onset time t0, and, in the case of square waves, duration. The accuracies of the approximate closed-form solutions are also evaluated with respect to the full numerical solution and t0. The approximate solutions underpredict the full solutions, although the difference decreases as t0 approaches the end of the earthquake cycle. The relationship between Δt and t0 differs for transient and static loads: a static stress step imposed late in the cycle causes less clock advance than an equal step imposed earlier, whereas a later applied transient causes greater clock advance than an equal one imposed earlier. For equal Δt, transient amplitudes must be greater than static loads by factors of several tens to hundreds depending on t0. We show that the rate-and-state model requires that the total slip at failure is a constant, regardless of the loading history. Thus a static load applied early in the cycle, or a transient applied at any time, reduces the stress at the initiation of failure, whereas static loads that are applied sufficiently late raise it. Rate-and-state friction predictions differ markedly from those based on Coulomb failure stress changes (ΔCFS), in which Δt equals the amplitude of the static stress change divided by the background stressing rate. The ΔCFS model assumes a stress failure threshold, while the rate-and-state equations require a slip failure threshold. The complete rate-and-state equations predict larger Δt than the ΔCFS model does for static stress steps at small t0, and smaller Δt than the ΔCFS model for stress steps at large t0. The ΔCFS model predicts nonzero Δt only for transient loads that raise the stress to failure stress levels during the transient. In contrast, the rate-and-state model predicts nonzero Δt for smaller loads, and triggered failure may occur well after the transient is finished. We consider heuristically the effects of triggering on a population of faults, as these effects might be evident in seismicity data. Triggering is manifest as an initial increase in seismicity rate that may be followed by a quiescence or by a return to the background rate. Available seismicity data are insufficient to discriminate whether triggered earthquakes are "new" or clock advanced.
However, if triggering indeed results from advancing the failure time of inevitable earthquakes, then our modeling suggests that a quiescence always follows transient triggering and that the duration of increased seismicity also cannot exceed the duration of a triggering transient load. Quiescence follows static triggering only if the population of available faults is finite.
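
    The contrast with the ΔCFS clock advance can be sketched using Dieterich's (1994) closed-form seismicity-rate response of a fault population to a static stress step; parameter values below are illustrative only:

    ```python
    import numpy as np

    # Dieterich (1994): R(t) = r / ((exp(-d_tau/A_sigma) - 1) exp(-t/t_a) + 1),
    # versus the Coulomb clock advance delta_t = d_tau / tau_dot.
    A_sigma = 0.05e6     # A*sigma, Pa (rate-state parameter, assumed)
    tau_dot = 20.0       # background stressing rate, Pa/day (assumed)
    d_tau = 0.1e6        # static Coulomb stress step, Pa (assumed)
    r = 1.0              # background seismicity rate, events/day

    t_a = A_sigma / tau_dot                   # aftershock-decay timescale, days
    print(f"Coulomb (dCFS) clock advance: {d_tau / tau_dot:.0f} days")
    for t in (1.0, 10.0, 100.0, 1000.0, 10000.0):
        R = r / ((np.exp(-d_tau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)
        print(f"t = {t:7.0f} d: seismicity rate / background = {R:5.2f}")
    ```

    The rate jump followed by decay back to the background over the timescale t_a is the population-level signature of clock advance that the abstract describes.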

  7. New early warning system for gravity-driven ruptures based on codetection of acoustic signal

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.

    2016-12-01

Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging due to the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the characteristics of acoustic emission attenuation is proposed. This new method capitalizes on the codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Event codetection is considered a surrogate for large event size, with more frequent codetected events (i.e., detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model based on a fiber bundle model, considering signal attenuation and hypothetical arrays of sensors, confirms the early warning potential of the codetection principle. Results suggest that although the statistical properties of attenuated signal amplitudes could lead to misleading results, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at lab scale. The applicability of such a simple and cheap early warning system is now being investigated at a larger scale (hillslope). First results of such a pilot field experiment are presented and analysed.
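
    A minimal sketch of the codetection idea: events with heavy-tailed sizes attenuate exponentially toward two sensors, and requiring detection on both picks out the largest events (positions, threshold, and attenuation length are all hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_events = 5000
    pos = rng.uniform(0.0, 10.0, n_events)      # event positions, m
    size = rng.pareto(1.5, n_events) + 1.0      # heavy-tailed event sizes
    sensors = np.array([2.0, 8.0])              # sensor positions, m
    lam, thresh = 2.0, 5.0                      # attenuation length, detection threshold

    # Attenuated amplitude of every event at each sensor.
    amp = size[:, None] * np.exp(-np.abs(pos[:, None] - sensors) / lam)
    detected = amp > thresh
    codetected = detected.all(axis=1)           # seen concurrently by both sensors

    print(f"detected by >= 1 sensor: {detected.any(axis=1).mean():.3f}")
    print(f"codetected fraction:     {codetected.mean():.3f}")
    big = np.median(size[codetected]) if codetected.any() else float("nan")
    print(f"median size, codetected vs all: {big:.1f} vs {np.median(size):.1f}")
    ```

    The codetected subset is strongly biased toward large events, which is why a rising codetection rate can serve as a proxy for imminent failure without reconstructing individual event sizes.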

  8. Scaling of coupled dilatancy-diffusion processes in space and time

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Bell, A. F.; Meredith, P. G.; Brantut, N.; Heap, M.

    2012-04-01

Coupled dilatancy-diffusion processes resulting from microscopically brittle damage due to precursory cracking have been observed in the laboratory and suggested as a mechanism for earthquake precursors. One reason precursors have proven elusive may be the scaling in space: recent geodetic and seismic data place strong limits on the spatial extent of the nucleation zone for recent earthquakes. Another may be the scaling in time: recent laboratory results on axi-symmetric samples show both a systematic decrease in circumferential extensional strain at failure and a delayed, sharper acceleration of acoustic emission event rate as strain rate is decreased. Here we examine the scaling of such processes in time from laboratory to field conditions using brittle creep (constant stress loading) tests to failure, in an attempt to bridge part of the strain rate gap to natural conditions, and discuss the implications for forecasting the failure time. Dilatancy rate is strongly correlated with strain rate, and decreases to zero in the steady-rate creep phase at strain rates around 10^{-9} s^{-1} for a basalt from Mount Etna. The data are well described by a creep model based on the linear superposition of transient (decelerating) and accelerating micro-crack growth due to stress corrosion. The model produces good fits to the failure time in retrospect using the accelerating acoustic emission event rate, but in prospective tests on synthetic data with the same properties we find failure-time forecasting is subject to systematic epistemic and aleatory uncertainties that degrade predictability. The next stage is to use the technology developed to attempt failure forecasting in real time, using live-streamed data and a public web-based portal to quantify the prospective forecast quality under such controlled laboratory conditions.
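
    The retrospective failure-time forecast can be sketched with the classical inverse-rate method: if the event rate accelerates as (t_f - t)^(-1), its inverse declines linearly to zero at the failure time. Synthetic data below, with an assumed exponent and noise level:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t_f = 100.0
    t = np.linspace(0.0, 90.0, 45)
    rate = 50.0 / (t_f - t)                      # accelerating AE event rate
    rate *= rng.lognormal(0.0, 0.1, t.size)      # multiplicative observation noise

    inv = 1.0 / rate
    slope, intercept = np.polyfit(t, inv, 1)     # linear fit to the inverse rate
    t_f_hat = -intercept / slope                 # x-intercept = forecast failure time
    print(f"true t_f = {t_f}, forecast = {t_f_hat:.1f}")
    ```

    Prospective forecasts are harder precisely because the exponent and the noise structure are not known in advance, which is the epistemic/aleatory degradation the abstract reports.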

  9. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving the maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
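
    A generic weighted pseudo-inverse allocator of the kind described, where a failed effector's column is zeroed so commands are redistributed to the remaining effectors (the effectiveness matrix is invented for illustration; this is not the paper's nonlinear algorithm):

    ```python
    import numpy as np

    # Hypothetical control-effectiveness matrix: rows are roll/pitch/yaw
    # moments, columns are four effectors.
    B = np.array([[ 1.0, -1.0, 0.2,  0.2],
                  [ 0.3,  0.3, 1.0,  1.0],
                  [ 0.1, -0.1, 0.4, -0.4]])

    def allocate(moment_cmd, available):
        Ba = B * available                      # mask failed effectors' columns
        return np.linalg.pinv(Ba) @ moment_cmd  # minimum-norm allocation

    cmd = np.array([0.5, 1.0, 0.1])
    print("all healthy:     ", allocate(cmd, np.array([1.0, 1.0, 1.0, 1.0])))
    print("effector 2 failed:", allocate(cmd, np.array([1.0, 1.0, 0.0, 1.0])))
    ```

    In the integrated scheme, the availability vector is supplied by the model-based failure detection and isolation algorithm rather than set by hand.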

  10. A morphologic characterisation of the 1963 Vajont Slide, Italy, using long-range terrestrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Wolter, Andrea; Stead, Doug; Clague, John J.

    2014-02-01

    The 1963 Vajont Slide in northeast Italy is an important engineering and geological event. Although the landslide has been extensively studied, new insights can be derived by applying modern techniques such as remote sensing and numerical modelling. This paper presents the first digital terrestrial photogrammetric analyses of the failure scar, landslide deposits, and the area surrounding the failure, with a focus on the scar. We processed photogrammetric models to produce discontinuity stereonets, residual maps and profiles, and slope and aspect maps, all of which provide information on the failure scar morphology. Our analyses enabled the creation of a preliminary semi-quantitative morphologic classification of the Vajont failure scar based on the large-scale tectonic folds and step-paths that define it. The analyses and morphologic classification have implications for the kinematics, dynamics, and mechanism of the slide. Metre- and decametre-scale features affected the initiation, direction, and displacement rate of sliding. The most complexly folded and stepped areas occur close to the intersection of orthogonal synclinal features related to the Dinaric and Neoalpine deformation events. Our analyses also highlight, for the first time, the evolution of the Vajont failure scar from 1963 to the present.

  11. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis; thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both the maintenance of power system models and preventive maintenance models.
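
    Once fitted, the Dagum model gives reliability directly as R(t) = 1 - F(t); a sketch with placeholder parameters (the article's fitted symbols did not survive indexing, so the values below are purely illustrative):

    ```python
    # Three-parameter Dagum CDF: F(t) = (1 + (t/b)**(-a))**(-p),
    # with scale b and shape parameters a, p (values assumed).
    def dagum_reliability(t, a, b, p):
        return 1.0 - (1.0 + (t / b) ** (-a)) ** (-p)

    for days in (30, 60, 90):
        print(f"{days} days: R = {dagum_reliability(days, a=2.5, b=45.0, p=0.8):.3f}")
    ```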

  12. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis; thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both the maintenance of power system models and preventive maintenance models. PMID:23936346

  13. Subsidence and well failure in the South Belridge Diatomite field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Rouffignac, E.P. de; Bondor, P.L.; Karanikas, J.M.; Hara, S.K.

    1995-12-31

Withdrawal of fluids from shallow, thick and low strength rock can cause substantial reservoir compaction leading to surface subsidence and well failure. This is the case for the Diatomite reservoir, where over 10 ft of subsidence have occurred in some areas. Well failure rates have averaged over 3% per year, resulting in several million dollars per year in well replacement and repair costs in the South Belridge Diatomite alone. A program has been underway to address this issue, including experimental, modeling and field monitoring work. An updated elastoplastic rock law based on laboratory data has been generated which includes not only standard shear failure mechanisms but also irreversible pore collapse occurring at low effective stresses (<150 psi). This law was incorporated into a commercial finite element geomechanics simulator. Since the late 1980s, a network of level survey monuments has been used to monitor subsidence at Belridge. Model predictions of subsidence in Section 33 compare very well with field measured data, which show that water injection reduces subsidence from 7--8 inches per year to 1--2 inches per year, but does not abate well failure.

  14. Finite element based damage assessment of composite tidal turbine blades

    NASA Astrophysics Data System (ADS)

    Fagan, Edward M.; Leen, Sean B.; Kennedy, Ciaran R.; Goggins, Jamie

    2015-07-01

    With significant interest growing in the ocean renewables sector, horizontal axis tidal current turbines are in a position to dominate the marketplace. The test devices that have been placed in operation so far have suffered from premature failures, caused by difficulties with structural strength prediction. The goal of this work is to develop methods of predicting the damage level in tidal turbines under their maximum operating tidal velocity. The analysis was conducted using the finite element software package Abaqus; shell models of three representative tidal turbine blades are produced. Different construction methods will affect the damage level in the blade and for this study models were developed with varying hydrofoil profiles. In order to determine the risk of failure, a user material subroutine (UMAT) was created. The UMAT uses the failure criteria designed by Alfred Puck to calculate the risk of fibre and inter-fibre failure in the blades. The results show that degradation of the stiffness is predicted for the operating conditions, having an effect on the overall tip deflection. The failure criteria applied via the UMAT form a useful tool for analysis of high risk regions within the blade designs investigated.
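
    A simplified sketch of one of the Puck criteria such a UMAT evaluates, here inter-fibre failure Mode A under plane stress (transverse tension), with generic GFRP-like placeholder strengths rather than the blade laminate data:

    ```python
    import numpy as np

    Y_T, S21 = 40.0, 60.0   # transverse tensile / in-plane shear strength, MPa (assumed)
    p_plus = 0.3            # inclination parameter p(+) (assumed)

    def puck_mode_a(sigma2, tau21):
        """Puck Mode A stress exposure; failure predicted when >= 1."""
        assert sigma2 >= 0.0, "Mode A applies to transverse tension only"
        term = (1.0 - p_plus * Y_T / S21) * sigma2 / Y_T
        return np.sqrt((tau21 / S21) ** 2 + term ** 2) + p_plus * sigma2 / S21

    print(f"stress exposure: {puck_mode_a(20.0, 30.0):.2f}")
    ```

    In the damage assessment, exceeding an exposure of 1 triggers the stiffness degradation that drives the predicted change in tip deflection.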

  15. Degradation mechanisms in high-power multi-mode InGaAs-AlGaAs strained quantum well lasers for high-reliability applications

    NASA Astrophysics Data System (ADS)

    Sin, Yongkun; Presser, Nathan; Brodie, Miles; Lingley, Zachary; Foran, Brendan; Moss, Steven C.

    2015-03-01

Laser diode manufacturers perform accelerated multi-cell lifetests to estimate lifetimes of lasers using an empirical model. Since state-of-the-art laser diodes typically require a long period of latency before they degrade, a significant amount of stress is applied to the lasers to generate failures in relatively short test durations. A drawback of this approach is the lack of mean-time-to-failure data under intermediate and low stress conditions, leading to uncertainty in model parameters (especially the optical power and current exponents) and potential overestimation of lifetimes at usage conditions. This approach is a concern especially for satellite communication systems, where high reliability is required of lasers for long-term duration in the space environment. A number of groups have studied reliability and degradation processes in GaAs-based lasers, but none of these studies have yielded a reliability model based on the physics of failure. The lack of such a model is also a concern for space applications, where complete understanding of degradation mechanisms is necessary. Our present study addresses the aforementioned issues by performing long-term lifetests under low stress conditions followed by failure mode analysis (FMA) and physics of failure investigation. We performed low-stress lifetests on both MBE- and MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode to study low-stress degradation mechanisms. Our lifetests have accumulated over 36,000 test hours, and FMA is performed on failures using our angle polishing technique followed by electroluminescence (EL) imaging. This technique allows us to identify failure types by observing dark line defects through a window introduced in backside metal contacts. We also investigated degradation mechanisms in MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers using various FMA techniques. Since it is a challenge to control defect densities during the growth of laser structures, we chose to control defect densities by introducing extrinsic point defects to the laser via proton irradiation with different energies and fluences. These lasers were subsequently lifetested to study degradation processes in lasers with different defect densities and also to study precursor signatures of failures - traps and non-radiative recombination centers (NRCs) - in pre- and post-stressed lasers. Lastly, we employed focused ion beam (FIB), electron beam induced current (EBIC), and high-resolution TEM (HR-TEM) techniques to further study dark line defects and dislocations in both post-aged and post-proton-irradiated lasers. We report on our long-term low-stress lifetest results and physics of failure investigation results.

  16. Biomechanical effects of fusion levels on the risk of proximal junctional failure and kyphosis in lumbar spinal fusion surgery.

    PubMed

    Park, Won Man; Choi, Dae Kyung; Kim, Kyungsoo; Kim, Yongjung J; Kim, Yoon Hyuk

    2015-12-01

    Spinal fusion surgery is a widely used surgical procedure for sagittal realignment. Clinical studies have reported that spinal fusion may cause proximal junctional kyphosis and failure with disc failure, vertebral fracture, and/or failure at the implant-bone interface. However, the biomechanical injury mechanisms of proximal junctional kyphosis and failure remain unclear. A finite element model of the thoracolumbar spine was used. Nine fusion models with pedicle screw systems implanted at the L2-L3, L3-L4, L4-L5, L5-S1, L2-L4, L3-L5, L4-S1, L2-L5, and L3-S1 levels were developed based on the respective surgical protocols. The developed models simulated flexion-extension using hybrid testing protocol. When spinal fusion was performed at more distal levels, particularly at the L5-S1 level, the following biomechanical properties increased during flexion-extension: range of motion, stress on the annulus fibrosus fibers and vertebra at the adjacent motion segment, and the magnitude of axial forces on the pedicle screw at the uppermost instrumented vertebra. The results of this study demonstrate that more distal fusion levels, particularly in spinal fusion including the L5-S1 level, lead to greater increases in the risk of proximal junctional kyphosis and failure, as evidenced by larger ranges of motion, higher stresses on fibers of the annulus fibrosus and vertebra at the adjacent segment, and higher axial forces on the screw at the uppermost instrumented vertebra in flexion-extension. Therefore, fusion levels should be carefully selected to avoid proximal junctional kyphosis and failure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Advances on the Failure Analysis of the Dam-Foundation Interface of Concrete Dams.

    PubMed

    Altarejos-García, Luis; Escuder-Bueno, Ignacio; Morales-Torres, Adrián

    2015-12-02

    Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainty in models and parameters, and strongly non-linear softening behavior. In practice, these uncertainties are handled through a well-structured mixture of experience, best practices, and prudent, conservative design approaches based on the safety factor concept. Yet some aspects of this failure mode remain poorly understood, as they have been offset in practical applications by the use of this conservative approach. In this paper we present a strategy for analysing this failure mode under a reliability-based approach. The proposed methodology integrates epistemic uncertainty on the spatial variability of strength parameters together with data from dam monitoring. The purpose is to produce meaningful, useful information regarding the probability of occurrence of this failure mode that can be incorporated into risk-informed dam safety reviews. In addition, relationships between the probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring clarity and guidance, and to contribute to the improvement of current knowledge and best practices regarding such an important dam safety concern.
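
    As a hedged illustration of the reliability-based approach and the probability-of-failure versus safety-factor relationship mentioned above, the sketch below runs a plain Monte Carlo analysis of a simple sliding limit state at the dam-foundation interface; the geometry, loads, and strength-parameter distributions are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Illustrative sliding limit state per metre of dam: resisting force
        # = c*A + (W - U)*tan(phi), driving force = hydrostatic thrust H.
        A = 1600.0                       # interface contact area (m^2)
        W, U, H = 4.0e5, 1.2e5, 1.9e5    # weight, uplift, thrust (kN)

        # Uncertain strength parameters: cohesion c (kPa), friction angle phi (deg)
        c   = rng.lognormal(mean=np.log(60.0), sigma=0.6, size=N)
        phi = np.radians(rng.normal(35.0, 5.0, size=N))

        sf = (c * A + (W - U) * np.tan(phi)) / H
        print(f"mean factor of safety: {sf.mean():.2f}")
        print(f"estimated probability of sliding failure: {np.mean(sf < 1.0):.4f}")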

  19. Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons

    NASA Technical Reports Server (NTRS)

    Arunkumar, Satyanarayana; Przekop, Adam

    2010-01-01

    Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS(TM)/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out, and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.

  20. Heterogeneity: The key to forecasting material failure?

    NASA Astrophysics Data System (ADS)

    Vasseur, J.; Wadsworth, F. B.; Lavallée, Y.; Dingwell, D. B.

    2014-12-01

    Empirical mechanistic models have been applied to the description of the stress and strain rate upon failure for heterogeneous materials. The behaviour of porous rocks and their analogous two-phase viscoelastic suspensions is particularly well described by such models. Nevertheless, failure cannot yet be predicted, forcing a reliance on other empirical prediction tools such as the Failure Forecast Method (FFM). Measurable, accelerating rates of physical signals (e.g., seismicity and deformation) preceding failure are often used as proxies for damage accumulation in the FFM. Previous studies have statistically assessed the applicability and performance of the FFM, but none (to the best of our knowledge) has done so in terms of intrinsic material properties. Here we use a rheological standard glass, powdered and then sintered for different times (up to 32 hours) at high temperature (675°C), to achieve a sample suite with porosities in the range of 0.10-0.45 gas volume fraction. This sample suite was then subjected to mechanical tests in a uniaxial press at a constant strain rate of 10^-3 s^-1 and a temperature in the region of the glass transition. A dual acoustic emission (AE) rig was employed to test the success of the FFM in these materials of systematically varying porosity. The pore-emanating crack model describes well the peak stress at failure in the elastic regime for these materials. We show that the FFM predicts failure within 0-15% error at porosities >0.2. However, at porosities <0.2, the error associated with forecasting the failure time increases to >100%. We interpret these results as a consequence of the low efficiency with which strain energy can be released when there are few or no heterogeneities from which cracks can propagate. These observations shed light on questions surrounding the variable efficacy of the FFM applied to active volcanoes. In particular, they provide a systematic demonstration that a good understanding of the material properties is required. We therefore emphasize the need for better coupling of empirical failure forecasting models with mechanical parameters, such as failure criteria for heterogeneous materials, and point to the implications of this for a broad range of material-based disciplines.
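
    The FFM referred to above is commonly applied in its materials-failure form (Voight's relation with exponent alpha = 2), in which the inverse rate of a precursor signal decays linearly to zero at the failure time. A minimal sketch of that inverse-rate extrapolation, on invented AE data:

        import numpy as np

        # Hypothetical accelerating AE event rate (events/s) at times t (s)
        t    = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
        rate = np.array([2.0,  2.6,  3.6,  5.2,  9.0, 22.0])

        # FFM with alpha = 2: 1/rate decays linearly and hits zero at failure.
        inv = 1.0 / rate
        slope, intercept = np.polyfit(t, inv, 1)   # linear fit to inverse rate
        t_f = -intercept / slope                   # extrapolated failure time
        print(f"forecast failure time: t_f = {t_f:.1f} s")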

  1. Interdependent Multi-Layer Networks: Modeling and Survivability Analysis with Applications to Space-Based Networks

    PubMed Central

    Castet, Jean-Francois; Saleh, Joseph H.

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and as a result, are not suited for the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of their own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The selected results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the results highlight the importance of the reliability of the wireless links between spacecraft (nodes) to enable any survivability improvements for space-based networks. PMID:23599835
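
    A minimal sketch of the interdependent multi-layer idea, not the authors' algorithm: each layer collects one functionality across spacecraft, intra-node dependencies couple the layers, and a node's functionality survives if its own unit or a connected provider's unit still works. Node names, links, and dependencies below are invented.

        from collections import deque

        # layer -> {spacecraft: spacecraft it can borrow that functionality from}
        layers = {
            "data_handling": {"S1": {"S2"}, "S2": {"S1", "S3"}, "S3": {"S2"}},
            "downlink":      {"S1": {"S3"}, "S2": {"S3"}, "S3": set()},
        }
        # intra-node interdependency: failure of (node, layer) also fails these
        depends = {("S3", "data_handling"): [("S3", "downlink")]}

        def propagate(initial_failures):
            failed, queue = set(initial_failures), deque(initial_failures)
            while queue:                          # cascade across layers
                pair = queue.popleft()
                for dep in depends.get(pair, []):
                    if dep not in failed:
                        failed.add(dep)
                        queue.append(dep)
            for layer, links in layers.items():   # survivability per functionality
                for node, providers in links.items():
                    ok = (node, layer) not in failed or any(
                        (p, layer) not in failed for p in providers)
                    print(f"{node}/{layer}: {'up' if ok else 'down'}")

        propagate({("S3", "data_handling")})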

  3. Interlaminar shear stress effects on the postbuckling response of graphite-epoxy panels

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.; Knight, N. F., Jr.

    1990-01-01

    The objectives of the study are to assess the influence of shear flexibility on overall postbuckling response, and to examine transverse shear stress distributions in relation to panel failure. Nonlinear postbuckling results are obtained for finite element models based on classical laminated plate theory and first-order shear deformation theory. Good correlation between test and analysis is obtained. The results presented in this paper analytically substantiate the experimentally observed failure mode.

  4. Application of the health belief model in promotion of self-care in heart failure patients.

    PubMed

    Baghianimoghadam, Mohammad Hosein; Shogafard, Golamreza; Sanati, Hamid Reza; Baghianimoghadam, Behnam; Mazloomy, Seyed Saeed; Askarshahi, Mohsen

    2013-01-01

    Heart failure (HF) is a condition in which a problem with the structure or function of the heart impairs its ability to supply sufficient blood flow to meet the body's needs. In developing countries, around 2% of adults suffer from heart failure, but in people over the age of 65, this rate increases to 6-10%. In Iran, around 3.3% of adults suffer from heart failure. The Health Belief Model (HBM) is one of the most widely used theoretical frameworks in public health. This was a cohort experimental study in which education, as the intervention factor, was presented to the case group. A total of 180 heart failure patients were randomly selected from patients referred to the Shahid Rajaee Center of Heart Research in Tehran and allocated to two groups (90 patients in the case group and 90 in the control group). The HBM was used to compare health behaviors. The questionnaire included 69 questions. All data were collected before and 2 months after the intervention. About 38% of participants did not know what heart failure is, and 43% did not know that salt consumption is unsuitable for them. More than 40% of participants had never weighed themselves. After the intervention, there were significant differences between the mean scores of the variables (perceived susceptibility, perceived threat, knowledge, perceived benefits, perceived severity, self-efficacy, perceived barriers, cues to action, and self-care behavior) in the case and control groups, differences that were not significant before it. Based on our study, as well as many others, the HBM has the potential to be used as a tool to establish educational programs for individuals and communities. Therefore, this model can be used effectively to prevent different diseases and their complications, including heart failure.

  5. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration, and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED is coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach is applied to predicting the probability of failure in three structural mechanics problems. It is observed that the approach yields accurate and computationally efficient estimates of the failure probability.
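
    The step from the first two statistical moments of the response to a failure probability can be sketched with the standard first-order approximation: for a limit state g, beta = mu_g/sigma_g and Pf = Phi(-beta). The numbers below are illustrative, not results from the paper.

        from math import erf, sqrt

        def failure_probability(mu_g, sigma_g):
            """First-two-moments estimate: beta = mu/sigma, Pf = Phi(-beta)."""
            beta = mu_g / sigma_g
            return 0.5 * (1.0 - erf(beta / sqrt(2.0)))

        # Illustrative moments for a limit state g = R - S, e.g. as delivered
        # by a G-ANOVA/polynomial-chaos surrogate.
        print(failure_probability(mu_g=2.5, sigma_g=1.0))  # ~0.0062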

  7. Failure Behavior Characterization of Mo-Modified Ti Surface by Impact Test and Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Qin, Jianfeng; Zhang, Xiangyu; Lin, Naiming; Huang, Xiaobo; Tang, Bin

    2015-07-01

    Using impact tests and finite element simulation, the failure behavior of the Mo-modified layer on pure Ti was investigated. In the impact tests, four loads of 100, 300, 500, and 700 N and 10^4 impacts were adopted. The three-dimensional residual impact dents were examined using an optical microscope (Olympus-DSX500i), indicating that the impact resistance of the Ti surface was improved. Two failure modes, cohesive failure and wear, were elucidated by electron backscatter diffraction and energy-dispersive spectrometry performed in a field-emission scanning electron microscope. Through finite element forward analysis performed at a typical impact load of 300 N, the stress-strain distributions in the Mo-modified Ti were quantitatively determined. In addition, the failure behavior of the Mo-modified layer was characterized, and an ideal failure model for high-load impact was proposed based on the experimental and finite element forward analysis results.

  8. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
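
    The core construction can be sketched compactly (this is the basic parity-relation idea, not the paper's full robustness formulation): vectors that annihilate the observation matrix define redundancy relations, and the SVD orders candidate relations, with the singular vectors attached to the smallest singular values being least sensitive to perturbations of the matrix. The sensor layout below is invented.

        import numpy as np

        # Four sensors measuring a two-state system (illustrative observation
        # matrix C). Parity vectors w satisfy w^T C = 0, so w^T y stays ~0 for
        # any fault-free measurement y = C x.
        C = np.array([[1.0,  0.0],
                      [0.0,  1.0],
                      [1.0,  1.0],
                      [1.0, -1.0]])

        U, s, Vt = np.linalg.svd(C)
        W = U[:, C.shape[1]:].T        # left null space: two parity relations

        x = np.array([0.3, -0.7])
        y = C @ x
        y_faulty = y.copy()
        y_faulty[2] += 0.5             # bias fault on sensor 3

        print("residuals, healthy:", W @ y)         # ~0
        print("residuals, faulty :", W @ y_faulty)  # departs from 0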

  9. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
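
    A toy fuzzy rule-based risk score in the spirit described above (far simpler than the paper's model): severity S, occurrence O, and detectability D on 1-10 scales are fuzzified with triangular memberships, and a small rule base replaces the crisp product RPN = S x O x D. Membership breakpoints, rules, and output levels are all invented.

        def tri(x, a, b, c):
            """Triangular membership on [a, c] peaking at b."""
            return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

        def fuzzy_rpn(s, o, d):
            low  = lambda x: tri(x, 0.0, 1.0, 5.0)
            med  = lambda x: tri(x, 3.0, 5.5, 8.0)
            high = lambda x: tri(x, 6.0, 10.0, 14.0)
            # Invented rule base; consequents are representative RPN levels.
            rules = [
                (min(low(s), low(o), low(d)), 100.0),
                (min(med(s), med(o), med(d)), 400.0),
                (min(high(s), high(o)),       800.0),  # severe and frequent
                (min(high(s), high(d)),       700.0),  # severe, hard to detect
            ]
            w = sum(m for m, _ in rules)
            # Weighted average of fired rules (centroid-style defuzzification)
            return sum(m * v for m, v in rules) / w if w > 0 else float("nan")

        # Severity 8, occurrence 7, detectability 4 (illustrative ratings)
        print(fuzzy_rpn(8.0, 7.0, 4.0))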

  10. Haul truck tire dynamics due to tire condition

    NASA Astrophysics Data System (ADS)

    Vaghar Anzabi, R.; Nobes, D. S.; Lipsett, M. G.

    2012-05-01

    Pneumatic tires are costly components on the large off-road haul trucks used in surface mining operations. Tires are prone to damage during operation, and these events can lead to injuries to personnel, loss of equipment, and reduced productivity. Damage rates vary significantly with operating conditions and across a range of tire fault modes. Currently, monitoring of tire condition is done by physical inspection, and the mean time between inspections is often longer than the mean time between incipient failure and functional failure of the tire. Options for new condition monitoring methods include off-board thermal imaging and camera-based optical methods for detecting abnormal deformation and surface features, as well as on-board sensors to detect tire faults during vehicle operation. Physics-based modeling of tire dynamics can provide a good understanding of tire behavior and give insight into the observability requirements for improved monitoring systems. This paper describes a model that simulates the dynamics of haul truck tires when a fault is present, to determine the effects of the physical parameter changes that relate to faults. To simulate the dynamics, a lumped-mass 'quarter-vehicle' model is used to determine the response of the system to a road profile when a failure changes the original properties of the tire. The result is a model of tire vertical displacement that can be used to detect a fault; the model will be tested in the field under time-varying conditions.
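
    A minimal quarter-vehicle sketch of the kind described above: sprung and unsprung masses excited by a road profile, with a tire fault represented as a drop in tire stiffness kt. All parameter values are illustrative rather than haul-truck data.

        import numpy as np
        from scipy.integrate import solve_ivp

        ms, mu = 4500.0, 600.0      # sprung/unsprung mass (kg), illustrative
        ks, cs = 2.0e5, 1.5e4       # suspension stiffness (N/m), damping (N s/m)

        def road(t):                # 0.05 m half-sine bump crossed at t = 1..1.2 s
            return 0.05 * np.sin(np.pi * (t - 1.0) / 0.2) if 1.0 <= t <= 1.2 else 0.0

        def rhs(t, x, kt):
            zs, vs, zu, vu = x
            f_s = ks * (zu - zs) + cs * (vu - vs)   # suspension force
            f_t = kt * (road(t) - zu)               # tire force on unsprung mass
            return [vs, f_s / ms, vu, (-f_s + f_t) / mu]

        t_eval = np.linspace(0.0, 3.0, 1500)
        for label, kt in [("healthy", 2.5e6), ("faulty", 1.5e6)]:
            sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0, 0.0, 0.0],
                            args=(kt,), t_eval=t_eval, max_step=0.002)
            peak = np.abs(sol.y[2]).max()
            print(f"{label}: peak unsprung displacement = {peak:.4f} m")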

  11. Clinical, laboratory, and demographic determinants of hospitalization due to dengue in 7613 patients: A retrospective study based on hierarchical models.

    PubMed

    da Silva, Natal Santos; Undurraga, Eduardo A; da Silva Ferreira, Elis Regina; Estofolete, Cássia Fernanda; Nogueira, Maurício Lacerda

    2018-01-01

    In Brazil, the incidence of hospitalization due to dengue, as an indicator of severity, has increased drastically since 1998. The objective of our study was to identify risk factors associated with hospitalization related to dengue. We analyzed 7613 dengue cases confirmed via serology (ELISA), non-structural protein 1 detection, or polymerase chain reaction amplification. We used a hierarchical framework to generate a multivariate logistic regression based on a variety of risk variables. This was followed by multiple statistical analyses to assess the hierarchical model's accuracy, variance, and goodness of fit, and whether the model reliably represented the population. The final model, which included age, sex, ethnicity, previous dengue infection, hemorrhagic manifestations, plasma leakage, and organ failure, showed that all measured parameters, with the exception of previous dengue, were statistically significant. The presence of organ failure was associated with the highest risk of hospitalization for dengue (OR = 5.75; CI = 3.53-9.37). Plasma leakage and organ failure were therefore the main indicators of hospitalization due to dengue, although other variables of minor importance should also be considered when referring dengue patients to hospital treatment, which may reduce avoidable deaths as well as dengue-related costs.
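
    The hierarchical model itself is beyond a short example, but the core step, reading adjusted odds ratios off a fitted logistic regression, can be sketched as below; the data are synthetic stand-ins, not the study's 7613 patients, and the "true" effect sizes are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        # Synthetic illustration: binary predictors for plasma leakage and
        # organ failure, binary outcome = hospitalization.
        n = 5000
        leak  = rng.integers(0, 2, n)
        organ = rng.binomial(1, 0.1, n)
        logit = -2.0 + 1.2 * leak + 1.75 * organ       # invented effects
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        X = np.column_stack([leak, organ])
        model = LogisticRegression().fit(X, y)
        for name, beta in zip(["plasma leakage", "organ failure"], model.coef_[0]):
            # exp(beta) estimates the adjusted odds ratio for that predictor
            print(f"{name}: OR = {np.exp(beta):.2f}")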

  12. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    PubMed Central

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been fully researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data from normal operation through to failure. Only a limited number of condition indicators over a limited period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a narrow range. Moreover, fluctuating condition features also degrade prediction. To address these difficulties, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination based on the rate of increase, change point detection, and rolling prediction. The proposed method has two principal strengths. First, it does not need to assume that the degradation trajectory follows a particular distribution. Second, it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
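
    A highly simplified sketch of the dynamic-window idea, with a local linear trend standing in for the paper's neural network: the fitting window shrinks as the indicator rises faster, and RUL is the time for the local trend to reach a failure threshold. The signal, threshold, and window heuristic are invented.

        import numpy as np

        def rul_estimate(signal, t, threshold, base_window=20):
            # Faster growth -> smaller window (invented heuristic)
            rate = np.gradient(signal, t)[-1]
            window = int(np.clip(base_window / (1.0 + 50.0 * abs(rate)),
                                 5, base_window))
            a, b = np.polyfit(t[-window:], signal[-window:], 1)  # local trend
            rul = (threshold - signal[-1]) / a if a > 0 else np.inf
            return rul, window

        t = np.linspace(0.0, 100.0, 101)
        signal = 0.002 * t**2 + 0.01 * np.random.default_rng(2).normal(size=t.size)
        rul, window = rul_estimate(signal, t, threshold=30.0)
        print(f"window = {window} samples, estimated RUL = {rul:.1f} time units")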

  13. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    NASA Astrophysics Data System (ADS)

    Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.

    2011-04-01

    One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters on high voltage transmission lines can prevent or reduce the lines' failure rate. Several studies based on simulation tools have been presented to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work, artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is employed to evaluate the arresters' failure probability. The aims of the paper are to describe the developed Q-learning ANN model in detail and to compare the results obtained by its application to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs, and better continuity of service.

  14. Analysis and Characterization of Damage Utilizing an Orthotropic Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive communities. In order to address a series of issues identified by the aerospace community as being desirable to include in a next generation composite impact model, an orthotropic, macroscopic constitutive model incorporating both plasticity and damage suitable for implementation within the commercial LS-DYNA computer code is being developed. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used in which a load in one direction results in a stiffness reduction in multiple material coordinate directions. A detailed analysis is carried out to ensure that the strain equivalence assumption is appropriate for the derived plasticity and damage formulations that are employed in the current model. Procedures to develop the appropriate input curves for the damage model are presented and the process required to develop an appropriate characterization test matrix is discussed.
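
    The Tsai-Wu criterion from which the plasticity model above is extended reduces, in plane stress, to a quadratic failure index that reaches 1 at failure. A small sketch, with illustrative unidirectional-ply strengths rather than characterized material data:

        # Plane-stress Tsai-Wu failure index; failure is indicated at index = 1.
        def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
            F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
            F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
            F12 = -0.5 * (F11 * F22) ** 0.5   # common default interaction term
            return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                    + F66*t12**2 + 2*F12*s1*s2)

        # Illustrative carbon/epoxy ply strengths and stresses, all in MPa
        print(tsai_wu_index(s1=900.0, s2=20.0, t12=40.0,
                            Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0))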

  16. Bending and Shear Behavior of Pultruded Glass Fiber Reinforced Polymer Composite Beams With Closed and Open Sections

    NASA Astrophysics Data System (ADS)

    Estep, Daniel Douglas

    Several advantages, such as high strength-to-weight ratio, high stiffness, superior corrosion resistance, and high fatigue and impact resistance, among others, make FRPs an attractive alternative to conventional construction materials for use in developing new structures as well as rehabilitating in-service infrastructure. As the number of infrastructure applications using FRPs grows, the need for the development of a uniform Load and Resistance Factor Design (LRFD) approach, including design procedures and examples, has become paramount. Step-by-step design procedures and easy-to-use design formulas are necessary to assure the quality and safety of FRP structural systems by reducing the possibility of design and construction errors. Since 2008, the American Society of Civil Engineers (ASCE), in coordination with the American Composites Manufacturers Association (ACMA), has overseen the development of the Pre-Standard for Load and Resistance Factor Design (LRFD) of Pultruded Fiber Reinforced Polymer (FRP) Structures using probability-based limit states design. The fifth chapter of the pre-standard focuses on the design of members in flexure and shear under different failure modes, where the failure load prediction models proposed therein have been shown to be highly inaccurate based on experimental data and evaluation performed by researchers at the West Virginia University Constructed Facilities Center. A new prediction model for determining the critical flexural load capacity of pultruded GFRP square and rectangular box beams is presented herein. This model shows that the type of failure can be related to threshold values of the beam span-to-depth ratio (L/h) and total flange width-to-thickness ratio (bf/t), resulting in three governing modes of failure: local buckling failure in the compression flange (4 ≤ L/h < 6), combined strain failure at the web-flange junction (6 ≤ L/h ≤ 10), and bending failure in the tension flange (10 < L/h ≤ 42). Broadly, the proposed equations predict critical flexural load capacities within +/-22.3% of experimental data for all cases, with over 70% of all experimental data within +/-10% error. A second prediction model was developed for predicting the critical lateral-torsional buckling (LTB) load for pultruded GFRP open sections, including wide flange (WF) sections and channels. Multiple LTB equations from several sources were considered and applied but yielded inaccurate results, leading to the development of this new critical buckling load prediction model based on the well-established elastic LTB strength equation for steel. By making a series of modifications to the equations for calculating the weak axis moment of inertia, torsional warping constant, and torsion constant for open sections, as well as recognizing the influence of the shear lag phenomenon, the critical LTB load is predicted within +/-15.2% of experimental data for all channel and WF specimens tested and evaluated in the study.
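
    The "well-established elastic LTB strength equation for steel" referred to above is, for a doubly symmetric section under uniform moment, Mcr = (pi/L) * sqrt(E*Iy*G*J + (pi*E/L)^2 * Iy*Cw). The sketch below evaluates it with illustrative GFRP-order properties; the study's modified section constants and shear-lag correction are not reproduced here.

        import numpy as np

        def m_cr(L, E, G, Iy, J, Cw):
            """Elastic LTB critical moment, doubly symmetric section."""
            return (np.pi / L) * np.sqrt(E * Iy * G * J
                                         + (np.pi * E / L) ** 2 * Iy * Cw)

        E, G = 2.0e10, 3.0e9                 # Pa, order of pultruded GFRP
        Iy, J, Cw = 4.0e-6, 1.0e-7, 2.0e-9   # m^4, m^4, m^6 (illustrative WF)
        for L in (2.0, 4.0, 6.0):
            print(f"L = {L} m: Mcr = {m_cr(L, E, G, Iy, J, Cw)/1e3:.1f} kN*m")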

  17. Prediction of Hip Failure Load: In Vitro Study of 80 Femurs Using Three Imaging Methods and Finite Element Models-The European Fracture Study (EFFECT).

    PubMed

    Pottecher, Pierre; Engelke, Klaus; Duchemin, Laure; Museyko, Oleg; Moser, Thomas; Mitton, David; Vicaut, Eric; Adams, Judith; Skalli, Wafa; Laredo, Jean Denis; Bousson, Valérie

    2016-09-01

    Purpose To evaluate the performance of three imaging methods (radiography, dual-energy x-ray absorptiometry [DXA], and quantitative computed tomography [CT]) and that of a numerical analysis with finite element modeling (FEM) in the prediction of failure load of the proximal femur, and to identify the best densitometric or geometric predictors of hip failure load. Materials and Methods Institutional review board approval was obtained. A total of 40 pairs of excised cadaver femurs (mean patient age at time of death, 82 years ± 12 [standard deviation]) were examined with (a) radiography to measure geometric parameters (lengths, angles, and cortical thicknesses), (b) DXA (reference standard) to determine areal bone mineral densities (BMDs), (c) quantitative CT with dedicated three-dimensional analysis software to determine volumetric BMDs and geometric parameters (neck axis length, cortical thicknesses, volumes, and moments of inertia), and (d) quantitative CT-based FEM to calculate a numerical value of failure load. The 80 femurs were fractured via mechanical testing, with random assignment of one femur from each pair to the single-limb stance configuration (hereafter, stance configuration) and assignment of the paired femur to the sideways fall configuration (hereafter, side configuration). Descriptive statistics, univariate correlations, and stepwise regression models were obtained for each imaging method and for FEM to enable us to predict failure load in both configurations. Results Statistics reported are for the stance and side configurations, respectively. For radiography, the strongest correlation with mechanical failure load was obtained by using a geometric parameter combined with a cortical thickness (r^2 = 0.66, P < .001; r^2 = 0.65, P < .001). For DXA, the strongest correlation with mechanical failure load was obtained by using total BMD (r^2 = 0.73, P < .001) and trochanteric BMD (r^2 = 0.80, P < .001). For quantitative CT, in both configurations, the best model combined volumetric BMD and a moment of inertia (r^2 = 0.78, P < .001; r^2 = 0.85, P < .001). FEM explained 87% (P < .001) and 83% (P < .001) of bone strength, respectively. By combining (a) radiography and DXA and (b) quantitative CT and DXA, correlations with mechanical failure load increased to 0.82 (P < .001) and 0.84 (P < .001), respectively, for radiography and DXA, and to 0.80 (P < .001) and 0.86 (P < .001), respectively, for quantitative CT and DXA. Conclusion Quantitative CT-based FEM was the best method with which to predict the experimental failure load; however, combining quantitative CT and DXA yielded a performance as good as that attained with FEM. The quantitative CT-DXA combination may be easier to use in fracture prediction, provided standardized software is developed. These findings also highlight the major influence on femoral failure load, particularly in the trochanteric region, of a densitometric parameter combined with a geometric parameter.

  18. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  19. Final Report: Multi-Scale Analysis of Deformation and Failure in Polycrystalline Titanium Alloys Under High Strain-Rates

    DTIC Science & Technology

    2015-12-28

    Masoud Anahid, Mahendra K. Samal, and Somnath Ghosh. Dwell fatigue crack nucleation model based on crystal plasticity finite element simulations of...induced crack nucleation in polycrystals. Model. Simul. Mater. Sci. Eng., 17, 064009. 19. Anahid, M., Samal, M. K. & Ghosh, S. (2011). Dwell fatigue...Jour. Plas., 24:428-454, 2008. 4. M. Anahid, M. K. Samal, and S. Ghosh. Dwell fatigue crack nucleation model based on crystal plasticity finite

  20. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
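
    A minimal active-learning kriging sketch consistent with the sign-prediction argument above (in the spirit of AK-MCS-type methods, not the authors' exact procedure): the U learning function adds the Monte Carlo sample whose sign of the performance function is most doubtful, until min U >= 2. The limit state is invented, and plain Monte Carlo stands in for the interval simulation used in the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        g = lambda x: 3.0 - x[:, 0] ** 2 - x[:, 1]   # failure when g <= 0

        rng = np.random.default_rng(3)
        pool = rng.normal(size=(5000, 2))            # Monte Carlo population
        train = rng.normal(size=(12, 2))             # initial design

        for _ in range(30):
            gp = GaussianProcessRegressor(kernel=RBF(1.0),
                                          normalize_y=True).fit(train, g(train))
            mu, sd = gp.predict(pool, return_std=True)
            U = np.abs(mu) / np.maximum(sd, 1e-12)   # sign-uncertainty metric
            if U.min() >= 2.0:                       # common stopping rule
                break
            train = np.vstack([train, pool[np.argmin(U)]])

        pf = np.mean(mu <= 0)                        # sign-based Pf estimate
        print(f"estimated failure probability: {pf:.4f} "
              f"after {len(train)} performance-function calls")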
